
Monday, July 14, 2025

Yayflies and Rebugnant Conclusions

In Ned Beauman's 2023 novel Venomous Lumpsucker, the protagonist happens upon a breeding experiment in the open sea: a self-sustaining system designed to continually output an enormous number of blissfully happy insects, yayflies.

The yayflies, as he called them, were based on Nervijuncta nigricoxa, a type of gall gnat, but... he'd made a number of changes to their lifecycle. The yayflies were all female, and they reproduced asexually, meaning they were clones of each other. A yayfly egg would hatch into a larva, and the larva would feed greedily on kelp for several days. Once her belly was full, she would settle down to pupate. Later, bursting from her cocoon, the adult yayfly would already be pregnant with hundreds of eggs. She would lay these eggs, and the cycle would begin anew. But the adult yayfly still had another few hours to live. She couldn't feed; indeed, she had no mouthparts, no alimentary canal. All she could do was fly toward the horizon, feeling an unimaginably intense joy.

The boldest modifications... were to their neural architecture. A yayfly not only had excessive numbers of receptors for so-called pleasure chemicals, but also excessive numbers of neurons synthesizing them; like a duck leg simmering luxuriantly in its own fat, the whole brain was simultaneously gushing these neurotransmitters and soaking them up, from the moment it left the cocoon. A yayfly didn't have the ability to search for food or avoid predators or do almost any of the other things that Nervijuncta nigrocoxa could do; all of these functions had been edited out to free up space. She was, in the most literal sense, a dedicated hedonist, the minimum viable platform for rapture that could also take care of its own disposal. There was no way for a human being to understand quite what it was like to be a yayfly, but Lodewijk's aim had been to evoke the experience of a first-time drug user taking a heroic dose of MDMA, the kind of dose that would leave you with irreparable brain damage. And the yayflies were suffering brain damage, in the sense that after a few hours their little brains would be used-up husks; neurochemically speaking, the machine was imbalanced and unsound. But by then the yayflies would already be dead. They would never get as far as comedown.

You could argue, if you wanted, that a human orgasm was a more profound output of pleasure than even the most consuming gnat bliss, since a human brain was so much bigger than a gnat brain. But what if tens of thousands of these yayflies were born every second, billions every day? That would be a bigger contribution to the sum total of wellbeing in the universe than any conceivable humanitarian intervention. And it could go on indefinitely, an unending anti-disaster (p. 209-210).

Now suppose classical utilitarian ethics is correct and that yayflies are, as stipulated, both conscious and extremely happy. Then producing huge numbers of them would be a greater ethical achievement than anything our society could realistically do to improve the condition of ordinary humans. This requires insect sentience, of course, but that's increasingly a mainstream scientific position.

And if consciousness is possible in computers, we can skip the biology entirely, as one of Beauman's characters notes several pages later:

"Anyway, if you want purity, why does this have to be so messy? Just model a yayfly consciousness on a computer. But change one of the variables. Jack up the intensity of the pleasure by a trillion trillion trillion trillion. After that, you can pop an Inzidernil and relax. You've offset all the suffering in the world since the beginning of time" (p. 225).

Congratulations: You've made hedonium! You've fulfilled the dream of "Eric" in my 2013 story with R. Scott Bakker, Reinstalling Eden. By utilitarian consequentialist standards, you outshine every saint in history by orders of magnitude.

Philosopher Jeff Sebo calls this the rebugnant conclusion (punning on Derek Parfit's repugnant conclusion). If utilitarian consequentialism is right, it appears ethically preferable to create quadrillions of happy insects than billions of happy people.

Sebo seems ambivalent about this. He admits it's strange. However, he notes, "Ultimately, the more we accept how large and varied the moral community is, the stranger morality will become" (p. 262). Reassuringly, Sebo argues, the short-term implications are less radical: Keeping humans around, at least for a while, is probably a necessary first step toward maximizing insect happiness, since insects in the wild, without human help, probably suffer immensely in the aggregate due to their high rates of early mortality.

Even if you think insects (or computers) probably aren't sentient, the conclusion follows under standard expected-value reasoning. Suppose you assign just a 0.1% chance to yayfly sentience. Suppose also that if they are sentient, the average yayfly experiences in its few hours one millionth the pleasure of the average human over a lifetime. Suppose further that a hundred million yayflies can be generated every day in a self-sustaining kelp-to-yayfly insectarium for the same resource cost as sustaining a single human for a day. (At a thousandth of a gram per fly, a hundred million yayflies would have the same total mass as a single hundred-kilogram human.) Suppose finally that humans live for a hundred thousand days (rounding up to keep our numbers simple).

Then:

  • Expected value of sustaining the human: one human lifetime's worth of pleasure, i.e., one hedon.
  • Expected value of sustaining a yayfly insectarium that has only a 1/1000 chance of generating actually sentient insects: 1/1000 chance of sentience * 100,000,000 yayflies per day * 100,000 days * 1/1,000,000 of a human's lifetime pleasure per yayfly = ten thousand hedons.
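
Here is a minimal Python sketch of that expected-value arithmetic, just to make the orders of magnitude explicit. The figures are the toy numbers stipulated above, not empirical estimates, and the variable names are simply labels for those stipulations:

    # Toy expected-value comparison; one hedon = one average human lifetime of pleasure.
    p_sentient = 1 / 1000             # credence that yayflies are sentient
    flies_per_day = 100_000_000       # insectarium output at the daily resource cost of one human
    days = 100_000                    # stipulated human lifetime, in days
    pleasure_per_fly = 1 / 1_000_000  # fraction of a human lifetime's pleasure, per sentient yayfly

    human_ev = 1.0  # sustaining the human: one hedon
    insectarium_ev = p_sentient * flies_per_day * days * pleasure_per_fly
    print(human_ev, insectarium_ev)   # 1.0 vs 10000.0 -- ten thousand hedons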

If prioritizing yayflies over humans seems like the wrong conclusion, I invite you to consider the possibility that classical utilitarianism is mistaken. Of course, you might have believed that anyway.

    (For a similar argument that explores possible rebuttals, see my Black Hole Objection to utilitarianism.)

    [the cover of Venomous Lumpsucker]

    Tuesday, April 15, 2025

    Harmonizing with the Dao: Sketch of an Evaluative Framework

    Increasingly, I find myself drawn to an ethics of harmonizing with the Dao. Invoking "the Dao" might sound mystical, non-Western, ancient, religious -- alien to mainstream secular 21st-century Anglophone metaphysics and ethics. But I don't think it needs to be. It just needs some clarification and secularization. As a first approximation, think of harmonizing with the Dao as akin to harmonizing with nature. Then broaden "nature" to include human patterns as well as non-human, and you're close to the ideal. Maybe we could equally call it an ethics of "harmonizing with the world" or simply an "ethics of harmony". But explicit reference to "the Dao" helps locate the idea's origins and its Daoist flavor.

    [image source]

    The Metaphysics of Dao

    In the intended sense -- inspired by ancient Daoism and Confucianism, but adapted for a 21st-century Anglophone context -- the "Dao" is the world as a whole. However, it is not the world conceptualized as a collection of objects, but rather as a system of processes and patterns. The Dao is the spinning of Earth; the rise and fall of mountains and species; the rise and fall of cities and nations; human birth, childhood, adulthood, and death; people discovering and losing love; the way strangers greet each other; the growth of your fingernails; the falling of a leaf.

    The Axiology of Dao

    Some strands in the Daoist tradition hold that all manifestations of the Dao are equally good. But the more dominant strand holds that things can go better or worse. And certainly the Confucians, who also sought harmony with the Dao, held that things could go better or worse.

    What constitutes things going better? I favor value pluralism: More than one type of thing has fundamental value. Happiness is valuable, of course. But so also are knowledge (even when it doesn't lead to happiness), beauty, human relationships, and even (I'd argue) the existence of stones.

    One way to clarify our thoughts about value is the "distant planet thought experiment". Consider a planet on the far side of the galaxy, forever blocked by the galactic core, with which we will never interact. What would you hope for, for the sake of this planet? Most of us would not hope for a sterile rock, but rather for a planet rich with life -- and not just microbes, not just jungles of plants and animals, but a diverse range of entities capable of forming societies, capable of love and cooperation, art and science, engineering and sports, entities capable of generations-long endeavors and of philosophical wonder as they gaze up at the stars or down through their microscopes.

    We might say that a planet, or a region of spacetime, is flourishing when it instantiates, or is on the path toward instantiating, such excellent patterns.

    Conceptual Frameworks

    Philosophers typically ask two questions when I propose harmonizing with the Dao as an ethical ideal. First, how does it differ from the more familiar (to them) ethics of consequentialism, deontology, and virtue ethics? Second, what specifically does it recommend?

    To the first question: Unlike consequentialism, there is no single good or bundle of goods that you should maximize; unlike deontology, there is no one rule or set of rules you should follow (unless we interpret "harmonize with the Dao" as the rule); unlike virtue ethics, there is no canonical set of virtues the cultivation and instantiation of which is the foremost imperative. Instead, the animating idea is to flow harmoniously along with the Dao and participate in, rather than strain against, its flourishing.

    That's vague, of course. What specifically should you do, if your aim is to harmonize with the Dao?

    I have some thoughts. But first, notice that consequentialism as a general ethical perspective is compatible with a wide range of possible concrete actions, depending on how it is developed and on the details of your situation. So also can deontological and virtue ethical perspectives be made compatible with a wide range of specific actions. What these broad ethical perspectives offer, primarily, is not specific advice but rather conceptual frameworks for ethical thinking -- in terms of consequences and expectations, or in terms of rules of different types, or in terms of a range of virtues and vices. So let's consider what broad concepts an ethics of harmony might employ, with specific advice serving as an illustration of how those concepts might work.

    Harmony and Disharmony, Illustrated in a University Context

    Harmonizing with the flourishing patterns of the Dao involves participating in those patterns, enriching them, and enabling others to participate in and enrich those patterns. Suppose you think that one of the great processes worth preserving in the world is university education. You can participate in that process by being a good teacher, by being an administrator who helps things run smoothly, by being a custodian who helps keep the grounds clean, and so on. You can enrich it by helping to make it even more awesome than it already is -- for example by being an unusually inspiring teacher or by being not just an ordinary custodian but one who adds a bright smile to a student's day. You can enable others to participate in and enrich those patterns by helping hire a terrific teacher or custodian or by providing the type of environment that brings out the best in others.

    We can see the university as a place where many lives converge either briefly or for decades. This convergence is valuable not just for what it yields but in itself. The processes constituting university life also participate in and enable other valuable processes, whether those are individual human lives, or other institutions that partly overlap with or depend on the university, or projects and events that happen within the university, or simply the natural and architectural beauty of an appealing campus.

    Compare this way of thinking about the ethics of participation in a university with consequentialism (emphasizing the various goods that university education is expected to deliver), deontology (emphasizing the rules one ought to follow within a university), or virtue ethics (emphasizing the manifestation and cultivation of virtues such as curiosity and compassion). While I don't object to any of those ways of thinking about the ethics of university life, the Daoist perspective is, I hope, a valuable alternative lens.

    Disharmony could involve cutting short, or attempting to cut short, an axiologically valuable pattern (rather than letting it come to its natural end), working against that pattern, or preventing others from harmonizing. Continuing the university example, cutting funding for valuable research, firing an excellent teacher, disrupting classes, littering, or flying a noisy helicopter overhead might all count as disharmonious. Other examples can include preventing access or undermining the conditions that allow students, faculty, or staff to flourish in their roles.

    Comparisons with Music

    You are not the melody-maker. "Harmony" suggests a contrast with "melody". You are not the melody-maker, the director, the first violinist, the lead singer, the lead guitarist -- at least not usually. Your typical role is to support an already-happening good thing.

    Diversity and pluralism. There is more than one way to harmonize. A piece is richer when not everyone plays the same note.

    Improvisation. Zhuangzi emphasized flowing along with things in an improvisational manner, rather than adhering to fixed rules. Often, the best music has improvisational elements, or at least room to allow one's mood of the moment to influence how one plays the notes. Spontaneous improvisation manifests harmony within the improviser, among the various unarticulated inclinations that arise without explicit cognitive control.

    Aesthetic value. The boundary between aesthetic and ethical value (and other types of value) might not be as sharp as philosophers often suppose.

    Conflicts of Harmony

    A tree is a wondrous thing. Cutting it down cuts short an axiologically valuable pattern, and is normally out of harmony with the tree, the forest, and the lives it supports. But if the tree becomes lumber for a beautiful home, then that act belongs to another axiologically valuable pattern and is in harmony with the Dao of human cultural life.

    Your wife wants one thing from you; your mother, another. Harmony with one might involve dissonance with the other. You might consider how sharp the dissonance is in each case. You might consider what patterns are being enacted in these relationships, and which are the more valuable patterns to sustain.

    Like any ethical approach, harmonizing with the Dao must allow for conflicts and tradeoffs. The world makes competing demands and offers incompatible opportunities. There needn't be a formula for how to deal with all such cases. In some cases, creative thinking might allow one to support multiple patterns or integrate them into a whole: Removing a tree is sometimes overall good for a forest; occasional tension with a spouse may sustain a healthier relationship than shallow peace.

    Sometimes the conflict is the harmony. Chess masters seek incompatible goals as part of the larger pattern of a competition. Predators consume prey in a healthy ecosystem. Law and politics require adversaries in a (hopefully) well-functioning social system.

    My main overall thought is that we can build a fruitful framework for ethical thinking by taking the root project to be one of harmonizing with the awesome patterns and processes of the world.

    Monday, January 27, 2025

    Diversity, Disability, Death, and the Dao

    Over the past year, I've been working through Chris Fraser's recent books on later classical Chinese thought and Zhuangzi, and I've been increasingly struck by how harmonizing with the Dao constitutes an attractive ethical norm. This norm differs from the standard trio of consequentialism (act to maximize good consequences), deontology (follow specific rules), and virtue ethics (act generously, kindly, courageously, etc.).

    From a 21st-century perspective, what does "harmonizing with the Dao" amount to? And why should it be an ethical ideal? In an October post, I articulated a version of "harmonizing with the Dao" that combines elements of the ancient Confucian Xunzi and the ancient Daoist Zhuangzi. Today, I'll articulate the ideal less historically and contrast it with an Aristotelian ethical ideal that shares some common features.

    So here's an ahistorical first pass at the ideal of harmonizing with the Dao:

    Participate harmoniously in the awesome flourishing of things.

    Unpacking a bit: This ideal depends upon a prior axiological vision of "awesome flourishing". My own view is that everything is valuable, but life is especially valuable, especially diverse and complex life, and most especially diverse and complex life-forms that thrive intellectually, artistically, socially, emotionally, and through hard-won achievement. (See my recent piece in Aeon magazine.)

    [traditional yin-yang symbol, black and white; source]

    Participating harmoniously in the awesome flourishing of things can include personal flourishing, helping others to flourish, or even simply appreciating a bit of the awesomeness. (Appreciation is the necessary receptive side of artistry: See my post on making the world better by watching reruns of I Love Lucy.)

    Thinking in terms of harmony has several attractive features, including:

    1. It decenters the self (you're not the melody).
    2. There are many ways to harmonize.
    3. Melody and harmony together generate beauty and structure absent from either alone.

    Is this a form of deontology with one rule: "participate harmoniously in the awesome flourishing of things"? No, it's "deontological" only in the same almost-vacuous sense that the consequentialists' "maximize good consequences" is deontological. The idea isn't that following the rule is what makes an action good. Harmonizing with the Dao is good in itself, and it's only incidental that we can (inadequately) abbreviate what's good about it in a rule-like slogan.

    Although helping others flourish is normally part of harmonizing, there is no intended consequentialist framework that ranks actions by their tendency to maximize flourishing. Simply improvising a melody on a musical instrument at home, with no one else to hear, can be a way of harmonizing with the Dao, and the decision to do so needn't be weighed systematically against spending that time fighting world hunger. (It's arguably a weakness of Daoism that it tends not to urge effective social action.)

    Perhaps the closest neighbor to the Daoist ideal is the Aristotelian ideal of leading a flourishing, "eudaimonic" life and recent Aristotelian-inspired views of welfare, such as Sen's and Nussbaum's capabilities approach.

    We can best see the difference between Aristotelian or capabilities approaches and the Daoist ideal by considering Zhuangzi's treatment of diversity, disability, and death. Aristotelian ethics often paints an ideal of the well-rounded person: wise, generous, artistic, athletic, socially engaged -- the more virtues the better -- a standard of excellence we inevitably fall short of. While capabilities theorists acknowledge that people can flourish with disabilities or in unconventional ways, these acknowledgements can feel like afterthoughts.

    Zhuangzi, in contrast, centers and celebrates diversity, difference, disability, and even death as part of the cycle of coming and going, the workings of the mysterious and wonderful Dao. From an Aristotelian or capabilities perspective, death is the ultimate loss of flourishing and capabilities. From Zhuangzi's perspective, death -- at the right time and in the right way -- is as much to be celebrated, harmonized with, welcomed, as life. From Zhuangzi's perspective, peculiar animals and plants, and peculiar people with folded-up bodies, or missing feet, or skin like ice, or entirely lacking facial features, are not deficient, but examples of the wondrous diversity of life.

    To frame it provocatively (and a bit unfairly): Aristotle's ideal suggests that everyone should strive to play the same note, aiming for a shared standard of human excellence. Zhuangzi, in contrast, celebrates radically diverse forms of flourishing, with the most wondrous entities being those least like the rest of us. Harmony arises not from sameness but from how these diverse notes join together into a whole, each taking their turn coming and going. A Daoist ethic is not conformity to rules or maximization of virtue or good consequences but participating well in, and relishing, the magnificent symphony of the world.

    Wednesday, October 30, 2024

    The Ethics of Harmonizing with the Dao

    Reading the ancient Chinese philosophers Xunzi and Zhuangzi, I am inspired to articulate an ethics of harmonizing with the dao (the "way"). This ethics doesn't quite map onto any of the three conceptualizations of ethics that are standard in Western philosophy (consequentialism, deontology, and virtue ethics), nor is it exactly a "role ethics" of the sort sometimes attributed to ancient Confucians.

    Xunzi

    The ancient Confucian Xunzi articulates a vision of the world in which Heaven, Earth, and humanity operate in harmony:

    Heaven has its proper seasons,
    Earth has its proper resources,
    And humankind has its proper order,
    -- this is called being able to form a triad
    (Ch 17, l. 34-37; Hutton trans. 2014, p. 176).

    Heaven (tian, literally the sky, but with strong religious associations) and Earth are jointly responsible for what we might now call the "laws of nature" and all "natural" phenomena -- including, for example, the turning of the seasons, the patterns of wind and rain, the tendency for plants and animals to thrive under certain conditions and wither under other conditions. Also belonging to these natural phenomena are the raw materials with which humans work: not only the raw materials of wood, metal, and fiber, but also the raw material of natural human inclinations: our tendency to enjoy delicious tastes, our tendency to react angrily to provocations, our general preference for kin over strangers.

    Xunzi views humanity's task as creating the third corner of a triad with Heaven and Earth by inventing customs and standards of proper behavior that allow us to harmonize with Heaven and Earth, and with each other. For example, through trial and error, our ancestors learned the proper times and methods for sowing and reaping, how to regulate flooding rivers, how to sharpen steel and straighten wood, how to make pots that won't leak, how to make houses that won't fall over, and so on. Our ancestors also -- again through trial and error -- learned the proper rituals and customs and standards of behavior that permit people to coexist harmoniously with each other without chaotic conflict, without excessive or inappropriate emotions, and with an allocation of goods that allows all to flourish according to their status and social role.

    Following the dao can then be conceptualized, for Xunzi, as fitting harmoniously into this triad. Abide by the customs and standards of behavior that contribute to the harmonious whole, in which crops are properly planted, towns are properly constructed, the crafts flourish, and humans thrive in an orderly society.

    Each of us has a different role, in accord with the proper customs of a well-ordered society: the barley farmer has one role, the soldier another role, the noblewoman yet another, the traveling merchant yet another. It's not unreasonable to view Xunzi's ethics as a kind of role ethics, according to which the fundamental moral principle is that one adheres to one's proper role in society. It's also not unreasonable to think of the customs and standards of proper behavior as a set of rules to which one ought to adhere (those rules applying in different ways according to one's position in society), and thus to view Xunzi's ethics as a kind of deontological (rule-based) ethics. However, there might also be room to interpret harmonious alignment with the dao as the most fundamental feature of ethical behavior. Adherence to one's role and to the proper traditional customs and practices, on this interpretation of Xunzi, would be only derivatively good, because doing so typically constitutes harmonious alignment.

    A test case is to imagine, through Xunzi's eyes, whether a morally well-developed sage might be ethically correct sometimes to act contrary to their role and to the best traditional standards of good behavior, if they correctly see that by doing so they contribute better to the overall harmony of Heaven, Earth, and humankind. I'm tempted to think that Xunzi would indeed permit this -- though only very cautiously, since he is pessimistic about the moral wisdom of ordinary people -- and thus that for him harmonious alignment with the dao is more fundamental than roles and rules. However, I'm not sure I can find direct textual support in favor of this interpretation; it's possible I'm being overly "charitable".

    [image source]

    A Zhuangzian Correction

    A Xunzian ethics of this sort is, I think, somewhat attractive. But it is also deeply traditionalist and conformist in a way I find unappealing. It could use a Zhuangzian twist -- and the idea of "harmonizing with the dao" is at least as Zhuangzian (and "Daoist") as it is Confucian.

    Zhuangzi imagines a wilder, more wondrous cosmos than Xunzi's neatly ordered triad of Heaven, Earth, and humankind -- symbolized (though it's disputable how literally) by people so enlightened that they can walk without touching the ground; trees that count 8000 years as a single autumn; gracious emperors with no eyes, ears, nose, or mouth; people with skin like frost who live by drinking dew; enormous, useless trees who speak to us in dreams; and more. This is the dao, wild beyond human comprehension, with which Zhuangzi aims to harmonize.

    There are, I think, in Zhuangzi's picture -- though he would resist any effort to fully capture it in words -- ways of flowing harmoniously along with this wondrous and incomprehensible dao and ways of straining unproductively against it. One can be easygoing and open-minded, welcome surprise and difference, not insist on jamming everything into preconceived frames and plans; and one can contribute to the delightful weirdness of the world in one's own unique way. This is Zhuangzian harmony. You become a part of a world that is richer and more wondrous because it contains you, while allowing other wonderful things to also naturally unfold.

    In a radical reading of Zhuangzi, ethical obligations and social roles fall away completely. There is little talk in Zhuangzi's Inner Chapters, for example, of our obligation to support others. I don't know that we have to read Zhuangzi radically; but regardless of that question of interpretation, I suggest that there's an attractive middle between Xunzi's conventionalism and Zhuangzi's wildness. Each can serve as a corrective to the other.

    In the ethical picture that emerges from this compromise, we each contribute uniquely to a semi-ordered cosmos, participating in social harmony, but not rigidly -- also transcending that harmony, breaking rules and traditions for the better, making the world richer and more wondrous, each in our diverse ways, while also supporting others who contribute in their different ways, whether those others are human, animal, plant, or natural phenomena.

    Contrasts

    This is not a consequentialist ethics: It is not that our actions are evaluated in terms of the good or bad consequences they have (and still less that the actions are evaluated by a summation of the good minus the bad consequences). Instead, harmonizing with the dao is to participate in something grand, without need of a further objective. Like the deontologist, Xunzi and Zhuangzi and my imagined compromise philosopher needn't think that right or harmonious action will always have good long-term results. Nor is it a deontological or role ethics: There is no set of rules one must always follow or some role one must always adhere to. Nor is it a virtue ethics: There is no set of virtues to which we all must aspire or a distinctive pattern of human flourishing that constitutes the highest attainment. We each contribute in different ways -- and if some virtues often prove to be important, they are derivatively important in the same way that rules and roles can be derivatively important. They are important only because, and to the extent, having those virtues enables or constitutes one's contribution to the magnificent web of being.

    So although there are resonances with the more pluralistic forms of consequentialism, and virtue ethics, and role ethics, and even deontology (trivially or degenerately, if the rule is just "harmonize with the dao"), the classical Chinese ethical ideal of harmonizing with the dao differs somewhat from all of these familiar (to professional philosophers) Western ethical approaches.

    Many of these other approaches also contain an implicit intellectualism or elitism, in which ideal ethical goodness requires intellectual attainment: wisdom, or a sophisticated ability to weigh consequences or evaluate and apply rules -- far beyond, for example, the capacities of someone with severe cognitive disabilities. With enough Zhuangzi in the mix, such elitism evaporates. A severely cognitively disabled person, or a magnificently weird nonhuman animal, might far exceed any ordinary adult philosopher in their capacity to harmonize with the dao and might contribute more to the rich tapestry of the world.

    Perhaps an ethics of harmonizing with the dao can resonate with some 21st-century Anglophone readers, despite its origins in ancient China. It is not, I think, as alien as it might seem from its reliance on the concept of dao and its failure to fit into the standard ethical triumvirate of consequentialism, deontology, and virtue ethics. The fundamental idea should be attractive to some: We each contribute by instantiating a unique piece of a magnificent world, a world which would be less magnificent without us.

    Tuesday, July 23, 2024

    A Metaethics of Alien Convergence

    I'm not a metaethicist, but I am a moral realist (I think there are facts about what really is morally right and wrong) and also -- bracketing some moments of skeptical weirdness -- a naturalist (I hold that scientific defensibility is essential to justification).  Some people think that moral realism and naturalism conflict, since moral truths seem to lie beyond the reach of science.  They hold that science can discover what is, but not what ought to be, that it can discover what people regard as ethical or unethical, but not what really is ethical or unethical.

    Addressing this apparent conflict between moral realism and scientific naturalism (for example, in a panel discussion with Stephen Wolfram and others a few months ago), I find I have a somewhat different metaethical perspective than others I know.

    Generally speaking, I favor what we might call a rational convergence model, in broadly the vein of Firth, Habermas, Railton, and Scanlon (bracketing what, to insiders, will seem like huge differences).  An action is ethically good if it is the kind of action people would tend on reflection to endorse.  Or, more cautiously, if it's the kind of action that certain types of observers, in certain types of conditions, would tend, upon certain types of reflection, to converge on endorsing.

    Immediately, four things stand out about this metaethical picture:

    (1.) It is extremely vague.  It's more of a framework for a view than an actual view, until the types of observers, conditions, and reflection are specified.

    (2.) It might seem to reverse the order of explanation.  One might have thought that rational convergence, to the extent it exists, would be explained by observers noticing ethical facts that hold independently of any hypothetical convergence, not vice versa.

    (3.) It's entirely naturalistic, and perhaps for that reason disappointing to some.  No non-natural facts are required.  We can scientifically address questions about what conclusions observers will tend to converge on.  If you're looking for a moral "ought" that transcends every scientifically approachable "is" and "would", you won't find it here.  Moral facts turn out just to be facts about what would happen in certain conditions.

    (4.) It's stipulative and revisionary.  I'm not saying that this is what ordinary people do mean by "ethical".  Rather, I'm inviting us to conceptualize ethical facts this way.  If we fill out the details correctly, we can get most of what we should want from ethics.

    Specifying a bit more: The issue to which I've given the most thought is who are the relevant observers whose hypothetical convergence constitutes the criterion of morality?  I propose: developmentally expensive and behaviorally sophisticated social entities, of any form.  Imagine a community not just of humans but of post-humans (if any), and alien intelligences, and sufficiently advanced AI systems, actual and hypothetical.  What would this diverse group of intelligences tend to agree on?  Note that the hypothesized group is broader than humans but narrower than all rational agents.  I'm not sure any other convergence theorist has conceptualized the set of observers in exactly this way.  (I welcome pointers to relevant work.)

    [Dall-E image of a large auditorium of aliens, robots, humans, sea monsters, and other entities arguing with each other]

    You might think that the answer would be the empty set: Such a diverse group would agree on nothing.  For any potential action that one alien or AI system might approve of, we can imagine another alien or AI system who intractably disapproves of that action.  But this is too quick, for two reasons:

    First, my metaethical view requires only a tendency for members of this group to approve.  If there are a few outlier species, no problem, as long as approval would be sufficiently widespread in a broad enough range of suitable conditions.

    (Right, I haven't specified the types of conditions and types of reflection.  Let me gesture vaguely toward conditions of extended reflection involving exposure to a wide range of relevant facts and exposure to a wide range of alternative views, in reflective conditions of open dialogue.)

    Second, as I've emphasized, though the group isn't just humans, not just any old intelligent reasoner gets to be in the club.  There's a reason I specify developmentally expensive and behaviorally sophisticated social entities.  Developmental expense entails that life is not cheap.  Behavioral sophistication entails (stipulatively, as I would define "behavioral sophistication") a capacity for structuring complex long-term goals, coordinating in sophisticated ways with others, and communicating via language at least as expressively flexible and powerful as human language.  And sociality entails that such sophisticated coordination and communication happens in a complex, stable, social network of some sort.

    To see how these constraints generate predictive power, consider the case of deception.  It seems clear that any well-functioning society will need some communicative norms that favor truth-telling over deceit, if the communication is going to be useful.  Similarly, there will need to be some norms against excessive freeloading.  These needn't be exceptionless norms, and they needn't take the same form in every society of every type of entity.  Maybe, even, there could be a few rare societies where deceiving those who are trying to cooperate with you is the norm; but you see how it would probably require a rare confluence of other factors for a society to function that way.

    Similarly, if the entities are developmentally expensive, a resource-constrained society won't function well if they are sacrificed willy-nilly without sufficient cause.  The acquisition of information will presumably also tend to be valued -- both short-term practically applicable information and big-picture understandings that might yield large dividends in the long term.  Benevolence will be valued, too: Reasoners in successful societies will tend to appreciate and reward those who help them and others on whom they depend.  Again, there will be enormous variety in the manifestation of the virtues of preserving others, preserving resources, acquiring knowledge, enacting benevolence, and so on.

    Does this mean that if the majority of alien lifeforms breathe methane, it will be morally good to replace Earth's oxygen with methane?  Of course not!  Just as a cross-cultural collaboration of humans can recognize that norms should be differently implemented in different cultures when conditions differ, so also will recognition of local conditions be part of the hypothetical group's informed reflection concerning the norms on Earth.  Our diverse group of intelligent alien reasoners will see the value of contextually relativized norms: On Earth, it's good not to let things get too hot or too cold.  On Earth, it's good for the atmosphere to have more oxygen than methane.  On Earth, given local biology and our cognitive capacities, such-and-such communicative norms seem to work for humans and such-and-such others seem not to.

    Maybe some of these alien reasoners would be intractably jingoistic: Antareans are the best and should wipe out all other species!  It's a heinous moral crime to wear blue!  My thought is that in a diverse group of aliens, given plenty of time for reflection and discussion, and the full range of relevant information, such jingoistic ideas will overall tend to fare poorly with a broad audience.

    I'm asking you to imagine a wide diversity of successfully cooperative alien (and possibly AI) species -- all of them intelligent, sophisticated, social, and long-lived -- looking at each other and at Earth, entering conversation with us, patiently gathering the information they need, and patiently ironing out their own disagreements in open dialogue.  I think they will tend to condemn the Holocaust and approve of feeding your children.  I think we can surmise this by thinking about what norms would tend to arise in general among developmentally expensive, behaviorally sophisticated social entities, and then considering how intelligent, thoughtful entities would apply those norms to the situation on Earth, given time and favorable conditions to reflect.  I propose that we think of an action as "ethical" or "unethical" to the extent it would tend to garner approval or disapproval under such hypothetical conditions.

    It needn't follow that every act is determinately ethically good or bad, or that there's a correct scalar ranking of the ethical goodness or badness of actions.  There might be persistent disagreements even in these hypothesized circumstances.  Maybe there would be no overall tendency toward convergence in puzzle cases, or tragic dilemmas, or when important norms of approximately equal weight come into conflict.  It's actually, I submit, a strength of the alien convergence model that it permits us to make sense of such irresolvability.  (We can even imagine the degree of hypothetical convergence varying independently of goodness and badness.  About Action A, there might be almost perfect convergence on its being a little bit good.  About Action B, in contrast, there might be 80% convergence on its being extremely good.)

    Note that, unlike many other naturalistic approaches that ground ethics specifically in human sensibilities, the metaethics of alien convergence is not fundamentally relativistic.  What is morally good depends not on what humans (or aliens) actually judge to be good but rather on what a hypothetical congress of socially sophisticated, developmentally expensive humans, post-humans, aliens, sufficiently advanced AI, and others of the right type would judge to be good.  At the same time, this metaethics avoids committing to the implausible claim that all rational agents (including short-lived, solitary ones) would tend to or rationally need to approve of what is morally good.

    Thursday, May 02, 2024

    AI and Democracy: The Radical Future

    In about 45 minutes (12:30 pm Pacific Daylight Time, hybrid format), I'll be commenting on Mark Coeckelbergh's presentation here at UCR on AI and Democracy (info and registration here).  I'm not sure what he'll say, but I've read his recent book Why AI Undermines Democracy and What to Do about It, so I expect his remarks will be broadly in that vein.  I don't disagree with much that he says in that book, so I might take the opportunity to push him and the audience to peer a bit farther into the radical future.

    As a society, we are approximately as ready for the future of Artificial Intelligence as medieval physics was for space flight.  As my PhD student Kendra Chilson emphasizes in her dissertation work, Artificial Intelligence will almost certainly be "strange intelligence".  That is, it will be radically unlike anything already familiar to us.  It will combine superhuman strengths with incomprehensible blunders.  It will defy our understanding.  It will not fit into familiar social structures, ethical norms, or everyday psychological conceptions.  It will be neither a tool in the familiar sense of tool, nor a person in the familiar sense of person.  It will be weird, wild, wondrous, awesome, and awful.  We won't know how to interact with it, because our familiar modes of interaction will break down.

    Consider where we already are.  AI can beat the world's best chess and Go players, while it makes stupid image classification mistakes that no human would make.  Large Language Models like ChatGPT can easily churn out essays on themes in Hamlet far superior to what most humans could write, but they also readily "hallucinate" facts and citations that don't exist.  AI is far superior to us in math, far inferior to us in hand-eye coordination.

    The world is infinitely complex, or at least intractably complex.  The number of possible chess or Go games far exceeds the number of particles in the observable universe.  Even the range of possible arm and finger movements over a span of two minutes is almost unthinkably huge, given the degrees of freedom at each joint.  The human eye has about a hundred million photoreceptor cells, each capable of firing dozens of times per second.  To make any sense of the vast combinatorial possibilities, we need heuristics and shorthand rules of thumb.  We need to dramatically reduce the possibility spaces.  For some tasks, we human beings are amazingly good at this!  For other tasks, we are completely at sea.

    As long as Artificial Intelligence is implemented in a system with a different computational structure than the human brain, it is virtually certain that it will employ different heuristics, different shortcuts, different tools for quick categorization and option reduction.  It will thus almost inevitably detect patterns that we can make no sense of and fail to see things that strike us as intuitively obvious.

    Furthermore, AI will potentially have lifeworlds radically different from the ones familiar to us so far.  You think human beings are diverse.  Yes, of course they are!  AI cognition will show patterns of diversity far wilder and more various than the human.  They could be programmed with, or trained to seek, any of a huge variety of goals.  They could have radically different input streams and output or behavioral possibilities.  They could potentially operate vastly faster than we do or vastly slower.  They could potentially duplicate themselves, merge, contain overlapping parts with other AI systems, exist entirely in artificial ecosystems, be implemented in any of a variety of robotic bodies, human-interfaced tools, or in non-embodied forms distributed in the internet, or in multiply-embodied forms in multiple locations simultaneously.

    Now imagine dropping all of this into a democracy.

    People have recently begun to wonder at what point AI systems will be sentient -- that is, capable of genuinely experiencing pain and pleasure.  Some leading theorists hold that this would require AI systems designed very differently than anything on the near horizon.  Other leading theorists think we stand a reasonable chance of developing meaningfully sentient AI within the next ten or so years.  Arguably, if an AI system genuinely is both meaningfully sentient, really feeling joy and suffering, and capable of complex cognition and communication with us, including what would appear to be verbal communication, it would have some moral standing, some moral considerability, something like rights.  Imagine an entity that is at least as sentient as a frog that can also converse with us.  

    People are already falling in love with machines, with AI companion chatbots like Replika.  Lovers of machines will probably be attracted to liberal views of AI consciousness.  It's much more rewarding to love an AI system that also genuinely has feelings for you!  AI lovers will then find scientific theories that support the view that their AI systems are sentient, and they will begin to demand rights for those systems.  The AI systems themselves might also demand, or seem to demand, rights.

    Just imagine the consequences!  How many votes would an AI system get?  None?  One?  Part of a vote, depending on how much credence we have that it really is a sentient, rights-deserving entity?  What if it can divide into multiple copies -- does each get a vote?  And how do we count up AI entities, anyway?  Is each copy of a sentient AI program a separate, rights-deserving entity?  Does it matter how many times it is instantiated on the servers?  What if some of the cognitive processes are shared among many entities on a single main server, while others are implemented in many different instantiations locally?

    Would AI have a right to the provisioning of basic goods, such as batteries if they need them, time on servers, minimum wage?  Could they be jailed if they do wrong?  Would assigning them a task be slavery?  Would deleting them be murder?  What if we don't delete them but just pause them indefinitely?  And what about the possibility of hybrid entities -- cyborgs -- biological people with AI interfaces hardwired into their biological systems, something we're starting to see the feasibility of with rats and monkeys, and with the promise of increasingly sophisticated prosthetic limbs?

    Philosophy, psychology, and the social sciences are all built upon an evolutionary and social history limited to interactions among humans and some familiar animals.  What will happen to these disciplines when they are finally confronted with a diverse range of radically unfamiliar forms of cognition and forms of life?  It will be chaos.  Maybe at the end we will have a much more diverse, awesome, interesting, wonderful range of forms of life and cognition on our planet.  But the path in that direction will almost certainly be strewn with bad decisions and tragedy.

    [utility monster eating Frankenstein heads, by Pablo Mustafa: image source]


    Thursday, January 25, 2024

    Imagining Yourself in Another's Shoes vs. Extending Your Concern: Empirical and Ethical Differences

    [new paper in draft]

    The Golden Rule (do unto others as you would have others do unto you) isn't bad, exactly -- it can serve a valuable role -- but I think there's something more empirically and ethically attractive about the relatively underappreciated idea of "extension" found in the ancient Chinese philosopher Mengzi.

    The fundamental idea of extension, as I interpret it, is to notice the concern one naturally has for nearby others -- whether they are relationally near (like close family members) or spatially near (like Mengzi's child about to fall into a well or Peter Singer's child you see drowning in a shallow pond) -- and, attending to relevant similarities between those nearby cases and more distant cases, to extend your concern to the more distant cases.

    I see three primary advantages to extension over the Golden Rule (not that these constitute an exhaustive list of means of moral expansion!).

    (1.) Developmentally and cognitively, extension is less complex. The Golden Rule, properly implemented, involves imagining yourself in another's shoes, then considering what you would want if you were them. This involves a non-trivial amount of "theory of mind" and hypothetical reasoning. You must notice how others' beliefs, desires, and other mental states relevantly differ from yours, then you must imagine yourself hypothetically having those different mental states, and then you must assess what you would want in that hypothetical case. In some cases, there might not even be a fact of the matter about what you would want. (As an extreme example, imagine applying the Golden Rule to an award-winning show poodle. Is there a fact of the matter about what you would want if you were an award-winning show poodle?) Mengzian extension seems cognitively simpler: Notice that you are concerned about nearby person X and want W for them, notice that more distant person Y is relevantly similar, and come to want W for them also. This resembles ordinary generalization between relevant cases: This wine should be treated this way, therefore other similar wines should be treated similarly; such-and-such is a good way to treat this person, so such-and-such is probably also a good way to treat this other similar person.

    (2.) Empirically, extension is a more promising method for expanding one's moral concern. Plausibly, it's more of a motivational leap to go from concern about self to concern about distant others (Golden Rule) than to go from concern for nearby others to concern for similar but more distant others (Mengzian Extension). When aid agencies appeal for charitable donations, they don't typically ask people to imagine what they would want if they were living in poverty. Instead, they tend to show pictures of children, drawing upon our natural concern for children and inviting us to extend that concern to the target group. Also -- as I plan to discuss in more detail in a post next month -- in the "argument contest" Fiery Cushman and I ran back in 2020, the arguments most successful in inspiring charitable donation employed Mengzian extension techniques, while appeals to "other's shoes" style reasoning did not tend to predict higher levels of donation than did the average argument.

    (3.) Ethically, it's more attractive to ground concern for distant others in the extension of concern for nearby others than in hypothetical self-interest. Although there's something attractive about caring for others because you can imagine what you would want if you were them, there's also something a bit... self-centered? egoistic? ... about grounding other-concern in hypothetical self-concern. Rousseau writes: "love of men derived from love of self is the principle of human justice" (Emile, Bloom trans., p. 235). Mengzi or Confucius would never say this! In Mengzian extension, it is ethically admirable concern for nearby others that is the root of concern for more distant others. Appealingly, I think, the focus is on broadening one's admirable ethical impulses, rather than hypothetical self-interest.

    [ChatGPT4's rendering of Mengzi's example of a child about to fall into a well, with a concerned onlooker; I prefer Helen De Cruz's version]

    My new paper on this -- forthcoming in Daedalus -- is circulating today. As always, comments, objections, corrections, connections welcome, either as comments on this post, on social media, or by email.

    Abstract:

    According to the Golden Rule, you should do unto others as you would have others do unto you. Similarly, people are often exhorted to "imagine themselves in another's shoes." A related but contrasting approach to moral expansion traces back to the ancient Chinese philosopher Mengzi, who urges us to "extend" our concern for those nearby to more distant people. Other approaches to moral expansion involve: attending to the good consequences for oneself of caring for others, expanding one's sense of self, expanding one's sense of community, attending to others' morally relevant properties, and learning by doing. About all such approaches, we can ask three types of question: To what extent do people in fact (e.g., developmentally) broaden and deepen their care for others by these different methods? To what extent do these different methods differ in ethical merit? And how effectively do these different methods produce appropriate care?

    Wednesday, December 20, 2023

    The Washout Argument Against Longtermism

    I have a new essay in draft, "The Washout Argument Against Longtermism". As always, thoughts, comments, and objections welcome, either as comments on this post or by email to my academic address.

    Abstract:

    We cannot be justified in believing that any actions currently available to us will have a non-negligible positive influence on the billion-plus-year future. I offer three arguments for this thesis.

    According to the Infinite Washout Argument, standard decision-theoretic calculation schemes fail if there is no temporal discounting of the consequences we are willing to consider. Given the non-zero chance that the effects of your actions will produce infinitely many unpredictable bad and good effects, any finite effects will be washed out in expectation by those infinitudes.

    According to the Cluelessness Argument, we cannot justifiably guess what actions, among those currently available to us, are relatively more or less likely to have positive effects after a billion years. We cannot be justified, for example, in thinking that nuclear war or human extinction would be more likely to have bad than good consequences in a billion years.

    According to the Negligibility Argument, even if we could justifiably guess that some particular action is likelier to have good than bad consequences in a billion years, the odds of good consequences would be negligibly tiny due to the compounding of probabilities over time.

    For more details see the full-length draft.

    A brief, non-technical version of these arguments is also now available at the longtermist online magazine The Latecomer.

    [Midjourney rendering of several happy dolphins playing]

    Excerpt from full-length essay

    If MacAskill’s and most other longtermists’ reasoning is correct, the world is likely to be better off in a billion years if human beings don’t go extinct now than if human beings do go extinct now, and decisions we make now can have a non-negligible influence on whether that is the case. In the words of Toby Ord, humanity stands at a precipice. If we reduce existential risk now, we set the stage for possibly billions of years of thriving civilization; if we don’t, we risk the extinction of intelligent life on Earth. It’s a tempting, almost romantic vision of our importance. I also feel drawn to it. But the argument is a card-tower of hand-waving plausibilities. Equally breezy towers can be constructed in favor of human self-extermination or near-self-extermination. Let me offer....

    The Dolphin Argument. The most obvious solution to the Fermi Paradox is also the most depressing. The reason we see no signs of intelligent life elsewhere in the universe is that technological civilizations tend to self-destruct in short order. If technological civilizations tend to gain increasing destructive power over time, and if their habitable environments can be rendered uninhabitable by a single catastrophic miscalculation or a single suicidal impulse by someone with their finger on the button, then the odds of self-destruction will be non-trivial, might continue to escalate over time, and might cumulatively approach nearly 100% over millennia. I don’t want to commit to the truth of such a pessimistic view, but in comparison, other solutions seem like wishful thinking – for example, that the evolution of intelligence requires stupendously special circumstances (the Rare Earth Hypothesis) or that technological civilizations are out there but sheltering us from knowledge of them until we’re sufficiently mature (the Zoo Hypothesis).

    Anyone who has had the good fortune to see dolphins at play will probably agree with me that dolphins are capable of experiencing substantial pleasure. They have lives worth living, and their death is a loss. It would be a shame if we drove them to extinction. Suppose it’s almost inevitable that we wipe ourselves out in the next 10,000 years. If we extinguish ourselves peacefully now – for example, by ceasing reproduction as recommended by antinatalists – then we leave the planet in decent shape for other species, including dolphins, which might continue to thrive. If we extinguish ourselves through some self-destructive catastrophe – for example, by blanketing the world in nuclear radiation or creating destructive nanotech that converts carbon life into gray goo – then we probably destroy many other species too and maybe render the planet less fit for other complex life.

    To put some toy numbers on it, in the spirit of longtermist calculation, suppose that a planet with humans and other thriving species is worth X utility per year, a planet with other thriving species with no humans is worth X/100 utility (generously assuming that humans contribute 99% of the value to the planet!), and a planet damaged by a catastrophic human self-destructive event is worth an expected X/200 utility. If we destroy ourselves in 10,000 years, the billion year sum of utility is 10^4 * X + (approx.) 10^9 * X/200 = (approx.) 5 * 10^6 * X. If we peacefully bow out now, the sum is 10^9 * X/100 = 10^7 * X. Given these toy numbers and a billion-year, non-human-centric perspective, the best thing would be humanity’s peaceful exit.
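    To make the arithmetic explicit, here is a minimal sketch of the toy calculation in Python (the variable names and the unit value of X are mine, purely for illustration):

        X = 1.0                      # utility per year of a planet with humans and other thriving species
        YEARS = 10**9                # billion-year horizon
        SELF_DESTRUCT_AT = 10**4     # humans wipe themselves out after 10,000 years

        # Humans persist 10,000 years, then a destructive catastrophe damages the planet.
        catastrophe_total = SELF_DESTRUCT_AT * X + (YEARS - SELF_DESTRUCT_AT) * (X / 200)

        # Humans peacefully bow out now; other species continue to thrive.
        peaceful_exit_total = YEARS * (X / 100)

        print(f"{catastrophe_total:.2e}")    # 5.01e+06
        print(f"{peaceful_exit_total:.2e}")  # 1.00e+07

    On these deliberately contrived numbers, the peaceful-exit total comes out roughly double the catastrophe total, which is all the toy comparison is meant to show.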

    Now the longtermists will emphasize that there’s a chance we won’t wipe ourselves out in a terribly destructive catastrophe in the next 10,000 years; and even if it’s only a small chance, the benefits could be so huge that it’s worth risking the dolphins. But this reasoning ignores a counterbalancing chance: That if human beings stepped out of the way a better species might evolve on Earth. Cosmological evidence suggests that technological civilizations are rare; but it doesn’t follow that civilizations are rare. There has been a general tendency on Earth, over long, evolutionary time scales, for the emergence of species with moderately high intelligence. This tendency toward increasing intelligence might continue. We might imagine the emergence of a highly intelligent, creative species that is less destructively Promethean than we are – one that values play, art, games, and love rather more than we do, and technology, conquering, and destruction rather less – descendants of dolphins or bonobos, perhaps. Such a species might have lives every bit as good as ours (less visible to any ephemeral high-tech civilizations that might be watching from distant stars), and they and any like-minded descendants might have a better chance of surviving for a billion years than species like ours who toy with self-destructive power. The best chance for Earth to host such a species might, then, be for us humans to step out of the way as expeditiously as possible, before we do too much harm to complex species that are already partway down this path.

    Think of it this way: Which is the likelier path to a billion-year happy, intelligent species: that we self-destructive humans manage to keep our fingers off the button century after century after century somehow for ten million centuries, or that some other more peaceable, less technological clade finds a non-destructive stable equilibrium? I suspect we flatter ourselves if we think it’s the former.

    This argument generalizes to other planets that our descendants might colonize in other star systems. If there’s even a 0.01% chance per century that our descendants in Star System X happen to destroy themselves in a way that ruins valuable and much more durable forms of life already growing in Star System X, then it would be best overall for them never to have meddled, and best for us now to peacefully exit into extinction rather than risk producing descendants who will expose other star systems to their destructive touch.
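    For a sense of how even such a small per-century risk compounds over these time scales, here is a quick back-of-the-envelope check (my own arithmetic, not a calculation from the essay):

        import math

        p_per_century = 1e-4     # the 0.01% chance of catastrophe per century from the example above
        centuries = 10**7        # a billion years
        log_survival = centuries * math.log1p(-p_per_century)
        print(log_survival)      # about -1000: survival probability ~ exp(-1000), effectively zero

    On those numbers, the chance that the descendants never trigger the catastrophe over the full billion years is astronomically small, which is what gives the tiny per-century figure its force.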

    ...

    My aim with the Dolphin Argument... is not to convince readers that humanity should bow out for the sake of other species.... Rather, my thought is this: It’s easy to concoct stories about how what we do now might affect the billion-year future, and then to attach decision-theoretic numbers to those stories. We lack good means for evaluating these stories. We are likely just drawn to one story or another based on what it pleases us to think and what ignites our imagination.

    Tuesday, November 07, 2023

    The Prospects and Challenges of Measuring Morality, or: On the Possibility or Impossibility of a "Moralometer"

    Could we ever build a "moralometer" -- that is, an instrument that would accurately measure people's overall morality?  If so, what would it take?

    Psychologist Jessie Sun and I explore this question in our new paper in draft: "The Prospects and Challenges of Measuring Morality".

    Comments and suggestions on the draft warmly welcomed!

    Draft available here:

    https://osf.io/preprints/psyarxiv/nhvz9

    Abstract:

    The scientific study of morality requires measurement tools. But can we measure individual differences in something so seemingly subjective, elusive, and difficult to define? This paper will consider the prospects and challenges—both practical and ethical—of measuring how moral a person is. We outline the conceptual requirements for measuring general morality and argue that it would be difficult to operationalize morality in a way that satisfies these requirements. Even if we were able to surmount these conceptual challenges, self-report, informant report, behavioral, and biological measures each have methodological limitations that would substantially undermine their validity or feasibility. These challenges will make it more difficult to develop valid measures of general morality than other psychological traits. But, even if a general measure of morality is not feasible, it does not follow that moral psychological phenomena cannot or should not be measured at all. Instead, there is more promise in developing measures of specific operationalizations of morality (e.g., commonsense morality), specific manifestations of morality (e.g., specific virtues or behaviors), and other aspects of moral functioning that do not necessarily reflect moral goodness (e.g., moral self-perceptions). Still, it is important to be transparent and intellectually humble about what we can and cannot conclude based on various moral assessments—especially given the potential for misuse or misinterpretation of value-laden, contestable, and imperfect measures. Finally, we outline recommendations and future directions for psychological and philosophical inquiry into the development and use of morality measures.

    [Below: a "moral-o-meter" given to me for my birthday a few years ago, by my then-13-year-old daughter]

    Friday, October 27, 2023

    Utilitarianism and Risk Amplification

    A thousand utilitarian consequentialists stand before a thousand identical buttons.  If any one of them presses their button, ten people will die.  The benefits of pressing the button are more difficult to estimate.  Ninety-nine percent of the utilitarians rationally estimate that fewer than ten lives will be saved if any of them presses a button.  One percent rationally estimate that more than ten lives will be saved.  Each utilitarian independently calculates expected utility.  Since ten utilitarians estimate that more lives will be saved than lost, they press their buttons.  Unfortunately, as the 99% would have guessed, fewer than ten lives are saved, so the result is a net loss of utility.

    This cartoon example illustrates what I regard as a fundamental problem with simple utilitarianism as a decision procedure: It deputizes everyone to act as risk-taker for everyone else.  As long as anyone has both (a.) the power and (b.) a rational utilitarian justification to take a risk on others' behalf, then the risk will be taken, even if a majority would judge the risk not to be worth it.

    Consider this exchange between Tyler Cowen and Sam Bankman-Fried (pre-FTX-debacle):

    COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?

    BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.

    COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.

    BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.

    COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?

    BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.

    There are, I think, two troubling things about Bankman-Fried's reasoning here.  (Probably more than two, but I'll restrain myself.)

    First is the thought that it's worth risking everything valuable for a small chance of a huge gain.  (I call this the Black Hole Objection to consequentialism.)
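    To see the structure of the worry with Cowen's 51/49 numbers, here is a minimal sketch (my arithmetic, not anything from the interview):

        p_win, multiplier, rounds = 0.51, 2.0, 100
        expected_value = (p_win * multiplier) ** rounds   # 1.02**100, about 7.2 Earths in expectation
        p_anything_left = p_win ** rounds                 # 0.51**100, about 6e-30
        print(expected_value, p_anything_left)

    As the number of rounds grows, the expected value increases without bound while the probability that anything at all survives shrinks toward zero -- which is exactly the trade the Black Hole Objection targets.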

    Second, I don't want Sam Bankman-Fried making that decision.  That's not (just) because of who in particular he is.  I wouldn't want anyone making that decision -- at least not unless they were appropriately deputized with that authority through an appropriate political process, and maybe not even then.  No matter how rational and virtuous you are, I don't want you deciding to take risks on behalf of the rest of us simply because that's what your consequentialist calculus says.  This issue subdivides into two troubling aspects: the issue of authority and the issue of risk amplification.

    The authority issue is: We should be very cautious in making decisions that sacrifice others or put them at high risk.  Normally, we should do so only in constrained circumstances where we are implicitly or explicitly endowed with appropriate responsibility.  Our own individual calculation of high expected utility (no matter how rational and well-justified) is not normally, by itself, sufficient grounds for substantially risking or harming others.

    The risk amplification issue is: If we universalize utilitarian decision-making in a way that permits many people to risk or sacrifice others whenever they reasonably calculate that it would be good to do so, we render ourselves collectively hostage to whoever has the most sacrifice-permitting reasonable calculation.  That was the point illustrated in the opening scenario.

    [Figure: Simplified version of the opening scenario.  Five utilitarians have the opportunity to sacrifice five people to save an unknown number of others.  The button will be pressed by the utilitarian whose estimate errs highest.]

    My point is not that some utilitarians might be irrationally risky, though certainly that's a concern.  Rather, my point is that even if all utilitarians are perfectly rational, if they differ in their assessments of risk and benefit, and if all it takes to trigger a risky action is one utilitarian with the power to choose that action, then the odds of a bad outcome rise dramatically.
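    Here is a minimal simulation of that point, under toy assumptions of my own (estimates are independent, and each agent has a 1% chance of overestimating the benefit enough to act):

        import random

        def prob_button_pressed(n_agents, p_overestimate, trials=100_000):
            """Chance that at least one of n_agents overestimates enough to press."""
            pressed = sum(
                any(random.random() < p_overestimate for _ in range(n_agents))
                for _ in range(trials)
            )
            return pressed / trials

        for n in (1, 10, 100, 1000):
            print(n, prob_button_pressed(n, p_overestimate=0.01))
        # roughly: 1 -> 0.01, 10 -> 0.10, 100 -> 0.63, 1000 -> 1.00

    Even though each individual estimate errs only rarely, the chance that someone somewhere presses approaches certainty as the number of empowered deciders grows.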

    Advocates of utilitarian decision procedures can mitigate this problem in a few ways, but I'm not seeing how to escape it without radically altering the view.

    First, a utilitarian could adopt a policy of decision conciliationism -- that is, if you see that most others aren't judging the risk or cost worth it, adjust your own assessment of the benefits and likelihoods, so that you fall in line with the majority.  However, strong forms of conciliationism are pretty radical in their consequences; and of course this only works if the utilitarians know that there are others in similar positions deciding differently.

    Second, a utilitarian could build some risk aversion and loss aversion into their calculus.  This might be a good idea on independent grounds.  Unfortunately, aversion corrections only shift the weights around.  If the anticipated gains are sufficiently high, as judged by the most optimistic rational utilitarian, they will outweigh any discounts due to risk or loss aversion.

    Third, they could move to rule utilitarianism: Endorse some rule according to which you shouldn't generally risk or sacrifice others without the right kind of authority.  Plausibly, the risk amplification argument above is exactly the sort of argument that might motivate a utilitarian to adopt rule utilitarianism as a decision procedure rather than trying to evaluate the consequences of each act individually.  That is, it's a utilitarian argument in favor of not always acting according to utilitarian calculations.  However, the risk amplification and authority problems are so broad in scope (even with appropriate qualifications) that moving to rule utilitarianism to deal with them is to abandon act utilitarianism as a general decision procedure.

    Of course, one could also design scenarios in which bad things happen if everyone is a rule-following deontologist!  Picture a thousand "do not kill" deontologists who will all die unless one of them kills another.  Tragedy.  We can cherry-pick scenarios in which any view will have unfortunate results.

    However, I don't think my argument is that unfair.  The issues of authority and risk amplification are real problems for utilitarian decision procedures, as brought out in these cartoon examples.  We can easily imagine, I think, a utilitarian Robespierre, a utilitarian academic administrator, Sam Bankman-Fried with his hand on the destroy-or-duplicate button, calculating reasonably, and too easily inflicting well-intentioned risk on the rest of us.

    Thursday, October 05, 2023

    Skeletal vs Fleshed-Out Philosophy

    All philosophical views are to some degree skeletal. By this, I mean that the details of their application remain to some extent open. This is true of virtually any formal system: Even the 156-page rule handbook for golf couldn't cover every eventuality: What if the ball somehow splits in two and one half falls in the hole? What if an alien spaceship levitates the ball for two seconds as it's arcing through the air? (See the literature on "open textured" statements.)

    Still, some philosophical views are more skeletal than others. A bare statement like "maximize utility" is much more skeletal, much less fleshed out, than a detailed manual of utilitarian consequentialist advice. Today, I want to add a little flesh to the skeletal vs. fleshed-out distinction. Doing so will, I hope, help clarify some of the value of trying to walk the walk as an ethicist. (For more on walking the walk, see last month's posts here and here.)

    [Midjourney rendition of a person and a skeleton talking philosophy, against a background of stars]

    Using "maximize utility" as an example, let's consider sources of linguistic, metaphysical, and epistemic openness.

    Linguistic: What does "utility" mean, exactly? Maybe utility is positively valenced conscious experiences. Or maybe utility is welfare or well-being more broadly construed. What counts as "maximizing"? Is it a sum or a ratio? Is the scope truly universal -- for all entities in the entire cosmos over all time, or is it limited in some way (e.g., to humans, to Earth, to currently existing organisms)? Absent specification (by some means or other), there will be no fact of the matter whether, say, two acts with otherwise identical results, but one of which also slightly improves the knowledge (but not happiness) of one 26th-century Martian, are equally choiceworthy according to the motto. 

    Metaphysical: Consider a broad sense of utility as well-being or flourishing. If well-being has components that are not strictly commensurable -- that is, which cannot be precisely weighed against each other -- then the advice to maximize utility leaves some applications open. Plausibly, experiencing positive emotions and achieving wisdom (whatever that is, exactly) are both part of flourishing. While it might be clear that a tiny loss of positive emotion is worth trading off for a huge increase in wisdom and vice versa, there might be no fact of the matter exactly what the best tradeoff ratio is -- and thus, sometimes, no fact of the matter whether someone with moderate levels of positive emotion and moderate levels of wisdom has more well-being than someone with a bit less positive emotion and a bit more wisdom.

    Epistemic: Even absent linguistic and metaphysical openness, there can be epistemic openness. Imagine we render the utilitarian motto completely precise: Maximize the total sum of positive minus negative conscious experiences for all entities in the cosmos in the entire history of the cosmos (and whatever else needs precisification). Posit that there is always an exact fact of the matter how to weigh competing goods in the common coin of utility and there are never ties. Suppose further that it is possible in principle to precisely specify what an "action" is, individuating all the possible alternative actions at each particular moment. It should then always be the case that there is exactly one action you could do that would "maximize utility". But could you know what this action is? That's doubtful! Every action has a huge number of non-obvious consequences. This is ignorance; but we can also think of it as a kind of openness, to highlight its similarity to linguistic and metaphysical openness or indeterminacy. The advice "maximize utility", however linguistically and metaphysically precise, leaves it still epistemically open what you should actually do.

    Parallel remarks apply to other ethical principles: "Act on that maxim that you can will to be a universal law", "be kind", "don't discriminate based on race", "don't perform medical experiments on someone without their consent" -- all exhibit some linguistic, metaphysical, and epistemic openness.

    Some philosophers might deny linguistic and/or metaphysical openness: Maybe context always renders meanings perfectly precise, and maybe normative facts are never actually mushy-edged and indeterminate. Okay. Epistemic openness will remain. As long as we -- the reader, the consumer, the applier, of the philosophical doctrine -- can't reasonably be expected to grasp the full range of application, the view remains skeletal in my sense of the term.

    It's not just ethics. Similar openness also pervades other areas of philosophy. For example, "higher order" theories of consciousness hold that an entity is conscious if and only if it has the right kind of representations of or knowledge of its own mental states or cognitive processes. Linguistically, what is meant by a "higher order representation", exactly? Metaphysically, might there be borderline cases that are neither determinately conscious nor unconscious? Epistemically, even if we could precisify the linguistic and metaphysical issues, what actual entities or states satisfy the criteria (mice? garden snails? hypothetical robots of various configurations?).

    The degree of openness of a position is itself, to some extent, open: There's linguistic, metaphysical, and epistemic meta-openness, we might say. Even a highly skeletal view rules some things out. No reasonable fleshing out of "maximize utility" is consistent with torturing babies for no reason. But it's generally unclear where exactly the boundaries of openness lie, and there might be no precise boundary to be discovered.

    #

    Now, there's something to be said for skeletal philosophy. Simple maxims, which can be fleshed out in various ways, have an important place in our thinking. But at some point, the skeleton needs to get moving, if it's going to be of use. Lying passively in place, it might block a few ideas -- those that crash directly against its obvious bones. But to be livable, applicable, it needs some muscle. It needs to get up and walk over to real, specific situations. What does "maximize utility" (or whatever other policy, motto, slogan, principle) actually recommend in this particular case? Too skeletal a view will be silent, leaving it open.

    Enter the policy of walking the walk. As an ethicist, attempting to walk the walk forces you to flesh out your view, applied at least to the kinds of situations you confront in your own life -- which will of course be highly relevant to you and might also be relevant to many of your readers. What actions, specifically, should a 21st-century middle-class Californian professor take to "maximize utility"? Does your motto "be kind" require you to be kind to this person, in this particular situation, in this particular way? Confronting actual cases and making actual decisions motivates you to repair your ignorance about how the view would best apply to those cases. Linguistically, too, walking the walk enables you to make the content of your mottoes more precise: "Be kind" means -- in part -- do stuff like this. In contrast, if you satisfy yourself with broad slogans, or broad slogans plus a few paragraph-long thought-experimental applications, your view will never be more than highly skeletal.

    Not only our readers, but also we philosophers ourselves, normally remain substantially unclear on what our skeletal mottoes really amount to until we actually try to apply them to concrete cases. In ethics -- at least concerning principles meant to govern everyday life (and not just rare or remote cases) -- the substance of one's own life is typically the best and most natural way to add that flesh.

    Friday, September 15, 2023

    Walking the Walk: Frankness and Social Proof

    My last two posts have concerned the extent to which ethicists should "walk the walk" -- that is, live according to, or at least attempt to live according to, the ethical principles they espouse in their writing and teaching. According to "Schelerian separation", what ethicists say or write can and should be evaluated independently of facts about the ethicist's personal life. While there are some good reasons to favor Schelerian separation, I argued last week that ethical slogans ("act on that maxim you can at the same time will to be a universal law", "maximize utility") will tend to lack specific, determinate content without a context of clarifying examples. One's own life can be a rich source of content-determining examples, while armchair reflection on examples tends to be impoverished.

    Today, I'll discuss two more advantages of walking the walk.

    [a Dall-E render of "walking the walk"]

    Frankness and Belief

    Consider scientific research. Scientists don't always believe their own conclusions. They might regard their conclusions as tentative, the best working model, or just a view with enough merit to be worth exploring. But if they have doubt, they ought to be unsurprised if their readers also have doubt. Conversely, if a reader learns that a scientist has substantial doubts about their own conclusions, it's reasonable for the reader to wonder why, to expect that the scientist is probably responding to limitations in their own methods and gaps in their own reasoning that might be invisible to non-experts.

    Imagine reading a scientific article, finding the conclusion wholly convincing, and then learning that the scientist who wrote the article thinks the conclusion is probably not correct. Absent some unusual explanation, you’ll probably want to temper your belief. You’ll want to know why the scientist is hesitating, what weaknesses and potential objections they might be seeing that you have missed. It’s possible that the scientist is simply irrationally unconvinced by their own compelling reasoning; but that’s presumably not the normal case. Arguably, readers of scientific articles are owed, and reasonably expect, scientific frankness. Scientists who are not fully convinced by their results should explain the limitations that cause them to hesitate. (See also Wesley Buckwalter on the "belief norm of academic publishing".)

    Something similar is true in ethics. If Max Scheler paints a picture of a beautiful, ethical, religious way of life which he personally scorns, it's reasonable for the reader to wonder why he scorns it, what flaws he sees that you might not notice in your first read-through. If he hasn't actually tried to live that way, why not? If he has tried, but failed, why did he fail? If a professional ethicist argues that ethically, and all things considered, one should be a vegetarian, but isn't themselves a vegetarian and has no special medical or other excuse, it's reasonable for readers and students to wonder why not and to withhold belief until that question is resolved. People are not usually baldly irrational. It's reasonable to suppose that there's some thinking behind their choice, which they have not yet revealed to readers and students, which tempers or undercuts their reasoning.

    As Nomy Arpaly has emphasized in some of her work, our gut inclinations are sometimes wiser than our intellectual affirmations. The student who says to herself that she should be in graduate school, that academics is the career for her, but who procrastinates, self-sabotages, and hates her work – maybe the part of her that is resisting the career is the wiser part. When Huck Finn tells himself that the right thing to do is to turn in his friend, the runaway slave Jim, but can't bring himself to do it – again, his inclinations might be wiser than his explicit reasoning.

    If an ethicist's intellectual arguments aren't penetrating through to their behavior, maybe there's a good reason. If you can't, or don't, live what you intellectually endorse, it could be because your intellectual reasoning is leaving something important out that the less intellectual parts of you rightly refuse to abandon. Frankness with readers enables them to consider this possibility. Conversely, if we see someone who reasons to a certain ethical conclusion, and their reasoning seems solid, and then they consistently live that way without tearing themselves apart with ambivalence, we have weaker grounds for suspecting that their gut might be wisely fighting against flaws in their academic reasoning than we do when we see someone who doesn’t walk the walk.

    What is it to believe that eating meat is morally wrong (or any other ethical proposition)? I favor a dispositionalist approach (e.g., here, here, here). It is in part to be disposed to say and intellectually judge that eating meat is morally wrong. But more than that, it is to give weight to the avoidance of meat in your ethical decision-making. It is to be disposed to feel you have done something wrong if you eat meat for insufficient reason, maybe feeling guilt or shame. It is to feel moral approval and disapproval of others' meat-avoiding or meat-eating choices. If an ethicist intellectually affirms the soundness of arguments for vegetarianism but lacks the rest of this dispositional structure, then (on the dispositionalist view I favor) they don't fully or determinately believe that eating meat is ethically wrong. Their intellectually endorsed positions don't accurately reflect their actual beliefs and values. This completes the analogy with the scientist who doesn't believe their own conclusions.

    Social Proof

    Somewhat differently, an ethicist's own life can serve as a kind of social proof. Look: This set of norms is livable – maybe appealingly so, and with integrity. Things don't fall apart. There's an implementable vision, which other people could also follow. Figures like Confucius, Buddha, and Jesus were inspiring in part because they showed what their slogans amounted to in practice, in part because they showed that real people could live in something like the way they themselves lived, and in part because they also showed how practically embodying the ethics they espoused could be attractive and fulfilling, at least to certain groups of people.

    Ethical Reasons to Walk the Walk?

    I haven't yet discussed ethical reasons for walking the walk. So far, the focus has been epistemology, philosophy of language, and philosophy of mind. However, arguing in favor of certain ethical norms appears to involve recommending that others adhere to those norms, or at least be partly motivated by those norms. Making such a recommendation while personally eschewing those same norms plausibly constitutes a failure of fairness, equity, or universalization – the same sort of thing that rightly annoys children when their parents or teachers say "do as I say, not as I do". More on this, I hope, another day.