3 Thought Experiments: An Exercise in Ponens and Tollens

“One rationalist’s modus ponens is another’s modus tollens”

—Lots of People?

I was recently reminded that all those cognitive biases I read about apply to me too.

I.

Whenever I argue with my friends about moral philosophy, I usually get to the point where I just want to yell, “But don’t you realize that deep down you’re a utilitarian?” Utility need not be narrowly defined as consumption of goods and activities. You can fold autonomy and virtue and basically anything else you think is good into people’s utility functions. Then utilitarianism is just the best way of achieving maximum utility (this follows definitionally, because you are allowed to use any means necessary). And other popular moral philosophies end up running into some pretty ridiculous results:

II.

According to Rawls, the just society is that which does the best by its worst off members. He justifies this by saying that under a veil of ignorance, agents will play maximin strategies and choose the society that has the least bad “worst outcome”. To illustrate the maximin strategy, consider the following thought experiment:

You are trying to choose a car, and you have a choice between Dealer A and Dealer B. Once you choose a dealer, you will be randomly assigned one of their cars (for free).

  • Dealer A has 999 cars rated 100/100 and one car rated 3/100.
  • Dealer B has 1000 cars rated 4/100.

I think it is safe to say that most people will choose Dealer A. However, a maximin strategy means you choose Dealer B. You could argue that the difference between 3/100 and 4/100 is huge while the difference between 4/100 and 100/100 is tiny, but I could make it 3.9/100 or even arbitrarily close to 4 and maximin would still tell you to choose Dealer B. Now suppose the dealers are societies and the cars are your quality of life. Maximin strategies under a veil of ignorance seem counterintuitive and not what people would actually do, and basing the idea of justice on this erroneous assumption seems really sketchy.
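To make the two decision rules concrete, here is a small Python sketch. The ratings are the ones from the thought experiment above; the function names and the expected-value rule are just illustrative choices of mine, not anything taken from Rawls.

```python
# Dealer ratings from the thought experiment above.
dealer_a = [100] * 999 + [3]   # 999 cars rated 100/100, one rated 3/100
dealer_b = [4] * 1000          # 1000 cars rated 4/100

def maximin_choice(options):
    """Pick the option whose worst outcome is least bad."""
    return max(options, key=lambda name: min(options[name]))

def expected_value_choice(options):
    """Pick the option with the highest average outcome."""
    return max(options, key=lambda name: sum(options[name]) / len(options[name]))

options = {"Dealer A": dealer_a, "Dealer B": dealer_b}

print(maximin_choice(options))         # Dealer B: its worst car is 4, which beats 3
print(expected_value_choice(options))  # Dealer A: average ~99.9, which beats 4
```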

III.

Kant’s “categorical imperative for the human will” goes as follows: “Act so that you treat humanity, whether in your own person or in that of another, always as an end and never as a means only”. This is a great idea, and I would love it if more people did it. But treating it as the “supreme practical principle” can generate awful consequences:

Suppose aliens show up to Earth and demand that we give them John Doe so that they can kill him; they will leave if they get him and blow up the planet if they don’t. Suppose you believe with near certainty that they are truthful. Unfortunately, John Doe is unconscious and unresponsive, so it is impossible to obtain his consent. Thus we have the choice between giving him up, in which case John Doe will be killed, or not giving him up, in which case the Earth will be destroyed, 7 billion lives will end, the human race will go extinct, and oh by the way John Doe will still be killed. And yet the categorical imperative requires that we not give John Doe up. This runs deeply counter to all our intuitions, and I cannot imagine that anyone feels in their bones that this is the correct course of action.

IV.

This brings us to utilitarianism, which has its own counterintuitive results. Take the Repugnant Conclusion. Assuming some mechanism of utility aggregation, you can find some very large population that would be deemed “better” than our current society—even though its citizens have lives barely worth living—because there is more total utility. This is also pretty ridiculous: after all, a society where life is barely worth living sounds awful.
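To see the aggregation at work, here is a minimal sketch. The populations and per-person utility levels are made-up illustrative numbers, not anything from the argument itself; the only assumption is simple total-utility aggregation.

```python
# Two hypothetical societies (numbers are purely illustrative).
current_society = {"population": 7_000_000_000, "utility_per_person": 80}
huge_society = {"population": 1_000_000_000_000, "utility_per_person": 1}  # lives barely worth living

def total_utility(society):
    # Simple total-utility aggregation: population times per-person utility.
    return society["population"] * society["utility_per_person"]

# Under pure total-utility aggregation, the enormous, barely-worth-living
# society comes out "better" than the current one:
print(total_utility(huge_society) > total_utility(current_society))  # True
```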

I can make justifications for my philosophy. The Repugnant Conclusion requires us to think about really large numbers (population) and really small numbers (individual utility), both of which humans are notoriously bad at dealing with intuitively. So here I’m more inclined to accept a counterintuitive result.

V.

But notice I’m ignoring this icky result not because I feel deep down that utilitarianism is right but because I’m willing to accept not feeling that way. This is perfectly natural: the amazing thing about humans is that we can be more than just our intuitions. But what if a Rawlsian is more willing to accept my cars thought experiment, or a Kantian my aliens thought experiment, rather than the Repugnant Conclusion? Maybe they’re not a utilitarian deep down. And can I say they’re “wrong” or “ridiculous”? Not unless I want to argue against their aesthetic preferences, which just leads to Nowheresville.

Lesson 1 from this is that I should stop thinking that if I can just show my friends their inner utilitarian, they will embrace the philosophy. They may just have different aesthetic preferences regarding unintuitive results, and thus different moral leanings. Lesson 2 is that not everyone thinks like me. The typical mind fallacy is not hard to understand, but it is easy to forget.

So where do we go from here? I could descend into moral relativism or nihilism and claim that because all we’re doing is choosing systems based on aesthetic preferences, it is silly to think any are “better” or “correct”. While I’m sympathetic to these claims, I think they are super useless: they don’t tell us anything about what we should do. I prefer actionable steps, like finding and working together on the surprising amount of common ground we share. Rawlsians and utilitarians can team up to fight global poverty, and Kantians will join the fight against systemic oppression. And just about any reasoned moral philosophy I can think of holds that truth should not be a matter of opinion. There is so much low-hanging fruit we should be able to agree on that quibbling about what happens if aliens show up should be one of the last things we do.

2 thoughts on “3 Thought Experiments: An Exercise in Ponens and Tollens”

  1. Hello Duncan! I recently realized I had not upgraded you to “following him” status on facebook, which I do with my actual friends. I think this post uses a minimaxing optimization for moral philosophies: that is, you seem to evaluate philosophies based on how acceptable the worst corollary of those philosophies is. a) this doesn’t seem like it necessarily ought to be the way you evaluate philosophies, b) didn’t you just say you don’t like minimaxing? 😉


    1. Hi MJ! You’re absolutely correct that the big picture is more nuanced than choosing which philosophy disagrees with intuitions in the most palatable way. The main point I am trying to get across is that any normative philosophy will have counterintuitive results. Any ethical theory that seemed completely intuitive would be a horrible mess of contradictions and inconsistencies because, well, that’s how our intuitions are. The project of normative philosophy then is about choosing which intuitions to care about. You’re right that I talk here almost exclusively about negative intuitions, those that disagree with a philosophy, and don’t talk at all about positive intuitions, those which make us actively support a philosophy. This was done mostly for space considerations, but to take on the project fully, you have to balance both kinds of intuitions. Otherwise, as you say, you will fall into the trap of maximining. If you want to hear someone arguing positively about intuitions, I recommend Eliezer’s defense of utilitarian intuitions:
      https://www.lesswrong.com/posts/r5MSQ83gtbjWRBDWJ/the-intuitions-behind-utilitarianism

