3 Thought Experiments: An Exercise in Ponens and Tollens

“One rationalist’s modus ponens is another’s modus tollens”

—Lots of People?

I was recently reminded that all those cognitive biases I read about apply to me too.

I.

Whenever I argue with my friends about moral philosophy, I usually get to the point where I just want to yell, “But don’t you realize that deep down you’re a utilitarian?” Utility need not be narrowly defined as consumption of goods and activities. You can fold autonomy, virtue, and basically anything else you think is good into people’s utility functions. Then utilitarianism is just the best way of achieving maximum utility (this follows definitionally, because you are allowed to use any means necessary). And other popular moral philosophies end up running into some pretty ridiculous results:

II.

According to Rawls, the just society is the one that does the best by its worst-off members. He justifies this by saying that under a veil of ignorance, agents will play maximin strategies and choose the society that has the least bad “worst outcome”. To illustrate the maximin strategy, consider the following thought experiment:

You are trying to choose a car, and you have a choice between Dealer A and Dealer B. Once you choose a dealer, you will be randomly assigned one of their cars (for free).

• Dealer A has 999 cars rated 100/100, and one car rated 3/100.
• Dealer B has 1000 cars rated 4/100.

I think it is safe to say that most people will choose Dealer A. However, a maximin strategy means you choose Dealer B. You could argue that the difference between 3/100 and 4/100 is huge while the difference between 4/100 and 100/100 is tiny, but I could make it 3.9/100, or arbitrarily close to 4, and maximin would still say Dealer B. Now suppose the dealers are societies and the cars are your quality of life. Maximin strategies under a veil of ignorance seem counterintuitive and not what people would actually do, and basing the idea of justice on this erroneous assumption seems really sketchy.
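The contrast between the two decision rules can be sketched in a few lines of Python. The ratings are the hypothetical ones from the thought experiment; the function names are mine:

```python
# Dealer A: 999 cars rated 100/100 and one rated 3/100.
# Dealer B: 1000 cars rated 4/100.
dealer_a = [100] * 999 + [3]
dealer_b = [4] * 1000

def expected_value(cars):
    """Average rating: what a random assignment gets you on average."""
    return sum(cars) / len(cars)

def maximin_pick(options):
    """Maximin: choose the option whose worst outcome is least bad."""
    return max(options, key=lambda name: min(options[name]))

options = {"A": dealer_a, "B": dealer_b}
print(expected_value(dealer_a))  # 99.903
print(expected_value(dealer_b))  # 4.0
print(maximin_pick(options))     # B  (maximin looks only at the worst car)
```

Expected value overwhelmingly favors Dealer A; maximin ignores the 999 great cars entirely and picks B because 4 beats 3.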

III.

Kant’s “categorical imperative for the human will” goes as follows: “Act so that you treat humanity, whether in your own person or in that of another, always as an end and never as a means only”. This is a great idea, and I would love it if more people did it. But to say it is the “supreme practical principle” can generate awful consequences:

Suppose aliens show up to Earth and demand that we give them John Doe so that they can kill him; they will leave if they get him and blow up the planet if they don’t. Suppose you believe with near certainty that they are truthful. Unfortunately, John Doe is unconscious and unresponsive, so it is impossible to obtain his consent. Thus we have the choice between giving him up, in which case John Doe will be killed, or not giving him up, in which case the Earth will be destroyed, 7 billion lives will end, the human race will go extinct, and oh by the way John Doe will still be killed. And yet the categorical imperative requires we not give John Doe up. This runs deeply counter to all our intuitions, and I cannot imagine that anyone feels in their bones that this is the correct course of action.

IV.

This brings us to utilitarianism, which has its own counterintuitive results. Take Derek Parfit’s Repugnant Conclusion. Assuming some mechanism of utility aggregation, you can find some very large population that would be deemed “better” than our current society—even though its citizens have lives barely worth living—because there is more total utility. This is also pretty ridiculous: after all, a society where life is barely worth living sounds awful.
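The arithmetic driving the Repugnant Conclusion fits in a few lines. The population sizes and utility levels below are invented purely for illustration:

```python
# Toy total-utilitarian aggregation. All numbers are made up for illustration.
def total_utility(population, utility_per_person):
    """Total utility under simple summation over identical lives."""
    return population * utility_per_person

current_society = total_utility(7_000_000_000, 80)  # flourishing lives
huge_society = total_utility(10**15, 0.001)         # lives barely worth living

# The enormous population wins on total utility despite miserable individual lives.
print(huge_society > current_society)  # True
```

For any positive per-person utility, however tiny, some population is large enough to out-total a smaller flourishing society; that is the whole trick.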

I can make justifications for my philosophy. The Repugnant Conclusion requires us to think about really large numbers (population) and really small numbers (individual utility), both of which humans are notoriously bad at dealing with intuitively. So here I’m more inclined to accept a counterintuitive result.

V.

But notice I’m ignoring this icky result not because I feel deep down that utilitarianism is right but because I’m willing to accept not feeling that way. This is perfectly natural: the amazing thing about humans is that we can be more than just our intuitions. But what if a Rawlsian is more willing to accept my cars thought experiment, or a Kantian my aliens thought experiment, rather than the Repugnant Conclusion? Maybe they’re not a utilitarian deep down. And can I say they’re “wrong” or “ridiculous”? Not unless I want to argue against their aesthetic preferences, which just leads to Nowheresville.

Lesson 1 from this is that I should stop thinking that if I can just show my friends their inner utilitarian, they will embrace the philosophy. They may just have different aesthetic preferences regarding unintuitive results, and thus different moral leanings. Lesson 2 is that not everyone thinks like me. The typical mind fallacy is not hard to understand, but it is easy to forget.

So where do we go from here? I could descend into moral relativism or nihilism and claim that because all we’re doing is choosing systems based on aesthetic preferences, it is silly to think any are “better” or “correct”. While I’m sympathetic to these claims, I think they are super useless. They don’t tell us anything about what we should do. I prefer actionable steps, like finding and working together on the surprising amount of common ground we share. Rawlsians and Utilitarians can team up to fight global poverty, and Kantians will join for the fight against systemic oppression. And just about any reasoned moral philosophy I can think of believes that truth should not be a matter of opinion. There is so much low-hanging fruit we should be able to agree on that quibbling about what happens if aliens show up should be one of the last things we do.

Against Incomparability

There is a commonly held belief that there are some things that simply cannot be compared. For example, I have had many people tell me that you cannot put a dollar value on a human life, or that it’s impossible to compare two people’s lived experiences. This argument seems to come in one of two forms.

First, that one type of good is categorically better than another. For example, one might say that human life is more valuable than any amount of money. So given the choice between saving a stranger’s life and receiving X amount of money, you should save the person, no matter how big X is. But what if X is large enough to save two strangers from dying of disease if donated to effective charities? The utilitarian argues you should take the money and save the two people rather than saving just the one person.

It’s not as simple as this. When you start thinking about human lives as just numbers, it’s very easy to lose track of your empathy, of why you’re trying to help people in the first place. This does damage to yourself and to the society you’re a part of, and cannot be written off. But surely some amount X is large enough to make this worth it. What if X can save a million people? A billion? It seems downright irresponsible to refuse. To say that a single human life is infinitely more important than any amount of money could leave us with significantly more humans dying than if we made such tradeoffs.

The second form is that two things literally can’t be compared, except maybe to say that they’re both good or both bad. There’s nothing wrong with this position. It’s just aggressively useless if I ever want to make decisions involving those two goods. Can you imagine if Congress decided, “It’s impossible to say whether investing in education or healthcare is better for people. Guess we can’t do anything.” That would be silly.

This brings me to the most frustrating thing about all this: we make these comparisons all the time. I once asked one of my friends to give me an example of two things that were fundamentally incomparable. The answer: “financial prosperity and health”. I stopped for a moment, then responded: “But you support the Affordable Care Act. That is very clearly trading off some people’s financial prosperity to ensure better health for others. Even if you opposed it, that would still be a comparison.” If we’re going to be trading off between things, we might as well be honest about it and do it in the most intelligent way possible.

I’m not saying that such comparisons are always easy or always useful. For example, take comparing the discrimination faced by a black person versus that faced by a gay person. While we could in theory figure out which of them is worse, this is usually unnecessary. They’re both bad, and we’re not trading them off against each other. We’re trading them both off against the preservation of the status quo, a comparison which is much easier to make.

Tradeoffs are hard. They’re messy. We make mistakes, and not all of them are easily fixed. And reducing people to numbers to compare can devalue them in societally damaging ways. But tradeoffs are necessary if we want to succeed. Accepting that is the first step.

Why We’re Here

Simmons: Do you ever wonder why we’re here?

Grif: One of life’s great mysteries isn’t it? Why are we here? I mean, are we the product of some cosmic coincidence? Or is there really a God, watching everything. You know, with a plan for us and stuff. I don’t know man, but it keeps me up at night.

Simmons: What? I mean why are we out here, in this canyon.

Grif: Uh… Oh… Yeah…

Simmons: What’s all this stuff about God?

Grif:  Uh… um… Nothing.

Simmons: You wanna talk about it?

Grif: No

Red vs Blue

So why are we here? Well because I’m writing a blog, and for one reason or another, you’ve decided to read it. My name is Duncan Rheingans-Yoo, and the blog is called (finish later). Many of my friends have asked what it’s going to be about, or what I hope to get out of it. I like to respond with the following:

I sometimes have ideas—about all sorts of things, but most commonly philosophy, politics, rationality, effective altruism, and things happening at Harvard. Typically, I find the nearest person who will listen and tell them. I’ve decided that this is a pretty inefficient way of doing…well…anything other than convincing my friends that at any time I might accost them with my latest obsession.

So, I’m here to organize the ideas I have so that they’re harder to lose track of. I don’t expect many people to read my blog (especially when starting out), but if you do, I’m also here to convey my ideas to you and get feedback. I’m here to become a better writer, but because my posts are likely to be very irregular, this is a tertiary goal.

And finally:

A very long time ago, ever so long ago, the universe as we know it took shape. Particles swirled and condensed, coalescing into stars and planets and galaxies. Later, billions of years later, on one tiny blue dot in the vastness of space, life began. It was primitive at first, but developed into complex structures that could think and love, could look up and try to touch the stars. One day, if humanity makes it through the growing pains where our strength outpaces our wisdom, maybe we’ll do that. Humanity will spread across the universe, flourishing across all those stars and planets and galaxies. And I’m here to play my part, however small, in making that happen.

But. You know. That’s more of a long-term goal. For now, I’ll settle for organizing my ideas and getting feedback.