Josh Haas's Web Log

Consequentialism is logically indefensible


I had previously written about my intuitive objections to utilitarianism as a moral framework, mainly that it’s premised on there always being a right answer, which is neither obviously true nor obviously desirable.

I think those objections arise for me because at its core, consequentialism is a philosophically indefensible idea. Here’s my attempt at a charitable statement of a minimal consequentialist position:

1. Some possible states of the physical universe are better or worse than other possible states.
2. Let’s say choice A leads to state of the universe A, and choice B leads to state of the universe B. If state A is better than state B, then choice A is better than choice B.

Or less formally, the better-ness or worse-ness of a choice comes from the better-ness or worse-ness of the choice’s consequences.
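In symbols (my notation, not anything from the original statement): write $s(A)$ for the state of the universe that choice $A$ leads to, and $\succ$ for “is better than”. The minimal position is then the assumption that $\succ$ is a genuine ordering over possible states, plus the inference

$$s(A) \succ s(B) \implies A \succ B$$

where the $\succ$ on the left ranks states and the one on the right ranks choices.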

Consequentialism comes from a good set of intuitions, for the most part. The primary intuition, I think, is the desire to de-center the decision-maker in the realm of moral choice. What I mean by that is that human beings are selfish and emotional. They treat harm to themselves or people they care about as more important than harm to strangers. They get worked up by a photo of a cute kid on a charity brochure and donate, even when a competing charity with worse marketing materials is actually saving many more lives.

Consequentialism is an attempt to step back from this and say, look, let’s be rational and objective. Rather than letting things like cute photos of kids sway us, let’s actually evaluate the consequences of our actions.

Although the instinct to evaluate consequences more rationally is good, I think it’s an intellectual mistake to jump from that to treating better-ness and worse-ness as properties of states of the universe. I have three claims about this belief: first, that it’s an unprovable axiom; second, that as an axiom, it’s incoherent and arbitrary; and third, that it provides zero guidance for decision making.

First, the belief that some states of the physical universe are better or worse than others is, at best, an unprovable axiom in one’s system of belief. To see this, try to imagine an experiment that would prove that one possible state of affairs is better than another. You can’t: “good” and “bad” aren’t observable phenomena, at least not unless you already accept consequentialism as true. So, in a discussion between person A, who believes in consequentialism, and person B, who doesn’t, person A can’t say “look, you’re wrong, let me show you.” The best A can say is “I choose to adopt consequentialism as an axiom for making my decisions”.

That consequentialism is at best an axiom isn’t a damning critique; any imaginable system of decision-making will likely depend on some unprovable axioms. But it means that consequentialism has to be defended on its value as an axiom: does it represent some basic, obvious truth that makes sense to build on top of?

I would argue that it doesn’t, because as an axiom it’s incoherent and arbitrary. Any attempt to flesh out what it means for one state of the universe to be better or worse than another is doomed from the get-go.

Here’s a challenge for consequentialists. Imagine two universes that are identical except that in one, Suzy gets stung by a bee, and in the other, Sam stubs his toe. Which universe is better?

Let’s make this easy on you. First, let’s stipulate that you have no epistemic limits in examining these two universes: you can pause and rewind time, and you can determine the exact position and trajectory of every particle (screw Heisenberg anyway). You can measure the amount of cortisol in Suzy’s bloodstream, and inspect Sam’s brain on a neuron-by-neuron level. Second, I’ll give you an infinite amount of time to complete your analysis.

What’s your decision procedure for choosing which universe is better? How do you justify this decision procedure?

Here’s my prediction: for any decision procedure you come up with, I can come up with an alternate procedure that matches your justification equally well but gives a different answer.
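To make the worry concrete, here’s a toy sketch of my own (the numbers and both rules are invented, not taken from the scenario above). Both procedures operationalize the same plausible-sounding justification, “less suffering is better”, from purely physical readouts, yet they rank the two universes oppositely:

    # Toy sketch (invented numbers): two rules sharing one justification,
    # "less suffering is better", operationalized in two defensible ways.
    # Hypothetical pain-correlated neural readouts, sampled over time:
    bee_sting = [9, 7, 3, 1]          # Suzy: sharp spike, fast decay
    stubbed_toe = [4, 4, 4, 4, 4, 4]  # Sam: duller but sustained

    def total_pain(readings):
        # Reading 1: suffering accumulates, so sum it over time.
        return sum(readings)

    def peak_pain(readings):
        # Reading 2: what matters is the worst single moment.
        return max(readings)

    for rule in (total_pain, peak_pain):
        bee, toe = rule(bee_sting), rule(stubbed_toe)
        winner = "bee-sting" if bee < toe else "stubbed-toe"
        print(f"{rule.__name__}: bee={bee}, toe={toe} -> the {winner} universe is better")

    # Output:
    # total_pain: bee=20, toe=24 -> the bee-sting universe is better
    # peak_pain: bee=9, toe=4 -> the stubbed-toe universe is better

Neither rule is less faithful to the justification than the other; the justification underdetermines the procedure.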

I’m confident in my prediction because I think the intuitions we have about “better” and “worse” intrinsically rely on subjectivity; there’s no mapping between them and the observable physical universe. You might be able to identify the patterns of neuron activation that lead to Suzy opening her mouth and saying “that hurts!”, but you can’t go from looking at neurons to an appreciation of how much pain she feels without a subjective observer.

To use the vocabulary of professional philosophy, our intuitions about “better” and “worse” are statements about qualia, but the relationship between qualia and the physical world is an unsolved and perhaps unsolvable problem. So consequentialism, at least insofar as it tries to remotely resemble human moral intuitions, has to solve one of the biggest unsolved problems in philosophy before it can get to a coherent account of itself. And even if we do somehow come up with a way around it, there’s no reason I’m aware of to think that there will emerge a single decision procedure for picking which world is better, as opposed to a range of justifiable, contradictory choices.

Generally when picking axioms, you want ones that can be cashed out to something definitive and concrete. The consequentialist assumption that some possible universes are better than others seems to fail this criterion.

Putting that aside, though, I think there’s a strong practical argument to be made that consequentialism is a useless philosophy in terms of real-world decision making. The argument starts from the observation that we can only meaningfully predict the consequences of our actions a short distance into the future. If I give Suzy a flower, I might be able to reasonably predict that this will make her feel good. But I might not be able to predict that when she takes a detour on her way home to get a vase for it, she’ll get hit by a car. Decisions have unexpected consequences, and what seems like something that will lead to a better universe might very easily lead to a worse one.

What most consequentialists presumably do is acknowledge their limited ability to predict the future and optimize for what they think will be the best universe. Here’s the kicker, though: there’s no reason to think that they do any better at this than chance.

Why? The lifetime of the universe is long. You might be able to accurately predict things a few minutes into the future, and in some cases even years. But millions or billions of years? Not so much. Even if you successfully optimize such that the five years after your decision are better than they would have been had you chosen otherwise, that’s a rounding error in the total lifespan of the universe.
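To put a rough number on “rounding error” (my own back-of-the-envelope figures, not the post’s): the universe is already about $1.4 \times 10^{10}$ years old, so a five-year window is only

$$\frac{5}{1.4 \times 10^{10}} \approx 3.6 \times 10^{-10}$$

of its history so far, and a vanishingly smaller fraction of whatever time remains.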

If you take consequentialism seriously, an event that happens 100,000 years from now is just as important as an event that happens tomorrow. So, I don’t think a serious consequentialist has anything to say to the guy who goes “well, I’m going to do whatever I like, because in the grand scheme of things, I’m just as likely to make a better universe than a worse one”. For all practical purposes, the dude is right.

This is ironic, because the motivation behind consequentialism was to remove the bias of the human actor: to strip away the cognitive limitations that lead us not to donate to a charity because its effects are distant from us. But if you really take the long-term, big-picture view, all human action becomes essentially meaningless.

I’m not a nihilist. I happily make choices, and I imbue those choices with moral weight in my own mind, and I feel rationally justified in doing so. What I don’t do, though, is pretend that I can predict the consequences of my actions, or that those consequences are what gives my choices weight. Rather, I embrace the subjectivity of my decisions; I believe that normativity is inherently a subjective phenomenon, that it’s the predicament of having to make a decision that brings questions like right and wrong, good and bad into the world, and that the rightness and wrongness of a decision can’t be separated out from the relationship between the decision-maker and the world.

Why? That’s a longer story, for another day…

Written by jphaas

May 22nd, 2014 at 4:56 pm
