Josh Haas's Web Log

Utilitarianism tries too hard to be right

Utilitarianism, broadly speaking, is the idea that the correctness of a decision can be evaluated by summing up its consequences for each individual. “The greatest good for the greatest number.”

Utilitarianism is a very clean philosophy. To figure out whether something is right or wrong, all you have to do is do the math. If you’re a utilitarian, there are no moral dilemmas, only practical ones: you’re limited by your ability to predict consequences, but the better you get at predicting and optimizing, the more right you become.

Utilitarianism is popular in public policy and economics precisely because it dissolves moral ambiguity and creates objective grounds for making decisions. It’s hard to argue against someone making utilitarian arguments unless you’re armed with better statistics.

What seems secure and objective, though, is built on shaky ground. What really is “good” for another person? Is it quantifiable and comparable? Is there an objectively right answer?

Let’s take a hard question: end-of-life euthanasia for terminal illnesses. I don’t think a utilitarian can discuss euthanasia without totally missing the point. From a utilitarian standpoint, we have to assign some value to the person being alive, some value to their suffering, some value to the flashes of positive emotions they feel in the midst of their pain, and add it up to get an answer for whether or not euthanasia is justified. Presumably, for a utilitarian, there is some correct value for those numbers.

One patient decides to embrace euthanasia, because she sees accepting her own mortality and leaving the world with dignity as a victory.

Another patient chooses to continue living as long as possible, because she sees enduring the suffering and fighting for life, even in the face of overwhelming odds, as meaningful.

Is one of those patients wrong? Did she miscompute her own utility function?

I think the best way of looking at it is that there is no right answer. Life isn’t about getting a 100% correct score on the test. The idea that for every decision there’s an objectively right choice is a huge philosophical leap of faith, and to me, it’s a deeply unpalatable one. I see utilitarianism as a way of washing one’s hands of personal responsibility: instead of having to make real choices and accept them, you can say, “I was just doing the best I could to pick the right answer.” It’s a philosophy for moral cowards.

To be fair, I think many people gravitate toward utilitarianism not because they want to give up personal responsibility for their decisions, but because they think it’s important to fight for a universe where other people matter: they want to avoid solipsistic moral systems in which the important thing is feeling good about your own decisions rather than taking into account the effect they have on other people. People become consequentialists because they think consequences matter.

And they do matter. But the reason they matter is precisely because we care about relationships with other people. If I make a decision that hurts a friend, that decision isn’t wrong because the friend got hurt. It’s wrong to the extent that I didn’t take my friend’s preference not to be hurt into account. If she gives me permission to hurt her, because she’s willing to accept the pain in the service of something else that she values, then hurting her isn’t wrong at all. Not because of a utilitarian “ends-justifying-the-means” thing, but because the decision preserves the basis of our relationship, which is that we make decisions in collaboration with one another.

I don’t believe in trying to do the right thing, because I don’t believe in “right”. What I do believe in is taking responsibility for choices, and choosing as a “we” instead of as an “I”. If there is a fundamental moral imperative, I believe that imperative is to expand “we” as much as possible. Utilitarianism hurts the “we”, by reducing others to a preference function, as if people’s preferences don’t change, as if I can’t just go over to the people whom my decisions affect and actually talk to them.

Utilitarianism taken to its logical extreme is fundamentally dehumanizing: it values people based on what they experience, rather than what they choose. It makes questions like this one — is it better for one person to be tortured or an unimaginably large number of people to suffer tiny inconveniences — seem like reasonable things to ask. Here, the question itself is what does the damage: it conveniently removes the possibility of consulting with the torture victim and the larger society about what they think, what they value.

So: utilitarianism, not so much. It seems simple and clean, but that simplification throws out the baby with the bathwater: the actual human relationships and collaborative decision-making that are the reason we care about our impact on others in the first place.

Edit: I wrote a follow-up post that explores my objections to consequentialism from a more rigorous, logical perspective.

Written by jphaas

May 19th, 2014 at 5:22 pm
