Archive for May, 2014
Responding to hate
So a writer at Jezebel collected a day’s worth of transcripts from a PUAHate chat group, the community that helped develop and reinforce the worldview of the UCSB shooter.
I really dislike this piece of reporting. I think it’s valuable to shed light on dark corners of the internet like that, but when the writer says things like “From an observer’s perspective, PUAHate is a group of self-pitying babies who believe they’re entitled to women who are much more attractive than they are,” she is saying exactly what all of these guys, in their distorted worldview, would expect her to say.
The kind of hate on display by these guys comes from a place of deep lack of self-worth, which they then protect themselves from by inventing them-against-the-world stories. Read the rest of this entry »
What we’re trying to do with Bubble
I wrote a new post on the Bubble blog explaining what we’re trying to accomplish:
Our goal is to erase the distinction between software use and software creation.
Good software, when used, is:
* Helpful. It empathizes with the user’s point of view, understands what they want, and lets them do it with as little effort as possible. My email provider, Gmail, tries to figure out what mail I actually want to read. It marks things as spam for me, it guesses which mail is important, and it lets me hide emails I don’t want to see without making me delete them.
* Friendly. It communicates a desire to be the user’s ally. Have you ever used TurboTax Online? The users of that software are doing a stressful, hateful chore. TurboTax is their friend, holding their hand as they go through it. It goes out of its way to comfort and reassure. It’s not a perfect piece of software, but it’s pretty damn impressive.
* Empowering. Good software lets its users work miracles. Love it or hate it, what would the world look like without Microsoft Word? Fifty years ago, typesetting a document for printing was a professional trade. It took years to master the technologies behind setting margins, picking colors, preparing fonts of different sizes. Now, a five-year-old can do in ten minutes what used to take an expert hours or days.
That’s what using good software is like. Creating software, on the other hand…
Read the full post here!
Instability and the unknown
What if you’re a good cancer cell?
Selfless, considerate, hard-working, supportive of your friends and neighbors. Good. But your friends and neighbors are also cancer cells, and while you may be doing your part to make your community grow and prosper, your community is slowly killing the larger world that you’re a part of.
I just read this talk on how Silicon Valley is creating a dystopian internet. It’s a disturbing read. Some parts of it I viscerally agree with (the venture-capital ecosystem being toxic, and big tech companies having too much power) and other parts I viscerally disagree with (that data collection is a bad thing, and that governments should play a role in deciding which data companies can keep). My biggest take-away, though, is uncertainty. I don’t think I have the information or the wisdom to know how all these forces will play out in the long term, and I don’t trust people who think that they do.
The scary thing is that this is my industry. This is subject matter I’ve spent thousands of hours thinking about and working on, and I still don’t know what “good” looks like for the system. I don’t think anyone really knows.
Debugging code, McKinsey-style
The consulting firm McKinsey teaches all its analysts a concept called “Mutually Exclusive, Collectively Exhaustive” (MECE). It’s a way of breaking down a problem into a set of smaller problems that guarantees that the smaller problems contain the answer to the larger one.
Mutually exclusive means: each small problem doesn’t overlap with others; they can be analyzed independently.
Collectively exhaustive means: when you sum the small problems up, they equal the larger problem.
The classic example is business profit. Suppose John’s apple farm is no longer making as much money as it used to, and you want to know why. Read the rest of this entry »
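As a toy sketch of how that MECE decomposition might look in code (the numbers and function names below are my own illustration, not from the post):

```python
# Toy MECE breakdown of "why is John's apple farm making less money?"
# Illustrative data only; the shape of the breakdown is the point.
farm = {
    2013: {"apples_sold": 10_000, "price": 0.50, "fixed_costs": 2_000, "variable_costs": 1_500},
    2014: {"apples_sold": 10_000, "price": 0.50, "fixed_costs": 2_000, "variable_costs": 2_800},
}

def revenue(year):
    # Mutually exclusive with costs; together, revenue and costs fully determine profit.
    return farm[year]["apples_sold"] * farm[year]["price"]

def costs(year):
    # Costs split again, MECE-style, into fixed + variable.
    return farm[year]["fixed_costs"] + farm[year]["variable_costs"]

def profit(year):
    return revenue(year) - costs(year)

# "Debugging" the profit drop means checking each piece independently:
for piece in (revenue, costs, profit):
    print(f"{piece.__name__}: 2013={piece(2013):,.0f}  2014={piece(2014):,.0f}")
# Revenue is flat and costs went up, so the next breakdown happens inside costs.
```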
Can I put “meta-ethics” in a post title and have anyone read it?
I guess by writing this post, I’m making this question non-rhetorical.
Anyway, yes, this is about meta-ethics, sorry. If ethics is “Why is stealing wrong?”, meta-ethics is “What does it mean to say that stealing is wrong?” I.e., it’s like ethics, but meta. Yeah?
So lately I’ve been bashing on utilitarianism and its daddy-philosophy, consequentialism. Smashing shoddy philosophy is fun, but it’s a lot easier to tear things down than to build them up. So I feel like it’s only fair to say what I believe in. Read the rest of this entry »
Consequentialism is logically indefensible
I had previously written about my intuitive objections to utilitarianism as a moral framework, mainly that it’s premised on there always being a right answer, which is something that’s neither obviously true nor desirable.
I think those objections arise for me because at its core, consequentialism is a philosophically indefensible idea. Here’s my attempt at a charitable statement of a minimal consequentialist position:
1. Some possible states of the physical universe are better or worse than other possible states
2. Let’s say choice A leads to state of the universe A, and choice B leads to state of the universe B. If state of the universe A is better than state of the universe B, choice A is better than choice B.
Or less formally, the better-ness or worse-ness of a choice comes from the better-ness or worse-ness of the choice’s consequences. Read the rest of this entry »
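One way to write that second premise symbolically (my notation, not the post’s): let s(A) be the state of the universe that choice A leads to, and let ≻ be the “better than” relation on states that premise 1 says exists.

```latex
% Minimal formalization of the consequentialist premise (my own notation, a sketch).
% s(A) = the state of the universe that choice A leads to
% x \succ y = "state x is better than state y"
\[
  s(A) \succ s(B) \;\Rightarrow\; \mbox{choice } A \mbox{ is better than choice } B
\]
% The ranking of choices is inherited entirely from the ranking of their outcomes.
```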
Should all technology be open-source?
In my last post, I wrote about frameworks for seeing the world that have lost their transformative power.
Here’s one that hasn’t.
I’m a fan of the open-source software movement. Open-sourcing software means releasing it under a license that lets others freely use, modify, and build on top of it. When the movement started, this was a deeply counterintuitive idea. Software was and is valuable; people’s livelihoods depend on selling it. Giving it away, for free, with no strings attached seemed as crazy as throwing a stack of hundred-dollar bills out the window of your car.
Since the movement started in the late 90s, open source has moved from a fringe practice to the lifeblood of the entire software ecosystem. Pretty much every new software company builds on top of years and years’ worth of open-source code. Meanwhile, programming has become increasingly lucrative as a profession, and I would argue that that’s because of, not in spite of, open source: an hour spent by a programmer today is worth hundreds of times more than an hour spent by a programmer twenty years ago, because today that programmer is building on top of twenty years of open-source code. Read the rest of this entry »
So You Say You Want a Revolution
From Robespierre the Incorruptible, Robespierre the Daemonic:
But The Gleaming Vision and False Consciousness are two of the most crucial tools in the Revolutionary’s toolbox. I think that the tepid nature of much current Leftist writing (when it isn’t just disappearing entirely into theory) owes to the lack of a forceful (coercively so) positive future vision, and the complementary near-myopic focus on critique. …
Without a Gleaming Vision, and the accusations of False Consciousness to level at those who reject the Gleaming Vision, critique only serves the purpose of establishing internal purity tests, one-upping dialogic opponents, and getting tenure or magazine posts. Allusions to Gleaming Visions remain steadfastly vague, whether you are reading Slavoj Zizek, Naomi Klein, Silvia Federici, or Antonio Negri. While they are hectoring in their criticism of capitalism’s blatant faults, they are fuzzy on the details of its successor–and thus the need for revolution rather than reform is not clear. Thomas Piketty’s surprisingly modest solutions in Capital in the 21st Century–a global wealth tax, but that’s about it–drastically separate him from the radical crowd. In The Nation, Timothy Shenk half-heartedly carps about Piketty’s incrementalism while making only the fuzziest motions at “a much richer set of possibilities” and “a more promising alternative” for the future. He doesn’t bother to say what they might be. That won’t cut it.
Yeah.
This is the Occupy Wall Street circumstance: something is wrong, we don’t know how to fix it. Read the rest of this entry »
Utilitarianism tries too hard to be right
Utilitarianism, broadly speaking, is the idea that the correctness of a decision can be evaluated by summing up its consequences for each individual. “The greatest good for the greatest number.”
Utilitarianism is a very clean philosophy. To figure out if something is right or wrong, all you have to do is do the math. If you’re a utilitarian, there are no moral dilemmas, just practical dilemmas: we’re limited by our ability to predict consequences, but the better we are at predicting and optimizing, the more right we become.
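Spelled out as a formula (my own gloss on “do the math”, not something from any particular utilitarian text): if u_i(A) is the utility that outcome A delivers to person i, the recipe is to sum and maximize.

```latex
% The utilitarian calculus, roughly: total welfare is the sum of individual utilities,
% and the "right" choice is whichever option maximizes that sum.
\[
  U(A) = \sum_{i=1}^{n} u_i(A), \qquad A^{*} = \arg\max_{A} \, U(A)
\]
```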
Utilitarianism is popular in public policy and economic spheres precisely because it dissolves moral ambiguity and creates objective grounds for making decisions. It’s hard to argue against someone making utilitarian arguments without being armed with better statistics.
What seems secure and objective, though, is built on shaky ground. What really is “good” for another person? Is it quantifiable and comparable? Is there an objective, right answer?
Let’s take a hard question: end-of-life euthanasia for terminal illnesses. I don’t think a utilitarian can discuss euthanasia without totally missing the point. From a utilitarian standpoint, we have to assign some value to the person being alive, some value to their suffering, some value to the flashes of positive emotions they feel in the midst of their pain, and add it up to get an answer for whether or not euthanasia is justified. Presumably, for a utilitarian, there is some correct value for those numbers.
One patient decides to embrace euthanasia, because she sees accepting her own mortality and leaving the world with dignity as a victory.
Another patient chooses to continue living as long as possible, because she sees enduring the suffering and fighting for life, even in the face of overwhelming odds, as meaningful.
Is one of those patients wrong? Did she miscompute her own utility function?
I think the best way of looking at it is that there is no right answer. Life isn’t about getting a 100% correct score on the test. The idea that for every decision there’s an objectively right choice is a huge philosophical leap of faith, and to me, it’s a deeply unpalatable leap. I see utilitarianism as a form of washing one’s hands of personal responsibility: instead of having to make real choices and accept them, you can say, “I was just doing the best I could to pick the right answer”. It’s a philosophy for moral cowards.
To be fair, I think many people gravitate towards utilitarianism not because they want to give up personal responsibility for their decisions, but because they think it’s important to fight for a universe where other people matter, to avoid solipsistic moral systems where the important thing is feeling good about your decisions rather than taking into account the effect they have on other people. People become consequentialists because they think consequences matter.
And they do matter. But the reason they matter is precisely because we care about relationships with other people. If I make a decision that hurts a friend, that decision isn’t wrong because the friend got hurt. It’s wrong to the extent that I didn’t take my friend’s preference not to be hurt into account. If she gives me permission to hurt her, because she’s willing to accept the pain in the service of something else that she values, then hurting her isn’t wrong at all. Not because of a utilitarian “ends-justifying-the-means” thing, but because the decision preserves the basis of our relationship, which is that we make decisions in collaboration with one another.
I don’t believe in trying to do the right thing, because I don’t believe in “right”. What I do believe in is taking responsibility for choices, and choosing as a “we” instead of as an “I”. If there is a fundamental moral imperative, I believe that imperative is to expand “we” as much as possible. Utilitarianism hurts the “we”, by reducing others to a preference function, as if people’s preferences don’t change, as if I can’t just go over to the people whom my decisions affect and actually talk to them.
Utilitarianism taken to its logical extreme is fundamentally dehumanizing: it values people based on what they experience, rather than what they choose. It makes questions like this one — is it better for one person to be tortured or for an unimaginably large number of people to suffer tiny inconveniences — seem like reasonable things to ask. Here, the question itself is what does the damage: it conveniently removes the possibility of consulting with the torture victim and the larger society about what they think, what they value.
So: utilitarianism, not so much. It seems simple and clean, but that simplification process throws out the baby with the bathwater: the actual human relationships and collaborative decision-making that are why we care about our impact on others in the first place.
Edit: I wrote a follow-up post that explores my objections to consequentialism from a more rigorous, logical perspective.
Co-op capitalism?
So I read this: The Minimum Wage Worker Strikes Back
Nothing really new or surprising, but it makes it very vivid how unsustainable working minimum wage in fast food is, and how hard a trap it is for people to get out of.
Articles like this demand a response, because pretty much every day I take advantage of goods and services provided by companies that are offering their workers a similarly shitty bargain.
So the main reactions people seem to have are “raise the minimum wage” and “unionize”. And then political opponents respond that this is bad for business, destroys jobs, etc. Debates like this frustrate me because they’re implicitly adversarial: are you pro-labor or pro-capital? Pick a side, call the other side names.
I’m trying to imagine what a world where the labor / capital dichotomy doesn’t exist looks like. Read the rest of this entry »