Debugging code, McKinsey-style
The consulting firm McKinsey teaches all its analysts a concept called “Mutually Exclusive, Collectively Exhaustive” (MECE). It’s a way of breaking down a problem into a set of smaller problems that guarantees that the smaller problems contain the answer to the larger one.
Mutually exclusive means: each small problem doesn’t overlap with others; they can be analyzed independently.
Collectively exhaustive means: when you sum the small problems up, they equal the larger problem.
The classic example is business profit. Suppose John’s apple farm is no longer making as much money as it used to, and you want to know why. Read the rest of this entry »
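The full breakdown is behind the cut, but the shape of it looks something like this sketch, with made-up numbers standing in for John's farm (the revenue/cost split and every figure here are illustrative assumptions, not real data):

```python
# A minimal MECE sketch with made-up numbers: profit splits into revenue and
# costs (mutually exclusive, collectively exhaustive), and each branch can be
# compared year-over-year on its own to localize the problem.

last_year = {"units_sold": 10_000, "price_per_unit": 2.00,
             "fixed_costs": 8_000, "variable_cost_per_unit": 0.80}
this_year = {"units_sold": 9_000, "price_per_unit": 2.00,
             "fixed_costs": 8_000, "variable_cost_per_unit": 1.10}

def breakdown(farm):
    revenue = farm["units_sold"] * farm["price_per_unit"]
    costs = farm["fixed_costs"] + farm["units_sold"] * farm["variable_cost_per_unit"]
    return {"revenue": revenue, "costs": costs, "profit": revenue - costs}

then, now = breakdown(last_year), breakdown(this_year)
for branch in ("revenue", "costs", "profit"):
    delta = now[branch] - then[branch]
    # Comparing each branch in isolation tells you where the drop came from.
    print(f"{branch:>8}: {then[branch]:>9.2f} -> {now[branch]:>9.2f}  ({delta:+.2f})")
```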
Can I put “meta-ethics” in a post title and have anyone read it?
I guess by writing this post, I’m making this question non-rhetorical.
Anyway, yes, this is about meta-ethics, sorry. If ethics asks “Why is stealing wrong?”, meta-ethics asks “What does it mean to say that stealing is wrong?” I.e., it’s like ethics, but meta, yeah?
So lately I’ve been bashing on utilitarianism and its daddy-philosophy, consequentialism. Smashing shoddy philosophy is fun, but it’s a lot easier to tear things down than to build them up. So I feel like it’s only fair to say what I believe in. Read the rest of this entry »
Consequentialism is logically indefensible
I had previously written about my intuitive objections to utilitarianism as a moral framework, mainly that it’s premised on there always being a right answer, which is something that’s neither obviously true nor desirable.
I think those objections arise for me because at its core, consequentialism is a philosophically indefensible idea. Here’s my attempt at a charitable statement of a minimal consequentialist position:
1. Some possible states of the physical universe are better or worse than other possible states
2. Let’s say choice A leads to state of the universe A, and choice B leads to state of the universe B. If state of the universe A is better than state of the universe B, choice A is better than choice B.
Or less formally, the better-ness or worse-ness of a choice comes from the better-ness or worse-ness of the choice’s consequences. Read the rest of this entry »
Should all technology be open-source?
In my last post, I wrote about frameworks for seeing the world that have lost their transformative power.
Here’s one that hasn’t.
I’m a fan of the open-source software movement. Open-sourcing software means releasing it under a license that lets others freely use, modify, and build on top of it. When the movement started, this was a deeply counterintuitive idea. Software was and is valuable; people’s livelihoods depend on selling it. Giving it away, for free, with no strings attached seemed as crazy as throwing a stack of hundred-dollar bills out the window of your car.
Since the movement started in the late 90s, open source has moved from a fringe practice to the lifeblood of the entire software ecosystem. Pretty much every new software company builds on top of years and years’ worth of open-source code. Meanwhile, programming has become increasingly lucrative as a profession, and I would argue that that’s because of, not in spite of, open source: an hour spent by a programmer today is worth hundreds of times more than an hour spent by a programmer twenty years ago, because today that programmer is building on top of twenty years of open-source code. Read the rest of this entry »
So You Say You Want a Revolution
From Robespierre the Incorruptible, Robespierre the Daemonic:
But The Gleaming Vision and False Consciousness are two of the most crucial tools in the Revolutionary’s toolbox. I think that the tepid nature of much current Leftist writing (when it isn’t just disappearing entirely into theory) owes to the lack of a forceful (coercively so) positive future vision, and the complementary near-myopic focus on critique. …
Without a Gleaming Vision, and the accusations of False Consciousness to level at those who reject the Gleaming Vision, critique only serves the purpose of establishing internal purity tests, one-upping dialogic opponents, and getting tenure or magazine posts. Allusions to Gleaming Visions remain steadfastly vague, whether you are reading Slavoj Zizek, Naomi Klein, Silvia Federici, or Antonio Negri. While they are hectoring in their criticism of capitalism’s blatant faults, they are fuzzy on the details of its successor–and thus the need for revolution rather than reform is not clear. Thomas Piketty’s surprisingly modest solutions in Capital in the 21st Century–a global wealth tax, but that’s about it–drastically separate him from the radical crowd. In The Nation, Timothy Shenk half-heartedly carps about Piketty’s incrementalism while making only the fuzziest motions at “a much richer set of possibilities” and “a more promising alternative” for the future. He doesn’t bother to say what they might be. That won’t cut it.
Yeah.
This is the Occupy Wall Street circumstance: something is wrong, we don’t know how to fix it. Read the rest of this entry »
Utilitarianism tries too hard to be right
Utilitarianism, broadly speaking, is the idea that the correctness of a decision can be evaluated by summing up its consequences for each individual. “The greatest good for the greatest number.”
Utilitarianism is a very clean philosophy. To figure out if something is right or wrong, all you have to do is do the math. If you’re a utilitarian, there are no moral dilemmas, just practical dilemmas: we’re limited by our ability to predict consequences, but the better we are at predicting and optimizing, the more right we become.
Utilitarianism is popular in public policy and economic spheres, precisely because it dissolves moral ambiguity, and creates objective grounds for making decisions. It’s hard to argue against someone making utilitarian arguments without being armed with better statistics.
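To make the “do the math” part concrete, here is a toy version of the utilitarian decision rule. The options, people, and utility numbers are invented purely for illustration, and whether numbers like these exist at all is exactly the point of contention:

```python
# Toy utilitarian calculus: score each option by summing its consequences for
# every affected person, then pick the option with the highest total.
# The options and utility numbers below are invented for illustration.

options = {
    "build the park":        {"alice": +3, "bob": +1, "carol": -1},
    "build the parking lot": {"alice": -2, "bob": +4, "carol": +2},
}

def total_utility(consequences):
    return sum(consequences.values())

for name, consequences in options.items():
    print(f"{name}: total utility {total_utility(consequences)}")

# The "right" choice, on this view, is simply the arithmetic maximum.
best = max(options, key=lambda name: total_utility(options[name]))
print("utilitarian choice:", best)
```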
What seems secure and objective, though, is built on shaky ground. What really is “good” for another person? Is it quantifiable and comparable? Is there an objective, right answer?
Let’s take a hard question: end-of-life euthanasia for terminal illnesses. I don’t think a utilitarian can discuss euthanasia without totally missing the point. From a utilitarian standpoint, we have to assign some value to the person being alive, some value to their suffering, some value to the flashes of positive emotions they feel in the midst of their pain, and add it up to get an answer for whether or not euthanasia is justified. Presumably, for a utilitarian, there is some correct value for those numbers.
One patient decides to embrace euthanasia, because she sees accepting her own mortality and leaving the world with dignity as a victory.
Another patient chooses to continue living as long as possible, because she sees enduring the suffering and fighting for life, even in the face of overwhelming odds, as meaningful.
Is one of those patients wrong? Did she miscompute her own utility function?
I think the best way of looking at it is, there is no right answer. Life isn’t about getting a 100% correct score on the test. The idea that for every decision there’s an objectively right choice is a huge philosophical leap of faith, and to me, it’s a deeply unpalatable leap. I see utilitarianism as a form of washing one’s hands of personal responsibility: instead of having to make real choices and accept them, you can say, “I was just doing the best I could to pick the right answer.” It’s a philosophy for moral cowards.
To be fair, I think many people gravitate towards utilitarianism not because they want to give up personal responsibility for their decisions, but because they think it’s important to fight for a universe where other people matter, to avoid solipsistic moral systems where the important thing is feeling good about your decisions rather than taking into account the effect they have on other people. People become consequentialists because they think consequences matter.
And they do matter. But the reason they matter is precisely because we care about relationships with other people. If I make a decision that hurts a friend, that decision isn’t wrong because the friend got hurt. It’s wrong to the extent that I didn’t take my friend’s preference not to be hurt into account. If she gives me permission to hurt her, because she’s willing to accept the pain in the service of something else that she values, then hurting her isn’t wrong at all. Not because of a utilitarian “ends-justifying-the-means” thing, but because the decision preserves the basis of our relationship, which is that we make decisions in collaboration with one another.
I don’t believe in trying to do the right thing, because I don’t believe in “right”. What I do believe in is taking responsibility for choices, and choosing as a “we” instead of as an “I”. If there is a fundamental moral imperative, I believe that imperative is to expand “we” as much as possible. Utilitarianism hurts the “we”, by reducing others to a preference function, as if people’s preferences don’t change, as if I can’t just go over to the people whom my decisions affect and actually talk to them.
Utilitarianism taken to its logical extreme is fundamentally dehumanizing: it values people based on what they experience, rather than what they choose. It makes questions like this one — is it better for one person to be tortured, or for an unimaginably large number of people to suffer tiny inconveniences? — seem like reasonable things to ask. Here, the question itself is what does the damage: it conveniently removes the possibility of consulting with the torture victim and the larger society about what they think, what they value.
So: utilitarianism, not so much. It seems simple and clean, but that simplification process throws out the baby with the bathwater: the actual human relationships and collaborative decision-making process that are why we care about our impact on others in the first place.
Edit: I wrote a follow-up post that explores my objections to consequentialism from a more rigorous, logical perspective.
Co-op capitalism?
So I read this: The Minimum Wage Worker Strikes Back
Nothing really new or surprising, but it makes it very vivid how unsustainable working minimum wage in fast food is, and how hard a trap it is for people to get out of.
Articles like this demand a response, because pretty much every day I take advantage of goods and services provided by companies that are offering their workers a similarly shitty bargain.
So the main reactions people seem to have are “raise the minimum wage” and “unionize.” And then political opponents respond that this is bad for business, destroys jobs, etc. Debates like this frustrate me because they’re implicitly adversarial: are you pro-labor or pro-capital? Pick a side, call the other side names.
I’m trying to imagine what a world where the labor / capital dichotomy doesn’t exist looks like. Read the rest of this entry »
A modest proposal to fix science
Here’s the proposal: separate hypothesis generation from hypothesis testing.
My inspiration is this great post which demonstrates how remarkably easy it is for experimenters to produce results that support their hypothesis, regardless of whether their hypothesis is correct.
The idea: outsource all experimentation to special labs. You send them a hypothesis and a check, they send you a “disproved” or “could not disprove” paper; the lab signs, you sign, and it gets published in Nature.
Right now research professors both generate the theories and perform the experiments. In this world, they just generate the theories, and then use their grant money to pay other people to test them.
Also, companies and private individuals could use the labs, which would both democratize and standardize research.
The labs then become specialists in performing strictly controlled, statistically valid research at an efficient price point. Since they become basically science factories, it’s much easier to audit whether the experiments they are doing are valid, and because they’re doing thousands of similar experiments, they should be able to drive the price down… it’s more like running a McDonald’s than running a research group.
Meanwhile, it frees up professors to actually focus on generating domain-specific insights, instead of forcing them to be experts on statistics and valid experimental techniques.
EDIT: A couple of people have pointed out that it’s the process of experimentation / getting into the nitty-gritty of things that leads to hypotheses, which is a great point. So let me amend the above: professors and such can and should still do experimentation in their own labs. But if they want to then get a conclusion published in a peer-reviewed journal, they should outsource the official verification work to an external lab.
Freedom and (Sub)-Culture
I’ve had the weird and lonely experience lately of peering into a lot of different sub-cultures while feeling like I don’t really belong to any of them. Some of this is through in-person friendships, some of this is me following various people on the internet, some is general cultural osmosis.
Here are, in no particular order, some of the cultures I’ve been observing:
- The neo-Marxist Brooklyn literary scene (I read Full Stop), where talking about “class consciousness” and quoting Lenin is apparently still something one does.
- The Singularity Institute, particularly their literary masterpiece (I say this mostly un-ironically) Harry Potter and the Methods of Rationality.
- The Homestuck fandom (another literary masterpiece), and fandom culture in general, especially as it inhabits web comics and Tumblr.
- The maker movement and 3d printing worlds
- The technology startup world, including Paul Graham-ia, New York’s local VC culture, the hipster-fashion-tech scene (this is where my office is), and the lean startup movement
- The feminist / social justice movement, which venn-diagrams with the Brooklyn literary scene, the tech startup world, and the fandom community.
- The new age personal development world
- The Ivy League alumni finance / consulting network (and my former employer Bridgewater which is a sub-culture in its own right).
- The independent game developer community (or a sub-section of it, anyway), mostly because I started following the developer of Analogue: A Hate Story (lit. m.p.) on Twitter
- The Brooklyn-based urban / literary exploration scene (such as Atlas Obscura)
- The evidence-based athleticism-oriented (I’m making this phrase up, there might be a better one) fitness movement as embodied by Fitocracy
- This guy, who maybe isn’t really a sub-culture, but his writing has been making me think about all of this and he doesn’t fit into any of the above, so he gets his own bullet point.
I’m attracted to at least some aspect of everything on this list (and turned-off by aspects of many of them as well). Weirdly, some of these sub-cultures virulently despise each other, such as the first two I mentioned. I find this confusing and kind of alarming.
Consensus reality? There’s no consensus. My universe does not have a paper of record.
I’ve been wrestling with a number of questions related to this situation:
- How can I have a meaningful sense of community when I feel that every community embraces only a partial truth?
- What will be the power relationship between these sub-cultures and the mainstream (people who read the New York Times, who know the names of celebrities, who can meaningfully identify with national politics)?
- To what degree are sub-cultures mappable and legible, and how would one use tools like Twitter graph analysis, etc. to do so? (A rough sketch follows this list.)
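Concretely, one version of that sketch: build a who-follows-whom graph and look for densely connected clusters. The handles and follow edges below are invented, and pulling the real graph from the Twitter API is left out entirely; the clustering step is the part that matters.

```python
# Sketch of "Twitter graph analysis" for sub-culture mapping: invented handles
# and follow edges stand in for data you would pull from the Twitter API.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

follows = [
    ("hpmor_fan", "lesswrong_reader"), ("lesswrong_reader", "ssc_reader"),
    ("full_stop_editor", "brooklyn_poet"), ("brooklyn_poet", "zine_maker"),
    ("maker_bot", "reprap_fan"), ("reprap_fan", "hpmor_fan"),
]

G = nx.Graph()
G.add_edges_from(follows)

# Modularity-based clustering finds groups that are denser inside than out,
# which is a crude but legible proxy for "sub-culture".
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"cluster {i}: {sorted(community)}")
```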
To put these questions into context, I think that mainstream western culture is dying. Its political institutions have lost the ability to legitimize; its cultural productions are banal; its economic health is uncertain at best, and its modes of production and consumption are ecologically unsustainable. In contrast, I think there’s incredible vitality in all the sub-cultures I listed, although I have no idea if that vitality is constructive or destructive (I guess the polite term for that ambiguity these days is “disruptive”).
So — this matters. I’ve picked my side, in a sense: I’m on Team Internet. I’m self-employed and starting a movement, vs taking a job at some mainstream institution. But I’m still pretty lost re: how I want to navigate this cultural landscape.
Anyway, when lost, make a map. I’m working on a General Theory of Sub-Culture, Values, and Agency. I have a couple rough-draft principles. But it’s bed-time, so I’ll save them for another blog post.
Build Your Own Tech
New post by me on the Bubble blog:
So the real secret reason I started Bubble is that I wanted a better online to-do list.
Have you ever googled for “to-do software” before?
There’s a lot of it. Lots and lots and lots of to-do lists. I think there are entire countries where the whole population does nothing but make to-do list software.