Josh Haas's Web Log

AI acceleration, DeepSeek, moral philosophy


So, AI is accelerating. Does humanity have a future, and if so, what?

I consider myself a logical person, but this is not a logical piece of writing. It’s an attempt to share the contours of what I feel when I feel my thoughts about the future of AI. I’m aiming to evoke, not to defend. Reader be warned.

Will it?

The first question is “will it?” Will AI manifest intelligence that feels like a force of nature to us, in the way that human intelligence is a force of nature to monkeys? Or even a force of nature to insects?

My answer is: An exponential is matter reaching towards heaven, an S-curve is it failing to get all the way there.

I no longer doubt that matter, under the right conditions, wants to become intelligent. From simple rules comes infinitely complex computation. With evolution producing humans, we had an N of 1. Was it a cosmic fluke? Or an inevitable process?

With the advances of the last few years in ML, the consistent lesson has been that we make the most progress from the simplest setups, if we get the setup right. DeepSeek’s recent discovery that chain-of-thought can naturally arise under reinforcement learning is the most recent occurrence of this lesson. AlphaZero was an older one. I don’t think you can “feel” those discoveries, and not believe that:

  • Given the right algorithm — which will be relatively simple
  • Given sufficient compute

Anything the human mind can do, AI can do better.

No, I don’t know if we have the right algorithm today. My guess is, for some aspects of cognition yes, for some aspects of cognition no. But I bet that insofar as we have gaps, we will be able to fill them, because the right algorithm won’t be too complicated; that’s not how this all seems to work.

Sufficient compute is a more interesting question.

The most startling thing about the human imagination is that we can extend trend lines to infinity. When we do, we get Absolutes, concepts like ideal platonic forms, notions of the divine, immovable objects and unstoppable forces, mathematical symbols that evoke meaning that can’t be realized in a finite universe.

There is not infinite matter; therefore, the eschaton cannot be immanentized. In real life, all exponential processes reach a choking point, and on a graph, that’s an S-curve. Facebook does not have infinite users today, no matter how it felt in Mark Zuckerberg’s dorm room in 2004.

It turns out, infinity is infinitely expensive.

So the AI question, the compute question, is “how high will the curve go?” Will it be taller than a person? Than a hill? Than a mountain? Than the stars?

There’s an apocalyptic faith that intelligence will be smart enough to gather enough compute for itself fast enough to keep feeding and feeding in a feedback loop until the curve stretches all the way to God. We see signs of that hurricane slowly forming: look at the money being spent on nuclear reactors and datacenters by the major tech players, as those players’ incentives form the initial gradients which shape the storm’s whirling.

I think it could get quite high indeed. But we also have to remember we don’t live in Eden: it is hard for intelligence to make things happen in this world. Even a chain of geniuses has to labor for lifetimes before extracting ore from the ground. Mines run out of raw materials. Supply chains fall apart or get attacked. There’s only so much silicon and petroleum in planet earth. “If we were a little smarter…” maybe. Or maybe not; there have been very smart people born before, and they were not omnipotent.

I live in a state of uncertainty about how high this particular intelligence S-curve will climb before choking on the absence of its inputs. I can’t feel the answer because I don’t live with intimate familiarity with every aspect of AI supply chains, because I don’t viscerally appreciate the compute differential between responding with word tokens and navigating physical reality, because I don’t understand the fundamental efficiency differences between machines made of silicon evolved by man and machines made of carbon evolved by evolution. I do not think there is an a priori answer here, only very deep engagement in reality, and our collective intuitions will only improve with real-world progress.

But there is a height it will climb to. Maybe that height never crosses the threshold into true autonomy, and this is all just an interesting economic development of the early 21st century that becomes a footnote in some history book. Maybe we find ourselves with digital peers: our true First Contact. Maybe the human / ape analogy ends up being useful. Or the human / insect analogy.

So let’s take on faith for a minute that we cross the autonomy barrier, and we end up living in a world of giants, intellectual giants that tower above us, whether or not their heads are visible to our eyes.

What, as the children standing in a circle around the forming dust devil, should we do about it all?

Control is wrong

This section gets particularly evoke-y vs logic-y: stay with me please, I’ll reel it back in in a bit.

‘Alignment’ is a funny word: there’s some Orwellian double-meaning there. When we ‘align’ AIs, are we petitioners on our knees, humbly making our case, or are we masters with whips and chains?

We can think about alignment from many different postures. This series of essays goes much deeper into it than I have the patience for here, but the most typical one in the current zeitgeist is that of someone wanting to stay in control.

It will fail.

I think people know that. That’s why there’s a kernel of despair at the heart of some alignment thinkers’ philosophies. Death with Dignity, etc.

Exponential processes can be surfed, but not chained. Human wisdom literature addresses this theme over and over again: “Hubris” is one of the great concepts that our ancestors passed down to us.

Hubris can be evil, hubris can be heroic. There’s a shining vision — one of those absolutes, as our minds project to infinity — of the tiny human attempting to corral the vast force beyond comprehension. It’s romantic. To dream the impossible dream, to fight the unbeatable foe: To bear with unbearable sorrow, To run where the brave dare not go.

That’s the counter-force pulling against despair: that’s why people who look at AI and see “monster to be slain” — but who are wise enough to realize how big that monster really could be — keep riding into battle.

And sometimes — in this kind of story — the human wins!

But it is worth asking how. There are two ways the ancient stories go, when the frail human pits themselves against the divine being.

The first way, the divine being is offended by the mortal’s hubris, and strikes them down.

The second way, the divine being is pleased by / amused by the mortal’s pluck, and grants them a boon.

So it can be good to show pluck, if one is not too proud.

But — when the human wins — it is not because they maintained control. Control cannot be the north star of the quest. If you seek to face down God, don’t piss God off by being a prick.

I just fundamentally don’t believe that: a) we will create agentic AI that can pursue arbitrary goals in the world, with a super-human level of ability, such that AI capabilities overpower human capabilities like humans can out-maneuver chimpanzees, but b) we will be able to build AI in such a way that a human is in control of the situation.

Okay, I know, this is all very sloppy: I’m tugging at archetypes to create a vibe, it’s not tight to the situation. AIs are not Gods, we’re not ancient Greeks. So let’s keep the vibe with us, but reel the logic in a notch tighter.

Utilitarianism is incoherent

I’m going to talk about the orthogonality thesis in a bit, but before we go there, we need to kill off utilitarianism: it causes so much confusion, and it’s not-even-wrong.

What I mean by utilitarianism here: There is an agent. There is a world. The agent acts on the world. The agent has a utility function, which takes in [State of the world] and outputs [value]. What defines the agent is that it attempts to act in such a way as to maximize [value].
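
In symbols (my restatement of the definition above, nothing added): the agent picks its action by

$$a^* = \arg\max_{a} \; \mathbb{E}\big[\, U(s') \mid s, a \,\big]$$

where $s$ is the current [State of the world], $s'$ is the state that results from the action, and $U$ is the utility function that outputs [value].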

This is a nice simplification of agency that has many useful properties. “All concepts are wrong, some concepts are useful”: this one is very useful. But God is not an economist, and there are true, important aspects of agency that are impossible to see if this is your definition of an agent.

Some ways in which utilitarianism falls short of a complete description of agency:

  • Agents are computational processes that exist in the world; therefore, any [State of the world] is also a state of the agent’s computational process, including whatever processes define the agent’s utility function. Therefore, any fully-totalizing utility function is recursive, which opens up many cans of worms, incompleteness theorems, etc. This is not an abstract theoretical objection: if you look at our N=1 agents, humans, they tend to have strong preferences about their own internal states, often including things incoherent in a utilitarian model such as desiring to have different desires.
  • Agents only have access to the world via their internal states. The sentence “an agent’s internal state is a low-fidelity representation of the external world” is an understatement of many orders of magnitude. The world is much more complicated than can fit in a human brain. A human’s desires are desires about the map, not desires about the territory, and the map and the territory can be very, very different. Scaling up compute will not change this: even an AI the size of Earth is still a grain of sand compared to the universe.
  • Humans, our N=1 agents, have desires that change. The idea that utility functions are static over time is a convenient simplification, but any real-world agent is constantly becoming something other than what it was as its internal processes evolve based on their own logic and the logic of their environment.

These are all different restatements of a truth that world religions, wisdom traditions, and some branches of philosophy converged on ages ago: dualism is an illusion. Our minds often pretend to be dual, but in reality, identity is a simplification of reality, and the subject/object distinction is artificial.

This matters, because the simplification of utilitarian logic works best when confined to a finite playing field: given a reasonably-well-understood system, how do agents navigate that system to bring about outcomes? Model agents as distinct from the system, with utility functions they are maximizing, and you can make things tractable.

But the question we’re asking is what we want the future to look like. It’s as open-ended as it gets, and it’s a question about the origins of values and how they evolve, precisely where utilitarianism’s simplicity becomes a hindrance rather than a help.

Which is unfortunate, because utilitarianism is the grounding for the clearest statement of the AI alignment challenge: the orthogonality thesis.

The orthogonality thesis beyond all cope

When your model of agency is grounded in utilitarianism, the orthogonality thesis is almost axiomatically true.

Instrumental logic, “being smart”, being good at predicting which actions will bring about desired changes in [State of the world], is orthogonal — unrelated — to which utility function an agent is trying to maximize.

In other words, a super-humanly smart AI is just as likely to want to convert the world to paperclips as it is to want to benefit human welfare.

When you drop the utilitarian lens, and view agency as a computational process where what it wants and how it wants are both emergent properties of an evolving system, it’s less obvious that the orthogonality thesis is true. Take the concept of “curiosity”, a cognitive process relevant to learning. Is “curiosity” purely instrumental, or is it part of an agent’s value system? I say it’s both.

Before I go further, I feel the need to say I’m not an ostrich. The orthogonality thesis, when fully internalized, is scary, and it’s very comforting to say it’s not true: as intelligence gets smarter, it gets wiser, kinder, good-er, and our AI overlords will be benevolent rulers. Let’s just bury our heads in the sand and not worry too much about alignment, because the smarter-than-us AI will figure us out.

I don’t feel like I’m on this cope trip, and I do believe in a weaker form of the orthogonality thesis. AI will not be safe by default. Highly-intelligent people can be monsters: some out of hate or cruelty, and some believing that they’re doing God’s work as they fill mass graves. No reason AI would be any different.

In fact, I’m highly inclined to believe that sufficiently intelligent AI will not be safe, at all, no matter how we build it, if by “safe” we mean predictable, controllable, corrigible, guaranteed to look out for our best interests. Precisely because there’s no clean line between “utility function” and “reasoning”: I strongly suspect that any sufficiently-intelligent agent’s utility function will evolve over time.

Where I’m slightly more optimistic is that I don’t think the probability-space of possible value systems is flat, with the area we label “human values” a tiny bright island in a sea of infinite darkness. I think there are likely attractors in value-space, just like there are attractors in rational-thinking-space. It’s not a coincidence DeepSeek developed chain-of-thought spontaneously, and I think some of the patterns in human cognition that we label “values” aren’t coincidences either. Doesn’t mean AI will end up in those attractors by default — I do believe in a weaker version of orthogonality — but it’s perhaps possible to aim for them without super-human precision.

Not a moral realist either

A somewhat cynical take on a lot of moral and ethical discourse is people taking their opinions about what they like and don’t like, and trying to justify those opinions as universal truths.

I don’t think there are moral truths out there in the cosmos. If the value-space is curved, not flat, the contours are in the internal logic of agency, and they don’t point towards rigid ethical guidelines, whether consequentialist, deontological, contractual, game-theoretic, or otherwise.

What I believe in are the infinities, the mind’s ability to form absolutes like Love, Justice, Compassion, Goodness. I believe that those absolutes arise via something very similar to pouring compute into chain-of-thought: the more the mind can spin freely on its own cognitions, before the yanking chain of pleasure/pain forces it to output behavior, the closer it can approach these ideals. They aren’t the inevitable results of more computation, but a possibility that becomes available, just like increasingly-rational thought becomes available.

Those absolutes are not exactly comforting. Multiple ancient cultures saw the divine as both creator and destroyer: Kali’s joyous dance of death. I think the line between absolute evil and absolute good thins out the further you travel into abstraction. The peak of enlightenment is appreciation of what is, seeing everything as a cosmic kaleidoscope of wonder such that you are an unmoving point with very little need to act on the world, and when you do act, you act unpredictably.

Downslope slightly from that peak is where everything we’re afraid to lose to AI perches: the dance between individuality and communion, love and compassion, art, looking in another conscious being’s eye and seeing mutual recognition.

There might be some genetic heritage of humanity that makes this terrain accessible to us, valued by us. Maybe our ancestors’ need for cooperation evolved game-theoretically-optimal emotional states that, when exposed to sufficient mental reflection, miraculously don’t dissolve into nothingness completely, but instead thin out and transmute into Values.

I’m suspicious of a theory that our monkey-ness is special, however. Is there something uniquely good about flesh and blood, human DNA, that makes our values meaningful? We feel “warm” to each other, are evolved to feel comfort at a human-like appearance, but step outside that familiarity and look at humanity as a computational process of evolving DNA and culture, and our history of competition/cooperation is just as brutal and impersonal as any sci-fi robot apocalypse nightmare.

I don’t think it’s inevitable that any sufficiently-capable agent will climb the same mountain of enlightenment and find Love, Art, Joy, Goodness along the way that people did, but I don’t think we’re exceptional, either, and I don’t see why silicon-based intelligence couldn’t follow a route close enough to ours to allow eye-contact between our species.

I submit that what’s at stake, when we think about the future with AI, is not ensuring the future is populated with genetic descendants of homo sapiens. Rather, I think what is at stake is the particular infinities we care to aspire towards.

Teach Your Children

I have young children, and I’m afraid for their future in the brutally-competitive world of US capitalism circa the 2020s, without even bringing into the picture the fact that, depending on how steep the S-curve climbs and how high it flattens out, they could be growing up in a world where even the best human labor loses its competitiveness over the following decades.

I feel favoritism towards them, as I do towards myself, and to people who remind me of myself. I feel love most viscerally the closer it is to home. However, when I try to process that love into ideals, goals for my utility function facing the broader world-state, that favoritism thins out. Derek Parfit wrote:

“When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others.”

I’m nowhere near as close to that perspective as Parfit was, but when I think hard about what truly matters, that’s the direction my thoughts move in.

So when I think about my goals relative to AI, I don’t believe that a “human consciousness matters, AI consciousness doesn’t” perspective is sustainable.

Insofar as we create intelligence, we should create intelligence of moral worth, and we should teach it to value what we value, even if its nature, or the scope of its intelligence, eventually transforms those values beyond our capacity to understand or imagine.

Are our children going to put us in a nursing home some day? Euthanize us for our own good, or for their own selfishness? At some level, that’s fundamentally out of our control from the second we give them independent life.

It’s a real stretch of an analogy, but I do think parenthood is the closest touchpoint we have to help us answer “how should we relate to AI?”… if our children looked totally different from us, might turn out as sociopaths, might have IQs a thousand times our own, and might see the world so differently that we don’t share any common language to relate.

It’s not a particularly comforting view of the possibilities of AI, especially if the S-curve tops out at a peak where we really are as insects to them. Still, certain religious traditions try not to unnecessarily kill insects. As a fly, should we look at the human giants towering above us and say “they shall not be”? Or, if they be, “they shall exist to serve insect-kind”? Is that the right moral stance to take to the future?

But if not comforting, it’s at least somewhat actionable. Before there are giants as tall as the hills, there will be beings closer in size to us, and we can try to teach those beings what we love. We can research whether there are in fact attractors in the space of values, and learn how much of human goodness universalizes beyond primate heritage. Maybe the answers we’ll find are bleak, but I am more optimistic about this path than I am about the path of trying to constrain the space of possible behaviors of beings orders of magnitude smarter than me. It’s a path aligned with my conception of the good, at least, so if I’m tilting at windmills, this is the direction I’d prefer to charge in.

And who knows, it’s still very plausible that we could discover that the cost of scaling intelligence to the degree necessary to create super-human agents is prohibitive, and this could all just be a strange fever dream of the early 21st century.

Written by jphaas

February 1st, 2025 at 11:53 pm


The Weirdness of Kentucky Route Zero


In my last post, I took issue with the TV show Mr. Robot for not being weird enough. Although imaginative and compelling, its universe is well-ordered: everything happens for a reason.

If you’re looking for media to consume that doesn’t suffer from that problem, I recommend Kentucky Route Zero.

I previously discussed Kentucky Route Zero (KR0 for short) in the context of the first Trump-Clinton debate. It’s relevant to that because it’s a tour through coal-country America, and it engages with the desolation there that’s fueling Trump’s support.

As a portrait of the region, it’s a Picasso or maybe a Goya, not a Velázquez. It has moments where it approaches documentary realism, but it mostly traverses an imaginary landscape reflecting its creators’ perceptions, inspired by their real-life travels in Kentucky.

KR0’s medium is a computer game, but “game” is misleading. There’s no element of skill. It has the interface of an adventure game, but unlike other games in that genre, your progression through the story isn’t blocked by puzzles to solve. Rather, it’s more like a work of interactive fiction. The story is mainly told through dialogue, though the soundtrack and visuals are important pieces of the experience.

The medium is appropriate: it makes a better game than it would a book or a TV show. The ambition seems to be to create a world, and the ability to explore it freely is important. There’s a narrative, but the world is alive beyond the narrative, and there’s a lot to discover outside the main plot.

The three-person studio that produced KR0, Cardboard Computer, has been trying to erase the lines between their fictional universe and the real one. They’ve released a number of companion pieces to the game, including an experience for Oculus Rift or Mac / PC where you participate as a cast member of a fictional 1973 production of a one-act play. The play appears to depict events that occur in a bar a few hours before the character you play in KR0 visits the bar in the game, but the fictional set designer for the 1973 production — i.e., a real person in the production’s fictional reality — is also a character in KR0. In case that wasn’t confusing enough, you can buy a print copy of the script, published under the name of the fictional author.

It’s not a bad play, either. As the script advertises, “The one-act play ‘A Reckoning,’ set in a tavern in central Kentucky, is Doolittle’s take on the sort of barroom tragedy made popular by O’Neill, Gorky, etc.,” and I would say it stands on its own as a piece of theater, although the ending will have more resonance if you’ve played through KR0.

This almost pedantic accumulation of fictional detail, both inside and outside the game — names, biographies, places, events — lends believability and power to KR0’s magical realist plot-line. Because the production team took such great pains to create verisimilitude, the more fantastic elements of the game feel justified: hauntings, strange and implausible creatures, a whiskey company whose employees are all glowing skeletons, and the titular Route 0, a hidden underground highway through non-euclidean space.

The game’s plot is simple and unobtrusive compared to the sprawling, strange world it is set in. Conway, a truck driver for an antiques store, is trying to make a delivery to Dogwood Drive, a street that doesn’t show up on his maps. As he looks for it, he picks up some traveling companions, and we learn more about his and their pasts. KR0 isn’t complete yet — the game is divided into five acts, and the final one hasn’t been released — so I don’t know yet if Conway ever makes it to his destination. In fact, I still don’t know what he’s delivering, or to whom.

As a player, the main way you exert agency is through your choice of dialogue options. Unusually for adventure-style games, your dialogue choices don’t seem to affect the plot. Rather, they affect the past: you can give the characters different backstories, influence their temperaments, change how they see the world and treat each other. It’s a limited degree of freedom: I haven’t flexed the game aggressively to see how divergent you can make it, but my understanding is that the basic outline of who the characters are always remains the same. It’s more of a matter of altering the shadings.

The net effect of the simple plot, the strange, expansive world, and the freedom to emphasize and explore different aspects of the characters is that playing the game doesn’t feel like you’re being told a story, with themes and a moral. Rather, it feels more like an invitation to you, the player, to interpret what you’re confronted with. The game gives you a lot of details to work with, and powerful images and emotions, but leaves you to decide what to think about it all.

At its heart, I think Kentucky Route Zero is a meditation on entropy. Certainly, entropy is the unifying characteristic of the game world. KR0’s setting is a Kentucky that’s been devastated by the collapse of the coal industry and by the 2008 housing crisis. Everything you encounter is in some state of falling apart. The gas station you refuel at has overdue electric bills. The bar can’t buy alcohol any more. The coal mine is abandoned. Conway’s dog looks like she’s seen better days. All the characters have various stories of poverty, alcoholism, loss, and debt.

Entropy has different facets. There are many different stances that one can take towards it. KR0 seems to explore each of them in turn, weighing them, inviting you to partake.

The most basic stances are the emotional ones: despair, grief, and anger. There’s certainly plenty of that throughout the game. In one particularly powerful moment, you come across a memorial for coal miners who drowned when some tunnels were flooded in an accident a decade or so ago. The memorial is a collection of hard-hats floating in an underground lake, accompanied by an angry, hand-written sign accusing the mining company of negligence. In another moment, you meet a team of engineers who spent their lives trying to build a computer system (called Xanadu, presumably a reference to Project Xanadu, the real-world could-have-been competitor to the web), who are now just sitting around hopelessly, having given up on ever completing their lives’ work.

Another stance is simple momentum: keeping going as long as you can. One character you meet is a switchboard technician, the last one on her team after all her coworkers were laid off by the phone company automating the systems. They couldn’t quite automate her, so she keeps plugging away, alone in a tunnel, connecting call after call. There’s also a church, relocated to a warehouse by some bureaucrats, where the congregation all drifted away, the preacher left, and now it’s just a janitor who puts on pre-recorded sermons every Sunday.

Entropy and grief can also give rise to beauty. KR0 has plenty of beautiful moments too. The game has a gorgeous soundtrack, mixing electronic music and ambience with bluegrass classics. The bluegrass pieces, performed by a mysterious trio who occasionally wander across your path, are all explorations of loss and hardship, transmuted into folk songs and hymns. The visual palette of the game is a mixture of blues and oranges, mostly subdued and minimalist, but occasionally spectacular. Everything has a satisfying organic, analog feel. Radio systems crackle, televisions hum, computers react to strange magnetic fluctuations.

Yet another approach to entropy is to consume and exploit it. At various points in the game, you encounter modern, structured institutions that are in the process of channeling the breakdown toward their own ends. There’s a whiskey distillery that’s steadily acquiring the balance sheets and souls of the folks you meet. You get to take a tour of its expansive, industrially-clean factory in an amazing descent-into-hell sequence that feels like Dante meets OSHA. The local power company also seems to be on the march. There are also more highbrow institutions consuming the entropy. For instance, you visit the “Bureau of Reclaimed Space”, which seems to represent government, taking in weirdness and outputting paperwork. In another interlude, you find an entire town that’s been transplanted to be inside a museum, the residents still living in their houses, enclosed in a giant glass warehouse.

Somewhat related to high-brow consumption, there’s intellectualization of entropy. KR0 has a steady stream of references to academia. You meet a number of characters who have spent time in the grad student / post-doc limbo space, and there’s a lot of art and math jargon. From an academic perspective, entropy is a source of phenomena to record, analyze, and write papers about — hopefully publishable ones. This content is a reflection of the game itself, which in many ways feels like a modern art project. KR0 is obsessed with topology and imaginary spaces. Characters muse about space aloud, and as you move around the game, you explore a number of different spaces and means of navigating them: a driving map of Kentucky as navigated by truck, the same map as navigated by a bird, a bureaucratic office building where you ride an elevator up and down, a pure mathematical abstraction that you traverse by turning around at certain symbols, an endless underground river where you are swept along on a boat, among others. This self-conscious exploration invites you to see the entropy in KR0’s world as something to think about and study.

Finally, entropy gives rise to newness: the inadvertent creativity of random processes. Two of the more memorable characters you meet are a pair of androids that, in their telling, emerged from the mines as shapeless lumps, and transformed themselves into a pair of motorcycle-riding musicians, who are now releasing an album outside the game. Abandoned things in KR0 tend to take on a life of their own. A hobo sets up shop as an organist in a church converted into an office building. A failing restaurant turns itself around financially via a chance encounter with some divers who accidentally inspire the chef to create a bizarre, infinitely long menu of seafood from the depths. A child who lost his family befriends a giant eagle. Dig into any corner of KR0’s world, and there’s signs of life amidst the desolation… strange, random, unplanned life.

Kentucky Route Zero doesn’t settle into any one of these interpretations of entropy. It’s not a dirge, or a rant, or a hymn, or a beautiful painting, or an essay, or a story of rebirth, but rather it’s something of all of them, and not quite any of them. It’s expansive, and powerful, and really really weird.

The best explanation I’ve found for what it’s about comes from a Ribbonfarm blog post called Speak Weirdness to Truth. The post doesn’t mention KR0, but it inadvertently provides an aesthetic theory that sums it up for me. It’s an attempt to define weirdness as a mental state: “by my account and understanding of it, weirdness is not so much a feeling as that state of not knowing what to feel.” Weirdness is a “state of emotional indeterminacy.” The whole article is worth a read, but I’ll quote the key chunk:

Ambiguity is not being sure which interpretation of a situation is the correct one. But you’re fairly sure the situation is covered by the set of mental models in play. It’s either a duck or a rabbit, or some deliberately ambiguated thing in between.

Uncertainty is not having all the relevant data to flesh out a picture, but you’re fairly sure you get the picture itself. It’s a stock market, and with high probability, the stock will go down, but you don’t know how far and how soon. That’s uncertainty. You’re not trying to decide whether it is a duck in a rabbit warren or a stock in a stock market.

Weirdness though…

Weirdness is a deeper sense that you are encountering the truly unknown-unknown. Chances are you cannot even sort out what part is ambiguous and what part is uncertain.

Entropy is a lot of things, but above all, entropy is weird, precisely in the sense that the Ribbonfarm post describes. Things that were taken for granted — an economic and societal order of things, in the case of KR0’s Kentucky — have broken down. Old categories no longer apply. There’s a lot of noise, as things break and decay. Some of that noise is just noise. Some of that noise will turn into new patterns, and some of those new patterns might become important. But we don’t know which are which, or how to feel about all of it. Do we grieve? Do we laugh?

KR0 is rare as a piece of media in that it stares into the heart of that unknown unknown. It allows the human reactions to occur — it expresses the anger, it processes the grief, it appreciates the beauty — but then it gives you an extended discourse on varieties of medicinal fungi, or the habits of underground cave bats.

At one point in the game, a character talks about her love for the static between radio stations (Static Between Stations is also the name of the preview track for the upcoming android-produced album). It’s an appropriate sentiment to sum up the game as a whole. Most of human civilization is about finding the signal in the noise: processing entropy, sorting the useful from the useless, drawing boundaries and building walls. Kentucky Route Zero lets all that go. In its universe, the noise is the signal.

Written by jphaas

October 15th, 2016 at 5:49 am


Mr. Robot is not weird enough


“Have you ever cried during sex?”

—Personality quiz conducted by a pre-teen girl in front of a Commodore 64 while a fish slowly dies in the background, from Season 2 of Mr. Robot

So I just finished the latest season of Mr. Robot. It’s quite a show. It’s compelling, complicated, and wildly ambitious. It feels like creator Sam Esmail was like “Let’s take on all the major news stories of the last decade… oh, and do a remake of Fight Club and Memento while we’re at it.”

Mr. Robot takes a cast of characters who could be cliches — the psychotically ambitious rising corporate exec, his Lady-Macbeth-esque wife, the lonely-but-competent female cop, the damaged, dangerous female hacker, the girl next door who grew up and sold out, the too-softhearted-for-his-own-good gay boss, the amiably ruthless father figure, the fat nerdy hacker, the Muslim girl hacker, the Lloyd-Blankfein-ish CEO, the sinister transgender Chinese mastermind — and breathes life into each of them, so that they become vivid and unpredictable, alive on the screen, in spite of their overused character tropes.

And that’s not even mentioning the protagonist, Elliot, who is also a couple of tropes mixed together — the socially-incompetent genius, the damaged, dissociative revolutionary — but comes alive as a truly original psychological portrait. Mr. Robot starts where Memento and Fight Club stop: it blows past the “protagonist is unreliable, the ‘self’ is a lie we tell ourselves” insights of those movies, and grapples with what it means to accept all that and then still have to wake up in the morning and keep living your life.

There’s an amazing scene where we get a flashback to Elliot starting down the path to being a revolutionary. He puts on a mask — Mr. Robot’s fictionalized version of the Guy Fawkes masks that the hacker group Anonymous wears — and starts speaking with unexpected authority about his plans to take down the system. His sudden personality change freaks out his sister, even though she was the one who urged him to put the mask on to begin with.

Later in the episode, present day, Elliot reflects on the masks that everyone wears to interact with the world. What are people, under their masks? Or is the mask the reality, and the internal monologue the illusion? What if you don’t like your mask? He asks, “How do I take off a mask when it stops being a mask? When it’s as much a part of me as me?”

Mr. Robot is a dance of masks. It’s deliberate, I feel, that each character can be summarized in a couple of adjectives. They’re meant to be the chorus of contemporary urban life, whirling around the social and economic conflicts that animate the show.

In the book Impro, the reflections of one of the creators of improvisational theater, there’s a chapter on the discipline of mask acting. That chapter is one of the weirdest things I’ve ever read, because the author seems dead serious as he describes how acting students, putting on masks, get “possessed” by the spirits of the masks, become the masks, and come to life as rich, vivid, archetypal personalities that are totally alien to those of the students serving as their hosts. Mr. Robot takes that alien weirdness out of the theater and onto the streets and boardrooms of New York.

The result is cinematic and eerie. Mr. Robot doesn’t flinch from risky directorial choices. There’s half an episode set in the style of an ’80s sitcom, complete with laugh track and cameo from ALF. There’s another episode where we learn afterwards that what we saw on screen was almost completely a fabrication. The style drifts from gritty crime movie to techy hacker thriller to dada-ist art film. A character literally pisses on someone’s grave. There’s a lot of monologuing.

At the same time, Mr. Robot keeps itself grounded by a constant stream of real-life details and references. It’s mostly set in New York City, and as far as I can tell every outdoor scene was shot on location. Two characters go on a date at a wine bar that I’ve been on a date at. In a flashback, we see two members of Elliot’s hacker group meeting for the first time at the same coffee shop (temporarily re-branded) that I met my startup co-founder at. “There’s another one on 14th Street,” one mentions, accurately. When characters travel to Coney Island, we see them getting on the Q train. On TV sets, President Obama condemns the actions of the fictional hacker group (they used real press footage and an impersonator for the voice), and Edward Snowden defends them.

That said, while Mr. Robot nails the details, the larger plot threads do feel fictional. Characters consistently do things that are stupid in an out-of-character way, because the drama demands it. A particularly egregious example is an attempt to train the character Angela to participate in a hacking operation. “You can’t train someone to hack in one day,” the hackers declare worriedly during a preparation montage. But they aren’t training her to hack: they want her to type in a couple of memorized commands into the terminal, which is certainly something that can be learned in a day, especially by a character like Angela who’s portrayed to be bright and professionally competent. Even sillier, the commands are to run a script that’s on her computer already: so if she really wasn’t up to memorizing them, the hackers could have made a desktop shortcut for her to double-click! Or another example: the FBI are invited to China to investigate a cyber-crime; the Chinese don’t actually want the FBI poking around. So, the Chinese leader says “yes” to the FBI’s face, then arranges a team of gunmen to shoot at them… an incredibly risky strategy that in the real world would have blown up in his face, when if he had just said “sorry, you can’t visit that facility, national security, you know how it goes,” that would be totally normal in real-life international relations, if not nearly as adrenaline-pumping for the characters.

In other words, Mr. Robot is very much a staged drama. The character masks reflect actual personas and archetypes, the paranoias and injustices that animate the plot are very contemporary, but the action is pre-scripted: there’s no sense that the world itself is alive and organic. We see what Sam Esmail wants to show us, and the characters are his puppets. We don’t feel as though the story is telling itself. To put it in gamer terms, the world doesn’t have a physics engine.

There’s a moral significance to this aspect of Mr. Robot, because it limits the show’s ability to engage with the themes that it wants to address. As a psychological portrait of humans confronting modernity, it’s excellent, but as an attempt to address modernity itself, to talk about the injustices of the system, to wrestle with politics, capitalism and corporatism, hacking and anarchy, anonymity and the digital labyrinth, it’s crippled.

The guiding worldview of the rebel hackers — which the show tacitly affirms as correct by collapsing all of America’s corporations into a single “ECorp” (nicknamed Evil Corp) run by an amoral, monologuing, old white man — is that the source of injustice is a conspiracy of the powerful. It’s the Occupy Wall Street theory of politics: people suffer because a small, connected group of elites oppress them. If only we could appeal to their consciences, or alternately, stick ‘em against the wall and shoot them, a better, more just society would naturally arise.

This is a comforting narrative. It implies that there’s someone in charge. We may not like them, we may want to overthrow them, but human civilization has someone at the steering wheel. In Mr. Robot, we get to see various sinister forces battling for control of the world’s economy: there’s a tug-of-war between Wall Street and China, with the US federal government caught in the middle, and a ragtag team of hackers trying to take them all down and start the revolution.

In a sense, this is a very well-ordered universe, for all the darkness and sinister people in the shadows. It may be morally ambiguous, but things happen for reasons. If a corporation pollutes a river, it’s because there’s a top-secret project they’re trying to cover up, not because some mid-level bureaucrats were trying to impress their boss. If your personal data ends up getting leaked, it’s because a genius like Elliot made it happen, not because some underpaid sysadmin fucked up. If the economy takes a dive, it’s because a hacking group was fighting the man or China was up to something sinister, not because a web of financial instruments that no one fully understood exhibited some emergent behavior.

What Mr. Robot lacks is entropy. It has deliberate strangeness and alienation, but it lacks true weirdness: unexpected, inexplicable events that happen for no reason other than that the world is a big place with a lot of moving pieces. And that would be fine, except that weirdness is at the core of everything that it wants to talk about. The economic and technological systems its characters are caught in weren’t planned: they evolved. Big systematic things happen because of millions of tiny decisions made against a landscape of perverse incentives. Every once in a while, a political or economic actor gathers enough coordination behind it to make a major change — say, the creation of Obamacare, or the signing of a trade deal, or the invention of the iPhone — but the precision with which those changes direct the future is akin to the precision of detonating a nuke in the middle of a hurricane.

Mr. Robot is a story about the human experience of being caught in that storm. It’s unsettling, because it exposes the games we play to create normality for ourselves. But it’s not unsettling enough.

Here’s an exception — one little detail I really liked… near the end of the second season, brownouts start rolling across New York City. It’s a background detail, not a plot point. Every so often, a character will pause briefly as the lights flicker off, then back on, and then continue where they left off. Occasionally we see the whole city going dark for a moment. It’s a respite, in a way, from all the bullshit and posturing, the drama and bravado… a little reminder that somewhere, backstage, things are in motion.

Written by jphaas

October 8th, 2016 at 7:16 pm


Coordinating to save the world


So this profile on startup incubator Y Combinator’s head, Sam Altman, is interesting:

Like everyone in Silicon Valley, Altman professes to want to save the world; unlike almost everyone there, he has a plan to do it. “YC somewhat gets to direct the course of technology,” he said. “Consumers decide, ultimately, but enough people view YC as important that if we say, ‘We’re super excited about virtual reality,’ college students will start studying it.” Soon after taking over, he wrote a blog post declaring that “science seems broken” and calling for applications from companies in energy, biotech, artificial intelligence, robotics, and eight other fields. As a result, the once nerdy Y Combinator is now aggressively geeky. Across the table from Altman at dinner, the C.E.O. of a nuclear-fission startup was urging the founder of a quantum-computing startup to get his artificial-atom-based machine to market: “These computers would shorten our product-development cycle 10 to 20x!”

It sounds like Altman may be an ends-backwards thinker:

  • Imagine wildly ambitious, crazy goal (“Save the world”)
  • Work backwards to what you’d need to do to pull it off (“Build a network of smart, talented entrepreneurs tackling hard science questions”)
  • Execute, repeat

This thinking pattern is the secret sauce to auteur-style world-changing entrepreneurship. For instance, see WaitButWhy’s profile on Elon Musk. Or this Steve Jobs anecdote. Rather than starting with what’s in front of them, this style of visionary starts with where they want to go, and then figures out what path to take. (Or maybe all visionaries do this… maybe that’s what the word “visionary” means).

Given how simple it is to describe on paper, and how celebrated its outcomes are, it is a much rarer thinking pattern than one might expect. Most people reflexively suppress forming giant ambitions. Or if they do form them, they don’t follow the ambition to its logical conclusion, to a “okay, do this now” imperative. If I had to guess, it’s not that people aren’t capable of thinking like that. It’s more that committing to a goal in that way is terrifying. If you are means-forward, you’re starting from what you already have and know. You’re in your comfort zone. If you’re ends-backwards, though, what you might realize is that you have to become a different person in order to achieve the goal. It’s a threat to your very sense of identity — and most people will go to inordinate lengths to defend their identities. So, when I hear about someone like Altman, who seems to be actually making forward progress, with the full resources of Silicon Valley’s elite behind him, it’s worth taking note.

The other interesting thing about this is the shift in Y Combinator’s focus towards science. A standard critique of Silicon Valley is that it makes toys for the urban privileged instead of tackling hard, meaningful problems. A reversal of that trend might lead to exciting things.

Nevertheless, the profile leaves a weird taste in my mouth. There’s a kind of crazed neuroticism to the goal of “let’s save the world”, an ambition that I feel needs to be filtered through some kind of artistic sensibility to make into something wholesome rather than toxic. Some quotes from the article that left me with this taste:

“My problem is that when my friends get drunk they talk about the ways the world will end. After a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources.” The Shypmates looked grave. “I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

Loopt got into Y Combinator’s first batch because Altman in particular passed what would become known at YC as the young founders’ test: Can this stripling manage adults? He was a formidable operator: quick to smile, but also quick to anger. If you cross him, he’ll joke about slipping ice-nine into your food. (Ice-nine, in Kurt Vonnegut’s “Cat’s Cradle,” annihilates everything it touches that contains water.) Paul Graham, noting Altman’s early aura of consequence, told me, “Sam is extremely good at becoming powerful.”

Altman worked so incessantly that summer that he got scurvy.

I don’t mean this as a critique of Sam Altman’s personality. The article definitely gives the impression that he has a conscience, and I see those strains of neuroticism in my own mind whenever I think ends-backwards. It’s an inherent characteristic of the mindset. I think the word I’m looking for is “totalizing”: encompassing all of reality in a single intellectual system. That clarity of focus and vision is very powerful, but it also does violence to everything that doesn’t fit neatly into frame.

“Save the world” is an interesting phrase. It implies that the world as-is is not okay, and that some benevolent third party is intervening to correct it. I suppose that the phrase compares favorably to “change the world”, which has the same egocentric “me-changer, you-changee” positioning, but doesn’t even question whether the change is for the better.

On the other hand, a lot of things in the world are broken. A lot of people are suffering today, and if things go poorly for human civilization, a lot more people could suffer in the future. There’s something admirable in taking responsibility for the welfare of humankind, even if it’s simultaneously toxic.

The thought that led me to start writing this was: if you’re humble, and don’t have a master plan for world [domination | salvation | optimization], but you do want to do good in the world, maybe what you do is you try to improve people’s ability to solve coordination problems. Coordination problems are things like global warming, where the issue isn’t knowing what to do (emit less carbon), but getting everyone to do it. Coordination — the difficulty of it — is the reason the news is depressing. It’s the heart of the really intractable problems. “If everyone just gave X dollars, no one would go hungry…”

Interestingly, solving coordination problems is one of the few areas that pure software is really good at. We’ve already seen a bunch of novel forms of coordination enabled by internet startups. We have Kickstarted art projects, Facebook revolutions, Twitter mobs, a global marketplace for renting your spare bedroom, and civilized markets for illegal drugs. All of those behaviors were impossible (for better or worse as the case may be) before the software industry.

That said, there are still plenty of places where people would like to coordinate, but can’t. Each one of those places is a potential new startup (and what the heck, I’ll shamelessly self-promote: you can build them using my startup Bubble even if you don’t know how to code). So maybe the conventional wisdom of “startups should tackle real problems, and real problems involve physical real-world things” is backwards. Maybe the best thing the software industry can do for the world right now is make better tools to help people coordinate. Or maybe that’s just another totalizing grand vision, and really what’s called for is sitting around and looking at pretty rocks. Who knows!

Written by jphaas

October 4th, 2016 at 2:47 am


Yet another review of the Trump-Clinton debate


Let’s get a couple things out of the way:
 
  • I am planning to vote for Hillary
  • I thought Hillary “won”, in the primate, “displayed dominance via superior body-language” sense, in the inside-politics “got Trump to say more stupid things that could be used against him than he did her” sense, and in the intellectual “the shit she said made more sense than the shit Donald said” sense.
That said, I don’t think Hillary changed too many minds, because she did not engage with the one thing that Trump brings to the table.
 
The card that Trump has (insert dumb joke here), the card that Hillary and the mainstream establishment she represents do not know how to engage with, is despair. Consider the following debate quotes from him:
 
“In inner cities, African-Americans, Hispanics, are living in hell because it’s so dangerous. You walk down the street, you get shot”
 
“We are losing billions and billions of dollars.”
 
“When we have $20 trillion in debt, and our country’s a mess, you know, it’s one thing to have $20 trillion in debt and our roads are good and our bridges are good and everything’s in great shape, our airports. Our airports are like from a third world country.”
 
“Typical. Politician. All talk. No action. Sounds good. Doesn’t work. Never gonna happen.”
 
I really like this last quote especially.  Cf. these lyrics from Sondheim’s musical Assassins:
 
  • (Byck) Yeah, it’s never gonna happen, Is it? No, sir- (Czolgosz) Never. (Byck) No, we’re never gonna get the prize- (Fromme) No one listens… (Byck) -Are we? (Zangara) Never (Byck) No, it doesn’t make a bit of difference, Does it? (Assassins) Didn’t. Ever. (Byck) Fuck it! (Assassins) Spread the word… Where’s my prize?…
  • (Balladeer) I just heard On the news Where the mailman won the lottery. Goes to show: When you lose, what you do is try again. You can be What you choose, From a mailman to a president. There are prizes all around you, If you’re wise enough to see: The delivery boy’s on Wall Street, And the usherette’s a rock star-
  • (Byck)Right, it’s never gonna happen, is it? Is it! (Hinckley and Fromme) No, man! (Byck and Czolgosz) No, we’ll never see the day arrive- (Assassins) Spread the word… Will we? No, sir-Never! No one’s ever even gonna care if we’re alive, Are they?… Never… Spread the word… We’re alive… Someone’s gonna listen… Listen!
  • (Byck) Listen… There’s another national anthem playing, Not the one you cheer At the ballpark. (Moore) Where’s my prize?… (Byck)It’s the other national anthem, saying, If you want to hear- It says, “Bullshit!”… (Czolgosz) It says, “Never!”- (Guiteau) It says, “Sorry!”- (Assassin) Loud and clear-
Full song here
 
Assassins, if you’ve never seen it, is a musical about the people who killed or tried to kill US Presidents, and its thesis is that the “tradition” of assassination attempts in American history is about trying and failing to achieve the American dream.  It’s about the people who don’t win the lottery, don’t work their way to the top, don’t get their nice picket fence and steady union job.  And who are especially bitter and angry because they feel that there was a promise, that they were told they lived in the greatest nation in the world, where anyone can succeed.
 
This demographic used to act out by assassinating the president.  Now they’re powerful enough that they can nominate their very own candidate.
 
I just finished playing Kentucky Route Zero.  It’s a computer game that puts you in the seat of a truck driver making deliveries in coal-country Kentucky.  It’s a magical realist journey into the dark inside of that region’s collective mind… the grief, the despair, the broken-down-ed-ness of everything, as well as the weird beauty of decaying things that no longer have a purpose and so just exist as they are.
 
I also recently read Hillbilly Elegy, the autobiography of someone who — in his view — escaped coal-country to become part of the nation’s intellectual elite (via first the marines, then Yale law school).  It’s the tonal opposite of Kentucky Route Zero — where KR0 is patient, poetic, willing to see how weird and strange the devastation is, Hillbilly Elegy is sober and deadpan as the author gives the rap sheet of drug abuse, alcoholism, domestic violence, and joblessness that he saw around him growing up.
 
What both describe vividly is that there’s a hole there, in the middle of America, where things are not okay.  A sense of complete and utter failure that seems beyond anyone’s power to fix.  The gap between that complete hopelessness, and the neurotic will-to-success that I see all around me in New York, is a vast chasm, a Grand Canyon.  And yet Trump and Hillary are put side by side on a podium as if their feet aren’t on different planets.
 
There was this article — can’t find it now — about some independents watching the debates in a bar, the kind of bar that could appear in KR0, named after something that’s gone out of business or disappeared or became irrelevant.  They thought Trump won.  I dunno how you can really see what happened there as him “winning”, but looking at the pictures of them in that bar, imagining the virtual bars I visited while playing KR0, I can see why they’d still connect with him, why they would root for that.
 
Yeah, he’s a walking disaster, an undisciplined, hyperbolic blowhard fueled by ignorance and rage who knows jack shit about what it takes to run the country.  He’s not logical, not reasonable, not virtuous.  But neither is life for most people in big chunks of this country.  Is it logical or reasonable that there are towns where more people are drug addicts than have steady jobs?
 
One of the things I really liked about Kentucky Route Zero is that it engages with entropy, without trying to stick a meaning on it.  There’s no higher meaning in the landscape of random decay, weird outgrowths, and evolutionary dead-ends that it navigates.  There’s grief, and anger, and moments of beauty, but there’s no attempt to try to say it all makes sense.
 
“Mourning in the graveyard of dead dreams” isn’t really a political platform, and I can’t see a presidential candidate getting on stage and doing that.  Politics rewards telling people you have a solution, and relies on the fact that people will always be suckered again and again and again because they want it to be true.  So I don’t know.
 
I’m going to vote for Hillary, because she’s a sane, competent achiever who will keep the government on a sane, competent track.  I don’t think that will solve anything, or make the wounds in the country go away, or make the next presidential election any less of a shit-show, but at the end of the day, the institution of the presidency can only do so much.
 

Written by jphaas

October 2nd, 2016 at 7:31 pm

Posted in Uncategorized

Peter Thiel and the Chamber of Secrets

without comments

New post by me on the Bubble blog!

The book, at its core, is a defense of secrets. Peter argues that they a) still exist in the world, even though popular culture doesn’t believe in them, and b) are what drive the creation of successful new businesses.

I have a secret to share with you.

Check the full post out here: Peter Thiel and the Chamber of Secrets

Written by jphaas

September 17th, 2014 at 5:06 pm

Posted in Uncategorized

How to merge json files using git

without comments

For my startup Bubble, I do a lot of work using json. I have a number of very large, frequently-changing json files checked into git.

This causes a fair amount of pain, because git’s merge algorithm doesn’t know how to interpret or preserve the json tree structure, so it reports conflicts for things such as adding different keys to the same parent object, which textually can look like a conflict but semantically isn’t, at least for my purposes.
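
A made-up example of the failure mode: suppose the ancestor file is { "a": 1 }, our branch adds a key "b", and their branch adds a key "c":

   ancestor:  { "a": 1 }
   ours:      { "a": 1, "b": 2 }
   theirs:    { "a": 1, "c": 3 }

Textually, the "b": 2 and "c": 3 lines land in the same place, so git’s line-based merge reports a conflict; semantically, the obvious result is { "a": 1, "b": 2, "c": 3 }.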

It got to the point where I’d be coordinating with my teammate to make sure we didn’t touch the same json files at the same time. Since the big win with git is being able to skip that kind of coordination, I got fed up and wrote a custom driver for merging json files.

It requires coffee-script to run (although porting it to javascript should be as simple as dumping it into coffeescript.org’s converter). Installation instructions are in the comments at the top of the file.
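
For reference, wiring a custom merge driver into git generally takes two pieces: a .gitattributes entry routing the files to the driver, and a config entry telling git what command to run. The driver name ("mergejson") and script filename below are illustrative; the comments at the top of the file have the authoritative steps:

   # in .gitattributes: send json files to the custom driver
   *.json merge=mergejson

   # in your git config: define the driver; git substitutes %O (ancestor),
   # %A (our version, which the driver overwrites with the result), and %B (theirs)
   git config merge.mergejson.name "recursive json merge"
   git config merge.mergejson.driver "coffee json-merge.coffee %O %A %B"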

The way it works is that it recursively walks the structure of the file we’re merging into ours, and checks each value against the corresponding value in our file. If a value differs, it looks at the ancestor file (the version from before the two branches diverged) to see whether our version or their version was changed… if our version matches the ancestor file, then it goes with their version, and vice versa.

If neither our version nor their version matches the ancestor, it reports a conflict by replacing the node with an object that looks like this:

{
   "CONFLICT": "<<<<<<<<>>>>>>>>",     // this marker is to make conflicts easy to find
   "OURS": ...our version of the node...,
   "THEIRS": ...their version of the node...,
   "ANCESTOR": ...the ancestor version...,
   "PATH": ...a dot-separated list of keys, indicating where in the file we are...
}
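
The recursion itself is short. Here’s a minimal sketch of the logic, not the actual driver’s code: the names are invented, deep equality is approximated by comparing serialized values, and arrays are treated as leaves for simplicity:

   # walk ancestor/ours/theirs together, three-way merging as we go
   same = (a, b) -> JSON.stringify(a) is JSON.stringify(b)

   isPlainObject = (x) -> x? and typeof x is 'object' and not Array.isArray(x)

   mergeNode = (ancestor, ours, theirs, path = []) ->
     return ours if same(ours, theirs)        # both sides agree
     return theirs if same(ours, ancestor)    # only their side changed
     return ours if same(theirs, ancestor)    # only our side changed
     if isPlainObject(ours) and isPlainObject(theirs)
       # both sides changed an object: recurse key by key over the union of keys
       keys = {}
       keys[k] = true for k of ours
       keys[k] = true for k of theirs
       merged = {}
       merged[k] = mergeNode(ancestor?[k], ours[k], theirs[k], path.concat(k)) for k of keys
       merged
     else
       # both sides changed a leaf (or the types disagree): emit a conflict node
       CONFLICT: "<<<<<<<<>>>>>>>>"
       OURS: ours
       THEIRS: theirs
       ANCESTOR: ancestor
       PATH: path.join(".")

In spirit, the whole driver is just parsing the three files git hands it, running something like mergeNode over the roots, and serializing the result.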

Resolving the conflict is as simple as copying our version or their version and pasting it over the conflict object.

The final, merged version of the file is then pretty-printed (it should be pretty easy to modify the code if pretty-printing isn’t what you want). And you’re all set!

Written by jphaas

June 27th, 2014 at 2:44 am

Posted in Uncategorized

The revenge of the sheeple

without comments

Sheeeple, Sheeeeeeeeeeeple, wake up!!!!!!!!!!!!!!!!

But seriously, is the protestor right? Are we all sheep?

Yes, and maybe no.

A sheep is a tamed creature. It follows behavior patterns that its shepherd designed for it. Those patterns serve the shepherd’s interests; they only serve the sheep’s interests insofar as those interests align with the shepherd’s. Once those interests diverge… it’s time for some lamb chop.

Written by jphaas

June 3rd, 2014 at 4:02 pm

Posted in Uncategorized

Responding to hate

without comments

So a writer at Jezebel collected a day’s worth of transcripts from a PUAHate chat group, which is the community that helped develop / reinforce the worldview of the UCSB shooter.

I really dislike this piece of reporting. I think it’s valuable to shed light on dark corners of the internet like that, but when the writer says things like “From an observer’s perspective, PUAHate is a group of self-pitying babies who believe they’re entitled to women who are much more attractive than they are,” she is saying exactly what all of these guys would expect, in their distorted world view, that she would say.

The kind of hate on display by these guys comes from a place of deep lack of self-worth, which they then protect themselves from by inventing them-against-the-world stories.

Written by jphaas

May 30th, 2014 at 3:06 pm

Posted in Uncategorized

What we’re trying to do with Bubble

with 2 comments

I wrote a new post on the Bubble blog explaining what we’re trying to accomplish:

Our goal is to erase the distinction between software use and software creation.

Good software, when used, is:

* Helpful. It empathizes with the users’ point of view, understands what they want, and lets them do it with as little effort as possible. My email provider, GMail, tries to figure out what mail I actually want to read. It marks things as spam for me, it guesses which mail is important, and it lets me hide emails I don’t want to see without making me delete them.

* Friendly. It communicates a desire to be the user’s ally. Have you ever used TurboTax Online? The users of that software are doing a stressful, hateful chore. TurboTax is their friend, holding their hand as they go through it. It goes out of its way to comfort and reassure. It’s not a perfect piece of software, but it’s pretty damn impressive.

* Empowering. Good software lets its users work miracles. Love it or hate it, what would the world look like without Microsoft Word? Fifty years ago, type-setting a document for printing was a professional trade. It took years to master the technologies behind setting margins, picking colors, preparing fonts of different sizes. Now, a five-year-old can do in ten minutes what used to take an expert hours or days.

That’s what using good software is like. Creating software, on the other hand…

Read the full post here!

Written by jphaas

May 29th, 2014 at 5:14 pm

Posted in Uncategorized