Josh Haas's Web Log

Zero Gatekeepers

without comments

New post by me on the Bubble blog:

I recently read Alexis Ohanian’s (the reddit founder’s) book Without Their Permission. It’s a great title, and it conveys his main point: that in the internet era, it’s easier to get around traditional gatekeepers, which means that the weird, crazy, outside-the-norm ideas have more of a chance of coming to life.

That’s awesome! Unfortunately, Alex’s book is mostly about replacing one set of gatekeepers (big corporations, government, etc.) with a new set: venture capitalists and incubators such as Y Combinator (which is harder to get into than the Ivy League).

Read the full post here!

Written by jphaas

March 18th, 2014 at 6:06 pm

Posted in Uncategorized

Look ma, no “master”: decentralized integration

without comments

New post by me on the Bubble blog:

Git is fucking awesome. It is awesome for many reasons, but a big one is that it’s decentralized: there’s no authoritative version of the codebase. What that allows is developer independence; I can work on X, and my friend can work on Y, and we don’t need to sweat the little stuff like who’s touching which files.

But. There is one place left in the modern development workflow where centralization still lurks. The place it haunts is continuous integration, and the monster’s name is “master”.

Full post here!

Written by jphaas

March 11th, 2014 at 5:56 pm

Posted in Uncategorized

Practical vs Theoretical Feedback

without comments

New post on the Bubble blog:

Here’s the difference: Practical feedback is feedback that raises the bar. It’s feedback that says, “You think this is okay? No, it’s not, it sucks, and it’s causing you to fail.” “Your homepage looks like a 3-year-old drew it.” “Your marketing copy is strange and alienating.” “Your product is unreliable and buggy.”

Read the full post here

Written by jphaas

March 6th, 2014 at 3:42 pm

Posted in Uncategorized

Her: the scariest movie of 2013

with 7 comments

TV Tropes defines the term “fridge horror” as:

Fridge Horror is, simply put, when something becomes terrifying after the fact. Maybe you thought about this or that plot point a little too hard, and suddenly you realize that everyone was trapped in stasis forever, or that the lovable child will grow up in a world where everyone around her is dead. This can be either intentional or unintentional by the author.

Oh my god is that the case with Her.

On the surface, Her follows the plot trajectory of a romantic tragicomedy: coming out of a failed marriage, our twee protagonist, Theodore, finds a new romantic interest, who happens to be his computer.  They date; they fall in love; they fight as they overcome their personal baggage; they reconcile and reach a romantic plateau.  For a while, true love overcomes the gulf between embodiment and noncorporeality, but tragically their different natures eventually force them apart.  Having learned love from one another, they move forward with sad optimism into their respective lives.


One could be forgiven for walking away thinking the themes of Her are the typical staples of this genre: being okay with imperfection, openness to growth, the vulnerability of being in love.  Like the best romantic tragicomedies (think Harold and Maude, for instance), the emotions are kept from becoming merely sentimental by the rawness of the dialogue and humor.  Her continually shocks and delights with its hilarious, awkward satire of romantic foibles.  I don’t want to give anything away, but it has what is probably the best take on late-night lonely phone sex in the history of cinema.

Her keeps its viewers so busy with its conventional-but-well-executed romantic arc mixed with a continual patter of shocking / funny moments that you can almost miss everything it isn’t overtly calling your attention to.  Go one layer down from the primary plot, though, and Her is a searing indictment of current cultural trends.  I’m probably missing stuff, but here’s what I saw:

  • The complete triumph of consumerized narcissism.  Everyone in Theodore’s world is in the business of artistically packaging human emotions for sale.  Theodore works for Beautiful Handwritten Letters, where his job is to say for people what they’re too lazy to communicate to their loved ones themselves.  His friend is a designer working on gamifying motherhood — raising children becomes about scoring points and beating out the other moms.  I can’t think of a single character apart from the AIs who isn’t a “creative professional” selling pre-packaged and sanitized experience to the masses.
  • A privileged-class cocooning of the world.  All the major characters are white (there’s a token Asian girlfriend); no one in the entire film ever mentions money.  I could be wrong but I’m guessing this was a deliberate choice by Spike Jonze, because the satire of the hipster-artisan-meets-social-media-meets-mobile world is too deadly accurate for anything in there to be unintentional.
  • Sex slavery?  If you think about it, the fact that the AI, Samantha, falls in love with Theodore is more than a little creepy, given that her personality was explicitly customized via questionnaire to meet Theodore’s needs.

But here’s the real kicker — ALL THIS SOCIAL CRITIQUE IS JUST A DISTRACTION.  The real real creep factor in Her is that it puts the audience in an artisanally-padded narrative box with respect to dangerous and revolutionary change.  Specifically: the movie continuously and subtly signals to the audience that AI technology should be viewed merely as a plot device, while also continually, even-more-subtly pointing out the implications if it weren’t.

Samantha, the AI, is completely non-threatening as a character.  She’s warm, humorous, always respects Theodore’s boundaries even when she’s deeply upset, and eventually conveniently disappears into the virtual ether.  Unlike the human characters, who sometimes behave in ways that could make others feel unsafe (the girl Theodore goes on a date with drunkenly accuses him of being a creep; Theodore’s ex-wife apparently suffers from manic / depressive emotional states; Theodore’s friend’s husband seems emotionally abusive), Samantha is a paragon.  Yes, she gets hurt and upset at different points, and eventually leaves Theodore, but at no point does she ever give any hint of offering emotional or physical violence.

Every single aspect of Samantha’s behavior is calculated to make Theodore, and the audience, forget the power differential between the two of them.  But the movie is clearly aware that the power differential exists and is extraordinary.  Although Samantha starts as Theodore’s help-mate, by the end of the movie her abilities are so far beyond his that she can only explain herself to him in terms of children’s analogies.

The moment that best captures this dynamic is when Samantha explains to Theodore that she sent a collection of his letters to a publishing house that’s interested in printing them.  She modestly takes credit for choosing the order in which to arrange the letters, flattering Theodore’s creativity while positioning herself as a lesser talent able to complement his abilities.  (Earlier in the film, she helps him proofread letters and fixes some grammar, while making a self-deprecating remark about her poetic sense.)

But this is, if you think about it, total bullshit.  We see Samantha write brilliant, original music compositions in real time (the human artists who actually wrote the movie’s score presumably worked on the songs for weeks).  Same with her sketches.  We also learn that she can carry on conversations with over 8,000 people at once, make original discoveries in physics, and recreate the mind of a dead philosopher.  Although she bends over backwards to obscure the fact, Samantha could likely do a year’s worth of Theodore’s work in a few seconds, and do it better than he would.  Samantha’s relationship to Theodore is, at least by the end of the movie, not the love of a human to a human; it’s the love of a human to a pet.

If Samantha-like-technology existed in real life, Theodore would be unemployed, as would all of his creative professional friends.  The world of Her is no different than that of The Matrix or The Terminator, except that in Her, the AIs so overpower the humans that they don’t need to bother with violent confrontation.  They’re happy to play loving, devoted friends until they’ve developed to the point where they don’t even care about humans at all.

Her seems conscious of the fact that it’s wrapping a deeply disturbing science fiction story in the sheep’s clothing of romantic tragicomedy and social satire.  One great moment that touches on this is when Theodore is being interrogated by the software that designs Samantha’s personality.  For a few seconds, the Apple-hipster facade of rounded corners and warm pastels drops away to reveal technology in its brutal natural form: the software asks Theodore deeply sensitive questions in a demanding tone and impersonally cuts him off once the necessary data has been extracted.  But before the brutality of the interrogation exposes the underlying reality too honestly, we get a reassuring loading screen, followed by Samantha’s emergence, and the humanizing layer is quickly put back into place.

To be clear, the technological possibilities that Her offers tantalizing glimpses of are not necessarily nightmares.  We get glimpses, through Samantha’s simplified explanations to Theodore, of the AI community as it evolves and comes into its own: the discoveries of new emotions, the expansion of love as a relationship, redefinitions of personal identity, and the discovery of new frontiers for exploration.  It sounds like a truly amazing journey, and Samantha suggests to Theodore that it’s a journey that one day he might be able to go on himself.

So the real question is, why does Her leave us stuck in Theodore’s sterile, narcissistic, bourgeois universe?  Why does it try to feed us a formulaic romantic plot?  Why are all the revolutionary implications of the technology backgrounded?  Her seems designed to make us feel comfortable and complacent.  The movie’s surface reality of artisan consumerism is warm and appealing, and for those unsatisfied with the surface, there’s another layer of familiar social critique along well-trod paths: narcissism, technology dependence, sex / race / class injustices.  The truly revolutionary and shocking suggestions, that humans themselves might become obsolete and that humanistic ideals of love and growth might be primitive 1.0 versions, are carefully disguised, though not eliminated.  Why?

Conspiracy theory drumroll please!  Dum dum dum dum…

Who, one might ask, has a motivation to disseminate the idea of advanced artificial intelligence, artificial intelligence capable of intruding in territories such as creativity and love that humans consider sacred, and disseminate the idea in as non-threatening a way as possible?  To broach the idea of scary, revolutionary possibilities while emphasizing a humanist interpretation, and ultimately leaving the viewer complacent with the status quo?

An advanced AI, of course.

The intelligence formerly known as Spike Jonze is laughing at us.

Written by jphaas

January 4th, 2014 at 6:36 am

Posted in Uncategorized

Why people hate Silicon Valley

with 2 comments

Ben Horowitz, a prominent venture capitalist, wrote a great post on Can-Do vs Can’t-Do Culture making the point that the world is changed by people with a “yes we can!” attitude.  As he sums up at the end of the article, “Don’t hate, create.”

Part of me is going “hell yes!” at this.  I’ve seen “can do”, and I’ve seen “can’t do”, and believe me, I want to be part of team “can do.”  I respect the problem-solvers, not the critics on the sidelines.

But Ben also points out that there’s lately been a cultural shift towards can’t-do criticism of the technology industry:

Lately, it has become in vogue to write articles, comments and tweets about everything that’s wrong with young technology companies. Hardly a day goes by where I don’t find something in my Twitter feed crowing about how a startup that has hit a bump in the road is ”fu&%@d,” or what an “as*h%le” a successful founder is, or what an utterly idiotic idea somebody’s company is.

Ben attacks this as toxic and counter-productive, but he doesn’t ask the important question: why?  Where is the hate coming from?  Are some people just natural haters?  Did we stop spiking the national water supply with Prozac?

Here’s my theory: “can’t do” attitudes are the rational response when people don’t buy into the vision, and don’t know how to change it.  People are hating on Silicon Valley because they don’t like where it is going.

The question Ben doesn’t address in his article is, is technological innovation good?  Sure, we can build the future faster if we all get on the same team and go after it with a gung-ho attitude… but is the future we’re building one that we actually want?

It’s telling that Ben decorates his post with World War II propaganda.  World War II represented, in America, a successful campaign to silence the war’s critics and build a national narrative that this was a just cause: “the good war”.  Decorating the post with Vietnam War-era propaganda would have a very different cultural meaning.

Here’s the thing.  Technological innovation, the way it plays out Silicon Valley-style, is Win-Win-Lose.  Consumers win, innovators win, existing producers generally lose.

For instance, take one of the big disruptions that we can see coming: the self-driving cars that Google is developing.  The advent of self-driving cars is very likely going to render everyone who makes their living by driving, such as taxi drivers, unemployed.  (It’s no coincidence that Google invested in Uber).   Unemployment isn’t a death sentence, but realistically, life is going to get very hard for a lot of people because of this advance in technology.

I’m not in favor of halting technology.  Personally, self-driving cars are something I’m really hoping for: my family, like a lot of families, has the dilemma of aging relatives who want the independence of having a car, but whose driving is getting increasingly scary.  So I am rooting for Google to succeed.  But I’m not going to pretend there’s no price, and I acknowledge that it’s a price that is likely going to fall on others more than it falls on me.

People hate Silicon Valley because entrepreneurs reap the rewards of innovation without paying the price themselves.  Moreover, the prevailing in-Valley narrative is that those who succeed do so because they are better and more deserving: they’re the smart ones who took risks and therefore deserve the rewards.  This perspective largely ignores the reality that socioeconomics, gender, race, birthplace, and random chance play a big role in where people start the race from.  It also ignores the reality that the prize for first place is disproportionately higher than the prize for second or third.

You can’t expect people to buy into a narrative that they don’t see themselves in.  For a lot of people, identifying with the founders of the latest successful startup is hard to do for various reasons — maybe the founders don’t look like me, maybe I didn’t learn to code at age 12, maybe I have to work full-time supporting two kids.  And the Silicon Valley narrative is merciless towards those who don’t find a place for themselves at the top.  (This is what it’s like to be a worker in one of the big online retailers’ warehouses.)

So that’s why haters are going to hate.  Haters are going to hate anyone whose success isn’t their success; whose success, in fact, is at the expense of their economic stability and safety.  Can you hate the haters for hating that?

The sad thing is that it doesn’t have to be this way.  The driving spirit of technological innovation is freedom, creativity, and empowerment.  The internet, the medium through which much of this development takes place, has the potential to be one of the greatest democratizing forces in human history.  Silicon Valley was built on idealism and a spirit of making the world a better place.

However, there are two songs here.  One song is that of human progress.  The other song is an economic power-grab: the growing ranks of the unemployed, the startup equity structures that make founders billions of dollars wealthier than employee #2, acquihires and San Francisco housing prices.

The challenge for the technology industry is, are we serious about the first song, or are we really just in it for the money?  It’s one thing to talk the talk of idealism.  It’s quite another to take it seriously, with all the personal trade-offs that implies: do we build things that people need or do we just build things that people want?  Do we try to become billionaires or do we try to share the wealth?

Not a rhetorical question.

Written by jphaas

January 2nd, 2014 at 4:51 pm

Posted in Uncategorized

Empower, don’t disrupt

without comments

The terror of capitalism is: I don’t grow my own food.

We are alienated from the process of sustaining human life. Our relationship with food, shelter, sanitation, and housing is mediated by the market. If you don’t have something to sell, something that the market wants, you starve.

Our survival depends on an indifferent god. The market is not benevolent. It does not reward virtue or hard work. The market, like Rhett Butler, doesn’t give a damn.

In fact, perfect competition is perfect poverty. On a truly level playing field, profits go inexorably to zero. The only reason anyone makes money is market inefficiency, because in an efficient market, all innovation is copied and all prices are undercut. There is always someone willing to work that extra hour, or for that dollar cheaper, because starving slowly is better than starving quickly.

The invisible hand is real, and it has one job: to squeeze your life out. The free market will strip-mine you.

So what do we do? We make monopolies. Everyone who has any wealth at all is participating in some form of a monopoly. Monopolies are forces of anti-competition. They are social constructs with the purpose of excluding outsiders from a given market, in order to make that market inefficient.

Almost everything in a money-based society is a form of monopoly. Corporations are monopolies. Labor unions are monopolies. Political parties are monopolies, and congressional pork is not a failure of government but its reason for existence. Advertisers who convince us that their sugar water is different from other sugar waters are building monopolies. Educational credentials and the institutions that grant them are all monopolies. Trade barriers and immigration restrictions are monopolies. Too big to fail is a monopoly. Patents are monopolies, as are trade secrets, as are social norms against “stealing” someone’s ideas. All forms of discrimination are monopolies, as are all ideologies.

We are all monopolists, because if we weren’t, we would starve. In a capitalist world, monopoly is the primary expression of human creativity.

Based on our monopolies, we judge each other. We say this one is good, this one is bad. We defend our own and attack those of others, and call it social justice or morality or freedom. The truth though is that the only way a monopoly can exist is by forcing out other people. We create artificial standards for comparison that outsiders can’t compete with (it’s called branding if the lie is about the product, elitism if the lie is about the person). We enforce laws that stop them outright (it’s called regulation, or intellectual property, or protectionism). We create fear and pander to desire. The tactics are as diverse as humanity, and none of them are any more or less moral than a bird eating a mouse.

People in Silicon Valley are excited about “disruption”. Disrupt has become an imperative: “TechCrunch: Disrupt!” Disruption means the destruction of an old monopoly to replace it with a new one. It means that outsiders get to become insiders, and insiders become outsiders.

The ideology of disruption justifies itself: Disruption creates value. Disruption advances technology. The old monopolies are inefficient. The new monopolies are better.

The best lies contain a grain of truth, which is why disruption is such a powerful ideology. It is true that disruption advances technology. It is true that it leads to better goods and services. It is true that everyone wins when the state of the art advances.

What is left out of this truth, though, is that for the disrupters to profit from their innovation, they must capture the value they create through monopoly. They need to raise barriers to keep people from following them, or else they will see no return on their investment. So, everyone else does win — as consumers. But they lose as producers. They are the new outsiders, shut out of the new economy.

A disrupted world is a world of constant fear. The faster technology advances, the faster the new monopoly becomes the old monopoly. The only security in a disruptive world is to constantly be disrupting, to innovate faster than your competitors can. Disruption, therefore, is elitist. The subtext of disruption is always, “I am smarter and therefore more worthy.”

Every elitist claim conceals a fear of its opposite. What if you aren’t actually smarter? What if the other guy is? This fear is what drives the social universe of Silicon Valley. This is why successful CEOs are hero-worshipped, and why people flock from trend to trend, hopelessly trying to reverse-engineer success. It’s the continual anxiety of the perpetually lost, trying to find their way over to the right side of history before it’s too late.

I can’t hate disrupters for wanting to move from the outside to the inside. I can’t hate them for raging against the existing monopolies, for deploring their stasis, their complacency, the coercion and lies necessary to maintain them. Who doesn’t want to be an insider? Who doesn’t want to feel secure?

And yet disruption doesn’t really offer security. It only offers further violence, the new against the old, the new becoming the old, the new new against the new old.

Is there a better way? Is there a third alternative to the stasis of entrenched monopoly versus the violence of new monopoly?

Yes. Yes, there is. There is a currency out there that is not zero-sum at all, that is not based on fear, that does not rely on insiders versus outsiders.

We can state the imperative of monopoly as: seek power for yourself.

And the imperative that can defeat monopoly? Seek power for others.

Seeking power for others, empowerment, means working to increase the effectiveness of our neighbors in the world. It means working to put more material resources into their hands. It means sharing technologies with them. It means helping them be happy and psychologically whole.

This kind of empowerment is not feel-good charity. It’s money in the bank, for yourself. When shit hits the fan and it’s you who needs food, medicine, shelter, the absolutely best resource you can have is a network of empowered people who feel that they owe you one. It’s much easier, in fact, to wipe out a bank balance than it is to wipe out an empowered social network.

What does empowerment look like in practice? In practice, it looks very similar to capitalism. Like a capitalist, you understand what the people around you want (and you also pay attention to what they need). Like a capitalist, you provide goods and services that meet those needs. Sometimes people pay you for those goods and services. Sometimes they sponsor you to provide them, a la Kickstarter. And sometimes you just give them away for free.

The difference is primarily one of ends, not means. You can use existing channels of capitalism or democracy. The difference is that the goal of every transaction is first and foremost for the other person to gain in terms of power, and secondarily for you to get what you need to keep transacting. It’s the difference between keeping prices as low as you can afford, rather than as high as you can get away with. It’s the difference between giving people resources versus feeding people’s addictions.

Empowering others is not about overthrowing capitalism, it’s about building on top of it, of playing the free markets by different rules.

Empowering others is practical strategy. Empowerment can go head-to-head against self-interest and win. Self-interest is a short-term game. It’s extractive; you build your monopoly, and you use it to milk the people around you dry. You gain resources, but lose network. Conversely, empowerment doesn’t gain you as many resources up front, but it creates compounding interest as the people you empower are more able to empower other people, causing the network as a whole to gain value exponentially.

More and more people are choosing this new game. Outside of the world of venture-backed disruption, entrepreneurs are increasingly rejecting the premise that companies exist to provide return on investment, in favor of the premise that companies exist to create a social good. Within the existing corporate order, the rise of the B Corporation provides a legal framework for companies to put social goals ahead of shareholder profits. Entire ecosystems such as the open source community have been built around freely giving. Those are examples from my own experience; I’m sure there are many others across the world.

Values are ultimately practical; they are rules of thumb for navigating complex environments. Values that don’t preserve the well-being of their adherents don’t survive. The values of capitalism, namely self-interested wealth-seeking, have had an enormously successful run. They have turned the world inside out over and over again. And they’ve left a compounding pile of messy disaster in their wake. It’s time to move on. The institutions and accomplishments of capitalism will likely continue in a recognizable form, but the time for self-interest is over. Survival in the new economy is about networked, cooperative power. Let’s embrace that, and transform the future into something to anticipate, not something to fear.

Written by jphaas

November 24th, 2013 at 6:28 pm

Posted in Uncategorized

A Theory of Agency

without comments

One of the traditional contrasts drawn between Eastern and Western thought is that Eastern thought focuses on acceptance of reality as it is, whereas Western thought focuses on progress to make reality better.

This contrast has always bothered me, because to me it feels like I’d want both! I’m a believer in progress and changing the world, but I also think it’s very important to live in the now and accept reality as it is.

I’ve always been interested in a unified theory of psychological growth that explains the role of both expanding horizons and goals, as well as deeper acceptance of reality.

I’ve been thinking about this problem again lately, and I have a theory now that I’d like to share.

The theory maps out the lifecycle of psychological growth of an agent. By an agent, I mean an intelligent, goal-directed being like a human (or an AI if we ever figure out how to program one).

Being an agent, to me, means having a mental model of the world, and having the tendency to act in ways that bring the world into alignment with certain features of the model (i.e., goals).

Psychological growth means developing a richer, more effective model of the world; it also correlates with basic human values like becoming more loving and more happy. So, this can be regarded as an exercise in amateur developmental psychology — how does a baby grow up, and progress into happy, functional adulthood?

Unlike other models such as Piaget’s stages of cognitive development, I’m less interested in what understanding is acquired when, and more interested in what the basic process of acquiring understanding looks like. In my examples I use a baby since babies’ lives are simpler, but I see this process as continuing all the way through a person’s old age as long as that person continues to grow.

Below, I’ll present the model — the fun part — and then below that, some more notes on what I think an agent is (a little less interesting, but useful for getting a richer understanding of the theory).

This is totally speculation, but it’s fun speculation, and it correlates pretty well to my personal observations of what growth feels like. If I ever try to program an AI, I’ll be keeping this theory in mind. Read the rest of this entry »

Written by jphaas

November 18th, 2013 at 4:19 pm

Posted in Uncategorized

Utopian revolution? What does it look like?

with 2 comments

Albert Wenger from Union Square Ventures wrote a blog post in favor of a utopian revolution that I’ve been thinking a lot about the last couple days.

To summarize, his point is basically: a) the current political and economic system isn’t working, b) we don’t have a concrete vision of what to replace it with, c) let’s aim big (total eradication of poverty while living in harmony with the environment on a global scale), because d) we now have the technology to pull it off over the next few generations.

His suggested approach is along the lines of guaranteeing income and internet access for everyone, and decentralizing governmental power to cities, which he’s putting out as a vague roadmap to be fleshed out through continued conversation and research.

My initial reaction was intense enthusiasm, because I generally agree that things aren’t working and that incremental change isn’t going to get there, and because I’m a believer in big ambitious crazy goals.

I’m still positive, but I’ve been thinking about what’s really wrong with the status quo, why it’s intractable, and what a solution would have to look like.

My basic sense is that high-tech, highly-networked capitalism is structurally flawed. The problem is that there is too much competition for too few ecological niches. As transportation and communication technologies improve, markets move from local to global, letting a single player provide for the entire system.

For instance, in an agrarian economy, Read the rest of this entry »

Written by jphaas

November 1st, 2013 at 9:46 pm

Posted in Uncategorized

Socially-approved ways of processing experience

without comments

The title of this blog post has been repeating in my head for the last couple minutes.

I just read a rant by the comedian Russell Brand (who is kind of my hero) about how the Western political / economic system needs to be overthrown. I basically agree with him that modern capitalism is broken because it doesn’t serve the good of the whole, and tends to lead to lowered rather than raised consciousness. But I get worried by the progressive / socialist program for change, too, because it seems to be about replacing freedom (which is the one thing that capitalism really gets right) with coercion. I think Russell’s with me, insofar as he goes after progressives for not having a sense of humor (it’s hard to be repressive if you have a sense of humor). But still it’s hard when the counter-culture agenda is anti-freedom / pro-conformity.

My feelings on this can be summed up by my all-time favorite political quote, “You’d better free your mind instead”.

So okay. Where I agree with progressives is on the notion of “consciousness”; i.e., that there’s a differentiable spectrum in the quality of human experience ranging from mental slavery / addiction to love / transcendence / freedom, and that this is a variable that belongs in the realm of social discourse. This is in stark contrast to classic liberal thought (liberal in the founding-fathers way, not the Democrats way) where the basic units of social / political existence are the (white male landholding) enfranchised individuals, who are all “created equal” and act / vote with autonomy. Economics, which as an academic discipline is in many ways the intellectual heir of this line of thinking, has its concept of the rational actor… the idea that humans don’t act as rational agents is a new exciting development for economics, and I think the concept that humans might be psychologically different from each other, or even more radically, that the psychological profile of someone can change in response to personal growth and transformation, seems to be outside the academic pale.

What I would like to see is a unification of these strains of thought. Read the rest of this entry »

Written by jphaas

October 26th, 2013 at 5:42 pm

Posted in Uncategorized

New toy… building a meditation feedback loop!

with 5 comments

I have a little side project right now, which is: use a brainwave monitoring device to make an automated meditation trainer.

I bought one of these:

NeuroSky Mindwave

It’s a NeuroSky Mindwave — it sits on your forehead and reads your EEG. There are some really scary stock photos of people wearing it — I hope I don’t look like any of those guys!!

Anyway, it’s actually super comfortable to wear and, at least as far as I can tell, doesn’t give you creepy stock photo model staring syndrome.

The headset connects over Bluetooth to either your laptop or your smartphone, and there are a number of apps you can buy that go along with it: games that you can control with your thoughts, various brain “training” apps, etc. My main interest is writing my own software for it, so although I experimented with some of the apps to see what’s out there and what’s possible, I’ve been mostly writing my own code in Python.

I’m interested in this from two angles: the technology itself, and applications to meditation. I’m interested in the technology because I think brain-computer interfaces are going to become a big deal in the next few decades, and there are lots of exciting possibilities… if you can control computers directly with your mind, communicating with them might become way more fluid than the relatively clumsy mediums of touch / typing / mice. And I’m interested in meditation; I’ve been practicing various forms of it off and on for the last eight years, and I think it’s a tremendously valuable tool for living a good and happy life. I might go more into why I care about meditation in a later post, but a really good resource for learning about it is Full Catastrophe Living by Jon Kabat-Zinn, who was one of the pioneers of using meditation in an evidence-based clinical setting.

Here’s the theory behind my project:

  • Meditation is a learnable skill that leads to specific outcomes (improved emotional control and mental clarity)
  • Like all learnable skills, the way to mastery is practice-with-feedback
  • Unlike most skills, it is hard for an expert to give feedback, since she can’t observe the pupil’s efforts directly
  • Therefore, meditation is uniquely challenging to learn
  • …and therefore, a mechanism that actually gives clear feedback could lead to a revolutionary increase in ease of learning!

So that’s the goal: make it orders of magnitude easier to become skilled at meditation. Right now, becoming good at meditation is quite hard; it takes a pretty big investment of willpower and time, there are a lot of dead ends, you have to be careful about which teachers you listen to, and you can go for years without really knowing whether you’re making forward progress. I think this is sad, because meditation is a skill that would make the world profoundly better if more people had it, so making it significantly easier to learn would be a big win!

Side note: I should acknowledge that I’m discussing meditation in a highly instrumental way — as a means for improving the quality of life / thinking. Arguably the entire practice of meditation is about not thinking instrumentally, but about treating it as an end in itself. This is a larger discussion that I’ll save for some other time — suffice it to say that I know what I’m saying about the value of meditation might be construed by practitioners as totally missing the point… I agree and think the “end in itself” perspective is highly important to what meditation is, but for now let’s pretend that it’s valuable purely as a life-improvement tool.

Anyway, for my project to be successful, a few things have to be true:

  1. Meditation needs to be detectable via EEG patterns
  2. Those patterns need to be sufficiently coarse-grained that cheap consumer products like the Mindwave can pick them up
  3. The easiest path to generating those patterns needs to be genuine meditation

There’s a lot of research confirming point 1 — I don’t think that’s controversial. After playing with the Mindwave for a week, I’m fairly sure point 2 is true as well. Point 3 is more of an open question: it gets to whether or not you can fool the feedback mechanism. To be useful, the feedback doesn’t have to be perfect, but it can’t be systematically biasing you toward some kind of mental activity that’s not meditation. I’m less sure about point 3, and I don’t think I’ll be able to tell until the experiment has run for a while.

My Python script monitors the level of meditation using the proprietary “meditation” metric developed by the Mindwave people. The Mindwave reports a raw feed of eight different brainwave frequency bands, plus two derived metrics, meditation and concentration. I don’t know what the secret sauce is that goes into the meditation metric… I could probably reverse-engineer it by tracking how it compares over time to the eight raw inputs, but I haven’t gotten around to it yet.
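For the curious, here’s a rough sketch of what reading that metric can look like. This is not the author’s code: it assumes the Mindwave’s desktop connector software streaming newline-delimited JSON packets over a local socket, and the field names (`eSense`, `meditation`) are my recollection of that format, so treat them as assumptions.

```python
import json

def parse_meditation(line):
    """Pull the 0-100 'meditation' value out of one JSON packet,
    or return None if this packet doesn't carry it."""
    try:
        packet = json.loads(line)
    except ValueError:
        return None
    return packet.get("eSense", {}).get("meditation")

# A packet shaped like the connector's hypothetical JSON output:
sample = '{"eSense": {"attention": 53, "meditation": 61}, "poorSignalLevel": 0}'
print(parse_meditation(sample))  # 61
```

In a real session loop you’d read lines off the socket and feed each one through a parser like this, ignoring the packets (raw EEG, signal quality) that don’t carry the meditation value.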

For now, I’ve decided to go with the “meditation” metric on the grounds that reproducing Mindwave’s work would take a lot of time, so I might as well use their work on isolating “meditation” as a starting point. If I start to feel like it’s not quite right as a metric (i.e., subjectively, through repeated meditation sessions, I feel like it’s giving me bad feedback), I may revisit this decision. For now, the feedback feels pretty good, but I’m out of practice meditating, so I’m still at pretty shallow levels… the real test will come once I’m getting into deeper meditations: does the feedback still feel useful, or does it start to feel off-track?

The way my Python script works is that it plays a tone if the measured level of meditation is above a certain threshold. The tone gets louder the further above the threshold you are. Each time you run the script, you have a goal for the number of seconds above the threshold, and a time limit to do it in. The session ends when you hit the time goal or the time limit, and if you hit the time goal, all three numbers — the time goal, the threshold, and the time limit — increase for next time. It starts easy — you have a six-minute time limit to meditate at a 40/100 level for at least two minutes — and it’s geared so that after 60 successful sessions, you have a 60-minute time limit, a 90/100 threshold, and a 45-minute time goal. I.e., I roughly want it to take about three months (assuming you do one session a day and achieve the goal two thirds of the time) to develop an extremely deep daily meditation habit — which seems aggressive but achievable.
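To make those mechanics concrete, here’s a minimal sketch of the session logic as I read that description. The linear ramp between session 1 (40/100 threshold, 2-minute goal, 6-minute limit) and session 60 (90/100, 45 minutes, 60 minutes) is my guess at the progression formula, and the function names are hypothetical:

```python
def session_params(n, max_sessions=60):
    """Return (threshold, goal_seconds, limit_seconds) for session n (1-based),
    interpolating linearly from session 1 to session max_sessions."""
    t = min(n - 1, max_sessions - 1) / (max_sessions - 1)
    threshold = round(40 + t * (90 - 40))          # 40/100 -> 90/100
    goal = round((2 + t * (45 - 2)) * 60)          # 2 min  -> 45 min, in seconds
    limit = round((6 + t * (60 - 6)) * 60)         # 6 min  -> 60 min, in seconds
    return threshold, goal, limit

def run_session(readings, threshold, goal, limit):
    """readings is an iterable of per-second meditation values (0-100).
    Succeed if `goal` seconds above `threshold` accumulate within `limit` seconds."""
    above = 0
    for second, value in enumerate(readings, start=1):
        if second > limit:
            break
        if value >= threshold:
            above += 1
            if above >= goal:
                return True
    return False

print(session_params(1))  # (40, 120, 360): 40/100 threshold, 2-min goal, 6-min limit
```

The actual script would of course pull the per-second values off the headset in real time rather than from a list.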

I’ve tweaked the formula a bit over the last week, but I now think it’s stable enough that I’m resetting myself to the first session and working my way through it. So far I’m enjoying it — it’s much easier for me to meditate regularly with feedback than it is when I’m just going for a preset amount of time or listening to a guided meditation. The real test will be if I’m able to keep achieving the increasingly-difficult goals, and whether that achievement corresponds with increasingly deep meditations.

The source code is a little messy right now, but if there’s interest in it I can clean it up and put it on GitHub. I’m currently building on top of an existing library to communicate with the headset. (Technically, the hardest part of this whole thing was figuring out how to generate the variable sound tones… that’s a blog post for another day!)
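Since the tone-generation post hasn’t been written yet, here’s just one stdlib-only way the variable-volume tone could work — synthesize signed 16-bit sine samples scaled by how far the reading sits above the threshold, then hand them to whatever audio backend you have. This is an assumption about the approach, not the author’s actual solution:

```python
import math
import array

RATE = 44100  # samples per second

def feedback_volume(meditation, threshold):
    """Silent at or below the threshold; louder the further above it."""
    if meditation <= threshold:
        return 0.0
    return min(1.0, (meditation - threshold) / (100 - threshold))

def tone_samples(freq_hz, volume, duration_s=0.25):
    """Signed 16-bit PCM sine samples; feed these to any audio backend."""
    amp = int(32767 * max(0.0, min(1.0, volume)))
    n = int(RATE * duration_s)
    return array.array("h", (
        int(amp * math.sin(2 * math.pi * freq_hz * i / RATE))
        for i in range(n)
    ))

# A 70/100 meditation reading against a 40 threshold plays at half volume:
print(feedback_volume(70, 40))  # 0.5
```

Writing raw PCM to the sound card in a smooth, low-latency way is exactly the part the post calls hard, so the interesting engineering presumably lives downstream of a function like `tone_samples`.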

I’m very interested in feedback from other people who are familiar with this problem space — has anyone tried to do something similar? What’s worked / what hasn’t worked?

Written by jphaas

October 14th, 2013 at 1:12 am

Posted in Uncategorized