Archive for October, 2013
Socially-approved ways of processing experience
The title of this blog post has been repeating in my head for the last couple minutes.
I just read a rant by the comedian Russell Brand (who is kind of my hero) about how the Western political / economic system needs to be overthrown. I basically agree with him that modern capitalism is broken because it doesn’t serve the good of the whole, and tends to lead to lowered rather than raised consciousness. But I get worried by the progressive / socialist program for change, too, because it seems to be about replacing freedom (which is the one thing that capitalism really gets right) with coercion. I think Russell’s with me, insofar as he goes after progressives for not having a sense of humor (it’s hard to be repressive if you have a sense of humor). But still it’s hard when the counter-culture agenda is anti-freedom / pro-conformity.
My feelings on this can be summed up by my all-time favorite political quote, “You’d better free your mind instead”.
So okay. Where I agree with progressives is on the notion of “consciousness”; i.e., that there’s a discernible spectrum in the quality of human experience ranging from mental slavery / addiction to love / transcendence / freedom, and that this is a variable that belongs in the realm of social discourse. This is in stark contrast to classic liberal thought (liberal in the founding-fathers way, not the Democrats way), where the basic unit of social / political existence is the (white male landholding) enfranchised individual, all “created equal” and acting / voting with autonomy. Economics, which as an academic discipline is in many ways the intellectual heir of this line of thinking, has its concept of the rational actor… the idea that humans don’t act as rational agents is an exciting new development for economics, and the concept that humans might be psychologically different from each other, or even more radically, that the psychological profile of someone can change in response to personal growth and transformation, seems to be outside the academic pale.
What I would like to see is a unification of these strains of thought.
New toy… building a meditation feedback loop!
I have a little side project right now, which is: use a brainwave monitoring device to make an automated meditation trainer.
I bought one of these:
It’s a Neurosky Mindwave — it sits on your forehead and reads your EEG signal. Here are some really scary stock photos of people wearing it — I hope I don’t look like any of these guys!!
Anyway, it’s actually super comfortable to wear and, at least as far as I can tell, doesn’t give you creepy stock photo model staring syndrome.
The headset connects over Bluetooth to either your laptop or your smartphone, and there are a number of apps you can buy that go along with it: games that you can control with your thoughts, various brain “training” apps, etc. My main interest is writing my own software for it, so although I experimented with some of the apps to see what’s out there and what’s possible, I’ve been mostly writing my own code in Python.
I’m interested in this from two angles: the technology itself, and applications to meditation. I’m interested in the technology because I think brain-computer interfaces are going to become a big deal in the next few decades, and there are lots of exciting possibilities… if you can control computers directly with your mind, communicating with them might become way more fluid than the relatively clumsy mediums of touch / typing / mice. And I’m interested in meditation; I’ve been practicing various forms of it off and on for the last eight years, and I think it’s a tremendously valuable tool for living a good and happy life. I might go more into why I care about meditation in a later post, but a really good resource for learning about it is Full Catastrophe Living by Jon Kabat-Zinn, who was one of the pioneers of using meditation in an evidence-based clinical setting.
Here’s the theory behind my project:
- Meditation is a learnable skill that leads to specific outcomes (improved emotional control and mental clarity)
- Like all learnable skills, the way to mastery is practice-with-feedback
- Unlike most skills, it is hard for an expert to give feedback, since she can’t observe the pupil’s efforts directly
- Therefore, meditation is uniquely challenging to learn
- …and therefore, a mechanism that actually gives clear feedback could lead to a revolutionary increase in ease of learning!
So that’s the goal: make it orders-of-magnitude easier to become skilled at meditation. Right now, becoming good at meditation is quite hard; it takes a pretty big investment of willpower and time, there are a lot of dead ends, you have to be careful about what teachers you listen to, and you can go for years without really knowing if you’re making forward progress. I think this is sad, because meditation is a skill that would make the world profoundly better if more people had it, so making it significantly easier to learn would be a big win!
Side note: I should acknowledge that I’m discussing meditation in a highly instrumental way — as a means for improving the quality of life / thinking. Arguably the entire practice of meditation is about not thinking instrumentally, but about treating it as an end in itself. This is a larger discussion that I’ll save for some other time — suffice it to say that I know what I’m saying about the value of meditation might be construed by practitioners as totally missing the point… I agree and think the “end in itself” perspective is highly important to what meditation is, but for now let’s pretend that it’s valuable purely as a life-improvement tool.
Anyway, for my project to be successful, a few things have to be true:
- Meditation needs to be detectable via EEG patterns
- Those patterns need to be sufficiently coarse-grained that cheap consumer products like the Mindwave can pick them up
- The easiest path to generating those patterns needs to be genuine meditation
There’s a lot of research confirming point 1 — I don’t think that’s controversial. After playing with the Mindwave for a week, I’m fairly sure point 2 is true as well. Point 3 is more of a question — it gets at whether or not you can fool the feedback mechanism. To be useful, I don’t think the feedback has to be perfect, but it can’t be systematically biasing you toward some kind of mental activity that’s not meditation. That’s the part I’m least sure about, and I’m not sure I’ll be able to tell until the experiment has continued for a while.
My Python script monitors the level of meditation using the proprietary “meditation” metric developed by the Mindwave people. The Mindwave reports a raw feed of eight different brainwave frequency bands, plus two derived metrics, meditation and concentration. I don’t know what secret sauce goes into the meditation metric… I could probably reverse engineer it by tracking how it compares over time to the eight raw inputs, but I haven’t gotten around to it yet.
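To make that concrete, here’s a rough sketch of what that comparison logging could look like. The read_meditation and read_band_powers functions are hypothetical placeholders for however the headset library exposes the data stream (I talk to the headset through the mindwave-python library linked below), and the eight band names are the ones NeuroSky documents, as I understand them.

```python
import csv
import time

# Placeholders for however the headset library exposes its data stream --
# these are hypothetical names, not the real mindwave-python API.
def read_meditation():
    """Return the derived 'meditation' value (0-100)."""
    raise NotImplementedError

def read_band_powers():
    """Return a dict of the eight raw band powers, keyed by band name."""
    raise NotImplementedError

# The eight bands the headset reports, as I understand NeuroSky's docs.
BANDS = ["delta", "theta", "low-alpha", "high-alpha",
         "low-beta", "high-beta", "low-gamma", "mid-gamma"]

def log_session(path="session_log.csv", seconds=300):
    """Write one row per second: elapsed time, the derived metric, then the
    raw bands, so the metric can be compared against the raw inputs later."""
    with open(path, "w") as f:
        writer = csv.writer(f)
        writer.writerow(["seconds", "meditation"] + BANDS)
        start = time.time()
        while time.time() - start < seconds:
            powers = read_band_powers()
            row = [round(time.time() - start, 1), read_meditation()]
            writer.writerow(row + [powers.get(band) for band in BANDS])
            time.sleep(1)
```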
For now, I’ve decided to go with the “meditation” metric on the grounds that reproducing Mindwave’s work would take a lot of time, so I might as well use their work on isolating “meditation” as a starting point. If I start to feel like it’s not quite right as a metric (i.e., subjectively, through repeated meditation sessions, I feel like it’s giving me bad feedback), I may revisit this decision. So far the feedback feels pretty good, but I’m out of practice meditating so I’m still at pretty shallow levels… the real test will be whether the feedback still feels useful once I’m getting into deeper meditations, or whether it starts to feel off-track.
The way my Python script works is that it plays a tone if the measured level of meditation is above a certain threshold. The tone gets louder the further above the threshold you are. Each time you run the script, you have a goal for the number of seconds above the threshold, and a time limit to do it in. The session ends when you hit the time goal or the time limit, and if you hit the time goal, it increases all three numbers — the time goal, the threshold, and the time limit — for next time. It starts easy — you have a six minute time limit to meditate at a 40/100 level for at least two minutes — and it’s geared so that after 60 successful sessions, you have a 60 minute time limit, a 90/100 threshold, and a 45 minute time goal. In other words, I roughly want it to take about three months (assuming you do one session a day and achieve the goal two thirds of the time) to develop an extremely deep daily meditation habit — which seems aggressive but achievable.
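Here’s a stripped-down sketch of that session logic, just to make the mechanics concrete. The read_meditation and play_tone helpers are hypothetical stand-ins for the headset and audio pieces, and the linear ramp from the starting numbers to the 60-session targets is only an illustration of the gearing, not necessarily the exact formula my script uses (which, as I mention below, I’ve been tweaking).

```python
import time

# Hypothetical stand-ins for the headset reading and the tone generator.
def read_meditation():
    """Return the derived 'meditation' value (0-100)."""
    raise NotImplementedError

def play_tone(volume):
    """Play the feedback tone at the given volume in [0, 1]; 0 means silence."""
    raise NotImplementedError

def session_settings(successes):
    """Ramp linearly from (2 min goal, 40 threshold, 6 min limit) at zero
    successes to (45 min goal, 90 threshold, 60 min limit) after 60."""
    frac = min(successes, 60) / 60.0
    goal_secs = (2 + frac * (45 - 2)) * 60
    threshold = 40 + frac * (90 - 40)
    limit_secs = (6 + frac * (60 - 6)) * 60
    return goal_secs, threshold, limit_secs

def run_session(successes):
    goal_secs, threshold, limit_secs = session_settings(successes)
    start, time_above = time.time(), 0
    # End the session at the time goal or the time limit, whichever comes
    # first. Each loop iteration is roughly one second of meditation.
    while time.time() - start < limit_secs and time_above < goal_secs:
        level = read_meditation()
        if level > threshold:
            time_above += 1
            # Louder the further above the threshold you are.
            play_tone((level - threshold) / (100.0 - threshold))
        else:
            play_tone(0)
        time.sleep(1)
    return time_above >= goal_secs   # success unlocks the next difficulty step
```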
I’ve tweaked the formula a bit over the last week, but I now think it’s stable enough that I’m resetting myself to the first session and working my way through it. So far I’m enjoying it — it’s much easier for me to meditate regularly with feedback than it is when I’m just going for a preset amount of time or listening to a guided meditation. The real test will be if I’m able to keep achieving the increasingly-difficult goals, and whether that achievement corresponds with increasingly deep meditations.
The source code is a little messy right now, but if there’s interest in it I can clean it up and put it on GitHub. I’m currently building on top of https://github.com/BarkleyUS/mindwave-python to communicate with the headset. (Technically, the hardest part of this whole thing was figuring out how to generate the variable sound tones… that’s a blog post for another day!)
I’m very interested in feedback from other people who are familiar with this problem space — has anyone tried to do something similar? What’s worked / what hasn’t worked?
Quick idea: Civilizing comment threads
I find reading the comments on online articles, or things like Hacker News, like watching a car wreck: hard to look away from, but painful. There are a lot of technical ideas that have been tried for keeping comments okay, such as up / down voting, threading, reputation scores, etc. Some seem to work to varying degrees. Still, it’s hard to have a good conversation on the internet about sensitive / emotionally charged / political issues without things degenerating into ad hominem attacks and verbal abuse.
Technological solutions aside, I wonder if it’s possible to create quality communication on a thread-by-thread basis by re-introducing a lot of the social interactions that prevent in-person conversations from degenerating as rapidly? Like, what would happen if you responded to a really aggressive, name-calling comment with something like “Hi, I’m Josh, nice to meet you. I think you’re upset because… If I understand you correctly, your concern is…”.
I don’t really see this happen very often — even highly thoughtful comments tend to be written without any effort to address the person they are responding to at a personal level and make sure that they’re repeating and understanding the other person’s position. That’s in part because that kind of repetition is long-winded and slow, kind of the opposite of the internet comment medium, which tends to be very short and to the point. But if rapid-fire exchanges lead to increasing hostility and a lack of actual communication, maybe it’s worth deliberately being a little inefficient to try to slow things down.
The main examples of online communities that don’t break down are ones with a small group of regulars who all know each other. For instance, Fred Wilson’s blog A VC has a remarkably civil comment section, likely because it’s the same people over and over again. So my idea would be to try to create that kind of intimacy in broader forums, like on Twitter (hard because of the character limit!) or on Hacker News.
Anyway, not sure if I’ll do anything with this idea any time soon, since I tend not to comment that much anyway. Writing it down in case I want to revisit it someday…