Collapsing the Wave Function
To celebrate breaking-the-speed-of-light day, here’s my own crazy physics theory. This is definitely in the space of spaced-out 3 am thoughts, and actually proving or disproving it would be pretty hard as far as I can tell. But it has a recurring plausibility and consistency to me, such that I can’t quite get it out of my head. So here goes…
Background
I’m going to assume you more-or-less know what quantum mechanics and general relativity are. Basically, they are two competing sets of mathematical equations for describing how the world behaves. They both make mathematical predictions about what things will do, and in practice, the predictions made by quantum mechanics prove highly accurate at subatomic scales and the predictions made by general relativity prove highly accurate at human and cosmic scales.
The two theories are incompatible. The predictions made by general relativity differ from the predictions made by quantum mechanics. Somewhere between the scale of tiny subatomic particles and the scale of things we can see, the world stops working the way quantum mechanics predicts and starts working the way general relativity predicts.
One of the holy grails of physics is a unified “theory of everything”: a new mathematical model that predicts both microscopic and macroscopic behavior. Presumably, it would look like a system of equations that, when you input tiny values, simplifies to look like the equations of quantum mechanics, and when you input large values, simplifies to look like the equations of general relativity. It would explain where the cutoff between the two is, and why. Although there’s been work done in this direction, like string theory research, no one has really figured it out.
Quantum mechanics describes the world in terms of probability waves. Rather than describing the trajectory of a particle, it describes the field of probable trajectories. The crazy thing about QM is that it predicts — and experiments observe — that particles actually interact with each other as if they were waves: a single particle, which common sense tells us is only at one place at one time, “collides” with a single other particle to form interference patterns that look like waves of particles colliding and bouncing off each other. They form ripples. Read up on the double-slit experiment if this isn’t making sense (which it really doesn’t, unless you think very strangely).
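To make the “waves, not points” idea concrete, here’s a toy numerical sketch of the double-slit setup. All the numbers (wavenumber, slit separation, screen distance) are made up for illustration; the point is only that in QM you add the complex amplitudes from the two slits *before* squaring, which produces an interference cross term that naive probability-adding misses.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)  # position along the detection screen
L = 50.0                        # slit-to-screen distance (arbitrary units)
d = 1.0                         # slits sit at +d and -d
k = 20.0                        # wavenumber (arbitrary)

r1 = np.sqrt(L**2 + (x - d)**2)  # path length from slit 1 to each screen point
r2 = np.sqrt(L**2 + (x + d)**2)  # path length from slit 2
psi1 = np.exp(1j * k * r1) / r1  # spherical-wave amplitude from slit 1
psi2 = np.exp(1j * k * r2) / r2  # amplitude from slit 2

both_open = np.abs(psi1 + psi2) ** 2                # QM: add amplitudes, then square
classical = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # naive: add probabilities

# The quantum pattern oscillates around the naive one: bright fringes
# reach roughly double the naive intensity, dark fringes drop to near zero.
print(both_open.max() / classical.max())  # close to 2 (constructive peak)
print(both_open.min())                    # near 0 (destructive null)
```

The ripples in `both_open` are exactly the wave-like collision pattern described above; `classical` is what you’d see if each particle really went through just one slit.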
When we look at how objects interact with each other, we see discrete objects, not waves of probabilities: we see a chair in one place, not a diffusion of possible places the chair could be. Under the Copenhagen interpretation of quantum mechanics, the most mainstream interpretation, at some point the quantum probability function “collapses” into one possible state. (In other interpretations, instead of the world collapsing into one state, it splits into multiple worlds, each corresponding to a different possible outcome, and our consciousness, for whatever reason, ends up in one of them.)
The Copenhagen interpretation says that the collapse occurs when a system is “measured”. The confusing thing is that no one really knows what that means. There isn’t a precise scientific definition of measurement. Some people think it’s a scale thing: when a system gets sufficiently large, it collapses. Other people speculate that it is human consciousness that causes the collapse, which is puzzling because “consciousness” isn’t a strictly defined physics term.
In quantum theory, when two particle waves interact, their probability waves become “entangled”. What this means is that you can’t describe the probable state of one particle without describing the probable state of the other. When you measure the state of one of them, you can accurately predict subsequent measurements of the other. It’s as though there becomes only one version of reality between the two particles: it’s not flipping two coins to decide how they will collapse, it’s only flipping one coin. Entanglement spreads virally: as more and more particles interact with each other, they all become entangled, their probability waves caught up in each other. This continues endlessly until there is a measurement and a collapse.
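The “one coin flip, not two” picture can be sketched numerically. Below is a toy simulation of measuring a maximally entangled pair (the Bell state (|00⟩ + |11⟩)/√2): sampling joint outcomes by the Born rule shows that each particle alone looks perfectly random, yet the two always agree. This is just an illustrative sketch of the statistics, not a full quantum simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Amplitudes of the Bell state over the four joint outcomes 00, 01, 10, 11.
amps = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(amps) ** 2        # Born rule: [0.5, 0, 0, 0.5]

n = 10_000
joint = rng.choice(4, size=n, p=probs)  # sample joint measurement outcomes
a = joint // 2                          # particle A's measured bit
b = joint % 2                           # particle B's measured bit

# Each particle alone looks like a fair coin...
print(a.mean(), b.mean())
# ...but the two results always agree: one coin flip shared by two particles.
print(np.all(a == b))  # True
```

Only outcomes 00 and 11 ever occur, so knowing A’s result lets you predict B’s with certainty — exactly the “only one coin” behavior described above.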
Because entangled particles assume a definite state consistent with one another, it is very hard to experimentally determine when measurement occurs. Hypothetically, the entire world could exist as a set of entangled probabilities right up until the point that it’s observed by you, yourself: you have no way of telling whether or not the collapse occurs at the point that you see things or at some prior point.
So what we have is this really weird situation, where we have two descriptions of the way the universe works, the description of quantum mechanics and the description that we see all around us. They both allow accurate predictions in different circumstances, and at some point one transitions into the other. But when, and how, and why are all unknown. How does “measuring” something cause collapse? Is consciousness involved? It’s a really weird mystery.
My Hypothesis
Imagine a situation where you have particles bouncing off each other in a loop. Particle A hits particle B, which hits C, then D, then back to A again. (There are magnetic walls or something that keep them moving in a circle.) Each iteration around the circle, they create a slightly different pattern with their movements, the output of particle D becoming the input of particle A and so on in a feedback loop.
This is a closed system without measurement, so each of our particles is behaving as a wave rather than as a discrete point. As the particles interact in a loop, the interaction between their probability waves becomes increasingly complex, because the probable location of D feeds into the new probable location of A, each time adding onto itself.
Imagine that we set up this circle in such a way (we might need more than four particles, and lots of magnetic fields, but let’s hypothesize that this is possible) that each time through the feedback loop, the space of possible positions of each particle shrinks. As there are more and more iterations around the feedback loop, we become increasingly certain of each particle’s location. For analogy, when you have a chunk of uranium, you don’t know what percentage of its atoms will have decayed at any given point in time, but as time approaches infinity, the amount approaches 100%. Likewise, you won’t ever know where any of the particles are, but as the number of iterations around the circle approaches infinity, our uncertainty over their locations approaches zero.
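As a purely numerical illustration of this kind of convergence (the shrink factor below is an invented number, not derived from any real physical system): suppose each trip around the loop multiplies the spread of a particle’s position by a constant factor less than one. The spread never reaches zero, but it gets arbitrarily close, just as the undecayed fraction of the uranium approaches zero over time.

```python
spread = 1.0   # initial position uncertainty (arbitrary units)
r = 0.9        # assumed shrink factor per trip around the loop (made up)

history = []
for _ in range(200):
    spread *= r        # one iteration of the feedback loop
    history.append(spread)

print(history[9])   # after 10 loops: 0.9**10, about 0.35
print(history[-1])  # after 200 loops: about 7e-10, never exactly zero
```

Geometric shrinkage is the simplest case; any schedule whose product goes to zero would give the same qualitative picture.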
In this little system we’ve imagined, the wave function never “collapses”, per se, but the system increasingly behaves like the particles are actually particles instead of waves.
Speculation 1 is that a system such as this is possible. To test this speculation we’d either need to design a system that we can mathematically prove behaves like this, or prove that given what we know about physics no such system is possible.
Speculation 2 is that such a system might be capable of computation. In other words, by arranging the particles in certain ways, you can have them interact such that they end up in an arrangement that represents a solution to a computable function. Per the Church-Turing thesis, all computing systems are essentially equivalent: any programmable system capable of computing a certain basic set of things is capable of computing anything that can be computed. Something is “Turing equivalent” if it can compute the same set of things that a Turing machine can; desktop computers, for instance, are Turing equivalent. You can build Turing-equivalent systems out of very simple sets of interacting things: the famous example is “rule 110”, a very simple cellular automaton popularized by Stephen Wolfram and proved Turing complete by Matthew Cook, whose trivial local rule gives rise to highly complex patterns capable of doing computation.
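To show just how simple rule 110 is, here’s a minimal simulation. Each cell’s next state depends only on the (left, self, right) triple, looked up in the binary expansion of the number 110; the grid width, step count, and wrap-around boundary are arbitrary choices for the sketch.

```python
RULE = 110  # 0b01101110: one bit per 3-cell neighborhood pattern

def step(cells):
    """Apply one rule-110 update to a list of 0/1 cells (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the neighborhood (left, self, right) as a 3-bit pattern 0..7.
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # The corresponding bit of 110 is the cell's next state.
        out.append((RULE >> pattern) & 1)
    return out

# Start from a single live cell and watch structure grow leftward.
width, steps = 64, 20
row = [0] * width
row[width // 2] = 1
for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

From a rule this small, Cook showed you can encode arbitrary computation — which is the kind of result Speculation 2 is banking on for the particle loop.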
Likewise, our particles are simple, but interact in ways that could possibly lead to computation. One of the major requirements for building a Turing-equivalent system is that there is some kind of feedback loop. So, our particles spinning around in a circle and bouncing off each other could quite possibly be a Turing machine. To test this speculation we would need to revisit our proof of speculation 1 and see if we can construct a system that both satisfies it and gives rise to computation.
Speculation 3 is that the human mind is a Turing machine doing an indefinitely long calculation (taking in input and spitting out output along the way), and that consciousness is the result of a feedback loop going in on itself over and over again. This is in line with what we know about how the brain works: our neurons interact with each other electrically, and there are feedback loops between them, creating a cycle of perpetual feedback. This is how humans give rise to truly unpredictable behavior. Turing proved that there is no general shortcut for determining what an arbitrary Turing machine will output; the only way is to actually run the calculation (so if the calculation never ends, you’ll never know for sure that it isn’t going to end). For an indefinitely long calculation, the output becomes indefinitely unpredictable; to figure out what a person is going to do 100% reliably, you’d actually have to reconstruct their brain.
Speculation 4 is that any feedback system capable of consciousness — able to form representations about itself and communicate about itself — has the characteristics of a system described by speculation 1: as the degree of feedback approaches infinity, the space of quantum probabilities of the state of the system approaches a single possibility. This speculation, I imagine, would unfortunately be extremely hard to test: this is really where my theory becomes a leap of faith. Below I’ll explain why I think it is a speculation worth entertaining.
When you put these four speculations together, what you get is this hypothesis: quantum mechanics is an accurate description of the way the world works, on both a micro and a macro scale. There is no such thing as wave function “collapse”. The entire universe is entangled with itself, and all there is are possible states of things. However, the existence of consciousness in the universe creates a feedback loop that causes a mathematical convergence of the field of possibilities. From our vantage points as conscious beings, we see a unified view of reality because the nature of our minds is such that the act of being aware (which I view as a highly rapid feedback loop, our mind’s engines spinning at thousands of revolutions per second) creates a convergence on one particular state. We never achieve perfect convergence, because we don’t self-reflect infinitely fast. However, we can get arbitrarily close to convergence. Moreover, the harder we look, the more aware we are, the greater the degree of convergence. So even though it never converges, we never catch it out as not converging, because it’s always just-converged-enough.
The appeal of this theory for me is in Occam’s razor. Although it is a little out there, I find it to be a really elegant explanation for the relationship between our minds and the universe. I have a hard time believing that there isn’t some connection between quantum mechanical collapse and human consciousness. It just seems so arbitrary that at a certain scale, all of a sudden, things start acting like objects instead of probabilities. Moreover, consciousness seems like a fundamental component in any explanation of reality, because everything physics describes we know about only because we are there to witness it as a conscious observer. However, on the flip side, consciousness seems to be a physical phenomenon: our brains obey the laws of physics as far as we can tell, and our minds seem to work on computational principles. And it doesn’t seem to be an all-or-nothing phenomenon: we can become increasingly conscious as we wake from sleep, for instance, and as we transition from embryo to baby to adult. So this thing, human consciousness, is central to everything we know about the physical world, is the only reason that we think things actually exist as definites rather than as probabilities, and yet from a physical point of view it’s not “special”, it’s just another physical phenomenon.
So, it would be cool to explain how this organic, piecemeal phenomenon we call consciousness could possibly have the “measurement” / “collapse” effect that we see when we observe quantum systems. My theory provides a (somewhat, probably really hard to do in practice) testable hypothesis for one way this could work. Moreover, it provides a precise definition of consciousness: a conscious system is one that creates a convergence of its probability wave as it continues to observe and reflect on itself. Whether or not this definition of consciousness corresponds precisely with our everyday definition is the open question. It’s a big leap, but so far I haven’t heard anything more plausible.