Sunday, May 22, 2011

Hugh Everett's many worlds interpretation of QM

Among many other things, Brian Greene's new book, The Hidden Reality, shows very clearly that the author is kind of obsessed, to put it mildly, with Hugh Everett's interpretation of quantum mechanics.

Everett's adviser was John Wheeler. By 1957, Everett had completed his PhD thesis. It was considered wrong and worthless by Bohr and pretty much everyone else - with John Wheeler playing the natural role of a loving boss - so Everett left research and became a key figure defining the U.S. nuclear weapons policies behind the scenes. Scientific American described his personal life as tragic - silence, disconnect from his family, alcohol, etc. Fine, let's stop this irrelevant stuff.

I wanted to know what the thesis was actually saying so I began to read it:
To summarize my impressions in two short sentences: first, Brian Greene was influenced much more than I would have thought - even his description of a physical theory as a bound state of a formal part and a physical part is taken from this thesis; second, Everett's thesis is much more obviously gibberish than I thought.




As far as I can say, the ideas that Everett was a forefather of decoherence or consistent histories are just full-fledged misconceptions. For some time, I was happy to realize that gibberish has been written in all eras. However, my optimism about the current situation was quickly replaced by pessimism because the difference between the 1950s and the 2010s is that in the latter decade, this gibberish would be - and, in fact, is - promoted via lots of official channels. Everett was at least original and his prose was very clear. Of course, he may deserve to be called a top philosopher of science if we accept that such a field may exist. But as I will argue below, his thesis wasn't good science.

Now, Brian is a good person and I can't get rid of the impression that he's partly trying to revive some of Everett's stuff out of some kind of compassion. But if that's the case, I don't think it belongs in physics. Right is right and wrong is wrong, and redefining those words by some emotional criteria means giving up on science.

Looking at the dissertation

In the introduction, Hugh Everett III complains about quantum mechanics with the help of a two-observer experiment I will discuss momentarily. But even before this portion of the thesis, there is his picture of what he considered to be the conventional quantum mechanics he wanted to challenge. He also says:
The state function "psi" is thought of as objectively characterizing the physical system, i.e., at all times an isolated system is thought of as possessing a state function, independently of our state of knowledge of it.
It is not only wrong; I believe that it was flagrantly dishonest for him to write it. Everett had to know that this wasn't what the true founding fathers of quantum mechanics were saying. In particular, Werner Heisenberg was always emphasizing that the wave function is not an objectively existing wave but rather a description of our subjective state of knowledge.

Niels Bohr, John von Neumann, Max Born, and probably also Wolfgang Pauli, Paul Dirac, and others would agree. But even if it were just Bohr and Heisenberg, it's just not possible to imagine that Everett had been unaware of their actual perspective on these matters. And it is unforgivable that Everett actually tried to deny that this "school of thought" existed at all. It follows that he was fighting a strawman. Everett's even more confused classmates could have been saying that "psi" was a classical wave - but even if that's the case, one shouldn't write a PhD thesis based on classmates' confusions.

Observer B observing observer A observing system S

Fine. So Everett hasn't honestly described the theory or interpretation he was actually trying to challenge. But his treatment gets more detailed. The main "paradox" he wants to solve is the following situation (which is not really his invention, it's essentially the well-known Wigner's friend which is just an extension of Schrödinger's cat):
Observer A measures and studies the evolution of a microscopic system S in a lab. Observer B measures and studies the evolution of the whole composite system A+S. According to A, the measured properties of S become a reality as soon as they are measured. According to B, things only become sharp once B looks inside the lab; before that, even A is found in a linear superposition of states with different properties. Because A and B have different answers to the question of when the properties of A became real facts, there is a contradiction.
That's what he says. Now, to make his demagogy look "powerful", he offers - and debunks - four alternative solutions and chooses the fifth one that, as he claims later, makes the many worlds inevitable. The only problem is that none of the four alternative solutions is the correct quantum mechanical solution. So by accumulating four piles of junk, he believes that the reader will feel lost and accept whatever he offers them. Lee Smolin has extended this demagogic technique to stunning dimensions.

So that's why Lee Smolin often tells you that there are e.g. 12 or 144 theories of quantum gravity - 11 or 143 mutations of some completely idiotic crackpot papers plus string theory - so string theory is meant to be diluted by this "argument" and 11/12 or 143/144 of the physicists are supposed to study the Smolin-like shit.

Fine. What were Everett's alternative answers?
  1. Solipsism - there's only one observer in the Universe.
  2. QM fails when acting on apparatuses or observers.
  3. B cannot measure the state of A+S without killing A, or stripping him of the ability to observe.
  4. Add hidden variables.
  5. Apply QM everywhere - but, as Everett says, it means that it has to be non-probabilistic.
Now, 1) is wrong because many of us are qualitatively similar and all of us may use quantum mechanics to predict things. If I were the only person who could use it, that would be fine, but I would still have no explanation for why other people look so qualitatively similar and apparently share common ancestry with me. ;-)

The point 2) is wrong because QM applies to arbitrarily large systems, geometrically speaking.

The point 3) is wrong because we may create a computer with some artificial intelligence that, one must admit, behaves qualitatively similarly to us - and all the important properties of the computer may be measured.

Hidden variables 4) don't exist for various reasons - most notably Bell's inequalities and the experimental tests that violate them.

The first part of Everett's answer 5) is correct - the previous 4 alternative solutions are wrong - but everything he adds to it is wrong, too. In particular, it's not true that quantum mechanics should or can work without the notion of probability built into its basic framework.

But in his answer 5), he hasn't really solved the "paradox" yet. The correct solution - one that he doesn't even mention as one of five alternatives - is the following:
Indeed, the correct way for observer A to use quantum mechanics is to consider the measured properties of S to be facts as soon as A measured them. And indeed, the canonical correct way for B to treat the system A+S is to evolve it into the relevant superpositions and only accept the properties of A+S as facts once B measures them. Indeed, A and B have different ideas about "what is real". But this can lead to no contradictions. In particular, "what is real" only means "what is already known", and it is no surprise that this question is subjective and has different answers for A and B. The particular proof of the absence of demonstrable contradictions also depends on the fact that A could have only perceived properties that had decohered, and because decoherence is irreversible, B won't be able to recohere them. So it is up to B whether he imagines that some properties of A+S were behaving "classically" even before B measured them; he doesn't have to make this assumption.
Whether "something is real before it was measured" cannot be measured :-), because of a logical tautology, so it is obvious that this question can lead to no physical contradictions, and that's totally enough for physics to work as it should.

Trying to make physics "more consistent than needed" and "overshoot" by claiming that it must also have answers to all questions that can't be measured is just a totally misguided recipe. Physics isn't obliged to answer physically meaningless questions and indeed, one may use quantum phenomena to show that all such questions about whether "something was real before it was seen" are meaningless and can't have objective answers.

Whether "something was known" before it was measured is clearly subjective, so A and B have different answers. And this is indeed reflected by their different "psi" at different times because "psi" is the state of their knowledge. In practice, when you care whether you may run into contradictions by assuming that there was a real reality - some objectively real property - before it was seen, the answer is that as long as the property had decohered from its alternatives, you may always assume that it was objectively real.

But incidentally, this philosophical assumption won't help you to learn a single damn thing. It's just a physically worthless philosophical dogma, and indeed, this dogma will cause you trouble when you try to study the detailed behavior of coherent enough quantum systems because in those systems, the objective reality cannot be assumed to exist before the measurements - otherwise you really get wrong predictions.

Correct interpretation of predictions

In quantum mechanics, one learns something about the system and constructs the initial "psi", the state vector, or its tensor second power "rho", the density matrix. It can be evolved in time via the equations everyone agrees with. And when we want to ask any physical Yes/No question - and every question in physics may be divided into a collection of Yes/No questions - we compute the expectation value of a corresponding linear projection operator in that state, "psi" or "rho" (in the latter case, it's Tr(rho.P)). We obtain the probability that the answer will be Yes. This is a recipe to answer all questions.
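To make the recipe concrete, here is a minimal numerical sketch in Python with numpy - my own illustration with made-up numbers, not anything taken from Everett's thesis or Greene's book. The probability of "Yes" is the expectation value of the projector P in "psi", or equivalently Tr(rho.P):

    import numpy as np

    psi = np.array([0.8, 0.6], dtype=complex)       # a normalized two-level state vector
    P = np.array([[1, 0], [0, 0]], dtype=complex)   # projector onto the "Yes" subspace

    # probability of "Yes" as the expectation value <psi|P|psi>
    prob_pure = np.real(np.vdot(psi, P @ psi))      # 0.64

    # the same number computed from the density matrix, Tr(rho.P)
    rho = np.outer(psi, psi.conj())
    prob_rho = np.real(np.trace(rho @ P))           # 0.64

Both expressions return the same number, 0.64, and such probabilities are the only kind of output the quantum mechanical recipe promises.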

In practice, we may also ask questions about whole histories, so "P" may be replaced by some product of projection operators at different times, according to the detailed rules of the Consistent Histories. I view this as a problem-free extension of the single measurement with a single set of projection operators.
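For completeness, here is a sketch of how such a history probability may be computed - again just my illustration with a toy Hamiltonian, following the standard Consistent Histories rule in which the class operator C is a time-ordered product of projectors interleaved with the unitary evolution and the probability is Tr(C.rho.C†):

    import numpy as np
    from scipy.linalg import expm

    H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # toy two-level Hamiltonian
    U = expm(-1j * 0.3 * H)                                  # evolution over one time step

    P0 = np.diag([1.0, 0.0]).astype(complex)   # projector: "the system is in state 0"
    P1 = np.diag([0.0, 1.0]).astype(complex)   # projector: "the system is in state 1"
    rho = np.diag([1.0, 0.0]).astype(complex)  # initial state: definitely in state 0

    # class operator for the history "state 0 at the first time, state 1 at the second time"
    C = P1 @ U @ P0 @ U
    prob_history = np.real(np.trace(C @ rho @ C.conj().T))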

Anyone who is trying to answer some physical questions by something else than by looking at expectation values of linear operators in the quantum states is simply not doing quantum physics, or not doing it correctly.

Splitting to many worlds

When an observer observes something, Everett must say that the world splits into worlds where different alternative outcomes occur. Now, this is demonstrably at least as problematic as the "materialist collapse" of a "real wave". Why? Because he must be careful that the worlds don't split into sharply defined alternative worlds before the properties decohere - otherwise he would spoil the interference patterns etc. and everything that is quantum about quantum mechanics.

On the other hand, he must make sure that the splitting of the worlds occurs as soon as decoherence is over. But the "moment" when decoherence is over isn't sharply defined. Decoherence is never "absolute". Decoherence is a continuous process that becomes "almost totally complete" after a certain time scale but it is never complete.
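A trivial numerical illustration of this point - the decoherence time below is a made-up number, not a calculation for any particular apparatus: the off-diagonal element of the reduced density matrix decays roughly as exp(-t/t_dec), so it becomes negligible extremely fast but never reaches exactly zero.

    import numpy as np

    t_dec = 1e-20   # hypothetical decoherence time scale in seconds
    for t in (1e-21, 1e-20, 1e-19, 1e-18):
        off_diag = 0.5 * np.exp(-t / t_dec)   # |rho_01| for an initially equal superposition
        print(f"t = {t:.0e} s:  |rho_01| = {off_diag:.3e}")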

The defenders of the many worlds such as Brian Greene complain about Bohr's phenomenological rule urging you to "collapse the wave function" when you measure something. The rule is creating an arbitrary boundary between the microscopic world and the macroscopic world, they complain.

But it's totally obvious that Everett's picture needs this boundary as well. The boundary tells you when the worlds actually split. They don't split when a microscopic particle remains coherent. They only split when something decoheres. (I am using the modern arguments with the full knowledge of decoherence because I want to judge the validity of the interpretations according to everything we understand today. Of course, some of these issues were misunderstood by everyone in the 1950s.)

So Everett's picture does need a boundary between the microscopic and macroscopic world. And indeed, decoherence is a way to show that there is a boundary. At some moment, the quantum processes - such as interference between different alternatives - become impossible for all practical purposes. Decoherence calculates exactly the time scale at which this occurs for a given system. It depends on the system, its Hamiltonian, and the parameters. It's a fully dynamical question.

It's very clear from the wording that Everett - and even his disciples today - find it unacceptable to claim that there is a boundary between the microscopic and macroscopic world. That was really a major driver of Everett's efforts. He didn't like that Bohr's interpretation depended on a different treatment of the small and large objects.

But decoherence has definitively demonstrated that Bohr was right on this point. There is a boundary. There is something different about the behavior of the large systems. But the difference doesn't mean that Schrödinger's equation doesn't apply to them. It always applies to all systems - including the system A+S studied by B. What's different for the large systems is that one may use an approximation scheme that restores some notions from classical physics. It's analogous to the fact that for large enough systems, you may approximate (statistical) physics of the building blocks by the macroscopic thermodynamic description.

But you don't have to.

When Bohr et al. were telling you that you should use a different logic for the small observed systems and the large objects such as observers, they were correctly saying that there's something about the large objects that isn't true for the small ones. But they were not saying that Schrödinger's equation can't be applied to large bound states of many particles. Of course it can, and many of the people close to the founding fathers of QM were investigating exactly those issues.

Bohr couldn't crisply formulate and quantify how decoherence acts but it is very clear that he understood that there is some real statistical argument that follows from proper quantum mechanics that justifies his different phenomenological treatment of large objects - the fact that they often behave similarly to objects in classical physics.

Assigning probabilities to many worlds

Brian Greene is well aware of this problem. But it is such a huge problem for the many-worlds scenario that I can't believe that anyone could disagree that it really kills the MWI picture.

Everything we learn in theoretical physics is about taking some usually quantitative information about the system and its state, and predicting the state at another time - usually a later time. So "physics" is all about the numerical values of things such as the S-matrix elements - the scattering amplitudes for incoming and outgoing particles of given types, momenta, and spins. In the quantum context, all the detailed information that the theory actually spits out is about the probabilities of different things.

So you would think that an interpretation of quantum mechanics will be able to say where those numbers - everything we know and we can calculate about a quantum mechanical theory - enter the interpretation. But they really don't enter the MWI at all!

MWI is just a way to visualize that there could have been other outcomes of a measurement. You just declare that they live in "separate worlds". Of course, by definition, all those worlds with different outcomes are inaccessible. They will never affect you again - even in principle - which is why they're unphysical according to the standard interpretation of the word "physical".

Fine. You visualize the qualitative fact that there could have been other outcomes. The visualization is totally useless for any prediction because everything you will do will be constrained by the outcomes that you have already learned to be facts in your world - because you have measured them.

But is there room for the actual numbers? Take a wave function that says that a particle has a 64% probability to be at C and 36% probability to be at D. There are two "many worlds", C and D. Now, if there are really just two, it's clear that the very philosophy should tell you that C and D are predicted to be equally likely. You can't hide the numbers 64%, 36% anywhere in the theory. That's a big problem because all of our knowledge of any quantum system is composed of such numbers! All of physics is in these numbers. Qualitative visual aids involving outcomes that have been ruled out may be fine for someone but they have nothing to do with the actual calculations and predictions in physics.
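Stated in a few explicit lines - my illustration of the mismatch, reusing the made-up 64%/36% example above: the Born rule extracts the probabilities from the squared amplitudes, while counting the branches throws exactly those numbers away.

    import numpy as np

    psi = np.array([0.8, 0.6])               # amplitudes for the outcomes C and D
    born = np.abs(psi) ** 2                  # [0.64, 0.36] - what quantum mechanics predicts

    branches = ["C", "D"]                    # the "two worlds" of the MWI picture
    counting = np.full(len(branches), 1.0 / len(branches))   # [0.5, 0.5] - the amplitudes are gone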

An alternative is that you split the world into 64,000 worlds where you measure C and 36,000 worlds where you measure D. Well, it's awkward because if the 64,000 worlds are really identical, they're really just one world - because such a symmetry must be a gauge symmetry in quantum gravity, and so on.

But just accept that there are 64,000 parallel worlds where the observer measures C and 36,000 worlds where he measures D. Does it prove that the odds will be 64% and 36%? The answer is, of course, that it doesn't. And those who say "it does" suffer from the same basic confusion about all of physics and all of rational thinking as the advocates of a natural high-entropy beginning of the Universe; and as the most hardcore defenders of the "mediocrity principle" version of the anthropic principle on steroids. I will call it

The egalitarian misconception

To say that the 64,000 worlds of C-type and 36,000 worlds of D-type will lead to odds that are 64% vs 36%, you have to assume that it's "equally likely" for you to be in any of these worlds. But where does this assumption come from?

Of course, it doesn't come from anywhere. It's just a dogma, and a totally wrong and irrational one. Those people believe that some states or objects are "created equal". The believers that the entropy of the early Universe is predicted to be high believe that all microstates are always equal - like in an egalitarian society. Those who believe that we're the "typical observers" think that every observer in the Universe, every skunk who lives on Jupiter or Gliese 5835235bcz or anywhere, has the same democratic rights and weight as a U.S. citizen. In the many-worlds context, the same belief leads Brian Greene and others to think that the multiplicity of the worlds would imply that the odds will be given by the ratios of the numbers of Universes.

But in all three cases, the conclusion is just completely wrong.

The egalitarian or uniform distribution is just one distribution among infinitely many. In fact, I can show that the egalitarian assumptions are totally inconsistent. There are infinitely many (infinity to the infinite power, in fact) different distributions of the strength of the vote on Earth. And egalitarianism is just one of them. Using the egalitarian principle, all distributions should be treated as equal. Because the egalitarian distribution has a negligible vote - measure zero in the set of distributions - it follows that it is not realized.

This sounds like an argument of a witty child but it is true. There is absolutely no self-consistent reason to assume that the egalitarian treatment of the "microstates", the "copies of you in many worlds", or the "different observers in the inflating multiverse" is justified.

In fact, every time we see something "equal" in physics, there has to be a rather nontrivial enforcement mechanism that explains the equality. In other words, the default state of affairs is that the different objects in large sets are totally unequal. If you want to say that something is equal about them, it is a bold and nontrivial assertion and you must do hard work to prove it. In an overwhelming majority of cases, you will fail because your statement is just incorrect.

The case in which egalitarianism works is a totalitarian society that restricts or kills everyone who differs from the average. With the help of a few Gulags, a society may come pretty close to the "ideal" of egalitarianism - the despicable idea that people and their lives should look equal. Well, it just happens that there usually has to be a person or a group who controls this inhuman experiment with the humans and who remains damn "unequal" to them - a kind of Stalin or Hitler or Gore dictating to people how to reduce their dark skin, ownership of factories, or carbon footprint or something of the sort.

Egalitarianism of elements of a random large set is never natural in Nature - nor in a properly functioning society.

Another exception is thermal equilibrium. If you achieve it, all microstates with the same values of conserved quantities become equally likely; the logarithm of their number is known as the entropy. But this equality is not due to some a priori egalitarianism that applies to the microstates. It's a result of a mechanism we may describe - thermalization.

If a classical system evolves in a sufficiently chaotic way, it will randomly try all places within a slice of the phase space (the quantum discussion is analogous but uses very different mathematical objects to describe what's going on). So by the ergodic hypothesis, at a random moment in the future, you will get a random state on the slice - a random microstate.
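A toy demonstration of this enforcement mechanism - my own example with an arbitrary chaotic map, not a realistic thermalization calculation: an ensemble initially crammed into a tiny corner of the interval [0,1) gets spread nearly uniformly after a few dozen iterations, so the equality of the "microstates" is produced by the dynamics, not assumed a priori.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 0.001, size=100_000)   # ensemble concentrated in a tiny region

    for _ in range(30):
        x = (2.0 * x) % 1.0                     # a chaotic doubling map on [0, 1)

    # after mixing, roughly 10% of the ensemble sits in each tenth of the interval
    counts, _ = np.histogram(x, bins=10, range=(0.0, 1.0))
    print(counts / len(x))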

But it takes some time to "enforce" this equality. The microstates must be intensely mixed with each other. Such an equality between the microstates only occurs in the future - after some time spent on thermalization. This equality surely doesn't hold for the Big Bang because there was no thermalization prior to the Big Bang. And indeed, the entropy of the Universe at the beginning is correctly predicted - by Bayesian inference that reformulates the usual proofs of the second law - to be low.

The thermalization plays a role analogous to Hitler's liquidation of people in extermination camps or Stalin's Gulags. There is a mechanism that does something. If you understand how it works, you will see that it makes the set more uniform. But if you have no argument like that, the set is almost certainly not uniform.

Of course, I kind of think that all the people who assume the "egalitarian misconception" are kind of driven by the fact that deep inside their souls, they're fanatical Marxists. But Marxism is not compatible with all the details of the way Nature works - much like most other ideologies.

When those Marxist people try to prove something, they think that it's the "default state" that X=Y for any two objects X,Y that look kind of qualitatively similar. They think that if they say X=Y, they don't need to prove anything. On the contrary, if someone says that X isn't equal to Y, they attack him - sometimes, they send him to the Gulag - because he dared to say that things are not equal.

But the matter of fact is that in maths or science or anything else, saying that X is equal to Y is a much bolder and less likely proposition than the statement that X is not equal to Y. There are many more ways in which X may be unequal to Y. ;-) So the default assumption is that X is almost certainly not equal to Y. If you want to prove that X=Y, you need some argument - symmetry, ergodic hypothesis, Gulag, or something like that. But it is never "automatic". Every rational person knows that saying X=Y is crazy for most randomly chosen X and Y. The Marxists just don't get this simple point. The Marxist ideology has so hopelessly eaten a big portion of their brains that they don't even realize that they're making an unjustified assumption.

But they are making nontrivial, crazy, and in all the cases above, fundamentally wrong assumptions, indeed. In the case of the many worlds, one of the consequences is that they totally revert what is fundamental and what is not. They want some "unjustified egalitarianism" between Universes - which couldn't undergo any thermalization via the ergodic hypothesis; and whose inhabitants didn't face the threat of a Gulag - to be the assumption that implies the right probabilistic predictions.

It can't work in this way. The real world works exactly in the opposite way. The probabilities computed from the quantum mechanical formulae are the main tools that allow us to derive things about the real world. In particular, if some things are equal in the real world, we have to reduce the proof of the equality to some calculation that ultimately deals with the probabilities because the probabilities as calculated from the laws of physics are fundamental, and all their macroscopic or political "corollaries" are just emergent and derived facts.

For example, one may derive that after some time spent on thermalization, all microstates will be approximately equally likely. One may calculate this thing by a correct calculation based on quantum mechanics. In the same way, one may prove that after some time spent shooting rich or skillful people, a communist nation becomes a nearly uniform conglomerate of mediocre citizens, with the exception of a Stalin who is of course different in some respects.

But one cannot find a similar proof that the microstates were equally likely during the Big Bang; or that inhabitants of different planets have the same vote in the global political elections deciding which of them is us :-); or that the copies of you in Everett's "many worlds" have the same odds to be thinking that they are you. It's not only true that there's no proof of such things as of today. In fact, none of these things can be proved - and the reason is that none of these things is really true.

I can't believe that these simple points may be controversial.