The Multiverse Interpretation of Quantum Mechanics

The ordinary multiverse with its infinitely many bubbles, whose possible vacuum states are located in 10^{500} different stationary points of the stringy configuration space, was way too small for Raphael Bousso and Leonard Susskind. So they invented a better and bigger multiverse, one that unifies the "inflationary multiverse", the "quantum multiverse", and the "holographic multiverse" from Brian Greene's newest popular book, The Hidden Reality.
Yes, their very first bold statement is that parallel universes in an inflating universe are the same thing as Everett's many worlds in quantum mechanics! ;-)
Sorry to say, but the paper looks like the authors want to stand next to Lee Smolin, whose recent paper - as crackpottish as any paper he has written in his life so far - is about "a real ensemble interpretation" of quantum mechanics. Bousso and Susskind don't cite Smolin - but maybe they should! And in their next paper, they should acknowledge me for pointing out an equally sensible and similar paper by Smolin to them. ;-)
While your humble correspondent would always emphasize that the "many worlds" in Everett's interpretation of quantum mechanics are completely different "parallel worlds" than those in eternal inflation or those in the braneworlds, these famous physicists say: on the contrary, they're the same thing!
However, at least after a quick review of the paper, drugs seem to be the only tool you can find in the paper, or between its lines, that could convince you that it's the case. ;-)
It's a modern paper involving conceptual issues of quantum mechanics, so it treats decoherence as the main mechanism to address many questions that used to be considered puzzles. Good. However, everything that they actually say about decoherence is a little bit wrong, so their attempts to combine those new "insights" with similar "insights" resulting from similar misunderstandings of the multiverse - and especially of the way outcomes of measurements should be statistically treated in a multiverse - inevitably end up being double gibberish, cooked from two totally unrelated components such as stinky fish and rotten strawberries.
In what sense decoherence is subjective
One of the first starting points for them to unify the "inflationary multiverse" and the "many worlds" of quantum mechanics is the following thesis about decoherence:
Decoherence - the modern version of wave-function collapse - is subjective in that it depends on the choice of a set of unmonitored degrees of freedom, the "environment".

That's a loaded statement, for many reasons. First of all, decoherence isn't really a version of the collapse. Decoherence is an approximate description of the disappearing "purity" of a state in macroscopic setups with various consequences; one of them is that there is no collapse. The probabilities corresponding to different outcomes continue to be nonzero so nothing collapses. They're nonzero up to the moment when we actually learn - experimentally - what the outcome is. At that point, we must update the probabilities according to the measurement. Decoherence restricts which properties may be included in well-defined questions - for example, insane linear superpositions of macroscopically different states are not good "basis vectors" to create Yes/No questions.
As first emphasized by Werner Heisenberg and then by anyone who understood the basic meaning of proper quantum mechanics, this "collapse" is just about the change of our knowledge, not a real process "anywhere in the reality". Even in classical physics, dice may have probabilities 1/6 for each number, but once we see "6", we update the probabilities to (0,0,0,0,0,1). No real object has "collapsed". The only difference in quantum physics is that the probabilities are not "elementary" but they're constructed as squared absolute values of complex amplitudes - which may interfere etc.; and in classical physics, we may imagine that the dice had the state before we learned it - in quantum physics, this assumption is invalid.
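If it helps, here is the dice example written as a tiny calculation - a trivial sketch of my own, with the quantum part using made-up amplitudes purely for illustration:

import numpy as np

# Classical dice: probabilities encode our knowledge, not the state of a physical object.
p = np.full(6, 1/6)            # before we look: 1/6 for each face
p_after = np.zeros(6)
p_after[5] = 1.0               # we saw "6": we update our knowledge, nothing "collapsed"

# Quantum twist: probabilities are squared absolute values of complex amplitudes.
amplitudes = np.array([1, 1j, -1, -1j, 1, 1j]) / np.sqrt(6)   # hypothetical amplitudes
p_quantum = np.abs(amplitudes)**2                             # back to 1/6 for each face
print(p_after, p_quantum)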
It may help many people confused by the foundations of quantum mechanics to formulate quantum mechanics in terms of a density matrix "rho" instead of the state vector "psi". Such a "rho" is a direct generalization of the classical distribution function "rho" on the phase space - it only receives the extra off-diagonal elements (many of which quickly go to zero because of decoherence), so that it's promoted to a Hermitian matrix (and the opposite side of the coin is that the indices of "psi" may only involve positions or only momenta but not both - the complementary information is included in some phases). But otherwise the interpretation of "rho" in quantum mechanics and "rho" in classical statistical physics is analogous. They're just gadgets that summarize our knowledge about the system via probabilities. Now, "psi" is just a kind of square root of "rho", so you should give it the same qualitative interpretation as "rho" - which, once again, is analogous to the "rho" of classical statistical physics.
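A minimal sketch of this dictionary - with a made-up two-level "psi", nothing taken from the paper - makes the words above concrete:

import numpy as np

# A hypothetical pure state of a two-level system, |psi> = 0.6|dead> + 0.8i|alive>.
psi = np.array([0.6, 0.8j])

# The density matrix rho = |psi><psi| is the Hermitian generalization of the
# classical probability distribution.
rho = np.outer(psi, psi.conj())

probabilities = rho.diagonal().real      # [0.36, 0.64] - the same role as the classical "rho"
coherences = rho[0, 1], rho[1, 0]        # off-diagonal elements; decoherence drives them to ~0

# After decoherence, the off-diagonal elements are (almost) erased and rho looks
# just like a classical probability distribution over the two outcomes.
rho_decohered = np.diag(rho.diagonal())
print(probabilities, coherences, rho_decohered)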
Second, is decoherence "subjective"? This question is totally equivalent to the question whether "friction" or "viscosity" (or other processes that dissipate energy) is subjective. In fact, both of these phenomena involve a large number of degrees of freedom and in both of them, it's important that many interactions occur and lead to many consequences that quickly become de facto irreversible. So both of these processes (or their classes) share the same arrow of time, which is ultimately derived from the logical arrow of time, too.
First, let's ask: Is friction or viscosity subjective?
Well, a sliding object on a flat floor or quickly circulating tea in a teacup will ultimately stop. Everyone will see it. So in practice, it's surely objective. But is it subjective "in principle"? Do the details depend on some subjective choices? You bet.
Focusing on the tea, there will always be some thermal motion of the individual molecules in the tea. But what ultimately stops is the uniform motion of bigger chunks of the fluid. Obviously, to decide "when" it stops, we need to divide the degrees of freedom in the tea into those that we consider a part of the macroscopic motion of the fluid and those that are just some microscopic details.
The separation into these two groups isn't God-given. This calculation always involves some choices that depend on intuition. The dependence is weak, however. After all, everyone agrees that the macroscopic motion of the tea ultimately stops. In the same way, during decoherence, the information about the relative phase "dissipates" into a bigger system, a larger collection of degrees of freedom - the environment. The qualitative analogy between the two processes is very tight, indeed.
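To show that the dependence on this choice really is weak, here is a toy calculation of my own - a completely made-up model of "tea", with an arbitrary drift decay rate and noise level - in which three different, equally arbitrary ways to define the "macroscopic" motion give nearly the same answer for when that motion has halved:

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2001)
n_molecules = 500

# Toy "tea": each molecule carries a decaying collective drift plus thermal jitter.
drift = np.exp(-0.5 * t)                       # macroscopic swirling, slowing down
velocity = drift[None, :] + 0.3 * rng.standard_normal((n_molecules, len(t)))

def macroscopic_speed(v, chunk):
    """Average molecules in chunks of a chosen size - one possible 'macroscopic' split."""
    n = (v.shape[0] // chunk) * chunk
    coarse = v[:n].reshape(-1, chunk, v.shape[1]).mean(axis=1)
    return np.abs(coarse).mean(axis=0)

for chunk in (10, 50, 250):                    # three arbitrary micro/macro cuts
    speed = macroscopic_speed(velocity, chunk)
    t_half = t[np.argmax(speed < 0.5 * speed[0])]
    print(f"chunk = {chunk:3d} molecules: macroscopic speed halves near t = {t_half:.2f}")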
But a punch line I want to make is that decoherence, much like viscosity, isn't an extra mechanism or an additional term that we have to add to quantum mechanics in order to reproduce the observations. Instead, decoherence is an approximate method to calculate the evolution in many situations that ultimately boils down to ordinary quantum mechanics and nothing else. It's meant to simplify our life, not to add some extra complications. Decoherence justifies the "classical intuition" about some degrees of freedom - what it really means is that interference phenomena may be forgotten - much like the derivation of equations of hydrodynamics justifies a "continuum description" of the molecules of the fluid.
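To make this concrete, here is the standard toy calculation - just ordinary unitary quantum mechanics, nothing added - in a minimal model of my own choosing: a qubit in the state (|0> + |1>)/sqrt(2) imprints itself on N environment qubits; conditional on the system's state, each environment qubit ends up in one of two nearly identical states |E_0> or |E_1> with overlap <E_0|E_1> = cos(theta), and the off-diagonal element of the qubit's reduced density matrix gets multiplied by that overlap once per environment qubit:

import numpy as np

def reduced_coherence(n_env, theta=0.15):
    """Off-diagonal element of the system qubit's reduced density matrix after it has
    imprinted itself on n_env environment qubits; each imperfect copy multiplies the
    coherence by the overlap <E_0|E_1> = cos(theta)."""
    psi_sys = np.array([1, 1]) / np.sqrt(2)        # (|0> + |1>)/sqrt(2)
    overlap = np.cos(theta)                        # <E_0|E_1> for a single environment qubit
    return psi_sys[0] * np.conj(psi_sys[1]) * overlap**n_env

for n in (0, 10, 100, 1000):
    print(f"N = {n:4d} environment qubits: |coherence| = {abs(reduced_coherence(n)):.3e}")
# The coherence shrinks exponentially with N but never becomes exactly zero.

The diagonal elements - the probabilities 1/2 and 1/2 - are untouched by this process, which is exactly the sense in which decoherence only tells us that the interference terms may be forgotten.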
Clearly, the same comment would be true about friction or viscosity. While the deceleration of the car or the tea is usefully described by a simplified macroscopic model with a few degrees of freedom, in principle, we could do the full calculation involving all the atoms etc. if we wanted to answer any particular question about the atoms or their collective properties. However, we should still ask the right questions.
When Bousso and Susskind say that there is an ambiguity in the choice of the environment, they misunderstand one key thing: the removal of this ambiguity is a part of a well-defined question! The person who asks the question must make sure that it is well-defined; it's not a job for the laws of physics. Returning to the teacup example, I may ask when the macroscopic motion of the fluid slows down to 1/2 of its speed, but I must define which degrees of freedom are considered macroscopic. Once I do so - and I don't have to explain that there are lots of subtleties to be refined - the question becomes a fully calculable, well-defined question about all the molecules in the teacup, and quantum mechanics offers a prescription to calculate the probabilities.
The case of decoherence is completely analogous. We treat certain degrees of freedom as the environment because the state of these degrees of freedom isn't included in the precise wording of our question! So when Bousso and Susskind say that "decoherence is subjective", it is true in some sense but this sense is totally self-evident and vacuous. The correct interpretation of this statement is that "the precise calculation [of decoherence] depends on the exact question". What a surprise!
In practice, the exact choice of the degrees of freedom we're interested in - with the rest being the environment - doesn't matter much. However, we must obviously choose properties whose values don't change frantically because of the interactions with the environment. That's why the amplitude in front of the state "0.6 dead + 0.8i alive" isn't a good observable to measure - the interactions with the environment make the relative phase evolve terribly wildly. Decoherence thus also helps to tell us which questions are meaningful. Only questions about properties that are able to "copy themselves to the environment" may be asked. This effectively chooses a preferred basis of the Hilbert space, one that depends on the Hamiltonian - because decoherence does.
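The same point in a few lines of arithmetic - using the state above and a relative phase scrambled by the environment (the numbers are mine, chosen only for illustration):

import numpy as np

rng = np.random.default_rng(1)
phases = rng.uniform(0, 2 * np.pi, 100_000)     # relative phase kicked around by the environment
amp_dead, amp_alive = 0.6, 0.8j                 # the state 0.6|dead> + 0.8i|alive>

# Probabilities in the pointer basis don't care about the relative phase at all:
p_dead, p_alive = abs(amp_dead)**2, abs(amp_alive)**2        # 0.36 and 0.64, stable

# A question about the superposition (|dead> + |alive>)/sqrt(2) does depend on the phase,
# and averaging over the scrambled phase washes the interference term out:
p_plus = np.abs(amp_dead + amp_alive * np.exp(1j * phases))**2 / 2
print(p_dead, p_alive, p_plus.mean())           # the mean tends to (0.36 + 0.64)/2 = 0.5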
To summarize this discussion, at least in this particular paper, Bousso and Susskind suffer from the same misconceptions as the typical people who deny quantum mechanics and want to reduce it to some classical physics. In this paper's case, this fact is reflected by the authors' desire to interpret decoherence as a version of the "nice good classical collapse" that used to be added to the QM framework as an extra building block. But decoherence is nothing like that. Decoherence doesn't add anything. It's just a simplifying approximate calculation that properly neglects lots of the irrelevant microscopic stuff and tells us which parts of classical thinking (namely the vanishing of the interference between 2 outcomes) become approximately OK in a certain context.
Let's move on. They also write:
In fact decoherence is absent in the complete description of any region larger than the future light-cone of a measurement event.

If you think about it, the purpose of this statement is inevitably elusive, too. Decoherence is not just "the decoherence" without adjectives. Decoherence is the separation of some particular eigenstates of a particular variable, and to specify it, one must determine which variable and which outcomes we expect to decohere. In the real world, which is approximately local at low energies, particular variables are connected with points or regions in spacetime. What decoheres are the individual possible eigenvalues of such a chosen observable.
But the observable really has to live in "one region" of spacetime only - it's the same observable. The metric in this region may be dynamical and have different shapes as well, but as long as we talk about eigenvalues of a single variable - and in the case of decoherence, we have to - it's clear that we also talk about one region only. Decoherence between the different outcomes will only occur if there are enough interactions, space, and time in the region for all the processes that dissipate the information about the relative phase to occur.
So it's completely meaningless to talk about "decoherence in spacelike separated regions". Decoherence is a process in spacetime and it is linked to a single observable that is defined from the fundamental degrees of freedom in a particular region. Of course, the region B of spacetime may only be helpful for the decoherence of different eigenvalues of another quantity in region A if it is causally connected with A. What a surprise. The information and matter can't propagate faster than light.
However, if one restricts to the causal diamond - the largest region that can be causally probed - then the boundary of the diamond acts as a one-way membrane and thus provides a preferred choice of environment.

This is just nonsense. Even inside a solid light cone, some degrees of freedom are the interesting non-environmental degrees of freedom we're trying to study - if there were no such degrees of freedom, we wouldn't be talking about the solid light cone at all. We're only talking about a region because we want to say something about the observables in that region.
At the same moment, for decoherence to run, there must be some environmental degrees of freedom in the very same region, too. Also, as argued a minute ago - by me and by the very authors, too - the spacelike-separated pieces of spacetime are completely useless when it comes to decoherence. It's because the measurement event won't affect the degrees of freedom in those causally inaccessible regions of spacetime. Clearly, this means that those regions can't affect decoherence.
(A special discussion would be needed for the tiny nonlocalities that exist e.g. to preserve the black hole information.)
If you look at the light sheet surrounding the solid light cone and decode a hologram, you will find out that the separation of the bulk degrees of freedom into the interesting and environmental ones doesn't follow any pattern: they're totally mixed up in the hologram. It's nontrivial to extract the values of "interesting" degrees of freedom from a hologram where they're mixed with all the irrelevant Planckian microscopic "environmental" degrees of freedom.
They seem to link decoherence with the "holographic" degrees of freedom that live on the light sheets - and a huge black-hole-like entropy of A/4G may be associated with these light sheets. But those numerous Planckian degrees of freedom don't interact with the observables we're able to study inside the light cone, so they can't possibly contribute to decoherence. Indeed, if 10^{70} degrees of freedom were contributing to decoherence, everything, including the position of an electron in an atom, would be decohering all the time. This is of course not happening. If you associate many degrees of freedom with light sheets, be my guest - it's probably true at some moral level that the local physics can be embedded into the physics of the huge Bekenstein-Hawking-like entropy on the light sheet - but you must still accept (more precisely, prove) that the detailed Planckian degrees of freedom won't affect the nicely coherent approximate local physics that may be described by a local effective field theory - otherwise your picture is just wrong.
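Where does a number like 10^{70} come from? The Bekenstein-Hawking formula S = A/4G - i.e. A/(4.l_P^2) in Planck units - applied to a light sheet wrapping a region of a modest, human size (my arbitrary choice of one meter below) already gives that order of magnitude:

import math

l_planck = 1.616e-35                  # Planck length in meters (approximate)
radius = 1.0                          # a hypothetical light sheet wrapping a ~1 m region
area = 4 * math.pi * radius**2        # ~12.6 m^2

entropy = area / (4 * l_planck**2)    # S = A/4G in units hbar = c = k_B = 1
print(f"S ~ {entropy:.1e}")           # ~1.2e+70 - the 10^{70} quoted above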
The abstract - and correspondingly the paper - is getting increasingly crazy.
We argue that the global multiverse is a representation of the many-worlds (all possible decoherent causal diamond histories) in a single geometry.

This is a huge unification claim. Unfortunately, there's not any evidence, as far as I can see, that the many worlds may be "geometrized" in this way. Even Brian Greene in his popular book admits that there is no "cloning machine". You can't imagine that the new "many worlds" have a particular position "out there". The alternative histories are totally disconnected from ours geometrically. They live in a totally separate "gedanken" space of possible histories. By construction, the other alternative histories can't affect ours, so they're unphysical. All these things are very different from ordinary "branes" in the same universe and even from other "bubbles" in an inflating one. I don't know why many people feel any urge to imagine that these - by construction - unphysical regions (Everett's many worlds) are "real" but at any rate, I think that they agree that they cannot influence physics in our history.
We propose that it must be possible in principle to verify quantum-mechanical predictions exactly.

Nice but it's surely not possible. We can only repeat the same measurement a finite number of times and in a few googols of years, or much earlier, our civilization will find out it's dying. We won't be able to tunnel our knowledge elsewhere. The number of repetitions of any experiment is finite and it is not just a technical limitation.
There are many things we only observe once. Nature can't guarantee that everything may be tested infinitely many times - and it doesn't guarantee that.
This requires not only the existence of exact observables but two additional postulates: a single observer within the universe can access infinitely many identical experiments; and the outcome of each experiment must be completely definite.

In de Sitter space, the observables are probably not exactly defined at all. Even in other contexts, this is the case. Observers can't survive their death, or thermal death of their surrounding Universe, and outcomes of most experiments can't be completely definite. Our accuracy will always remain finite, much like the number of repetitions and our lifetimes.
In the next sentence, they agree that the assumptions fail - but because of the holographic principle. One doesn't need a holographic principle to show such things. After all, the holographic principle is an equivalence of a bulk description and the boundary description so any physically meaningful statement holds on both sides.
At the end, they define "hats" - flat regions with unbroken supersymmetry - and link their exact observables to some approximate observables elsewhere. Except that this new "complementarity principle" isn't supported by any evidence I could find in the paper and it isn't well-defined, not even partially. In the quantum mechanical case, complementarity means something specific - it ultimately allows you to write "P" as "-i.hbar.d/dx" - a very specific construction that is well-defined and established. In the black hole case, complementarity allows you to explain why there's no xeroxing; the map between the degrees of freedom isn't expressed by a formula, but there is evidence. But what about this complementarity involving hats? There's neither a definition nor evidence nor justification (unless you view the satisfaction of manifestly invalid and surely unjustified, ad hoc assumptions as a justification).
If you read the paper, it is unfortunately motivated by misunderstandings of the conceptual foundations of quantum mechanics. In the introduction, they ask:
But at what point, precisely, do the virtual realities described by a quantum mechanical wave function turn into objective realities?

Well, when we measure the observables. Things that we haven't measured will never become "realities" in any sense. If the question is about the classical-quantum boundary, there is obviously no sharp boundary. Classical physics is just a limit of quantum physics but quantum physics fundamentally works everywhere in the multiverse. The numerical (and qualitative) errors we make if we use a particular "classical scheme" to discuss a situation may be quantified - decoherence is one of the calculations that quantifies such things. But classical physics never fully takes over.
This question is not about philosophy. Without a precise form of decoherence, one cannot claim that anything really "happened", including the specific outcomes of experiments.

Oh, really? When I say that it's mostly sunny today, it's not because I preach a precise form of decoherence. It's because I have made the measurement. Of course, the observation can't be 100% accurate because "sunny" and "cloudy" haven't "fully" decohered from each other - but their overlap is just insanely negligible. Nevertheless, the overlap never becomes exactly zero. It can't. For more subtle questions - about electrons etc. - the measurements are more subtle, and indeed, if no measurement has been done, one cannot talk about any "reality" of the property because none of them could have existed. The very assumption that properties - especially non-commuting ones - had some well-defined values leads to contradictions and wrong predictions.
Decoherence cannot be precise. Decoherence, by its very definition, is an approximate description of reality that becomes arbitrarily good as the number of the environmental degrees of freedom, their interaction strength, and the time I wait become arbitrarily large. I think that none of the things I say are speculative in any way; they concern the very basic content and meaning of decoherence, and I think that whoever disagrees has just fundamentally misunderstood what decoherence is and is not. But the accuracy of this emergent macroscopic description of what's happening with the probabilities is never perfect, just like macroscopic equations of hydrodynamics never exactly describe the molecules of tea in a teacup.
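To put one standard formula behind this claim (a textbook scattering model of decoherence, not anything specific to the paper under discussion): the off-diagonal element of the reduced density matrix for two outcomes separated by a distance Delta x decays roughly as rho_{12}(t) = rho_{12}(0).exp(-Lambda.(Delta x)^2.t), where Lambda is a localization rate set by the density, temperature, and coupling of the environment. For macroscopic Delta x the exponent is astronomically large, so the suppression is fantastic - but for any finite Lambda, Delta x, and t, the right-hand side is strictly nonzero, which is the precise sense in which decoherence never becomes exact.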
And without the ability to causally access an infinite number of precisely decohered outcomes, one cannot reliably verify the probabilistic predictions of a quantum-mechanical theory.

Indeed, one can't verify many predictions of a quantum mechanical theory, especially those about cosmological-size properties that we can only measure once. If you don't like the fact that our multiverse denies you this basic "human right" to know everything totally accurately, you will have to apply for asylum in a totally different multiverse, one that isn't constrained by logic and science.
The purpose of this paper is to argue that these questions may be resolved by cosmology.

You know, I think that there are deep questions about the information linked between causally inaccessible regions - whether black hole complementarity tells you something about the multiverse etc. But this paper seems to address none of it. It seems to claim that the cosmological issues influence even basic facts about low-energy quantum mechanics and the information that is moving in it. That's surely not possible. It's just a generic paper based on misunderstandings of quantum mechanics and on desperate attempts to return the world under the umbrella of classical physics where there was a well-defined reality in which everything was in principle 100% accurate.
But the people who are not on crack will never return to the era before the 1920s because the insights of quantum mechanics, the most revolutionary insights of the 20th century, are irreversible. Classical physics, despite its successes as an approximate theory, was ruled out many decades ago.
I have only read a few pages that I considered relevant and quickly looked at the remaining ones. It seems like they haven't found or calculated anything that makes any sense. The paper just defends the abstract and the introduction that they have apparently pre-decided to be true. But the abstract and introduction are wrong.
You see that those would-be "revolutionary" papers start to share lots of bad yet fashionable features - such as the misunderstanding of the conceptual issues of quantum mechanics and the flawed idea that all such general and basic misunderstandings of quantum physics (or statistical physics and thermodynamics) must be linked to cosmology if not the multiverse.
However, cosmology has nothing to do with these issues. If you haven't understood a double-slit experiment in your lab or the observation of Schrödinger's cat in your living room and what science actually predicts about any of these things, by using the degrees of freedom in that room only, or if you haven't understood why eggs break but don't unbreak, using the degrees of freedom of the egg only, be sure that the huge multiverse, regardless of its giant size, won't help you to cure the misunderstanding of the basics of quantum mechanics and statistical physics.
The right degrees of freedom and concepts that are linked to the proper understanding of a breaking egg or decohering tea are simply not located far away in the multiverse. They're here and a sensible scientist shouldn't escape to distant realms that are manifestly irrelevant for these particular questions.
And that's the memo.