Saturday, January 21, 2006

Lee Smolin on Nicolai-Peeters

Let me comment on Lee Smolin's remarks about the paper by Nicolai and Peeters (NP):
  • On reading NP I am grateful for the hard work that they put in, but I end up feeling that they still miss the point, because they have prejudices about what a quantum theory of gravity should do coming from old expectations.
As far as I understand, the "prejudices" refer to what I normally call the knowledge of quantum field theory and its technical and advanced features such as the renormalization group or locality.
  • They appear to evaluate LQG and spin foam models as if they were proposed as a unique theory which was a proposal for a final theory of everything.
No, NP analyze the ability of LQG to be a theory of at least something, namely its ability to make a single prediction, at least in principle, that cannot be obtained from the classical limit - i.e. to say anything at all about quantum gravity.
  • This is in my view a misunderstanding. One should understand these as a large set of models for studying background and diffeo invariant QFT’s.
Using the terminology of physics, there exist no diffeomorphism invariant local QFTs in 4 dimensions or higher. LQG is not a QFT in the technical sense we are using today.
  • These are based on quantization of a set of classical field theories which are constrained topological field theories.
When one obtains a theory with bulk degrees of freedom by modifying a topological field theory, then the topological starting point is irrelevant.
  • There are three key claims: 1) these theories exist, rigorously. i.e. there are uv finite diffeo invariant QFT’s based on quantization of constrained TQFT’s.
They may exist "rigorously", whatever that means, but they do not exist "actually". On the contrary, NP explain that they fail to exist in exactly the same sense in which nonrenormalizable QFTs do not exist. This point - simple for some of us but not for others - will unfortunately reappear many times below.
  • 2) there is a common mathematical and conceptual language and some calculational tools which are useful to study such models and
This vague "common language" is certainly a bad thing, not a good thing. The exciting developments in theoretical physics have, on the contrary, made completely new parts of mathematics relevant (Riemannian geometry; group theory; fiber bundles; K-theory; modular forms; etc.). If one can show that a previously disconnected portion of mathematics is relevant for a physical theory, it is always exciting news. This has happened many times in the context of gauge theories, string theory, and in many other places.

The comment that the LQG papers share some general mathematical and conceptual language is a purely sociological assertion: it essentially means that the LQG researchers have not had time to learn other portions of mathematics or other concepts, and all of them seem to be confined by similar limitations. That is certainly not a good thing, and it does not suggest that LQG fits together. Narrowness of the mathematical methods is neither a necessary nor a sufficient condition for a physical theory to be logically consistent. These are completely different things.
  • 3) there are some common generic consequences of these models, which are relevant for physics.
Such as the violation of Lorentz invariance, absence of a flat space limit, non-existence of other particles and forces, or the presence of infinitely many undetermined parameters.
  • Nothing NP say questions these key claims. Unfortunately, they do not mention key papers which support these key claims, such as the uniqueness theorems (gr-qc/0504147, math-ph/0407006) which show the necessity of the quantization LQG uses.
This is nothing but fog. These papers are purely about the kinematical Hilbert space, before the Hamiltonian constraint is imposed - the part of the problem that NP treat as trivial. The papers about "uniqueness" therefore have nothing to do with the actual main problems of LQG that NP study. Incidentally, the kinematical Hilbert space is not separable, and I would certainly not count that as one of the attractive features of LQG either. NP discuss this issue in the context of "ultralocality".
  • And while they mention the non-separability of the kinematical Hilbert space they fail to mention the separability of the diffeomorphism invariant Hilbert space (gr-qc/0403047).
Lee is trying to mix two different things and confuse the reader entirely. The uniqueness theorems above apply to the non-separable kinematic Hilbert space only. No such theorems or any other encouraging results are known once the constraints start to be imposed. When you see a kid who learns to read and a dog who learns to eat, you still have not found a dog that knows how to read. It is extremely easy to find solutions to "quantum gravity" if a subset of its requirements is erased.
  • It is unfortunate that they omit reference to such key results which resolve issues they mention.
They don't resolve any of these issues whatsoever.
  • A second misunderstanding concerns uv divergences. NP do not discuss the results on black hole entropy, so they miss the point that the finiteness of the black hole entropy fixes the ratio of the bare and low energy planck length to be a finite number of order one.
Except that one may derive that this value is any number we want, including numbers proportional to log(2), log(3), logs of various functions of the black hole charges, and logs of various transcendental numbers that have been claimed to be the answer in the literature. Currently, there exist at least 10 contradictory proposals for what the Immirzi parameter should be. It is now known that the original papers about black hole entropy in loop quantum gravity were entirely incorrect - for example, because they incorrectly neglected the microstates with non-minimal spins carried by the punctures. I say this because some people consider me an expert in this area.
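To make the ambiguity concrete, here is a minimal numerical sketch of three mutually inconsistent Immirzi-parameter values that have circulated in the literature: the original minimal-spin counting (proportional to log 2), the quasinormal-mode proposal (proportional to log 3), and the corrected all-spin counting, which solves a transcendental equation numerically. The closed forms are quoted from the literature as I recall them, so treat the snippet as illustrative:

```python
import math

# Original ABCK-style counting using only minimal (j = 1/2) punctures:
gamma_ln2 = math.log(2) / (math.pi * math.sqrt(3))

# Dreyer's quasinormal-mode proposal (j = 1 punctures, hence log 3):
gamma_ln3 = math.log(3) / (2 * math.pi * math.sqrt(2))

# Corrected all-spin counting: gamma solves the transcendental equation
#   sum_{k>=1} 2 * exp(-pi * gamma * sqrt(k*(k+2))) = 1
def counting_deficit(gamma, terms=400):
    return sum(2.0 * math.exp(-math.pi * gamma * math.sqrt(k * (k + 2)))
               for k in range(1, terms + 1)) - 1.0

lo, hi = 0.1, 0.5    # bisection bracket: deficit > 0 at lo, < 0 at hi
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if counting_deficit(mid) > 0.0:
        lo = mid
    else:
        hi = mid
gamma_all_spins = 0.5 * (lo + hi)

print(f"log(2) counting: {gamma_ln2:.4f}")        # ~0.1274
print(f"log(3) proposal: {gamma_ln3:.4f}")        # ~0.1236
print(f"all-spin count:  {gamma_all_spins:.4f}")  # ~0.2375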
  • Calculations on a class of semiclassical states they do not discuss - the weave states - lead to the same conclusion (A. Ashtekar, C. Rovelli, L. Smolin, "Weaving a classical metric with quantum threads," Phys. Rev. Lett. 69 (1992) 237).
These old papers led to the same conclusions as the results from the black holes that are known to be incorrect today.
  • So there can be no infinite refinement of spin foams and no infinite renormalization. These theories are uv finite, period. This is one of the generic features I mentioned.
One can say the same thing about a nonrenormalizable quantum field theory as long as any type of cutoff or regularization is adopted. The problem is not whether the results look finite, which is a purely technical issue. The real problem is whether the regularization or cutoff introduces an infinite number of unknown parameters or whether the regularization is unique or almost unique. It is unique neither in nonrenormalizable QFTs nor in LQG, and LQG is therefore exactly at the same level as a generic UV-incomplete effective field theory.
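The content of the last point can be stated with the standard gravitational effective action. The coefficients $c_i$ below are illustrative placeholders, not values derived from any particular spin foam model:

```latex
S_{\rm eff} \;=\; \frac{1}{16\pi G}\int d^4x \,\sqrt{-g}\,
  \Bigl( R \,-\, 2\Lambda
  \,+\, c_1\, \ell_P^2\, R^2
  \,+\, c_2\, \ell_P^2\, R_{\mu\nu} R^{\mu\nu}
  \,+\, \dots \Bigr)
```

Finiteness at each order says nothing about the infinitely many $c_i$; each choice of regularization - here, each choice of spin foam amplitudes - generically corresponds to a different set of them.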
  • Thus, their main claim, that the fact that there are many LQG or spin foam models is the same as the problem of uv divergences, is just manifestly untrue.
It is manifestly true for everyone who has learned the difference between regularization and renormalization.
  • The freedom to specify spin foam amplitudes does not map onto the freedom to specify parameters of a perturbatively non-renormalizable theory.
Of course it does. Take all spin foam Feynman rules that lead to long-range physics resembling smooth space and assume that this set is not empty. It may be a set of codimension infinity, but its dimension will still be infinite. The parameters of the higher-derivative terms at low energies will be functions of the parameters defining the spin foam Feynman rules, with a one-to-one correspondence between them. The only way you could get a different result is to accidentally hit a completely consistent theory of quantum gravity - but that seems impossible because you can't really get string theory out of LQG.
  • For one thing, few if any spin foam models are likely to have a low energy limit which is Poincare invariant, a property shared by all perturbative QFT’s, renormalizable or not, defined in Minkowski spacetime.
I agree. Despite the infinite number of parameters, the LQG Ansatz still misses the theories that are actually physically relevant, for example the Poincare-invariant ones. Exact Poincare invariance never works in LQG and approximate low-energy Poincare-invariance could only work once very many UV parameters (probably infinitely many) are adjusted.
  • In fact, we know from recent results that in 2+1 none do - the low energy limit of 2+1 gravity coupled to arbitrary matter is DSR. So their argument is false.
Not sure exactly which of their arguments is being discussed, but the failure to reproduce a Poincare-invariant theory - something that Lee seems to confirm about LQG - is a pretty serious one, too.
  • They do get a number of things right. The following are open issues, much discussed in the literature: 1) whether there is any regularization of the Hamiltonian constraint that leads to exchange moves,
It reminds me of the wooden earphones of the tribes who worship the cargo cult. Even if you modify the UV details of the Hamiltonian constraint to get rid of ultralocality, you will still not have solved the key problems above.
  • 2) whether thus there are any links between the spin foam amplitudes and Hamiltonian evolution,
Imagine that in QFT or string theory, we would have serious doubts whether the path integral approach is equivalent to the Hamiltonian approach. That would be pretty devastating.
  • 3) whether the sum over spin foam diagrams is convergent or, more likely, Borel resummable (although they miss that this has been proven for 2+1 models, hep-th/0211026).
Hasn't it been shown by Baez, Christensen, and Egan that the sum is dominated by degenerate simplices? This fact manifests itself as a divergence in the continuum limit.
  • I don’t agree with all the details of their discussion of these issues, but these certainly are open issues.
Some of them are not open anymore. One can't consider a question open many years after a conjecture has been falsified.
  • NP seem to argue as if one has to prove a QFT rigorously exists in order to do physics with it, by which standard we would believe no prediction from the standard model.
The word "rigorously" is complete bogus, designed to make not-too-serious things sound serious. NP only use the word once, when they want the reader to assume that a rigorous definition of H has been given. Of course, neither NP nor anyone else wants to define these physical theories rigorously in the mathematical sense. The path integral, for example, does not exist in a rigorous treatment, even though we want to use it. NP are using the same level of rigor that has been standard in theoretical physics for many decades. What they care about is whether a physical theory that can say anything actually exists, not about formalities - whether one can write down formulae that satisfy picky mathematicians is often a question quite independent of whether a theory makes sense in physics.
  • They mention that there are no rigorously constructed semiclassical states which are exact solutions to the dynamics, but this is the case in most QFT's.
If we use the word "rigorously" in the same rational way as physicists always do - and NP obviously use the physics language - the comment above is incorrect.
  • This does not prevent us from writing down and deriving predictions from heuristic semiclassical states (hep-th/0501091),
Well, it may be that other physicists would be prevented from writing down such predictions (of violations of the rules of special relativity etc.) by certain other forces. ;-)
  • or from constructing reduced models to describe black holes or cosmologies and likewise deriving predictions (astro-ph/0411124),
Most of us don't believe these papers, but more importantly, this particular paper has nothing to do with LQG. The only relation with LQG is that some people who are otherwise counted as LQG researchers have written papers analogous to the proposals by Brandenberger et al. But there is no LQG (new variables, spin networks, etc.) in these papers.
  • Nor does it prevent Rovelli et al from computing the graviton propagator and getting the right answer, showing there are gravitons and Newtonian gravity in the theory (gr-qc/0502036).
This paper assumes that nice configurations similar to smooth space dominate the path integral which is then used to derive that the path integral is approximately equal to smooth space. The reasoning is completely circular.
  • But, someone may ask, if LQG is the right general direction, shouldn’t there be a unique theory that is claimed to be the theory of nature? Certainly, but should the program be dismissed because no claim has yet been made that this theory has been found?
I, for one, have 496 different reasons to dismiss it. Everyone can think about anything she or he wants, and others will always have their opinions whether something can come out of it.
  • To narrow in on the right theory there are further considerations, all under study:
  • - Not every spin foam model is ir finite.
  • - Not every spin foam model is likely to have a good low energy limit.
  • - The right theory should have the standard model of particle physics in it.
You can always sketch a program in physics in this way. One can argue that the Universe is made of little green men, and when we impose the constraint that the little green men produce the muon/electron mass ratio around 206.8, we may find the correct theory. Anyone can believe these things - but still, I think that there exists something such as rational thinking that tells us that without having any encouraging evidence, it is extremely unlikely that a correct theory may be obtained from such a highly special - yet infinitely undetermined - starting point. The Bayesian probability for such a program to work is about 10^{-2000}.
  • In addition it must be stressed that there can in physics be generic consequences of classes of theories, leading to experimental predictions.
I agree with that, but in the LQG case, all of these predictions seem to be wrong (such as the violation of special relativity).
  • Here are some historical examples: light bending, weak vector bosons, confinement, principle of inertia, existence of black holes.
These are not really generic consequences. They are consequences of theories that looked very special when they were written down. When GR was written down, it was essentially a unique theory, and it predicted light bending. Nowadays we often extend the theory, but light bending is still an effect of the same Einstein-Hilbert action as before. The same statement holds not only for light bending but also for the existence of black holes. In the very same way, weak bosons only occur in very special theories. Regardless of our minor disagreement about the nature of these predictions, the example is irrelevant for LQG because LQG has no predictions of this kind that have not yet been falsified.
  • All of these observable features of nature are predicted by large classes of theories, which can be as a whole confirmed or falsified, even in the absence of knowing which precise theory describes nature, and prior to proving the mathematical consistency of the theory.
In principle, I agree with that.
  • LQG predicts a number of such generic features: discreteness of quantum geometry,
Discreteness of quantum geometry is first of all incorrect because it contradicts the Lorentz invariance of local physics and other principles, but even if it were correct, one could not directly measure it, not even in principle. If there is a universal law that nothing is shorter than the Planck length, the principle holds for the experimental apparatuses, too. In turn, this prevents us from measuring geometrical parameters with sufficient accuracy to determine whether the allowed values form a discrete set. The statement would only become testable if it were translated into a scattering experiment or another doable experiment - which can't be done because the predictions for scattering can't be derived from LQG.
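For concreteness, the discreteness in question is usually illustrated by the standard LQG area spectrum: a surface punctured by spin network edges carrying spins $j_i$ has the area eigenvalues

```latex
A \;=\; 8\pi \gamma\, \ell_P^2 \sum_i \sqrt{j_i\,(j_i+1)}\,,
\qquad j_i \in \tfrac12 \mathbb{Z}_{>0}\,,
```

where $\gamma$ is the Immirzi parameter. The argument above is that no apparatus built from the same degrees of freedom could resolve these eigenvalues directly.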
  • horizon entropy,
The horizon entropy calculations in LQG have been a complete failure, starting with several papers that are known to be wrong today, continuing with conjectured links with quasinormal modes that have also been falsified, and with many other confusing proposals that contradict each other and that contradict rational thinking.
  • removal of all spacelike singularities,
Lee seems to be obsessed with the removal of infinities. But if the resulting theory depends on the details of how we remove them, then we have not solved any problem whatsoever. We can always impose a cutoff - in our theories as well as in a particular evolution of our Universe. But if it does not allow us to say what actually comes out of the singularity, then we have not moved a millimeter.
  • and I believe will soon predict more including DSR, emergence of matter degrees of freedom.
I am ready to make a 5:1 bet that in the next 5 years, no one will propose a respectable framework for matter degrees of freedom to emerge from LQG. I am equally sure that no experimental evidence in favor of DSR will be found in the same time period.
  • One reason for this is of course that most of the parameters in such classes of theories are irrelevant in the RG sense, and do not influence large scale predictions.
If read rationally, this sentence is equivalent to the statement that we know the classical limit of GR at low energies. But this classical limit was not discovered by Ashtekar, Smolin et al. but by a former patent clerk back in 1915. Going to quantum gravity means that we actually want to learn some phenomena that are not predicted by the classical limit, and LQG can't do it. Moreover, it does not even imply the classical Einstein-Hilbert term.
  • Since we know the theory is uv finite this does not affect existence.
As explained many times, the UV finiteness has no value if the resulting theory depends on the regulator, and LQG does depend on it.
  • The lack of a uv unique theory does not prevent us from testing predictions of QFT in detail,
It prevents us from making predictions. We only know the Standard Model so well because we can classify all marginal and relevant operators involving the well-known degrees of freedom - there are just a couple of them - and we know that all others are very small because they are suppressed by some high-energy scale. The converse comment explains why we do not have a good (or predictive) low-energy theory of pions: the terms are not under control.

We can write down non-renormalizable theories and non-renormalizable interactions, but unless we have a UV complete theory, these extra interactions cannot be predicted. They're purely a phenomenological description of the deviations from the "simpler" theory, and if we have infinitely many of such unknown interactions with coefficients of the same order, then it is equivalent to a complete ignorance. The theory is just about parameterizing our ignorance in a different way.
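The power counting behind this argument is a textbook estimate, nothing specific to NP: an operator $\mathcal{O}_d$ of mass dimension $d>4$ enters the effective Lagrangian suppressed by the cutoff scale $\Lambda$, and its effect on observables at energy $E$ scales as a power of $E/\Lambda$:

```latex
\mathcal{L}_{\rm eff} \;=\; \mathcal{L}_{d\le 4}
  \;+\; \sum_{d>4} \frac{c_d}{\Lambda^{d-4}}\,\mathcal{O}_d\,,
\qquad
\text{effect at energy } E \;\sim\; c_d \left(\frac{E}{\Lambda}\right)^{d-4}.
```

For $E \ll \Lambda$ these effects are tiny, which is why the finitely many $d\le 4$ terms make the Standard Model predictive. When infinitely many unknown $c_d$ matter at the scale of interest - as at the Planck scale, where $E \sim \Lambda$ - predictivity is lost.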
  • and it is likely to be the same for quantum gravity.
It is not only unlikely: we know that it is not true. That's about the very difference between renormalizable and non-renormalizable theories. Quantum gravity has infinitely many higher-derivative terms in the effective action or, equivalently, infinitely many parameters of the Hamiltonian constraint or the spin foam Feynman rules. All of them are equally important if we want to study the physics near the Planck scale, and knowing nothing about them means knowing nothing about the Planck scale physics and/or about quantum gravity.
  • The old idea that consistency would lead to a unique uv theory that would give unique low energy predictions was seductive, but given the landscape, it is an idea that is unsupported by the actual results.
Sorry, but the landscape - independently of the controversial issues about its size and its physical relevance - is not a set of different theories. The landscape is a (discrete) set of solutions to a single, unique, and UV complete theory. This is not a detail but a critical point. It has been firmly established that all semi-realistic vacua that string theorists have found are part of a single UV-complete theory. This statement is supported by roughly 5,000 papers about dualities and transitions in all corners of the string-theoretical "landscape", and saying that the uniqueness of string theory is "unsupported" sounds like an idea from someone who has lived in a vacuum for the last 10 years, to say the least.
  • Having said all this, I hope that NP will put their hard won expertise to work, and perhaps get their hands dirty and do some research in the area.
They have done some work - it's just the results of their work that some colleagues do not quite like. Let's mention again that their previous paper with Zamaklar is the most cited LQG paper of 2005.