Wednesday, October 20, 2010

Why there are no classicalons by Dvali et al.

Jester at Résonaances discussed a recent paper,
UV-completion by classicalization,
by Gia Dvali, Gian F. Giudice, Cesar Gomez, and Alex Kehagias. They want to claim that one can construct a Higgsless Standard Model and solve the unitarity problem with the W_L-W_L scattering - or any UV problem in any field theory, for that matter - by a simple idea inspired by black holes.

Well, the new objects postulated by their scenario are not too simple - they can be viewed as a nearly infinitely complicated generalization of the Higgs sector - but let us call the idea simple, anyway.




Quantum gravity prevents you from probing distances shorter than the Planck length. This fact may be illuminated from many angles but the induced non-locality may also be blamed on the large number of black hole microstates. Dvali et al. state that even in a non-gravitational theory, you can invent or observe similar black-hole-like states and they will soften the UV behavior in the same peaceful way as the black holes manage to do in quantum gravity.

Their non-gravitational "classical" objects that play the role of the virtual black holes are called "classicalons" and their appearance in the set of virtual particles is called "classicalization".

Needless to say, this statement is just a reflection of the authors' misunderstanding of how quantum gravity works. You can't just make quantum gravity out of a non-gravitational theory. You can't make "large objects" relevant for high-energy scattering whenever you want and at any scale you want, by using your wishful thinking as the only tool.

The reason is that the black holes have many properties that are completely essential for their ability to modify the trans-Planckian scattering, especially their
  1. longevity
  2. large entropy
  3. fast thermalization.
All of these - largely but not quite independent - conditions are actually needed for the black holes to play the role that they play.

A black hole was first discovered by Karl Schwarzschild, a German warrior on the Russian front of World War I, as a classical solution to Einstein's equations. It was a static solution, so the lifetime is classically infinite. When quantum effects, especially the Hawking radiation, are taken into account, the black hole lifetime remains long, especially if the black hole is really large.
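To get a feel for how extreme this longevity is, here is a small numerical sketch. It uses only the standard photons-only Hawking estimate t ≈ 5120 π G² M³ / (ħ c⁴) - a textbook formula, nothing taken from the Dvali et al. paper - and shows that even a solar-mass black hole outlives the age of the Universe by some fifty orders of magnitude:

```python
import math

# CODATA-level constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
M_sun = 1.989e30   # solar mass, kg

def evaporation_time(M):
    """Standard Hawking evaporation time (photons only), in seconds."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

t = evaporation_time(M_sun)
print(f"solar-mass BH lifetime: {t:.2e} s ~ {t / 3.15e7:.1e} years")
```

The result is of order 10^67 years, which is why "longevity" (condition 1) is automatic for any macroscopic black hole.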

You can find many other spatially extended static solutions to classical equations - but they will simply not have a measurable impact on the high-energy scattering of ordinary particles. Why? Because the inclusion of an extended object among the intermediate states - or virtual particles - may be interpreted as a kind of an instanton. Even in quantum gravity, this is the case, at least morally.

The instanton can induce new interactions - or new contributions to old interactions. The 't Hooft interaction is an example from gauge theories. The ordinary instanton may be qualitatively interpreted as a virtual monopole-antimonopole pair. And such an instanton can induce the direct interaction proportional to the product of all fermionic species in your theory.

It's my understanding that Dvali et al. appreciate that the intermediate objects have to be stable or meta-stable to contribute to the high-energy scattering, i.e. that condition 1 has to be satisfied.

However, a basic property of an instanton is that its effects are suppressed by "exp(-S)", the exponential of its action with a minus sign. For very large instantons in realistic theories, this action is inevitably huge, and the suppression is gargantuan. This is also true for the black hole intermediate states whose action is of order
S = A/4G
where A is the area of the intermediate black hole's event horizon. Imagine that the instanton we consider for the scattering with intermediate black hole microstates is a kind of a Euclideanized fuzzball. It's a coincidence that the action uses the same letter "S" as the entropy but it's no coincidence that the same value has appeared for the action as what we know as the black hole entropy.
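To appreciate just how gargantuan this suppression is, here is a sanity-check sketch. It uses only the textbook Bekenstein-Hawking formula S/k_B = A c³/(4 G ħ) with A = 4π r_s² and r_s = 2GM/c² - again, standard material rather than anything specific to the paper under discussion:

```python
import math

# CODATA-level constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
M_sun = 1.989e30   # solar mass, kg

def bh_entropy(M):
    """Bekenstein-Hawking entropy S/k_B = A c^3 / (4 G hbar), dimensionless."""
    r_s = 2 * G * M / c**2          # Schwarzschild radius, ~3 km for the Sun
    A = 4 * math.pi * r_s**2        # horizon area
    return A * c**3 / (4 * G * hbar)

print(f"S/k_B for a solar-mass black hole: {bh_entropy(M_sun):.1e}")
```

The entropy - and hence the Euclidean action - of a solar-mass black hole is of order 10^77, so exp(-S) for such an intermediate state is a suppression by exp(-10^77).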

So the effect of an extended intermediate object on a high-energy scattering process is tiny. How can such a black hole influence the high-energy scattering of two particles? Well, it does because the black hole is effectively being born. But why does it get born even though its probabilities are suppressed by exp(-S), as argued above?

Well, it's because there are many black hole microstates, roughly exp(S) of them, where
S = A/4G
stands for the black hole entropy in this case.

Clearly, to beat the exponential suppression from the high action, there has to be an exponentially large number of microstates. The factors of exp(S) and exp(-S) pretty much cancel. Condition 2 has to be obeyed for the entropy to be high and for this amplification to occur.
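Schematically, the cancellation described above may be written as follows - a heuristic order-of-magnitude estimate, not a precise saddle-point computation:

```latex
\mathcal{A}_{\rm BH} \;\sim\; \sum_{i=1}^{e^{S}} e^{-S_{\rm inst}}
\;\approx\; e^{S}\, e^{-S} \;=\; \mathcal{O}(1),
\qquad S = \frac{A}{4G}
```

Each individual microstate contributes an exponentially tiny amplitude, but the sum over the exp(S) microstates brings the total back to order one.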

Some "cold" unique solutions of the fundamental equations simply won't carry a high enough entropy, so their contribution to the high-energy scattering will resemble an instanton, and can be completely neglected.

But can't you have many non-black-hole states that still carry a high enough entropy? Well, surely, you can. A ball of quark-gluon plasma in a gauge theory may carry a huge entropy which is systematically analogous to the black hole entropy. After all, these two may be equivalent via a version of the AdS/CFT correspondence.

However, if the intermediate object is a high-entropy, "hot", non-gravitational object such as a ball of the quark-gluon plasma, you would be double counting. The diagrams contributing to the high-energy scattering actually depend on the "locally created" quarks and gluons only: the perturbative diagrams with their small number win (if the 't Hooft coupling is smaller than one or so). The diagrams with large balls of the virtual quark-gluon plasma are subleading because they're nothing else than the higher-order diagrams.

So the balls of the quark-gluon plasma actually don't contribute any sizable correction to the high-energy scattering amplitudes. As I have mentioned, this conclusion is true if the 't Hooft coupling is smaller than one. If it is greater than one, the dual gravitational picture of the AdS/CFT system is more relevant, the physics should be described as genuine black hole physics in a higher-dimensional spacetime, and it is indeed true that the high-entropy intermediate states soften the high-energy scattering.

However, even this intermediate-black-hole picture is only correct if the wavelength of the scattering particles is much shorter than the AdS radius - but it must still be longer than the Planck length. If the energy is higher than the Planck scale, physics gets averaged over the positions in the holographic dimension, and we're back to the asymptotically free Bjorken-scaling-like scattering where the gluons or other partons are the only relevant intermediate objects.

Whatever you do with these ideas, it's guaranteed that there won't be any new "large terms" from new extended intermediate objects unless it is "geometrically clear" that these intermediate objects are black holes. Black holes are the only intermediate objects whose relevant instantons may become significant because the smallness of the instanton contributions may be compensated by the large black hole entropy.

This leads me to the final point, the condition 3.

The reason why the black holes were "irreducibly new" virtual objects that influenced the behavior differently than their "elementary building blocks" is that there are no simple elementary building blocks of the black holes. At the event horizon, the information gets rapidly spread among all the degrees of freedom - in a logarithmically short time (as a function of the number of degrees of freedom). Black holes are the fastest scramblers in Nature.

This property is necessary for the creation of a non-locality in a consistent theory, and for the corresponding "irreducibility" of black hole intermediate states. If the time needed for the thermalization were governed by a power law rather than a logarithm, then the extended object would actually be a literal "composite" and the correct calculation of the effect of the intermediate states would be given purely in terms of the intermediate "elementary building blocks".

In the paper about scramblers, Susskind et al. explain that the black holes - and their non-gravitational duals that involve "infinitely" large matrices with non-integrable actions (that's the same thing as the case of a large 't Hooft coupling mentioned above) - are the only objects that can "scramble" in the logarithmic time. I am convinced that this conclusion also means that they're the only "qualitatively new" extended objects that may change the rules of the game if they appear as intermediate objects.
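The gap between the two scrambling behaviors is easy to quantify. The sketch below uses the Sekino-Susskind fast-scrambler estimate t ~ (β/2π) ln S; the power-law alternative (exponent 1/2) is a purely illustrative stand-in for an ordinary "composite" thermalizer, not a formula from either paper:

```python
import math

def fast_scrambling_time(S, beta=1.0):
    """Sekino-Susskind fast-scrambler estimate, in units of the thermal time beta."""
    return beta / (2 * math.pi) * math.log(S)

def powerlaw_scrambling_time(S, beta=1.0):
    """Schematic power-law thermalization, t ~ beta * S**0.5 (illustrative exponent)."""
    return beta * S**0.5

S = 1e77  # entropy of a solar-mass black hole, in units of k_B
print(f"fast scrambler:  {fast_scrambling_time(S):.1f} thermal times")   # ~ 28
print(f"power-law:       {powerlaw_scrambling_time(S):.1e} thermal times")
```

For S ~ 10^77 the logarithmic time is a few dozen thermal times while the power-law time is of order 10^38 - which is why only the logarithmic scramblers behave as genuinely irreducible intermediate objects.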

Note that the degrees of freedom of the black holes are "hidden" so that the independence of the black hole microstates from the "elementary building blocks" is manifest. In the bulk picture, the degrees of freedom are stored at the event horizon; in the large-N matrix picture, the extra degrees of freedom are the multi-valued color indices.

So I think that if Dvali et al. actually correctly calculate how big an effect they can get from their "classicalons", it won't be big enough for the "miraculous cure" of UV divergences that they wanted to attribute to these "classicalons". The inability of "classicalons" to achieve this job comes from the many special properties that the black holes have but the "classicalons" don't.

And that's the memo.