Friday, May 12, 2006

Wati Taylor: correlations in the landscape

Washington Taylor was speaking about the problem of enumerating a particular perturbative class of vacua, proving its finiteness, and finding some correlations - work he has done with Michael Douglas that will soon appear as a preprint.

He was considering the "Z2" orientifold of a "Z2 x Z2" orbifold of the six-dimensional torus in type IIA string theory. The two "Z2" generators defining the orbifold reflect the coordinates as "----++" and "++----", so that their product (the third nontrivial element of the "Z2 x Z2" group) flips the signs according to the rule "--++--". The orientifold also reflects the toroidal directions as "+-+-+-".

You wrap D6-branes on various cycles. Each of these D-branes is parameterized by six integers "m1,m2,m3,n1,n2,n3" that encode the winding numbers on the allowed cycles: the wrapped three-cycle is the product
  • (m1 a + n1 b) x (m2 c + n2 d) x (m3 e + n3 f)
where "a,b,c,d,e,f" are the basic one-cycles of the six-torus. The tadpole cancellation condition seems to have four parts. You impose all of these conditions and ask how many different (supersymmetric) configurations of an arbitrary number of intersecting D-branes satisfy the constraints.
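To make the bookkeeping concrete, here is a minimal Python sketch of how such a brane and its tadpole charges might be stored. The schematic form of the four charges and the sign conventions are my assumptions, borrowed from the standard intersecting-brane literature, not something taken from the talk:

```python
from dataclasses import dataclass

# Hypothetical bookkeeping for one D6-brane wrapping the factorized
# three-cycle (m1 a + n1 b) x (m2 c + n2 d) x (m3 e + n3 f).
@dataclass(frozen=True)
class Brane:
    m: tuple  # (m1, m2, m3)
    n: tuple  # (n1, n2, n3)

    def tadpole_charges(self):
        """Schematic four-component tadpole charge; the actual signs and
        coefficients depend on the orientifold conventions."""
        m1, m2, m3 = self.m
        n1, n2, n3 = self.n
        return (n1 * n2 * n3, n1 * m2 * m3, m1 * n2 * m3, m1 * m2 * n3)

def total_tadpoles(stacks):
    """Sum the charges of all stacks, each weighted by its number N_a of
    branes; tadpole cancellation equates this with the O6-plane charges."""
    return tuple(sum(N * b.tadpole_charges()[i] for N, b in stacks)
                 for i in range(4))

# made-up winding numbers, just to exercise the bookkeeping
stacks = [(4, Brane((0, 0, 0), (1, 1, 1))), (2, Brane((1, 1, 0), (0, 0, 1)))]
print(total_tadpoles(stacks))  # -> (4, 0, 0, 2)
```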

By one solution, you mean the information about the number of D-branes and their individual winding numbers (m_i,n_i); each such solution typically comes with a moduli space. Once a potential is generated on this moduli space, it can produce a larger number of stable vacua. But at this stage, you only ask how many such groups of vacua you can get.

The conditions define Diophantine equations - algebraic equations whose solutions are required to be integers - and Wati could prove that the number of solutions is indeed finite, although it could a priori have been infinite, using elegant methods from high-school mathematical olympiads. The general conclusion is that one can find a finite subset of the set of vacua M. I still think that this statement holds for every finite and for every infinite set M. ;-)
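Just to illustrate the flavor of such a finiteness proof, here is a toy sketch. The equation below is an assumed schematic stand-in for the four real tadpole conditions; the point is only that a fixed positive right-hand side bounds every term, hence every factor, hence the search:

```python
from itertools import product

# Toy olympiad-style argument (assumed schematic equation, not the actual
# tadpole conditions): if positive integers must satisfy
#   x1*x2*x3 + y1*y2*y3 = 16,
# then each product is at most 15, so each factor is at most 15, and a
# finite brute-force search provably exhausts all solutions.
by_product = {}
for t in product(range(1, 16), repeat=3):
    p = t[0] * t[1] * t[2]
    if p <= 15:
        by_product.setdefault(p, []).append(t)

solutions = [(x, y)
             for p, xs in by_product.items()
             for x in xs
             for y in by_product.get(16 - p, [])]
print(len(solutions), "solutions - finitely many, by construction")
```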

Moreover, if you want to consider not only random vacua from a set of 10^{350} vacua but even random subsets of this set, let me warn you that there are 2^{10^{350}} - which is almost 10^{10^{350}} - different subsets of the set of vacua. Thanks, Marcel, for being so incredibly picky. :-)
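For the record, the arithmetic behind that count can be checked with iterated logarithms, since the numbers themselves overflow any floating-point type:

```python
from math import log10

# A set with 10^350 elements has 2^(10^350) subsets. Compare exponents:
# log10(log10(2^(10^350))) = log10(10^350 * log10(2))
#                          = 350 + log10(log10(2)) ≈ 349.48,
# versus log10(log10(10^(10^350))) = 350 exactly. So 2^(10^350) is
# "almost" 10^(10^350) only in this doubly exponential sense.
loglog_subsets = 350 + log10(log10(2))
print(f"2^(10^350) = 10^(10^{loglog_subsets:.4f})")  # ≈ 10^(10^349.48)
```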

Eventually, Wati and Mike want to enumerate all the vacua that are orientifold-orbifolds of the six-torus with intersecting D-branes. They find about ten million solutions (for the winding numbers of the D-branes) whose number of generations is three, as estimated from the intersection numbers. Because they decide to only look at the solutions that respect a particular feature of the Standard Model, it turns out that they can list all of them in polynomial time, evading the "unsolvability" constraints that Denef and Douglas sort of codified for the "impossible mission" of finding the vacuum with the right cosmological constant.
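The generation counting can be illustrated with the standard intersection-number formula for factorized three-cycles, I_ab = prod_i (m_i^a n_i^b - n_i^a m_i^b); the winding numbers below are made up just to exercise the formula, and conventions (signs, tilted tori) vary between references:

```python
def intersection_number(brane_a, brane_b):
    """I_ab = prod_i (m_i^a n_i^b - n_i^a m_i^b) for two D6-branes on
    factorized three-cycles of T^6; |I_ab| counts the chiral generations
    at the intersection (sign conventions differ between references)."""
    (ma, na), (mb, nb) = brane_a, brane_b
    I = 1
    for i in range(3):
        I *= ma[i] * nb[i] - na[i] * mb[i]
    return I

# hypothetical winding numbers ((m1,m2,m3), (n1,n2,n3)) for two stacks
stack_a = ((1, 0, 1), (0, 1, -1))
stack_b = ((0, 1, 1), (1, 0, 2))
print(intersection_number(stack_a, stack_b))  # -> -3, i.e. three generations
```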

It turns out that approximately 10 models from this set survive because they have a semi-realistic gauge group. If the investigation of this class of vacua were motivated by a reliable and deep physical principle, you would be happy, of course. You would ask 10 students to look at each of them and check which of them is the right one. Unfortunately, it seems that Wati realizes - much like those of us who don't like the landscape discourse - that this whole exercise is physically unmotivated and there is no good reason to expect that one of these 10 particular vacua will be the right one.

But maybe someone should look at these ten vacua - they are quite simple!

So instead Mike and Wati want to apply analogous methods to look at broader classes of vacua with 10^{20} elements or so, because this larger number may cover the parameter space finely enough that one (or more) of these models will be indistinguishable from the observed low-energy action. Wati explains it differently:
  • The point of looking at a larger ensemble of models which agree with observed physics is to see whether they share additional features which may help make predictions beyond already observed physics. If we find 5 models with features X and Y of the standard model which all have feature Z which is not yet observed it is not very definitive. If we look in different parts of the string theory landscape and find that all 10^{20} models we know how to construct with features X and Y of the standard model have feature Z also it begins to carry some weight as a possible prediction.
I just don't understand this way of thinking. If you have five different models, then at least four of them are incorrect and cannot be used to make any predictions. If you have 10^{20} models, then it's even worse because at least 10^{20}-1 of them are wrong - which is, incidentally, a higher percentage than in the case of five models.

Whether a majority or a minority of the incorrect vacua shares some features is physically inconsequential - and if I allowed the wrong models to influence the predictions, they could only influence them negatively. Whenever there is a plausible model that does not satisfy a certain rule, you can't trust the rule, regardless of the number of models that do satisfy it.

We have lots of examples in which the reasoning above leads to incorrect conclusions - and moreover, most of the conclusions will depend on the chosen "representative class" of vacua and the measure (and relative weights of the classes) that we use to make the "predictions".

"Most" effective field theories are UV incomplete - and still, QCD is shockingly described by an asymptotically free, UV complete gauge theory. The correct theories or vacua describing the real world are sometimes generic but sometimes - in the more interesting and "elegant" cases in which we actually learn something new - they are extremely special and beautiful among the classes of theories of "similar complexity". The whole structure of string theory is an example because it is not only special but unique. The idea that we should look for "generic" vacua is nothing else than an irrational bias, and it is a kind of bias that goes in the opposite direction than everything good that has ever happened in the history of physics.

If you have a large enough class of random theories, surely there is a significant probability that you will approximate the currently known effective theory well enough. But that's something extremely different from finding the correct model or from making progress in physics.

In physics, one only finds the correct theory if it not only agrees with the known data but if it also agrees with the extra digits of quantities that will be measured with higher accuracy in the future or with completely new quantities and phenomena that are not known today. From this predictive viewpoint, it seems pretty clear to me that the classes of 10^{20} vacua - whatever they are - are as (un)motivated as the classes of 10 vacua, and the chance that the investigation of these two - populated and less populated - classes will turn out to be the right strategy is approximately the same in both cases.

The number of vacua on the one hand and the physical motivation to expect that a class of vacua is correct on the other are two entirely different things, and indeed, I believe that if these two things are correlated at all, they are negatively correlated, not positively. If someone in Pennsylvania finds the correct heterotic MSSM with the right fermion masses, will Mike and Wati argue that it does not matter because they can beat the heterotic guys with 10^{350} wrong models that unanimously predict different physics? I don't think so. 10^{350} wrong models is still less than one correct model.

Predictions from correlations?

Wati also showed a different concept that I find as hard to swallow as the intentional search for highly degenerate classes of vacua. By statistically analyzing the large set of solutions for his orbifolds, he wants to find correlations between various quantities, such as the number of generations and something else. Wati describes the point of this enterprise as follows:
  • The point of looking for strong correlations is to ask whether in the context of the landscape we can find concrete predictions from string theory. In particular, if all string vacua with features X and Y also have feature Z and we observe features X and Y in nature then we can say that string theory predicts feature Z. If all the parameters of the standard model and beyond the standard model physics are uniformly and independently distributed in string theory vacua then we cannot use this approach to make predictions.
I agree that if the law "X & Y implies Z" is universal in all of string theory, then "Z" is a prediction of string theory once "X & Y" are observed. This is essentially the goal of the swampland program. But we don't need statistics for that. What Wati showed us was something different: non-universal laws that only hold in "most cases", that only hold "statistically", and I would never trust predictions based on such shaky grounds.

These correlations are expressed as "ENT(xy) - ENT(x) - ENT(y)" where "ENT(x) = -sum_i p_i ln(p_i)" is the coarse-grained entropy of a statistical distribution "p_i" over the different values "i" of a quantity (or a pair of quantities). Wati's result in his particular examples was that there was virtually no information in the correlations: the difference was one bit and the distributions of different quantities were essentially independent Gaussians.
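A minimal sketch of this diagnostic, computed from empirical samples; only the formula comes from the talk, and the data below are made up. Note that ENT(xy) - ENT(x) - ENT(y) is minus the mutual information, so it vanishes for independent quantities:

```python
import math
from collections import Counter

def ent(samples):
    """Coarse-grained entropy ENT(x) = -sum_i p_i ln(p_i) of the
    empirical distribution of the values in `samples`."""
    total = len(samples)
    return -sum((c / total) * math.log(c / total)
                for c in Counter(samples).values())

def correlation(xs, ys):
    """ENT(xy) - ENT(x) - ENT(y): zero for independent quantities,
    negative (minus the mutual information) when they are correlated."""
    return ent(list(zip(xs, ys))) - ent(xs) - ent(ys)

# made-up toy data: number of generations vs. some other discrete quantity
gens  = [3, 3, 1, 2, 3, 1, 3, 2]
other = [0, 1, 0, 1, 0, 1, 0, 1]
print(correlation(gens, other) / math.log(2), "bits")
```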

Surely physicists have not been working for 30 years to extract 1 bit of information - whose probability of being correct is, moreover, 50 percent. Even if there were any correlation, I would probably find it physically uninteresting. We know for sure that some of these correlations would agree with those observed in the real world, and some of them would not.

What will you do with this probable outcome? Will you overhype the "successful" patterns as evidence that the landscape reasoning is good, while staying silent about the "unsuccessful" ones? I would count such an activity as a part of astrology or catastrophic global warming theory, not physics. It's frustrating to see that this is apparently what is being intended.

I wonder whether the people who were producing the very convoluted microscopic theories of the luminiferous aether in the 19th century really believed that this was the way to say anything new about physics - or whether most of them did these things just to do something and keep their jobs. Einstein took over in 1905 and showed not only that the aether was a ludicrous fantasy, but also that the absence of the aether is one of the basic principles underlying his relativistic revolution in physics. Today, all of us - except for those in loop quantum gravity - know that the aether is a silly idea that is not realized physically and was never well motivated.

My feeling about the random model building and random model guessing is somewhat analogous to the random construction of the aether from gears and wheels. We're missing something and we should not fool ourselves into thinking that we're not.