Tuesday, June 14, 2005

Computer models and tossing a coin

Is the following story from a different field something we can learn from?

As described in a press release, a team of researchers from Oxford has published a study - one that should have been done a long time ago - in the journal Global Ecology and Biogeography.

What is their contribution? They were apparently the first people in the field of climatic biodiversity science to have a great and original :-) idea: to actually test the models. So they imagined that they returned to the 1970s, took 16 current climate models - more concretely, models that predict the impact of climate change on biodiversity - with them, and tried to predict the ranges where different species would be found in 1991.

For 90% of the British bird species, the models could not even agree whether the range would expand or shrink. In the remaining 10% of cases, the models did agree: in one half of these cases, the reality (as we know it in 2005) agreed with the models; in the other half, it disagreed. In other words, you could just as well toss a coin - or, even better, ask someone who likes birds - to make the prediction. The results would be equally solid, except that you would save hundreds of millions of dollars.
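
To make the arithmetic behind the coin-toss comparison explicit, here is a tiny Python sketch. Only the quoted proportions (a consensus for roughly 10% of the species, with half of those consensus predictions right) come from the press release; the species count and everything else is an illustrative assumption of mine.

```python
import random

random.seed(1)

n_species = 100          # illustrative number of British bird species (my assumption)
agree = n_species // 10  # the models reached a consensus for roughly 10% of the species
correct = agree // 2     # ...and that consensus matched the 1991 reality for half of them

# Baseline: toss a coin for the same species for which the models agreed.
trials = 100_000
coin_avg = sum(
    sum(random.random() < 0.5 for _ in range(agree)) for _ in range(trials)
) / trials

print(f"model consensus: {correct} of {agree} correct")       # 5 of 10
print(f"coin toss (avg): {coin_avg:.2f} of {agree} correct")  # about 5.00 of 10
```

The two numbers come out the same, which is the whole point: among the species where the models agreed, the consensus carried no more information than the coin.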

Better results (a 75% success rate) are obtained if you choose the "right" subset of the models and let them decide "collectively", but I don't exactly know whether this is a well-defined algorithm or a joke.

Now, frankly speaking, climate bioscience has never really been a part of the respectable core of sciences (it was always more of a sexy fad), so it should not be surprising if it is identified as cargo cult science or a computer game. But can high-energy physics take a lesson from this story?

Difference from high-energy physics

Well, any possible lesson is guaranteed to be very limited. The reason is that we seem to be sure - for very good reasons - that whatever we know about the past decades of (experimental) physics is explained by the Standard Model and General Relativity. If we took these current theories - the counterparts of those 16 models - to the past, we would almost certainly succeed in predicting the past that was, at that point, still the future. We think that we have done this gedanken experiment hundreds of times.

But is there something to say about beyond-the-Standard-Model physics? I think there is something to say about the principles that are meant to direct our search for new theories but that are independent of their technical details. This includes, in particular, the problem of vacuum selection.

For example, one may conjecture that the theory describing Nature should be the simplest theory that allows for the existence of life and satisfies some mild conditions - for example, that it contains the last well-established theory as a subset (today, that means the Standard Model). Alternatively, another grand principle may say that any objects (or terms in the Lagrangian) whose properties we do not understand are fundamental, and that the relevant parameters are determined - or at least that we should believe they are determined - by chance or by the requirement that life exists.

If we had returned to the 1960s or the early 1970s - much like the Oxford climate scientists imagined doing - both of these grand principles would simply have failed. They would have failed to predict some particles and many relations between them. In my opinion, this is an indication that these grand principles are not good enough and are likely to be incorrect. Until we actually see some new physics, we must unfortunately rely on these semi-philosophical considerations.

At any rate, one should try to test the various choices that have alternatives, and the various approaches - especially those that are expected to be studied for years or decades - against all the past data we actually have. And we should always try to estimate how non-trivial our arguments supporting a particular claim - or, even more importantly, a whole direction or paradigm - really are.

Obviously, the situation of the climatic biodiversity scientists is much simpler. They should have compared the models with reality a long time ago, concluded that the particular models they had were useless despite their efforts to make them realistic, and moved on. And if no progress in constructing predictive models is made for decades, they should simply give up temporarily and start to think that a quantitative, predictive science about their favorite subject is impossible. Our situation is more subtle because it is tougher to compare our thinking with experiments. It is more difficult to show that we are on the right track, but it is also more difficult to show that we are on the wrong track. But these two possibilities should always be studied in a balanced way, I think.

This also applies to string theory, where experiments are replaced by calculations in more concrete examples, and these calculations are meant to decide the fate of the more general principles that we want to believe. We have been convinced that the problem of quantum gravity has an essentially unique solution currently called string/M-theory, and I am still convinced that there is a huge body of evidence that this statement is true if the words are properly defined.

However, being on the right track in the big questions does not imply that we will always be on the right track. The fact that many dualities between string theories and other mathematical structures seem to be obviously correct does not imply that all conjectured dualities are correct. The fact that a particular simple vacuum exists and is consistent does not imply that all proposed vacua - or even all vacua proposed by the same authors - are consistent and exist. We don't seem to ask these questions often enough.

Five minutes for a duality

In the mid-1990s, we had the duality revolution. One of its defining features was the following law that Tom Banks explained to me:
  • If you can't show that a conjectured duality is wrong in 5 minutes, it must be correct.

Because Peter Woit did not have sufficient capacity to understand an essential point, let me emphasize that the theorem above has really held for tens of important dualities, and the initial five-minute checks were followed by roughly 10 years of further tests. And the dualities have passed all of them.

Unfortunately, this principle was also applied years after the revolution, when it was no longer correct. The dualities between highly supersymmetric descriptions are the most likely to be correct because both sides of the duality are extremely constrained and most likely unique. Together with a couple of checks, this essentially proves the case.

But are we really sure that these dualities generalize to the non-supersymmetric context, for example? Most of the constraints go away, and precise quantitative checks are usually impossible. Virtually all conjectured non-supersymmetric dualities (with a few exceptions in the topological context) are suspicious, and even those that are true may be true only because we define one side to be dual to the other - while other, equally consistent definitions may exist, too.

One can't really get results for free, and until one finds the correct theory that can be checked arbitrarily precisely, there is a principle of complementarity between the number of tools and possibilities that may be used to explain a given phenomenon and the probability that the resulting explanation is correct. If we simply decide to solve some of our problems by adding many new tools (or many backgrounds), it may help, but unless there are independent arguments, such a move also reduces the probability that the solution is correct. One may talk about progress only if there is an independent argument that the probability of the model's being correct drops by less than the cost of the arbitrary assumptions and players we have added. Otherwise it is just a confusing violation of Occam's razor.
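
One crude way to make this complementarity quantitative - a toy piece of Bayesian bookkeeping, not a formula taken from any paper - is the following. If an explanation relies on $n$ additional, independent, arbitrary assumptions $A_1,\dots,A_n$ with prior probabilities $p_1,\dots,p_n<1$, then roughly

$$P(\text{explanation is correct}) \;\lesssim\; P(\text{core idea is correct}) \times \prod_{i=1}^{n} p_i,$$

so adding the new tools counts as progress only if the independent evidence in their favor outweighs the suppression by the product of the $p_i$.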