This blog is still rather new, and I am tempted to write a lot. :-)
Well, let me tell you something about the parameters of Nature.
This is a topic that recently emerged from a discussion on the sci.physics.research newsgroup about the predictive power of string theory. These discussions often end up as completely hopeless exchanges because often more than 95% of the participants believe totally weird things about very elementary questions in physics, and they are convinced that anything they misunderstand about particle and fundamental physics must be the particle physicists' fault.
You might say that this interpretation of the disagreement is just my point of view, but it may be more important that it is the correct point of view. :-)
Of course, they are repeatedly mistaken. Don't get me wrong: I don't claim that all other participants of the sci.physics.research newsgroup are absolutely dumb. But the percentage of almost absolute morons on that newsgroup is sufficiently large to destroy a decent debate.
What is the issue now? To emphasize that I am far from being alone with my opinion, let me start with a citation of an authority. At the end of 2000, David Gross, the winner of the 2004 physics Nobel prize, formulated the 10 most serious questions in physics in his article (or interview?) in the New York Times. He often repeats - for example at the recent "Future of Physics" conference in Santa Barbara - that the most important question in physics (the one that he would ask God) is the following:
Can all fundamental dimensionless continuous parameters of Nature be calculated from theoretical principles, without any input from experiments?
Let me say that I subscribe to this being the biggest question in physics for me as well. It does not have to be the biggest question for everyone else, but I guess that all particle physicists and string theorists realize why it is big. This biggest question originally comes from Einstein, but I am sure that it has become a part of our knowledge (and our culture).
Let's look at various details of the question. We only talk about dimensionless parameters - such as the fine-structure constant or the proton/electron mass ratio - because the dimensionful constants depend on our choice of units, and these units are more or less arbitrary (results of coincidences in the history of human civilization), so such numbers of course cannot be explained scientifically (unless you explain why a guy several centuries ago had a thick inch). It's only the dimensionless numbers (those without any units) that all civilizations would have to agree upon.
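Just to make the distinction tangible, here is a quick numerical check (my own illustration, using the standard textbook formula and rounded CODATA values): the fine-structure constant is assembled from dimensionful constants, yet all the units cancel, so every civilization - whatever its meters and seconds - would agree on the same pure number, roughly 1/137.

```python
from math import pi

# Rounded CODATA values in SI units (my illustrative inputs):
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# The fine-structure constant: all the units cancel out.
alpha = e**2 / (4 * pi * eps0 * hbar * c)
print(alpha, 1 / alpha)   # ~0.0072973..., i.e. ~1/137.036
```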
The question also talks about continuous parameters only, because the discrete parameters are qualitative, in some sense, and they can be determined absolutely exactly, even with very inaccurate experiments. The continuous numbers, on the other hand, would have to be measured with perfect precision if we wanted to predict physics completely accurately. The continuous parameters are the main problem.
How far has physics gotten in this dream to explain and calculate all measurable numbers in Nature? Instead of zillions of seemingly arbitrary parameters describing the properties (spectral lines, permittivity, density, ...) of each element (and perhaps each compound) and each nucleus and so on, we now have a theory where everything follows from roughly 19+10 parameters - namely the Standard Model. The parameters are roughly three gauge couplings, plus a lot of elementary particle masses and the mixings between them.
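For concreteness, here is one common way of doing the bookkeeping of those parameters; the exact split is a matter of convention (e.g. whether the neutrinos are Majorana), so treat the counts below as an illustrative sketch rather than the unique answer.

```python
# One conventional counting of the Standard Model's free parameters
# (an illustrative sketch; conventions differ, especially in the neutrino sector).
standard_model = {
    "gauge couplings (SU(3) x SU(2) x U(1))": 3,
    "quark masses": 6,
    "charged lepton masses": 3,
    "CKM mixing (3 angles + 1 phase)": 4,
    "Higgs sector (e.g. mass + vev)": 2,
    "QCD theta angle": 1,
}
neutrino_extension = {
    "neutrino masses": 3,
    "PMNS mixing (3 angles + up to 3 phases)": 6,  # depends on Majorana vs Dirac
}
print(sum(standard_model.values()))       # 19
print(sum(standard_model.values())
      + sum(neutrino_extension.values())) # ~28, close to the "19+10" quoted above
```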
String theory, by its very nature, does not admit any continuous dimensionless parameters whatsoever, so once you determine all possible discrete choices (there is a countable number of them), you can decide whether string theory is right or wrong. We don't know which answer is correct yet, at least not with certainty, but the basic feature of string theory - that all apparent parameters are either fixed by consistency, or are dynamical fields whose values are determined dynamically at the end - is more or less well-established.
Note that even in the catastrophic anthropic scenarios in which we would have to deal with e.g. 10^{300} vacua, string theory is amazingly predictive. Using the usual anthropic counting (the exponents below are sort of random guesses which nevertheless describe how the experts are thinking about this problem), the roughly correct cosmological constant can only be found in 10^{200} of them, and roughly 10^{100} of them look like the Standard Model. We only have about 100 decimal digits' worth of discrete choices of a vacuum to adjust, and therefore 30 parameters can only be accidentally fine-tuned with an accuracy of about 3 decimal digits each if string theory is wrong and the agreement is a pure coincidence. It's pretty clear that we can measure more accurately, and therefore it is likely that no vacuum from this ensemble will match the measured values to the verifiable precision if string theory is not quite correct. Even though 10^{300} looks like a large number, it is really much less than infinity.
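Here is the back-of-envelope version of that counting (with the same made-up exponents as above, purely for illustration): 10^{100} Standard-Model-like vacua only give you about 100 decimal digits of discrete freedom to distribute among roughly 30 measured parameters, i.e. about 3 digits per parameter - far less than the experimental precision of the best-measured quantities.

```python
from math import log10

# Illustrative exponents, as in the text above - not actual landscape data.
vacua_looking_like_standard_model = 1e100
measured_parameters = 30

# Choosing one vacuum out of 10^100 is worth ~100 decimal digits of freedom.
digits_of_freedom = log10(vacua_looking_like_standard_model)
digits_per_parameter = digits_of_freedom / measured_parameters
print(digits_per_parameter)   # ~3.3 digits per parameter, if the match were a pure coincidence
```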
If one of the vacua matched reality, it would be a spectacular success, probably the biggest achievement in the history of science. It would allow us to calculate anything with arbitrary precision.
(Let me now avoid the question whether we should believe that our Universe has "average" properties, because this question is controversial among string theorists.)
OK, 10^{300} is large for some purposes, but small for others. It is certainly true that the Standard Model can pragmatically be a more useful model for describing reality - even though the Standard Model can never be determined "exactly", since it depends on continuous parameters (unlike string theory, which is completely rigid). But there are some things that should certainly not be controversial:
Realistic theories in physics depend on a finite number of continuous parameters. The dependence of the results on all parameters is smooth almost everywhere. The space of parameters is therefore a manifold that is differentiable almost everywhere, and one can talk about its dimension - the number of parameters. Theories with a few parameters are more constrained, more satisfactory, and more predictive than theories with many parameters. Theories with infinitely many parameters - such as the nonrenormalizable theories - are unpredictive and unacceptable as complete theories of anything, because one needs an infinite amount of input to determine the theory fully.
OK, it was not just these statements that I gave them. One also had to explain away all the misleading examples they raised - which includes an explanation of effective field theory (where things are approximate, and the results only depend on a finite number of parameters "up to a certain scale", whatever exactly "scale" means).
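To see why this is not just a word game, here is a toy sketch (entirely my own, assuming unknown order-one coefficients): in an effective theory the n-th non-renormalizable operator contributes roughly c_n (E/Lambda)^n to an observable. Far below the cutoff Lambda only a handful of unknown coefficients matter at any realistic accuracy, but as a would-be complete theory valid near and above the cutoff it needs an essentially infinite amount of input.

```python
# Toy illustration of effective-theory predictivity (my own sketch,
# assuming the unknown coefficients are of order one, |c_n| ~ 1).

def coefficients_that_matter(energy_over_cutoff, target_accuracy):
    """How many unknown higher-dimension coefficients affect an observable
    above the target accuracy, if the n-th term scales like (E/Lambda)^n."""
    n = 0
    while energy_over_cutoff ** (n + 1) > target_accuracy:
        n += 1
    return n

for ratio in (0.01, 0.1, 0.9):
    print(ratio, coefficients_that_matter(ratio, target_accuracy=1e-6))
# Far below the cutoff (ratio 0.01) only ~2 unknowns matter;
# near the cutoff (ratio 0.9) you would have to measure ~130 of them.
```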
You would think that these statements are so obvious that no one who is interested in physics should have problems with them. 't Hooft and Veltman received a Nobel prize for 't Hooft's proof of renormalizability of the electroweak theory, and you would therefore think that the importance of renormalizability of a quantum field theory - one proposed as a fundamental description of anything - is something that must be obvious to all those interested in theoretical physics. You would also think that all of them understand that renormalizability means that there is just a finite number of parameters that must be determined from the experiments, and everything else can be predicted at least to all orders in perturbation theory.
Well, you would be wrong. These obvious facts constitute a barrier that is impenetrable for many people - and some of them are even rather well-known scientists.
One of the less famous ones starts to argue that one parameter is the same as 30 parameters, and that theories with one or 30 parameters are equally predictive and satisfactory. In order to prove that he is not just a moron, but rather a sophisticated moron, he offers the proof that the one-dimensional continuum can be mapped onto a higher-dimensional (or infinite-dimensional) continuum, even by continuous (but highly pathological and non-differentiable, and therefore physically irrelevant) functions. He repeats that one number as well as 30 numbers constitute an infinite amount of information, and therefore there is no difference. That's a hopeless case, of course.
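For the record, here is my reconstruction of the kind of trick he has in mind, and why it is physically empty: you can pack two real parameters into one by interleaving their decimal digits, but the packing is not even continuous - two pairs of parameters that no experiment could distinguish get encoded into "master parameters" that already differ in the first digit - so it has nothing to do with the differentiable reparametrizations under which the number of parameters is actually well-defined.

```python
# Packing two numbers in [0, 1) into one by interleaving their decimal digits
# (my reconstruction of the "one parameter equals thirty" trick).

def interleave(x, y, digits=8):
    """Encode the pair (x, y) as a single number with alternating digits."""
    xd = f"{x:.{digits}f}"[2:]   # digits of x after the decimal point
    yd = f"{y:.{digits}f}"[2:]   # digits of y after the decimal point
    return float("0." + "".join(a + b for a, b in zip(xd, yd)))

# Two experimentally indistinguishable parameter pairs...
z1 = interleave(0.19999999, 0.55555555)
z2 = interleave(0.20000000, 0.55555555)
# ...are encoded into wildly different "master parameters":
print(z1, z2, abs(z1 - z2))   # ~0.16 vs ~0.25, a jump of ~0.09
```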
But what would you think if you met someone else, someone nice, someone who is around, namely someone who can be described as a leader of quantum computation, who argues that there is really no difference between renormalizable and non-renormalizable theories as far as predictivity goes (he immediately and explicitly offers the Standard Model and quantized General Relativity with all counterterms up to five loops as examples) - and who even states that drawing a graph of a function (which is a part of the input of a theory) gives you a more predictive theory than knowing the function analytically, as long as the analytical function looks too complicated to you?
I don't know what you would think, but I am totally stunned and scared. What sort of physics is being taught at the high schools and colleges? Is it really necessary that people - even famous people - don't understand what it means to "understand" or "explain" something in physics? Does he really believe that if we draw a curve describing a black-body radiation experiment, so that we intuitively feel that our drawing agrees with our experiments, we have a more predictive theory than Planck, who found an analytical prescription for the function? Does he really think that Kepler's and Newton's laws did not mean any progress in understanding - and predicting - the paths of celestial bodies, as compared to the phenomenological observations before Kepler (or before the epicycles)? What do they consider progress in theoretical physics, if it is not obtaining increasingly analytical formulae describing increasingly large classes of phenomena increasingly accurately, with a decreasing number of arbitrary and independent assumptions and parameters?
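To make the Planck example concrete (my own snippet, using the standard textbook formula): the entire black-body spectrum, at every frequency and every temperature, follows from one short analytical expression - exactly the kind of compression of infinitely many hand-drawn curves into a finite formula that constitutes understanding.

```python
from math import exp

# Planck's law for black-body radiation (standard constants, SI units).
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def spectral_radiance(nu, T):
    """Planck's B(nu, T) in W * sr^-1 * m^-2 * Hz^-1."""
    return (2.0 * h * nu**3 / c**2) / (exp(h * nu / (k * T)) - 1.0)

# One formula replaces an infinity of hand-drawn curves:
for nu in (1e13, 1e14, 1e15):
    print(nu, spectral_radiance(nu, T=6000.0))   # a roughly Sun-like temperature
```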
It's just very hard to discuss physics if people can't agree even about the fact that renormalizable theories are more predictive than non-renormalizable ones - and the fact that a correct analytical form of a function is always better (and more predictive) than a mere numerical knowledge of this function.