Saturday, December 31, 2005

Testing E=mc2 for centuries

Chad Orzel seems to disagree with my comments about the interplay between theory and experiment in physics. That's too bad because I am convinced that a person with at least a rudimentary knowledge of the meaning, the purpose, and the inner workings of physics should not find anything controversial in my text at all.

Orzel's text is titled "Why I could never be a string theorist" but it could also be named "Why I could never be a theorist or anything else that requires using the brain for extended periods of time". Note that the apparently controversial theory won't be string theory; it will be special relativity. The critics who can't swallow string theory always have other, much older and well-established theories that they can't swallow either.

The previous text about the relation between theory and experiment

Recall that I was explaining the trivial fact that in science in general and physics in particular, we can predict the results of zillions of experiments without actually doing them. It's because we know general enough theories - found by combining the results of past experiments with a great deal of theoretical reasoning - and we know their range of validity and their accuracy. And a doable experiment of a particular kind usually fits into a class of experiments whose results are trivially known and included in these theories. This is what we mean by saying that these are the correct theories for a given class of phenomena. An experiment with a generic design is extremely unlikely to push the boundaries of our knowledge.

When we want to find completely new effects in various fields, we must be either pretty smart (and lucky) or we must have very powerful apparatuses. For example, in high-energy physics, it's necessary that we either construct accelerators that accelerate particles to high energies above 100 GeV or so - this is why we call the field high-energy physics - or we must look for some very weak new forces, for example modifications of gravity in submillimeter experiments, or new, very weakly interacting particles. (Or some new subtle observations in our telescopes.)

If someone found a different, cheaper way to reveal new physics, that would be incredible; but it would be completely foolish to expect new physics to be discovered in a generic cheap experiment.

Random experiments don't teach us anything

It's all but guaranteed that a new low-energy experiment with the same particles that have been observed in thousands of other experiments and described by shockingly successful theories will teach us nothing new. Such experiments waste taxpayers' money, especially if they are very expensive.

In the particular case of the recent "E=mc^2 tests", the accuracy was "10^{-7}", while we know experimentally that the relativistic relations are accurate to "10^{-10}"; see Alan Kostelecky's website for more concrete details. We simply know that we can't observe new physics with this experiment.

Good vs. less good experiments

In other fields of experimental physics, there are other rules - but it is still true that one must design a smart enough experiment to be able to see something new or to measure various things (or confirm the known physical laws) with a better accuracy than the previous physicists. There are good experimentalists and less good experimentalists (and interesting and not-so-interesting experiments), which is the basic hidden observation of mine that apparently drives Ms. or Mr. Orzel up the wall.

Once again: what I am saying here is not just a theorist's attitude. Of course, it is also the attitude of all good experimentalists. It is very important for an experimentalist to choose the right doable experiments where something interesting and/or new may be discovered (or invented) with a nonzero probability. There is still a very large difference between experiments that reveal interesting results or inspire new ideas and experiments that no one else finds interesting.

Every good experimentalist would subscribe to the main thesis that experiments may be more or less useful, believe me. Then there are experimentalists without adjectives who want to be worshipped just for being experimentalists and who disagree with my comments; you may guess what the reason is.

Of course, one may design hundreds of experiments that are just stamp-collecting - or solving a homework problem for an experimental course. I am extremely far from thinking that this is the case everywhere outside high-energy physics. There have been hundreds of absolutely fabulous experiments done in all branches of physics and dozens of such experiments are performed every week. But there have also been thousands of rather useless experiments done in all these fields. Too bad if Ms. or Mr. Orzel finds it irritating - but it is definitely not true that all experiments are created equal.

Interpreting the results

Another issue is that if something unexpected occurred in the experiment that was "testing E=mc^2", the interpretation would have to be completely different from the statement that "E=mc^2" has been falsified. It is a crackpot idea to imagine that one invents something - or does an experiment with an iron nucleus or a bowl of soup - that will show that Einstein was stupid and that his very basic principles and insights are completely wrong.

Hypothetical deviations from the Lorentz invariance are described by terms in our effective theories. Every good experimentalist first tries to figure out which of them she really measures. None of these potential deviations deserves the name "modification of the mass-energy relation" because even the Lorentz-breaking theories respect the fact that, since 1905, we have known there is only one conserved quantity to talk about - mass/energy - which can take various forms. We will never return to the previous situation in which mass and energy were thought to be independent. It's just not possible. We know that one can transform energy into particles and vice versa. We can never unlearn this insight.

New physics vs. crackpots' battles against Einstein

Einstein was not so stupid and the principles of his theories have been well tested. (The two parts of the previous sentence are not equivalent but they are positively correlated.) To go beyond Einstein means to know where the room is for any improvement, clarification, or deformation of his theories and for new physics, and that room is simply not in the space of ideas like "E=mc^2 is wrong" or "relativity is flawed". A good experimentalist must know something about the theory, to avoid testing his own layman's preconceptions about physics that have nothing to do with the currently open questions in physics.

Whether an experimental physicist likes it or not, we know certain facts about the possible and impossible extensions and variations of the current theories - and a new law by which "E=mc^2" is suddenly violated by one part in ten million in a specific experiment with a nucleus is simply not the kind of modification that can be made to the physical laws as we know them. Anyone who has learned the current status of physics knows that this is not what serious 21st century physics looks like. Nor is current science about disproving some dogmatic interpretations of Bohr's complementarity principle.

Chad Orzel is not the only one who completely misunderstands these basic facts. Hektor Bim writes:
  • Yeah, this post from Lubos blew me away, and I’ve been trained as a theorist.

Well, he does not sound like a particularly well-trained one.

  • As long as we are still doing physics (and not mathematics), experiment rules.

Experiments may rule, but there are still reasonable (and even exciting) experiments and useless (and even stupid) experiments. Anyone who thinks that the "leading role" of the experiments means that the experimentalists' often incoherent ideas about physics are gonna replace the existing theories of physics, and that every experiment will be applauded even if it is silly, is profoundly confused. Weak ideas will remain weak ideas regardless of the "leading role" of the experiments.

  • What also blew me away is that Lubos said that “There is just no way how we could design a theory in which the results will be different.” This is frankly incredible. There are an infinite number of ways that we could design the theory to take into account that the results would be different.

Once again, there is no way to design a scientific theory that agrees with the other known experiments but predicts a different result for this particular experiment. If you have a theory that agrees with the experiments in the accelerators but gives completely new physics for the iron nucleus, you may try to publish it - but don't be surprised if you're described as a kook.

Of course, crackpots always see millions - and the most spectacular among them infinitely many ;-) - of ways to construct their theories. The more ignorant they are about the workings of Nature, the more ways to construct theories of the real world they see. The most sane ones merely think that it is easy to construct a quantum theory of gravity using the first idea that comes to their mind; the least sane ones work on their perpetuum mobile machines.

I only mentioned those whose irrationality may be found on the real axis. If we also included the cardinal numbers as possible values of irrationality, a discussion of postmodern lit crits would be necessary.

Scientific theories vs. crackpots' fantasies

Of course, someone could construct a "theory" in which relativity, including "E=mc^2", is broken whenever iron nuclei are observed in the state of Massachusetts - much like we can construct a "theory" in which the law of gravity is revoked whenever Jesus Christ is walking on the ocean. But these are not scientific theories. They're unjustifiable stupidities.

The interaction between theory and experiment goes both ways

It is extremely important for an experimental physicist to have a general education as well as feedback from the theorists to choose the right (and nontrivial) things to measure and to know what to expect. It is exactly as important as it is for a theorist to know the results of the relevant experiments.

Another anonymous poster writes:

  • What Lumo seems to argue is that somehow we can figure out world just by thinking about it. This is an extremely arrogant and short-sighted point of view, IMPO – and is precisely what got early 20th century philosophers in trouble.

What I argue is that it is completely necessary for us to think about the world when we construct our explanations of the real world as well as whenever we design our experiments. And thinking itself is responsible for at least one half of the big breakthroughs in the history of science. For example, Einstein deduced both special and general relativity more or less by pure thought, using only very general and rudimentary features of Nature known partially from experiments - but much more deeply and reliably from the previous theories themselves. (We will discuss Einstein below.)

Thinking is what the life of a theoretical physicist is mostly about - and this holds not only for theoretical physicists but also for other professions, including many seemingly non-theoretical ones. If an undereducated person finds this fact about the real world "arrogant", it is his personal psychological problem, and it does not change the fact that thinking and logical consistency are among the values that matter most whenever physical theories of the real world are deduced and constructed.

The anonymous poster continues:

  • By the same reasoning the orbits of the planets must be circular – which is what early “philosophers” argued at some point.

Circular orbits were an extremely useful approximation with which to start developing astrophysics. We have gone through many other approximations and improvements, and we have also learned how to figure out which approximations may be modified and which cannot. Cutting-edge physics today studies neither circular orbits nor the question whether "E=mc^2" is wrong; it studies very different questions because we know the answers to the questions I mentioned.

Pure thought in the past and present

A wise physicist in 2005 respects the early scientists and philosophers for what they did in a cultural context that was less scientifically clear than the present era, but she clearly realizes their limitations and knows much more than those early philosophers did. On the other hand, a bad and arrogant scientist in 2005 humiliates the heroes of ancient science although he is far dumber than they were, asks much more stupid questions, and promotes a much more rationally unjustifiable criticism of science in general than the comparably naive early philosophers could have dreamed of.

Of course, in principle one can get extremely far by pure thought, if the thought is logically coherent and based on the right principles, and many great people in the history of science indeed got very far. These are the guys whom we try to follow, and the fact that there have also been people who got nowhere by thinking cannot change the general strategy either.

  • Anthropic principle completely destroys whatever is left of the “elegance” argument, which is why it’s entertaining to see what will happen next.

I know that some anti-scientific activists would like to destroy not only the "elegance" of science but the whole of science - and join forces with the anthropic principle or anything else if necessary - but that does not yet mean that their struggle has any chance to succeed or that we should dedicate more than this single paragraph to them.

Another anonymous user writes:

  • As far as what Lubos meant, only he can answer that. But it would be obviously foolish to claim relativity could have been deduced without experimental input, and Lubos, whatever else he might be, is no fool.

History of relativity as a victory of pure thought

If interpreted properly, it would not be foolish; it is a historical fact. For example, I recommend The Elegant Universe by Brian Greene, Chapter 2, for a basic description of the situation. Einstein only needed very elementary input from the experiments - namely the invariance of physical laws under uniform motion, and the constancy of the speed of light. The latter naturally follows from Maxwell's equations, and Einstein was sure that the constancy was right long before the experiments showed that the aether wind did not exist.

It is known pretty well that the Michelson-Morley experiments played a rather small role for Einstein, and for some time it was even disputed whether Einstein knew these experiments at all back in 1905. (Yes, he did.) Some historians argue that the patent-office ideas about train synchronization played a more crucial role. I don't believe this either - but the small influence of the aether wind experiments on Einstein's thinking seems to be a consensus of the historians of science.

Einstein had deeply theoretical reasons to be convinced about both of these assumptions. Symmetries such as the Galilean/Lorentz symmetry, or "the unity of physical explanations", are not just irrelevant or subjective concepts of "beauty". They are criteria that a good physicist knows how to use when he or she looks for better theories. The observation that the world is based on more concise and unified principles than what crackpots and laymen would generally expect is an experimentally verified fact.

These two observations are called the postulates of special relativity, and the whole structure of special relativity, with all of its far-reaching consequences such as the equivalence of matter and energy, follows logically. Needless to say, all of these effects have always been confirmed - with accuracy that currently exceeds the accuracy available to the experimentalists of Einstein's era by very many orders of magnitude. Special relativity is a genuine and true constraint on any theory describing non-gravitational phenomena in our Universe, and it is a strong constraint, indeed.
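To illustrate how rigid this logical structure is, here is my schematic summary - not a part of the original argument - of the path from the two postulates to the mass-energy relation:

```latex
% From the two postulates one derives the Lorentz factor and the
% relativistic energy and momentum of a particle of mass m,
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
E = \gamma m c^2, \qquad p = \gamma m v,
% and eliminating the velocity v yields the frame-independent relation
E^2 - p^2 c^2 = m^2 c^4,
% which reduces to E = mc^2 for a particle at rest (p = 0).
```

Every piece of this chain is forced on you by the postulates; there is no dial you can turn for an iron nucleus without breaking the agreement with everything else.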

Importance of relativity

Whoever thinks that it is not too important, that a new experiment with a low-energy nucleus may easily show that these principles are wrong - which would essentially allow us to ignore special relativity - and that everything goes after all, is a crackpot.

General relativity: even purer thought

In a similar way, the whole structure of general relativity was derived by the same Einstein purely from the previous special theory of relativity plus Newton's approximate law of gravity, including the equivalence of the inertial and gravitational mass; the latter laws were 250 years old. There was essentially no room for experiments. The first experiments came years after GR was finished, and they have always confirmed Einstein's predictions.

The known precession of Mercury's perihelion is an exception; this effect, later predicted by GR, had been measured before Einstein. But Einstein only calculated the precession after he had completed his GR, and therefore it could not directly influence his construction of GR. He was much more influenced and impressed by Ernst Mach, an Austrian philosopher. I don't intend to promote Mach - but my point definitely is to show that contemporary experiments played a very small role when both theories of relativity were being developed.

There were also experiments that claimed to refute the theory, and Einstein knew that these experiments had to be wrong because "God was subtle but not malicious". Of course, Einstein was right and the experiments were wrong. (Similar stories happened to many great theoretical physicists; an experiment by renowned experimentalists that claimed to have falsified the Feynman-Gell-Mann theory of V-A interactions is another example - and Feynman knew right away, while reading the paper, that the experimentalists were just being silly.) Our certainty today that special relativity (or the V-A nature of the weak interactions) is correct in the "simply doable" experiments is much higher than our confidence in any single particular experimentalist. You may be sad or irritated, but that's about all you can do about this fact.

Other theories needed more experiments

It would have been much harder to get that far without experiments in quantum mechanics and particle physics, among many other branches of physics and science, but whoever questions the fact that there are extremely important insights and principles that have been found - and/or can be found - by "pure thought" (or that were correctly predicted long before they were observed) is simply missing some basic knowledge about science.

Although I happily admit that we could not have gotten that far without many skillful (and lucky) experimentalists and their experiments, there have been many other examples beyond relativity in which important theories and frameworks were developed by pure mathematical thinking whose details were independent of experiments. The list includes, among hundreds of other examples,

  • Dirac's equation. Dirac had to reconcile the first-order Schrödinger equation with special relativity. As a by-product, he also predicted something completely unknown to the experimentalists, namely antiparticles. Every successful prediction may be counted as an example of theoretical work that was not driven by experiments.
  • Feynman's diagrams and path integral. No one ever really observed "diagrams" or "multiple trajectories simultaneously contributing to an experiment". Feynman appreciated Dirac's theoretical argument that the classical concept of the action (and the Lagrangian) should play a role in quantum mechanics, too, and he logically deduced that it must play this role through his sum over trajectories. The whole Feynman diagram calculus for QED (generalizable to all other QFTs) followed by pure thought. Today we often say that an experiment "observes" a Feynman diagram, but you should not forget the huge amount of pure thought that was necessary for such a sentence to make any sense.
  • Supersymmetry and string theory. I won't provoke the readers with a description.

Lorentz violations are not too interesting and they probably don't exist

  • If he is claiming that Lorentz invariance must be exact at all scales, then I agree that he’s being ridiculous. But I think it is reasonable to claim that this experiment was not really testing Lorentz invariance at a level where it has not been tested before.

What I am saying is that it is a misguided approach to science to think that the next big goal of physics is to find deviations from the Lorentz invariance. We won't find any deviations. Most likely, there aren't any. The hypotheses about them are not too interesting. They are not justified. They don't solve any puzzles. Even if we found the deviations and wrote down the corresponding corrections to our actions, we would probably not be able to deduce any deep idea from these effects. Since 1905 (or maybe the 17th century), we have known that the Lorentz symmetry is as fundamental, important, and natural as the rotational symmetry.

The Lorentz violation is just one of many hypothetical phenomenological possibilities that can in principle be observed, but that will probably never be observed. I find it entertaining that those folks criticize me for underestimating the value of experiments when I declare that the Lorentz symmetry is a fundamental property of the Universe that holds whenever space is sufficiently flat. Why is it entertaining? Because my statement is supported by millions of accurate experiments while their speculation is supported by 0.0001 of a sh*t. It looks like someone is counting negative experiments as evidence that more such experiments are needed.

The only reason why the Lorentz symmetry irritates so many more people than the rotational symmetry does is that these people misunderstand 20th century physics. From a more enlightened perspective, the search for Lorentz breaking is exactly as (un)justified as a search for the violation of rotational symmetry. The latter has virtually no support because people find the rotational symmetry "natural" - but this difference between rotations and boosts has been known to be completely irrational since 1905.

Parameterizing Lorentz violation

In the context of gravity, the deviations from the Lorentz symmetry that can exist may be described as spontaneous symmetry breaking, and they always involve the effect of gravity as in general relativity and/or the presence of matter in the background. In the non-gravitational context, these violations may be described by various effective Lorentz-breaking terms, and all of their coefficients are known to be zero with a high and ever growing degree of accuracy. See the papers by Coleman and Glashow, among others.
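To give a flavor of what such a parameterization looks like - this is my own heavily simplified sketch, not the full effective Lagrangian of those papers - one popular choice assigns each particle species its own maximal attainable velocity:

```latex
% Each species i propagates with its own limiting velocity c_i,
E_i^2 = p^2 c_i^2 + m_i^2 c_i^4, \qquad c_i = c\,(1 + \epsilon_i),
% and exact Lorentz invariance requires all the \epsilon_i to coincide;
% experiments constrain the differences \epsilon_i - \epsilon_j to be
% zero within very small error bars.
```

Note that even in this deformed framework, energy and momentum remain unified relativistic concepts; nothing resembling the pre-1905 separation of mass and energy ever reappears.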

Undoing science?

The idea that we should "undo" the Lorentz invariance, "undo" the energy-mass equivalence, or anything like that is simply an idea to push physics 100 years into the past. It is crackpotism - a physics counterpart of creationism. The experiments that could have been interesting in 1905 are usually no longer so interesting in 2005 because many questions have been settled and many formerly "natural" and "plausible" modifications are no longer "natural" or "plausible". The previous sentence comparing 1905 and 2005 would be obvious to everyone if it were about computer science - but in the case of physics, it is not obvious to many people simply because physics is harder for the general public to understand.

But believe me, even physics has evolved since 1905, and we are solving different questions. The most interesting developments as of 2005 (for readers outside the Americas: 2006) are focusing on significantly different issues, and whoever describes low-energy experiments designed to find "10^{-7}" deviations from "E=mc^2" as one of the hottest questions of 2005 is either a liar or an ignoramus. It is very fine if someone is doing technologically cute experiments; but their meaning and importance should not be misinterpreted.

Inside the DeLay machine

It seems typical for an information dump on Friday afternoon during the holidays that this Washington Post article hasn't gotten more attention:

The U.S. Family Network, a public advocacy group that operated in the 1990s with close ties to Rep. Tom DeLay and claimed to be a nationwide grass-roots organization, was funded almost entirely by corporations linked to embattled lobbyist Jack Abramoff, according to tax records and former associates of the group.

During its five-year existence, the U.S. Family Network raised $2.5 million but kept its donor list secret. The list, obtained by The Washington Post, shows that $1 million of its revenue came in a single 1998 check from a now-defunct London law firm whose former partners would not identify the money's origins.

Two former associates of Edwin A. Buckham, the congressman's former chief of staff and the organizer of the U.S. Family Network, said Buckham told them the funds came from Russian oil and gas executives. Abramoff had been working closely with two such Russian energy executives on their Washington agenda, and the lobbyist and Buckham had helped organize a 1997 Moscow visit by DeLay (R-Tex.).

The former president of the U.S. Family Network said Buckham told him that Russians contributed $1 million to the group in 1998 specifically to influence DeLay's vote on legislation the International Monetary Fund needed to finance a bailout of the collapsing Russian economy.

...

Whatever the real motive for the contribution of $1 million -- a sum not prohibited by law but extraordinary for a small, nonprofit group -- the steady stream of corporate payments detailed on the donor list makes it clear that Abramoff's long-standing alliance with DeLay was sealed by a much more extensive web of financial ties than previously known.


And there's a whole lot more at the link, including how DeLay's wife went on the payroll, how the cabal purchased a townhouse and went into contortions to wriggle around the financial and disclosure rules, and so on and so on.

This is another example of Tom DeLay's personal hypocrisy, as demonstrated in his own words, and another reason why he's a dead man walking (politically speaking only, of course -- Hello, NSA).

Because if Ronnie Earle can't nail him, Jack Abramoff will.

Update: More details about the DeLay/Abramoff/Russian connection from MSNBC, here.

Update#2: Josh Marshall summarizes.

Internet gender gap

First, an off-topic answer. Celal asks me about the leap seconds - why the Earth has not already stopped rotating if there are so many leap seconds. The answer is that we are now indeed inserting a leap second in most years - which means that one year is longer by roughly 1 second than it was back in 1820 when the second was defined accurately enough. More precisely, what I want to say is that one solar day is now longer by roughly 1/365 of a second than it was in the 19th century; what matters, of course, is that noon stays at 12 pm.

Although the process of slowing down the Earth's rotation has some irregularities, you can see that you need roughly 200 years to increase the number of required leap seconds per year by one. In order to halve the angular velocity, you need to increase the number of leap seconds per year by roughly 30 million (the number of seconds in a year), which means that you need 30 million times 200 years, which is about 6 billion years. Indeed, at time scales comparable to the lifetime of the solar system, the length of the day may change by as much as 100 percent.

100 percent is a bit of an exaggeration because a part of the recent slowing is due to natural periodic fluctuations and aperiodic noise, not a trend. However, coral reefs indeed seem to suggest that there were about 400 days per year 0.4 billion years ago. Don't forget that the slowing down is exponential, I think, and therefore the angular velocity will never quite drop to zero (something that has almost happened to our Moon).
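For readers who want to check the 6-billion-year arithmetic above, here is a back-of-the-envelope version (a sketch of my own; it assumes the linear trend of one extra leap second per year accumulating every ~200 years, which of course will not really hold forever):

```python
# Back-of-the-envelope estimate of how long it would take the day to
# double in length, given the rough trend discussed above.
SECONDS_PER_YEAR = 365.25 * 86400      # roughly 31.6 million seconds
YEARS_PER_EXTRA_LEAP_SECOND = 200      # ~200 years to add one leap second/year

# Halving the angular velocity doubles the length of the day, so a year
# would eventually need roughly SECONDS_PER_YEAR extra leap seconds.
years_needed = SECONDS_PER_YEAR * YEARS_PER_EXTRA_LEAP_SECOND
print(f"{years_needed / 1e9:.1f} billion years")  # prints: 6.3 billion years
```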

The BBC informs us that

While the same percentage of men and women use the internet, they use it in very different ways and search for very different things. Women focus on maintaining human contacts by e-mail etc., while men look for new technologies and novel ways to do new things.

  • "This moment in internet history will be gone in a blink," said Deborah Fallows, senior research fellow at Pew who wrote the report.

I just can't believe that someone who is doing similar research is simultaneously able to share such feminist misconceptions. The Internet has been around for ten years and there has never been any political or legal pressure for men and women to do different things with it - the kind of pressure from the past that is often used to justify similar hypotheses about the social origin of various effects.

People are just doing whatever they find natural to do with the internet, and the result is that there are significant differences between men and women. These differences follow primarily from the fact that we are wired differently. Everyone who is able to look around and evaluate their perceptions knows that.

Growing gender gap in computer science

Meanwhile, the gender gap grows in computer science (CS). The Boston Globe describes the computer science department of Tufts University. Twenty years ago, many women were attracted to the department as the social engineers tried to produce "gender equality". The peak of this movement came in 1985, when as many as 37 percent of the bachelor's degrees in computer science went to women.

You have had everything you need according to the pseudoscientific theories of discrimination: a lot of female role models and a good, welcoming atmosphere. Nevertheless, the airplanes don't land. The percentage of women among the new graduates dropped below 20 percent again. Something must be wrong, right? Still, many people are going to continue to propagate their patently false explanations of the observed statistics, explanations that have been falsified hundreds of times. For example, the Boston Globe article by Marcella B. says

  • As the popularity of computer science soared in the first half of the 1980s, many university departments became overburdened and more competitive, some professors argue. Introductory classes were taught in a way that emphasized technical minutiae over a broader sense of what was important and exciting about the field, a style catering to the diehard -- and overwhelmingly male -- techies rather than curious new recruits.

This is a completely unreasonable comment. The reality is that the field of computer science simply is a competitive field. It has been one for many years. And the males seem to be doing statistically better both in the technical minutiae of the field and in the excitement about its general features. Moreover, both of these things - general excitement as well as technical details - are necessary for computer science people to do their work competitively. Abandoning "technical minutiae" is nothing else than lowering the standards.

I wonder how many more years, decades, or centuries will be necessary for the believers in various feminist religions to start to realize that something is wrong with their beliefs.

Friday, December 30, 2005

Ben Grant for Lt. Governor

This is a mild surprise. Via BOR and Kuff, directly from the Marshall News-Messenger (a newspaper I nearly went to work for, once upon a time):

Marshall resident and attorney Ben Z. Grant on Thursday announced he will be a candidate for Texas Lieutenant Governor in the March Democratic primary.

Grant, 65, a former state representative who also served 17 years as justice of the Sixth Court of Appeals in Texarkana, said he is looking forward to the statewide race.

...

Grant retired from the Sixth Court of Criminal Appeals when his term ended in 2002. He served as a state representative from 1971 until 1981.

Grant was also a district judge for the 71st Judicial District Court in Harrison County and was appointed to the court of appeals in 1985 by then-Gov. Mark White. He said he spent 37 years in government, starting his career as a school teacher.

Grant has also been a columnist for the M N-M, giving them the scoop here. If he gets some competition in the primary, we won't know about it until the end of the filing period, which is next Monday.

Handicapping 2005 for 2008's prospective candidates

Chris Cillizza has some pretty good takes here. For the Dems, the year just past grades out as a winner for Mark Warner, and he also gives Russ Feingold high marks for having broken into the top tier. John Edwards trails them slightly, having managed to keep his profile elevated and positive. Hillary Clinton and John Kerry neither helped nor hurt themselves during the year, which is a net negative for them both.

Another Virginian, George Allen, tops the Republican list, with Haley Barbour (!?) second and John McCain third, mostly on the basis of how McCain manages to alienate the base and burnish his independent credentials at the same time. Dr. Bill Frist had the worst year among all contenders, and the jury's still out on Chuck Hagel and Mitt Romney. Cillizza rates Arkansas Republican governor Mike Huckabee as a dark horse on the order of New Mexico's Bill Richardson on the Democratic side.

He gives the chances of Condi Rice running for president the same odds he gives Al Gore, about zero. And makes no mention of Dick Cheney standing before the voters again.

Ahahahahaha.

Seriously, though, I think he's about right on all of these, and particularly if sad sacks like Allen and Barbour enter 2006 as the GOP pols with the most momentum, then all I can say is "heh."

Next target of terrorists: Indian string theorists

A newspaper in Bombay reports that

The terror attack at the Indian Institute of Science campus in Bangalore on Wednesday that killed a retired IIT professor has sent shockwaves through the Indian blogosphere.

Blogger and researcher, Kate, wondered if Tata Institute of Fundamental Research [the prominent Indian center of string theory] would be the next target.

...

Rashmi Bansal expressed sadness at scientists becoming the latest terror victims. “I mean, sure, there would be some routine security checks at the gate, but who seriously believes that a bunch of scientists gathered to discuss string theory or particle physics could be of interest to the Lashkar-e-Toiba?” she wrote in her blog, Youth Curry (http://youthcurry.blogspot.com/).

Ms. Bansal might change her mind if she analyzed some posters here - to see at least a "demo" of what the anger against the values of modern science can look like. More generally, I emphasize that my warning is absolutely serious. It is not a joke, and I've erased a misleading anonymous comment that suggested it was.

Finally, I think that whoever believes a scientist cannot become a victim of terrorists is plain stupid. The Islamic extremists fight against the whole modern civilization, and the string theorists in India and elsewhere - much like the information technology experts - are textbook examples of the infiltration of modern civilization and, indeed, of the influence of Western values - or at least of something that has been associated with Western values for at least 500 years.

Everyone who observes the situation and who is able to think must know that Bangalore has been on the terrorists' hit list for quite a while.

If the person who signed as "Indian physicist" does not realize that, and if he or she was hoping that the terrorists would treat him or her as a friend (perhaps because they have the same opinions about George W. Bush?), I recommend that he or she change fields, because those hopes were completely absurd.

I offer my deepest condolences to the victim's family but I am not gonna dedicate special sorrow to the victim, Prof. Puri, just because he was a retired professor. There are many other innocent people being killed by the terrorists and I am equally sad for all of them. The death of innocent people associated with "our" society is of course the main reason why I support the war on terror - or at least its general principles. The attack against the conference is bad, but for me it is no surprise. And the casualties of 9/11 were 3,000 times higher, which should still have a certain impact on the scale of our reactions.

Third string revolution predicted for physics

CapitalistImperialistPig has predicted a third string revolution for 2006, started by someone who is quite unexpected. It would be even better if the revolution appeared in the first paper of the year.

Sidney Coleman Open Source Project

Jason Douglas Brown has been thinking about a project to transcribe the QFT notes of a great teacher into a usable open source book. I am going to use the notes in my course QFT I in Fall 2006; see the Course-notes directory.

We are talking about 500 pages and about 10 people who would share the job. If you want to tell Jason that it is a bad or good idea, or join his team, send an e-mail to
  • jdbrown371 at hisurfer.net


Bayesian probability I

See also a positive article about Bayesian inference...
Two days ago, we had interesting discussions about "physical" situations where even the probabilities are unknown.

Reliable quantitative values of probabilities can only be measured by repeating the same experiment many times. The measured probability is then "n/N", where "n" counts the "successful measurements" among all experiments of a certain kind whose total number is "N". This approach defines the "frequentist probability", and whenever we know the correct physical laws, we may also predict these probabilities. If you know the "mechanism" of any system in nature - which includes well-defined and calculable probabilities for all well-defined questions - you can always treat the system rationally.
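As a trivial illustration (mine, not part of the original text): estimating a probability the frequentist way just means counting successes among repetitions, and the ratio n/N converges to the true value as N grows.

```python
import random

random.seed(0)
p_true = 0.3              # the "correct physical law" (an assumed example value)
N = 100_000               # number of repeated experiments
n = sum(random.random() < p_true for _ in range(N))  # "successful" outcomes
print(f"measured n/N = {n / N:.3f}, true p = {p_true}")
```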

Unknown probabilities

It is much more difficult when you are making bets about events whose exact probabilities are unknown. Even in these cases, we often like to state a number that expresses our beliefs quantitatively. Such a notion of probability is called Bayesian probability, and it does not really belong to the exact sciences.

Vending machines

For example, you pay three quarters to a vending machine to get a Coke for 1 dollar. The third quarter is swallowed but not counted. Should you try to throw two (or more) quarters into the same vending machine, or should you rather choose the next machine, which is more likely (let's assume it is guaranteed) to work correctly but where you will have to pay 4 more coins?

If you knew the probability that the first machine is gonna steal your coins - for example, imagine that someone told you that the machine steals each coin with probability P, independently of the others - you could solve this problem mathematically and calculate which strategy has a lower expectation value of the total amount of money you will have to pay.

However, you don't know P. Because the machine has stolen one quarter out of three, you may think that the probability of a "theft" is around 1/3. With the number P=1/3, you may again derive the correct answer. (In fact, taking the risk and continuing with the unreliable machine is cheaper on average.)
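Here is the P=1/3 calculation spelled out (a minimal sketch of my own; it assumes the Coke costs 4 quarters, 2 of which have already been counted, and that the flaky machine steals each coin independently with probability P):

```python
# Expected number of additional quarters paid under each strategy.
def expected_extra_quarters(p_theft, quarters_still_needed=2):
    # Each inserted quarter is counted with probability (1 - p_theft),
    # so on average 1 / (1 - p_theft) insertions buy one counted quarter.
    return quarters_still_needed / (1.0 - p_theft)

p = 1.0 / 3.0
risky = expected_extra_quarters(p)  # keep feeding the flaky machine
safe = 4                            # start over at the reliable machine
print(f"flaky: {risky:.2f} quarters, reliable: {safe} quarters")
# prints: flaky: 3.00 quarters, reliable: 4 quarters
```

With P=1/3, the expected 3 extra quarters at the flaky machine beat the guaranteed 4 at the reliable one - which is the claim in the parenthesis above.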

But if you were just "lucky" that only one coin was stolen, the probability that the machine will steal your money can be close to one. In this case, abandoning this machine is clearly cheaper. An important conclusion is that there is no canonical way to determine the probability that the "theft rate" of the vending machine is between P and P+dP. (I used the words "theft rate" because once you interpret this observable as a permanent characteristic of the machine, it is no longer a probabilistic observable.)

Imagining the distribution for the theft rate

You may imagine that the distribution of P is Gaussian, with a center around 1/3 and with a width determined by the fact that you have only made 3 measurements. But it is very important how this distribution behaves near P=1. If it were nonzero near P=1 (like the Gaussian), the expected average number of coins you would have to pay would actually be logarithmically divergent.

You actually know that P cannot be one because two coins have been counted correctly. But even if the probability distribution goes to zero near P=1, just too slowly (so that it is still sizable very close to P=1), you may still obtain a divergence.
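To see where the divergence comes from (my own one-line version of the argument): if you still need n counted coins and the theft rate is P, the expected number of insertions is n/(1-P), so averaging over a prior density ρ(P) gives

```latex
\langle \text{cost} \rangle \;=\; \int_0^1 \rho(P)\,\frac{n}{1-P}\,\mathrm{d}P ,
```

and the integral diverges logarithmically whenever ρ(P) approaches a nonzero constant as P goes to 1 - and it can still diverge if ρ vanishes there but too slowly.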

This is where rational thinking ends and religion starts. You simply can't know whether these extremely unlikely events are just very unlikely or absurdly unlikely. And depending on the probability that the theft rate is close to one, you will reach different conclusions about the optimal strategy.

Insurance and averaging

Whenever we talk about phenomena that occur many times and whose losses as well as benefits are "minor" relative to what we can afford, the expectation values are the only truly "rational" measure of the quality of different decisions. For example, a billionaire would be stupid to buy all tickets in the lottery because he knows that 15 percent of his payment (or another percentage that can be calculated almost exactly) would go to the company that runs the lottery.

Such a billionaire would be stupid but he would still be incredibly less stupid than a country that codifies the Kyoto protocol.

In a similar way, a millionaire does not need insurance against many "minor" things because even in this case, he can calculate pretty accurately what percentage of his payment will be swallowed by the insurance company. On the other hand, a millionaire can also afford to pay for the insurance even if it statistically means a loss for him. Millionaires can afford to behave irrationally, in all possible directions.

Huge lotteries and critical insurance

But when you are thinking about insurance against an event whose impact would be devastating - or about a lottery where you can win amounts of money so large that they would "solve everything" - it is clear that rational thinking and expectation values become less important.

Similar issues were relevant when we were thinking about "betting on the climate" and the two sides had vastly different ideas about the probabilities of different events. One party thought that the odds were 50:50 while the other party thought they were closer to 99:1. In this case, once again, we don't know the true probability. Any assumption in between these two is statistically attractive for both parties. I mentioned that the geometric average of the two ratios - close to 90:10 - looks like the fairest assumption, but there is no way to justify this convention or any other convention, simply because the true probability is unknown.
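The arithmetic behind the "close to 90:10" figure, for the curious (my own reconstruction; the paragraph above only quotes the result):

```python
import math

# Geometric average of the two quoted betting odds.
odds_a = 50 / 50        # first party: even odds, i.e. 1:1
odds_b = 99 / 1         # second party: 99:1
odds_mean = math.sqrt(odds_a * odds_b)  # ~9.95:1
p = odds_mean / (1 + odds_mean)         # convert the odds back to a probability
print(f"~{odds_mean:.1f}:1, i.e. roughly {100 * p:.0f}:{100 * (1 - p):.0f}")
# prints: ~9.9:1, i.e. roughly 91:9
```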

Once you agree about the probability, the rules for your bet are, of course, defined by requiring that the statistical average of the amount of money that is won/lost by either party is zero.

Also, when we predict the death of the Universe or any other event that will only occur once, we are outside science as far as experimental tests go. We won't have a large enough dataset to make quantitative conclusions. The only requirement that experiment puts on our theories is that the currently observed reality should not be extremely unlikely according to the theory. For example, the lifetime of our Universe should never be predicted to be much smaller than 14 billion years because it would be surprising to see that we are still here.

Which probabilities are scientific

While the text above makes it clear that I only consider the frequentist probabilities to be a subject of the scientific method, including all of its sub-methods, it is equally clear that good enough theories may allow us to predict probabilities whose values cannot be measured too accurately (or cannot be measured at all) by experiments. There is no contradiction. Such predictions are still "scientific predictions" but they cannot really be "scientifically verified". Only some features of the scientific method apply in such cases.

Thursday, December 29, 2005

All stem cell lines were fabricated

All - not just nine - of Hwang Woo-Suk's eleven stem cell lines were fabricated, a panel at Seoul National University concluded today.
It is still plausible that Hwang's team successfully cloned a human embryo and the Afghan hound. And Hwang still claims to have developed a technology to build the stem cell lines, although he is no longer a trustworthy source. However, the apparent failure to actually produce the patient-specific stem cell lines implies that realistic applications of these biotechnologies in medicine may be decades (or infinitely far) away.

Moreover, the Washington Post claims that the Korean government has probably bribed some scientists - potential whistleblowers - in order to protect the non-existent good name of Korean biology in the international context.

Some people argue that the whole science will suffer as a consequence of this scandal. I don't buy these worries. If someone criticizes the work of the Korean scientists; as well as the work of their colleagues everywhere in the world who could not figure out what was going on; as well as the work of the journalists who inflated this research into a sensation; as well as the editors of the journals and the Korean government officials who have paid a lot of money without proper checks and balances - I am convinced that at least 90% of this criticism is justified.

People will keep on trying to develop these technologies that could be used to cure many diseases because the potential benefits are huge. They will do so in the many countries (and many states) that don't ban this research. I am not afraid that the research will collapse. And the scientists will no longer be viewed as innocent angels. This is very correct. The scientists may be smart and they may work a lot, but they can still share vices with ordinary, mortal human beings. Real science and ideal science are two different things. For example, many colleagues of ours care about the money far too much. Hwang et al. received 60 million dollars or so. This is not a formality. Many of us would, unfortunately, be ready to improve the results in order to get this money.

It is very important to know that an article published in Science or Nature does not have to be correct. A scientist identified as a top scientist by Scientific American may be a bubble of fraud. Whenever there is a lot of money at stake, the probability that something is fraudulent increases. The fields with huge potential applications - in medicine or the politics of "climate change" - are the first ones where fraud should be expected and where efficient mechanisms to prevent such fraud should be developed.

Recall that in November, we discussed why "Most published findings are false". It was explained that a paper is more likely to be wrong if the field is hot, if the financial stakes are high, if the sample size is small, and if only a small fraction of the tested relationships is true. Hwang's research meets a large portion of these criteria - so it should have been expected that it was not right.

The work of reviewers is usually not paid at all, and it has turned out to be quite inefficient in separating the good stuff from the bad stuff. Maybe these guys will eventually find out that submitting their work to the arXiv is as good as publishing things in Nature and Science. However, new policies to avoid similar fraudulent research in the future will be required, I think.

A great week for Texas progressives on Texas radio

Tuesday you had BAR, last night you got Bell, and tonight you can listen to Charles Kuffner of Off the Kuff from 7:30 until 8:00 pm, John Courage (Lamar Smith slayer) for the entire hour -- 8 to 9 pm -- and David Van Os from 9-9:30, all hosted by Sean-Paul Kelley.

Listen live if you're in San Antonio on KTSA AM 550 or stream it live by clicking here.

Wednesday, December 28, 2005

Wikipedia

Comment about the new colors: I believe that the new colors are not a registered trademark of Paul Ginsparg. Moreover, mine are better.

Just a short comment about this creation of Jimbo Wales et al. I am impressed by how unexpectedly efficient Wikipedia is. Virtually all of its entries can be edited by anyone in the world, even without any kind of registration. When you realize that there are billions of not-so-smart people and hundreds of millions of active idiots living on this blue planet - and many of them have internet access - it is remarkable that Wikipedia's quality matches that of Britannica.

But this kind of hypertext source of knowledge is exactly what the web was originally invented for.

Moreover, I am sure that Wikipedia covers many fields much more thoroughly than Britannica - and theoretical physics may be just another example. Start with the list of string theory topics, 2000+ of my contributions, or any other starting point you like. Try to look up the Landau pole, topological string theory, the heterotic string, or thousands of other articles that volunteers helped to create and improve. Are you unsatisfied with some of these pages? You can always edit them.

You may also try to search for your name. Chances are that your page, perhaps including a photograph, has been added by your humble correspondent. For example, are you Lenny Susskind, Joe Polchinski, Cumrun Vafa, Nima Arkani-Hamed, Andrew Strominger, Juan Maldacena, Nathan Seiberg, Edward Witten, Eva Silverstein, Shamit Kachru? Do you think the texts about you, your discoveries, and your colleagues are inaccurate? You may edit them, too.

Of course, Wikipedia is not perfect. Being perfect in the real world is the same thing as being dead. For example, Shahriar Afshar tries to maintain a promotional article about his own "revolutionary" experiment (which we discussed here) and another, very powerful clique of Wikicrackpots will never allow the Wikipedia readers to learn why loop quantum gravity is wrong. (I was informed that a certain "Tweet Tweet" has identified himself or herself in the previous sentence, and I can confirm that the identification is correct.) If certain paradigms are popular within the community of Wikieditors, it is reasonable to expect that the presentation will be twisted in a certain way. Nevertheless, if one looks at the final result, it looks unexpectedly balanced to me.

Some of the recent incidents have been covered in ridiculous ways by the mainstream media. For example, Brian Chase played a good prank and edited the page about John Seigenthaler Sr. to argue that he had participated in the assassination of John F. Kennedy. Because the page about John Seigenthaler Sr. is obviously not the most attractive page at Wikipedia, it took nearly half a year before the apparent hoax was found and pointed out to Seigenthaler himself. Media outlets, including Fox News, then celebrated Seigenthaler as a hero, and the amount of positive feedback he received exceeded any possible damage caused by the Wikipedia article by several orders of magnitude. He should definitely be very grateful not only to Brian Chase but to all the people behind Wikipedia.

Even though I think that such pranks are funny, I also support some recent policies meant to regulate the editing process of Wikipedia. For example, the page about George W. Bush - and several pages that ignite a similar amount of controversy - can only be edited by registered users whose accounts are not new. In my opinion, this should become the default rule for all pages that have been shown to be controversial.

Tuesday, December 27, 2005

Rick Causey (Enron head beancounter) flips

With Ken Lay and Jeff Skilling slated to go on trial in a few weeks, their defense teams just got bad news:

Enron's former chief accounting officer, Richard Causey, has struck a plea bargain with federal prosecutors and will avoid going to trial with the fallen energy company's two top executives, according to a person familiar with the negotiations. ...

Causey, 45, agreed to testify against his former bosses, Enron Corp. founder Kenneth Lay and former CEO Jeffrey Skilling, in exchange for a much lesser prison sentence than he would receive if convicted on all counts. The trial is scheduled to begin next month, but a delay is considered likely since defense attorneys would want more time to prepare for the government's new witness.

Causey is charged with fraud, conspiracy, insider trading, lying to auditors and money laundering for allegedly knowing about or participating in a series of schemes to fool investors into believing Enron was financially healthy. The company imploded in late 2001 amid disclosures of complicated financing schemes that gave the appearance of success.


As indicated, the trials of Lay and Skilling will likely be postponed while their lawyers devise a strategy to attack Causey, who is now a hostile witness. Causey ranks higher on the totem pole than Fastow, was an insider in the boardroom where Lay and Skilling managed the company, and is without the stain of self-enrichment that accompanies Andy Fastow:


Causey could be more damaging to Lay and Skilling than former Enron finance chief Andrew Fastow, who joined the government's cadre of cooperating witnesses when he pleaded guilty to two counts of conspiracy in January 2004. Unlike his former peer, Causey didn't skim millions of dollars for himself from shady deals and therefore would bring less baggage to the witness stand.

"While they were preparing to deal with Fastow, Causey is another matter," said Robert Mintz, a former federal prosecutor. "Fastow has been so demonized by the books and media accounts of the Enron collapse that he is an enticing target for the defense teams."


And finally, for the trivia buffs:


Causey would become the 16th ex-Enron executive to plead guilty and agree to cooperate with the government.


Could it be more embarrassment for the Republicans in the new year as the Enron thieves turn on each other?

DVO and DFA

The Christmas holiday has passed, the fiber-optic Santa is packed away, and the campaigns for the March primary are about to swing into high gear. I am going to advocate again for my favorite Democratic candidate, and it's not to beg for money (though you would never be discouraged from donating).

David Van Os needs your help in securing the endorsement of his campaign for Texas Attorney General from the good folks at Democracy for America.

The seal of approval from DFA is a coveted one in progressive circles, and there’s no candidate who is more deserving. So click here, and write a few words as to why you think he merits their endorsement.

Don’t have the words? Don’t know the man well enough to do so? Let me help you with that.

Van Os was fighting the Bush regime long before he went to Florida in 2000 to contest the recount in Bush v. Gore. He was fighting for working men and women long before he was the general counsel for the Texas AFL-CIO. He fought against the illegal and immoral war in Iraq way before he went to Camp Casey this summer. He's been a warrior for economic and social justice for the people of Texas all of his life. You can read more about his life here, but you can also take my word for it. David Van Os walks the walk.

In 2004, Van Os ran for a seat on the Texas Supreme Court because he wanted to take that court back from the mega-corporations which have it bought, paid for, and tucked in their vest pockets. At a time when the PATRIOT Act was our biggest concern, he chose to fight to restore the constitutional checks and balances that protect the rights and liberties of all Texans.

He is running for the office of Texas Attorney General in 2006 in order to carry the same fight to a new front. Texas is under withering assault by swarms of corrupt Republicans lining their pockets with the millions of dollars flowing from ExxonMobil and ChevronTexaco and the other big oil companies, from State Farm and Allstate and the other insurance companies, and all of the other assorted lobbyists and mouthpieces of greed. A strong attorney general in Austin, vested with the power inherent in the Texas Constitution’s Bill of Rights, can do more to achieve economic and social justice for Texans than twenty congressmen in Washington DC.

With your help, DFA will be persuaded to throw the weight of their endorsement behind David's campaign, and that will be a big push toward returning the state of Texas to the people (and away from corporate control).

Take two minutes and write a recommendation on behalf of David Van Os, and then click 'send'.

Hubble: cosmic string verdict by February

Let me remind you that the Hubble pictures of the cosmic-string-lensing CSL-1 candidate, taken by Craig Hogan et al., should be available by February 2006. Ohio's Free Times interviews Tanmay Vachaspati who has studied cosmic strings for 20 years. (Via Rich Murray.)

Monday, December 26, 2005

Evolution and the genome

The editors of Science magazine have chosen evolution - more precisely, the direct observation of evolution through the genome - as the scientific breakthrough of 2005.

I think it is a fair choice. The analyses of the genome are likely to become a massive part of "normal science", with a lot of people working on them and a lot of successes and potential applications in the years to come. I expect many discoveries in this direction to shed light on past lifeforms; on the explicit relationships between the currently existing species and their common ancestry; on the evolutionary strategies of diseases and our strategies to fight them; and finally on possible new improvements of the organisms that are important for our lives and - perhaps - of the human race itself.

Stem cell fraud

Incidentally, the breakthrough of the year 2005 for U.S. particle physics is called "Particle physicists in the U.S. would like to forget about 2005", which may be fair, too. However, the situation is still better than in stem cell research, where some of the seemingly most impressive results of the past years - those by Hwang Woo Suk from Korea - have been identified as an indisputable fraud. Steve McIntyre points out that Hwang was one of Scientific American's 50 visionaries together with Michael Mann who, after a comparable incident (one involving the "hockey stick graph"), was not fired but instead promoted. Steve McIntyre has also written the world's most complete chronology of the scandal. Google tells you more about the sad story of the scientific consensus behind the former Korean national hero. It's amazing how this fraud, which apparently no one could reproduce, immediately gained 117 citations. Should we believe the Koreans - without testing them - because they are so skillful in manipulating the chopsticks? Or perhaps because it is nice to see that U.S. science is falling behind - "certainly" because of George W. Bush? Have the people in that field lost their minds? Or is it really the case that the whole cloning field - or perhaps even all Bush critics in the world - are participating in a deliberate international fraud?

Back to the positive story: the genetic evidence for evolution.

New tools make questions solvable

New scientific methods and technologies often have the capacity to transform an academic dispute whose character used to be almost religious into an obvious set of facts. Let me give you two examples.

The death of hidden variables

The first example is Bell's inequalities. Before they were found, it was thought that no one could ever determine whether the quantum mechanical "randomness" was just an emergent process based on some classical "hidden variables"; this debate was expected to remain a philosophical one forever. After the inequalities were found and the experimental tests confirmed quantum mechanics, it became clear that the quantum mechanical "randomness" is inherent. It cannot be emergent - unless we were ready to accept that the underlying hidden variables obey non-local (and probably non-relativistic) classical laws of physics, which seems extremely unlikely.
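To make this concrete, here is the standard CHSH form of Bell's inequality - textbook material that I am adding for illustration, not a detail of any particular experiment. For two distant detectors with settings a, a' and b, b' and measured correlations E, one defines

  • S = E(a,b) - E(a,b') + E(a',b) + E(a',b').

Any local hidden-variable theory predicts "|S| <= 2", while quantum mechanics allows values as large as "2 sqrt(2)", roughly 2.83 - and the experiments confirm the quantum mechanical value.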

Sun's chemistry and spectroscopy

My second example goes back to the 19th century. Recall that the philosopher Auguste Comte, the founder of positivism, remarked in his "Cours de philosophie positive" that the chemical composition of the Sun would forever remain a mystery.

It took only a couple of decades to show that Comte was completely wrong. Spectroscopy was discovered and it allowed us to learn the concentrations of various elements in the Sun quite accurately. Unfortunately, this discovery came two years after Comte's death in 1857, and therefore he could not see it. In the same year, 1859, Darwin published his theory.

The last we-will-never-know people

Many people have been saying similar things about physics in general - that physics could never determine or explain this or that - and all of these people have already been proved wrong, except for those who argue that the parameters of the Standard Model can't be calculated with a better accuracy than what we can measure; the latter group will hopefully be proved wrong in our lifetime.

Speed of evolution

What do the new discoveries tell us about evolution? First of all, evolution is not fuzzy. It is "quantized", if you allow me to use physics jargon, and the evolutionary changes are directly encoded in genes that can be just as easily decoded.

A related and equally important observation is that the evolutionary changes are quite abrupt. We have never observed skeletons of one-winged bats or similar creatures - as the creationists (including those in a cheap tuxedo, using the words of the pandas from level 2) have been quite correctly pointing out for decades. Indeed, it often takes only a single mutation to establish a new species.

Many mutations are harmful and they immediately become a subject of natural selection. Some mutations allow the organisms to survive. All these changes have been making the tree of life ramify and diversify - and they are still doing so, although this process is nowadays slower than some other types of development.

Reply to Pat Buchanan

Let me finally choose an article from Dembski's blog in which he reposts a column by Pat Buchanan.

It is entertaining to see a text whose political part is more or less true but whose scientific part is so clearly and completely wrong. Let me clarify some of Buchanan's errors:

  • In his “Politically Correct Guide to Science,” Tom Bethell ...

Surprisingly, the book is called "Politically Incorrect...", not "Politically Correct...". Tom Bethell is rather unlikely to be politically correct.

  • For generations, scientists have searched for the “missing link” between ape and man. But not only is that link still missing, no links between species have been found.

Because there are no highly refined intermediate links of the type Buchanan suggests; one mutation often makes these changes occur, and evolution is far from being a smooth, gradual, and continuous process. However, the chimps' genome has been decoded. We can not only see that chimpanzees are our closest relatives but also deduce the existence of a common ancestor. Our relationship with the chimps is no longer a matter of superficial similarity; a long sequence of bits - microscopic genetic information - reveals a much more detailed picture.

  • As Bethell writes, bats are the only mammals to have mastered powered flight. But even the earliest bats found in the fossil record have complex wings and built-in sonar. Where are the “half-bats” with no sonar or unworkable wings?

Half-bats with unworkable wings are predicted by Darwin to die quite rapidly, so there should not be too many fossils around. Observations seem to confirm this prediction of Darwin's theory, too. Indeed, such changes must proceed quickly, and today we know that a single change of the genome is capable of inducing these macroscopic changes.

  • Their absence does not prove — but does suggest — that they do not exist. Is it not time, after 150 years, that the Darwinists started to deliver and ceased to be taken on faith?

Don't tell me that you don't think that this comment of Pat Buchanan's sounds just like Peter Woit. ;-) Let me remark that, in both cases, 150 years - and maybe even 30 years - is probably a long enough time to start thinking about the possibility that the "alternatives" to evolution or string theory can't ever work.

  • No one denies “micro-evolution” — i.e., species adapting to their environment. It is macro-evolution that is in trouble.

First of all, it is not in trouble - it was chosen as the most spectacularly confirmed scientific paradigm by the discoveries made in 2005. Second of all, the difference between "micro-evolution" and "macro-evolution" is just a quantitative one. Most of the errors that Buchanan and other creationists make can be blamed on this particular error in their thinking: they incorrectly believe that objects in the world can be dogmatically and sharply divided into alive and not alive; intelligent and not intelligent; micro-evolution and macro-evolution. (And of course, someone would also like to divide the whole human population into believers and non-believers.)

None of these categories can be sharply defined. Even though the species are defined by "discrete", "quantized" bits of information encoded in the genome, it does not mean that each species can be classified according to some old, human-invented adjectives. Science does not break down, but the adjectives used in the unscientific debate - or in the Bible - certainly do break down when we want to understand life (or the whole Universe, for that matter) at a deeper level.

The world is full of objects whose "aliveness" is disputable - such as viruses. The same world also offers evolutionary steps that can be safely classified neither as micro-evolution nor as macro-evolution. Finally, there are many organisms in the world that are only marginally intelligent, and I am afraid that this group would include not only chimps but maybe also some syndicated columnists. ;-)

  • The Darwinian thesis of “survival of the fittest” turns out to be nothing but a tautology. How do we know existing species were the fittest? Because they survived. Why did they survive? Because they were the fittest.

I completely agree that the operational definition of the "fittest" is circular. It is the whole point of Darwin's notion of natural selection that "being the fittest" and "having a higher chance to survive" are equivalent. However, there is also a theoretical way to derive whether an animal is "the fittest", which can be used to predict its chances to survive. Such a derivation must, however, use the laws of nature in a very general sense - because it is the laws of nature that determine the chances to survive. Sometimes it is easy to go through the reasoning: a legless bird living among tigers does not have a bright future. Sometimes the conclusion is much harder to reach. But the main message is that these questions can be studied scientifically, and the answers have definitely influenced the composition of the species on our planet.

  • While clever, this tells us zip about why we have tigers.

"Why we have tigers?" is not a scientifically meaningful question unless a usable definition of a tiger is added to it as an appendix. The Bible can answer such verbal, non-scientific question, by including the word "tiger" in one of the verses (and by prohibiting everyone to ask where the word and the properties of the animal came from). Science can only answer meaningful questions. For example, we may try to answer the question why the hairy mammals - beasts of prey - whose maximum speed exceeds 50 mph have evolved.

  • It is less a scientific theory than a notion masquerading as a fact.

It is somewhat entertaining that the word "notion" is apparently supposed to have a negative meaning. Notions, concepts, and ideas are an essential part of our theories - and the word "theory" is not negative either because the best and most reliable things we know about the real world are theories based on notions and ideas.

  • For those seeking the source of Darwin’s “discovery,” there is an interesting coincidence.

Those who judge the validity of a scientific theory according to the coincidences that accompanied its original discovery are intellectual equivalents of chimpanzees, and therefore they are another piece of evidence for evolutionary biology.

  • As Bertrand Russell observed, Darwin’s theory is “essentially an extension to the animal and vegetable world of laissez-faire economics.”

I completely agree with that. This is why both Darwin's theory and capitalism are the leading paradigms among their competitors. Many general ideas are shared by these two frameworks; other ideas are quite independent.

  • If it is science, why can’t scientists replicate it in microcosm in a laboratory?

Of course they can replicate many particular examples in their labs. They can't replicate them exactly at the same speed as they occurred in Nature, because such labs would have to cover 510 million square kilometers and they would have to work for 5 billion years. Nevertheless, the process can be sped up in many ways, at least in some particular situations.

  • If scientists know life came from matter and matter from non-matter, why don’t they show us how this was done, instead of asserting it was done, and calling us names for not taking their claims on faith?

Let me assume that the first sentence talks about reheating, to be specific. The reason why I probably can't show Pat Buchanan how different forms of matter and non-matter transform into each other according to the laws of quantum field theory or string theory - and why we know that this is the case without any religious beliefs - is that Pat Buchanan apparently does not have sufficient intelligence to understand my explanations. It's that simple.

  • Clearly, a continued belief in the absolute truth of Darwinist evolution is but an act of faith that fulfills a psychological need of folks who have rejected God.

That may well be the case but such an ad hominem observation is completely irrelevant if there are clear proofs that the picture is correct.

  • Hence, if religion cannot prove its claim and Darwinists can’t prove their claims, we must fall back upon reason, which some of us believe is God’s gift to mankind.

Unfortunately for Mr. Buchanan, this is not our situation because the Darwinists can prove their claims quite convincingly. By the way, the discovery of evolutionary biology is certainly one of God's big gifts to mankind, too. ;-)

  • And when you consider the clocklike precision of the planets in their orbits about the sun and ...

The motion of the planets is exactly predictable by our theories. It is clocklike but not atomic-clock-like. Indeed, we can easily measure the irregularities in their motion - which means, among other things, that we will have to insert a leap second between 2005 and 2006 once again to counterbalance Nature's (or God's?) imperfection, so to speak.

  • ...the extraordinary complexity of the human eye, does that seem to you like the result of random selection or the product of intelligent design?

It is the result of very sophisticated laws of Nature - physics, biology, and so on - whose important "emergent" feature responsible for much of the progress is natural selection. Natural selection is not quite random, even though it can sometimes look random at short enough time scales.

  • Prediction: Like the Marxists, the Darwinists are going to wind up as a cult in which few believe this side of Berkeley and Harvard Square.

It would be a bit nicer if only a few around Harvard Square believed in Marxism. ;-)

Saturday, December 24, 2005

I got you a Christmas card

You can open it by clicking here.

Happy Holidays, and thanks for reading me this year.

Friday, December 23, 2005

Merry Christmas



Background sound (press ESC to stop): Jakub Jan Ryba's "Czech Christmas Mass" (Hey master, get up quickly); a 41:39 MP3 recording here



Merry Christmas! This special season is also a great opportunity for Matias Zaldarriaga and Nima Arkani-Hamed to sing for all the victims of the anthropic principle who try to live in the bad universes (audio - sorry, the true artists have not been recorded yet):

  • WE WISH YOU A TINY CC
    WE WISH YOU A TINY CC
    WE WISH YOU A TINY CC
    SO YOU GET GALAXIES!

    GOOD TIDINGS WE BRING,
    FROM FROM VILENKIN
    DON'T CRY IF YOUR WORLD SUCKS
    IT'S MUCH BETTER OUT THERE

    WE WISH YOU A TINY HIGGS VEV
    WE WISH YOU THE RIGHT UP QUARK MASS
    WE WISH YOU A GOOD WEAK SCALE
    AND A HAPPY VACUUM.

    You can't live until you get stars
    You can't live until you get stars
    You can't live until you get stars
    So bring them right here.

    GOOD TIDINGS WE BRING,
    FROM FROM VILENKIN
    DON'T CRY IF YOUR WORLD SUCKS
    IT'S MUCH BETTER OUT THERE

    WE WISH YOU A TINY CC
    WE WISH YOU A TINY CC
    WE WISH YOU A TINY CC
    IN A BIG MULTIVERSE!

Yes, your humble correspondent sees it differently (audio).



Single vevs

  • Trashing discretuum,
    with a one-choice vacuum.
    Laughing with our boss
    and with David Gross... (( Ha, ha, ha ))

    Vevs are 'bout to ring
    Making spirits bright
    What fun it's to determine
    the one background that's right.

    Oh, single set [of] single vevs, single all the way!
    Grab all other vacua and throw them right away.
    Single set [of] single vevs, single all the way!
    Oh, what fun it is to thrash anthropic beasts of prey.

    Landscape is now gone,
    so let's have some fun
    Take Witten along
    And sing predictive song.
    Begin with the state
    Two-forty-eight E-eight
    Start to calculate the numbers
    and crack! You won't be late.

    Oh, single set [of] single vevs, single all the way!
    Grab all other vacua and throw them right away.
    Single set [of] single vevs, single all the way!
    Oh, what fun it is to thrash an..thropic beasts of prey.

Incidentally, the most famous classical old Czech Christmas carol "Nesem vám noviny" (lyrics, imprecise English lyrics, LM audio) is also sung by those who bring you good news from the landscape of Bethlehem.

E=mc2: a test ... interplay between theory and experiment

An experiment that is claimed to be the most accurate test of Einstein's famous identity "E=mc2" has been performed by physicists on the other side of Central Square - at MIT.

Their accuracy is 55 times better than that of previous experiments. They measured the change of the mass of a nucleus associated with the emission of energy after it absorbs a neutron. I find their promotion of the experiment slightly dishonest:

  • "In spite of widespread acceptance of this equation as gospel, we should remember that it is a theory," said David Pritchard, a professor of physics at MIT, who along with the team reported his findings in the Dec. 22 issue of Nature. "It can be trusted only to the extent that it is tested with experiments."

The words "it is [just] a theory" remind me of something. The formula is not just "some" theory. It is an inevitable, robust, and rather trivial consequence of special relativity - a theory that has been tested in hundreds of other ways. Many different experimental constraints are known and many of them are more stringent than those from the current experiment. Naively and dogmatically speaking, the formula can only be trusted to the extent that it is tested with similar experiments.

Realistically speaking, the formula - and many other formulae - can be trusted well beyond these experiments. Everything depends on the amount of reasoning that we are allowed to perform with our brains in between the experiments. It is not true in science that every new experiment is really new. The whole goal of science is that we know the results of a huge class of experiments without actually performing them. We can make predictions - very general predictions and less general predictions. And science is able to do such things, indeed. If we are allowed to think a lot, the experiment is not terribly thrilling and its result is known in advance. There is just no way to design a theory in which the results would be different while remaining compatible with the experiments that have already been made.

Also, a theorist would not say that such an experiment is testing "E=mc2"; it is very hard to explain to a particle physicist why one thing they measure is "mass" while the other is something else, namely "energy". They just measure several different forms of the same quantity - one that can be called either mass or energy. Finally, if some discrepancy were found, no sane physicist would interpret it as a violation of this particular formula. It's because we have no candidate framework that would be consistent with the basic properties of the Universe but would violate "E=mc2". For example, Noether's theorem only guarantees the conservation of one quantity associated with time-translational invariance - the total mass/energy. Of course, in the case of an experiment that disagreed with the theory, we would have to look for other, more technical and subtle explanations of such a discrepancy.
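For the record, here is a minimal sketch of the algebra involved - standard special relativity plus a schematic version of the balance that a neutron-capture test of this kind checks; the notation is mine, meant as an illustration rather than as a description of the exact isotopes or apparatus used at MIT. The relativistic dispersion relation

  • E^2 = (pc)^2 + (mc^2)^2

reduces to "E = mc^2" for a particle at rest. The experiment then effectively compares two independently measured numbers,

  • [m(X) + m_n - m(X')] c^2 = E_gamma,

the mass deficit when a nucleus X captures a neutron and becomes X' (obtained from precision mass spectrometry) against the total energy of the emitted gamma rays (obtained from their measured wavelengths). The quoted accuracy refers to the fractional agreement between the two sides.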

But all these comments are completely hypothetical because this is just a very low-energy experiment described by well-known physics, and we know that there won't be any discrepancies.

Isolating data to compare

There is one more topic related to the interactions between theories and experiments. Steve McIntyre is trying to clarify some confusions of Rasmus Benestad here. Benestad writes, among many other bizarre things, the following:

  • When ARIMA-type models are calibrated on empirical data to provide a null-distribution which is used to test the same data, then the design of the test is likely to be seriously flawed. To re-iterate, since the question is whether the observed trend is significant or not, we cannot derive a null-distribution using statistical models trained on the same data that contain the trend we want to assess. Hence, the use of GCMs, which both incorporates the physics, as well as not being prone to circular logic is the appropriate choice.

In other words, his proposed strategy is to pick a favorite model of yours - a model that predicts a "signal" - and to work on showing that the observations are consistent with the model. Benestad clearly believes that it is not necessary to verify the hypothesis that the observations contain a "signal" in the first place. The statement that we observe a "signal", not noise, is a dogma for him. No analysis of the "natural background" and its statistical parameters is required; in fact, it is not even allowed, Benestad argues.

I find his reasoning circular, flawed, and downright stupid. This is exactly how crackpots operate: they almost always want to make a big discovery - to find a huge signal - without learning what the "background" above which their hypothetical signal should exist actually is. To be sure: of course we must first know what to expect without a conjectured new effect if we want to decide whether the new effect exists. And if such expectations are determined experimentally, of course we are not allowed to include the new effect when we determine the expectations. (This reminds me of the fermionic zero mode debate.)
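To show what a non-circular version of such a test could look like, here is a minimal sketch in Python - the function names and the choice of a simple AR(1) noise model are my illustrative assumptions, not anyone's actual code. The point is that the null distribution is fitted to the detrended data, so the trend being tested is not baked into the null:

    import numpy as np

    def trend(y):
        # OLS slope of y against time
        t = np.arange(len(y))
        return np.polyfit(t, y, 1)[0]

    def ar1_null_pvalue(y, n_sim=10000, seed=0):
        # Monte Carlo p-value of the observed trend under an AR(1) null.
        # The AR(1) parameters are fitted to the DETRENDED series, so the
        # trend we are testing does not contaminate the null distribution.
        y = np.asarray(y, dtype=float)
        rng = np.random.default_rng(seed)
        t = np.arange(len(y))
        resid = y - np.polyval(np.polyfit(t, y, 1), t)  # remove the trend first
        phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
        sigma = resid.std() * np.sqrt(1.0 - phi**2)     # innovation std deviation
        b_obs = abs(trend(y))
        hits = 0
        for _ in range(n_sim):
            x = np.zeros(len(y))
            for i in range(1, len(y)):
                x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)
            if abs(trend(x)) >= b_obs:
                hits += 1
        return hits / n_sim

A small p-value would then mean that the observed trend is unlikely to be produced by the persistent noise alone - and that is exactly the kind of statement one cannot make without first characterizing the background.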

Let me try to describe the situation in one more way. Rasmus is not right when he says that we cannot derive a null hypothesis from the datasets themselves. This is what we do in many situations in science - every time an actual nontrivial check of our theories is provided by reality. Let us look at an example; we have thousands of such examples in physics.

CMB and the isolation of theory and experiment

When the cosmic microwave background was discovered, no one had a complete theory. It was determined from the data that the microwave background was thermal and what its temperature was - namely 2.7 kelvins. Of course, one had to know that "being thermal" was a natural hypothesis about the structure of the radiation; in fact, the CMB is still the most accurate natural thermal blackbody curve we have seen so far. But you don't need to understand or calculate anything in general relativity to see that the observed radiation is approximately thermal!
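For reference, "thermal" means that the measured spectrum fits the one-parameter Planck blackbody curve - a standard formula; the modern best-fit temperature of about 2.725 K is the only number I am adding here:

  • B_nu(T) = (2 h nu^3 / c^2) / (exp(h nu / k T) - 1), with T = 2.725 K or so.

Fitting the observed spectrum to this one-parameter family - and checking that a single temperature fits all frequencies - requires no cosmological theory whatsoever, which is exactly the point.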

The fluctuations of the temperature were determined from the data, too. Their dependence on the scale was also found, and the spectrum was seen to be approximately scale-invariant. Finally, deviations from the scale invariance are also observed in the data.

The main conclusions - thermal curve; scale-invariant fluctuations; violations of scale invariance in a particular direction; various correlations etc. - are derived directly from the observed data.

Then you independently pick your Big Bang theory and you see that it naturally explains the thermal distribution, because everything was in equilibrium 300,000 years after the Big Bang when the radiation was created. Also, inflation, which took place long before this era - a fraction of a second after the Big Bang - explains the scale invariance. And some more detailed calculations that depend on the inflationary model also predict certain deviations from the scale invariance, and many models may be falsified in this way. In fact, the last observation - the deviations from scale invariance - does not yet have a generally accepted theoretical description, even though people can, of course, fudge their models to get an agreement, much like the climate modellers are doing.
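In the standard parametrization - which I am adding here for clarity, not quoting from any particular paper - "scale invariance" is the statement that the primordial power spectrum

  • P(k) ~ k^{n_s - 1}

has "n_s = 1" (the Harrison-Zel'dovich case). Typical inflationary models predict "n_s" slightly below one, so measuring the deviation of "n_s" from one is a concrete way in which such models are tested and falsified.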

What I want to say is that there must separately exist conclusions derived from the experiments and conclusions derived just from the theory. And these two sets of conclusions must be compared. If someone is showing an agreement by simultaneously twisting and cherry-picking the data according to the theory and fudging the theory according to the data, merely to show that there is a roughly consistent picture, then it is no confirmation of "the" theory. In fact, there is no particular theory, just a union of ill-defined emotions whose details can be changed at any time. It's not science, and one cannot expect a "theory" obtained in this way to have any predictive power. This is how the priests in the 15th century argued that the real world is consistent with the Bible.

The order of discoveries must be arbitrary

A correct scientific theory must be able to make predictions of some feature(s) of the observed data before the data are observed - this is why it is called a prediction - and the same thing holds vice versa. Nontrivial experimental facts must be determinable and describable without the ultimate theory, before this theory is found; otherwise they cannot be used to determine the theory. In other words, it must always be a historical coincidence whether the theorists or the experimentalists were the first to obtain the result.

Of course I am not saying that the actual evolution of science is decoupled into theorists and experimentalists who don't talk to each other. What I am saying is that they should not be talking to each other - and they should never build their research on their friendship - when they try to determine whether a theory agrees with some particular observations.

In this particular case, whether some heating is an example of natural persistence or an effect caused by XY is, of course, an important scientific question. It is much more likely, and the "default" assumption, that it is caused by some long-term persistence - and even if it were not, there would still be very many factors XY that could really be causing it. If we don't have an observation suggesting that the persistence does not exist (for example, accurate enough observations of the 15th century temperature), we should not assume that it does not exist. Of course, it probably does, and a goal of the scaling papers is to find phenomenological laws that would help to determine the color of the noise - and hence also the persistence at various time scales - from the data, regardless of any additional effects caused by anyone else.

The qualitative question whether the persistence exists is quite clear. It does. The noise exists at all scales. The real question is a quantitative one.
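As a sketch of what "determining the color of the noise from the data" can mean in practice - the function name and the pure power-law assumption are mine, purely for illustration - one can estimate the exponent beta in "S(f) ~ 1/f^beta" by a log-log fit to the periodogram:

    import numpy as np

    def spectral_exponent(y, dt=1.0):
        # Estimate beta in S(f) ~ 1/f**beta by a log-log fit to the periodogram.
        y = np.asarray(y, dtype=float)
        y = y - y.mean()
        power = np.abs(np.fft.rfft(y))**2      # periodogram
        freq = np.fft.rfftfreq(len(y), d=dt)
        f, s = freq[1:], power[1:]             # drop the zero-frequency bin
        slope = np.polyfit(np.log(f), np.log(s), 1)[0]
        return -slope                          # S(f) ~ f**(-beta)

Here "beta = 0" corresponds to white noise, values around one to strongly persistent "pink" noise, and "beta = 2" to a random walk; the larger the exponent, the larger the purely natural excursions one must expect at long time scales before claiming an additional effect.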

Background vs. signal

It is extremely important to know what the "natural background" is if we try to figure out whether there is a new "effect". Some people like Rasmus Benestad just don't want to study the natural background at all - they immediately want to get effects (and the attention of the press, in which they're pretty successful because many journalists are pretty dumb) - which is why I think that they are crackpots. As mentioned previously, one of the defining features of crackpots is that they want to make big discoveries before they learn the science describing the "simpler" phenomena that come before their discovery.

Let me say why their research is defective in one more way.

Whenever we try to design scientific theories that describe something, we must know which quantities in reality will be described by our theories and we must be able to isolate them.

By isolating them, I mean both theoretical as well as experimental isolation. In theories we must know - or at least feel - that the effects we have neglected do not change our predictions too much. In experiments we must know - or at least have rational reasons to believe - that the effects we observe are not caused by something else, something “more ordinary”. When we try to observe telepathy, for example, we must know that the people are not communicating by some more "natural" methods.

The climate modellers almost never try to follow these lines. They have a completely vague, slippery set of ideas that predict anything and everything - warming, cooling, bigger variations, smaller variations, more hurricanes, less wind, increased circulation, diminished circulation, more ice in Antarctica, less ice in Antarctica, and so forth - and then they argue that the data agree with these predictions. Of course, they emphasize the points whenever they agree and de-emphasize them whenever they disagree. This is not science.

Of course there is no direct way to construct a scientific framework out of this mess. To do science, one must focus on a limited class of questions that are sufficiently well-defined and that have a chance to be "cracked" by a theory. I am sure that there are many nice laws about the climate that we don't know yet, and I am equally sure that the work of most of the "mainstream" climate scientists today is not helpful in revealing these laws.

When we try to argue that humans are suddenly dictating the climate trends - after 5 billion years when they were dictated by other, more natural things - it is a rather extraordinary conjecture that deserves extraordinary evidence. To get any evidence, it is absolutely necessary to understand how the climate was behaving for the 5 billion years before the hypothetical "revolution" occurred around 1917. We must know what the fluctuations were and how they depended on the time scale. We can only learn such things reliably by observing the real world. Only once we know the background can we study the additional effects.

Studying additional trends above a background that we supposedly don't need to understand is equivalent to Biblical literalism.

Summary

Some readers may feel that the two parts of this text contradict each other because I defend theory in the first part and the observations in the second part. However, I am convinced that every sane scientist (and informed layman) knows that both theory and experiments are important. My goal was certainly not to shift the balance to one side. My goal was to emphasize that science should be looking for robust conclusions and theories, and it should be attempting to find the situations in which the phenomena exhibit themselves in the sharpest possible way. To achieve this goal, it is necessary to try to follow these principles:

  • try to isolate the "signal" that you are interested in as well as you can
  • when your signal exists above a certain "background", you must definitely try to understand the background first
  • if you can find an idealized situation in which one signal is isolated from some other effects that you're not interested in, study this situation
  • if you cannot find an idealized situation and if everything looks to you like quantitatively indescribable chaos that you want to match with computer-generated chaos, then it means that you still misunderstand what's going on; avoid the quagmire and return to the first point above
  • if you have a theory, be sure to deduce and decide what kind of quantities the theory should be able to predict
  • if your theory only agrees with some observations, never fool yourself and never try to de-emphasize the observations you know to disagree with your theory
  • if your theory or model only agrees "roughly" with 24 features of the data but there are 25 parameters or assumptions that led to your model, you can be sure that you can't claim to have established your model or its assumptions
  • isolate the assumptions of your theories (and open questions) from each other and try to test them separately whenever you can
  • if you try to explain experimental data, always ask whether there exists a simpler and more natural theory than yours that would be able to do approximately the same job
  • if there is a more natural theory with less parameters, go for it
  • never believe that your theory is superior just because it is using the buzzwords - or approximate concepts and laws - that are more frequent in physics; this is not how the better theories are identified