Monday, November 29, 2004

My friend Bean, at Prairie Weather, has reawakened my desire to blog.



Go read her (and me, occasionally).

Sorry I've been gone so long.



I just decided I'd find this and start writing here again, even though it's been almost two years.



Not much has changed in the intervening time, it appears...

Saturday, November 27, 2004

Bush's space program

Let me make the main message of this article clear at the very beginning:

  • In January 2004, I was impressed by Bush's speech about the long-term goals of the space exploration program.
Risa - an astrophysicist from Chicago - just posted an article about this topic on Sean Carroll's blog:



http://preposterousuniverse.blogspot.com/ ...



Congress has approved higher funding for NASA



http://www.nytimes.com/ ... /24nasa.html



which is viewed as support for Bush's visions.



I have read the complaints of the American Physical Society, and they made for less pleasant reading. These texts



http://www.aps.org/ ... (press release)

http://www.aps.org/ ... (report)



mostly sound like demotivating statements of bureaucrats - no doubt there are, in reality, a lot of outstanding scientists among them - who are always ready to kill every specific idea and replace it with a lot of neutral-sounding, boring clichés. They say, among other things:

  • The plan is extraordinarily difficult.
  • It could threaten the funding of some other existing projects KQ, UH, PT, SJ.
Some sentences are so artificial, long, and boring that I don't even want to reproduce them here.



Concerning the second point above: it is exactly the whole point of Bush's new vision - and any other vision of a similar kind - to re-evaluate what is important and attractive about the space exploration program and which projects should be completed or canceled. Some scientists seem to believe that once their research of something has been accepted as legitimate research, it must be funded forever - and they would even like to hear that it is important forever. This is not how science within a finite society can work in the long run.



In fact, I would find it very useful if an influential politician who still knows how to be excited by something - like Bush - looked at other fields in science, including particle physics. I think that particle physicists - especially the experimentalists - are not using their resources efficiently either, and they're continuing to do many things that are almost guaranteed to lead nowhere, even though one could find many other projects that may have very clear and totally amazing outcomes. How much do we believe that the Tevatron will bring us anything new and important? Of course, the world must preserve its family of skillful experimentalists even during times in which experimental progress is nearly frozen; but even such conservation should be done efficiently, and with a clear idea about the long-term goals.



Although I am a very theoretically oriented person, it's clear that I don't really care about KQ, UH, PT too much - and I care about SJ just a little bit. A human mission to Mars is something on a completely different level. Of course, such a mission is not just a matter of pure science. It is a way to realize the dreams of many of us - scientists as well as other people - dreams that we have had since we were little boys and girls.



A human mission to the Moon - or even to Mars - is something amazing and irresistible. And the idea of a more permanent base on the Moon or on Mars would represent new steps towards the dreams of a big future for our civilization. People were able to fly to the Moon in the 1960s and early 1970s, but it seems much more difficult for us today. What happened to us?



Let's list a few numbers:

  • The International Space Station's (ISS) total cost has been about 100 billion US dollars
  • The canceled Superconducting Supercollider (SSC) would have cost only about 8 billion dollars
  • The total budget for various telescopes etc. between 2000 and 2010 is about 5 billion dollars
  • The total cost of the Large Hadron Collider (LHC) is about 1.5 billion dollars
These basic numbers should be compiled for many other projects in science and space exploration, and there should be a reasonable discussion among the scientific community and policymakers about which of these things are good investments and which of them are less good investments.



I may be a biased particle physicist, but the fact that we could have built 12 SSC supercolliders instead of the ISS is shocking. I have almost no idea what the ISS has been really good for - in what sense it was "better" or "more exciting" or "more scientifically interesting" compared to the first flight of Yuri Gagarin, for example. Note that it's nearly as expensive as the war in Iraq.



On the other hand, humans on Mars would be inspiring for the whole humankind.



Many people are scared by the dim future of the Hubble Space Telescope. You know, the funding in 2009 is supposed to move to the James Webb Space Telescope - a telescope that will focus on the infrared spectrum. My guess is that the Hubble will be viewed as giving us "nothing really new" well before 2009. Once it retires, it can be privatized or given to another country, and so forth. Progress must simply go on, and it is a completely wrong approach to assume that we will continue to work with the same technology forever. Of course, there must be completely new projects and new machines. And if there is a clear direction, it motivates people to work on their specific tasks. And finally, it also encourages progress in industry.



And let me make it clear that there exists no physical principle that will prevent people from visiting Mars.



The report of the APS also says that humans on Mars would be counterproductive because they would contaminate the environment and prevent us from investigating the primary question - namely whether there has been life on Mars. I find such arguments ludicrous, and it's bad if such arguments are used to inhibit progress in the space exploration program.



First of all, humans could only "contaminate" the environment with the usual, terrestrial forms of life - and if such forms are found on Mars, everyone will believe that they were brought from the Earth anyway. If other forms of life - like different patterns replacing RNA/DNA - are seen on Mars, then contamination by terrestrial forms of life poses no problem. Finally, if there has been any life on Mars, it does not seem that it could have been too impressive. I just think that Bush's vision of future life on Mars - something that we can initiate - is a much more exciting idea.



The possibility that there will be a usable base on Mars in this century is 100 times more intriguing than some hypothetical speculations that a robotic probe could find some specific organic compound on this planet. If we were able to create more permanent bases on the Moon or on Mars, it would give us new tools to investigate the Universe, and new hopes to expand our civilization to other places in the Cosmos.

Friday, November 26, 2004

Violation of complementarity?

It's kind of surprising that no one in the high-energy community has discussed this amusing experiment by Shahriar S. Afshar.

Afshar page
Afshar PDF
Questions welcome (Afshar's blog)



I learned about this experiment from Wikipedia, where it was uncritically described on various pages about the interpretation of quantum mechanics.

This Gentleman and colleague of ours has argued that he was able to falsify the Copenhagen interpretation as well as the Many Worlds Interpretation - WOW - and confirm an obscure interpretation - the so-called Transactional interpretation - with a Welcher Weg (which-way) interference experiment. This transactional interpretation, proposed by the physicist John Cramer, is based on signals being sent back and forth in time. :-)

Incidentally, an older article about the interpretation of quantum mechanics on this blog is here:




Causality and entanglement

How could he have falsified the standard interpretation(s)? He argued that he has falsified Bohr's complementarity principle itself. As you know from the play Copenhagen, Bohr had the complementarity and Heisenberg had the uncertainty. But from the current perspective, they're very related if they're applied to momentum (inverse wavelength) and position. Actually, the current interpretation of "complementarity" was probably constructed as an argument by Heisenberg, not Bohr, in the 1950s. In some sense, I feel that they developed these notions together, and they divided the credits. But let's not analyze history - it's not the main point here. Complementarity says that a photon or light either has particle-like properties or wave-like properties, but you can never measure both of them simultaneously.

For the interference experiments, it means that you either

  • see an interference pattern - which means that you measure the wave-like properties of light
  • or you're able to determine which slit the photon picked - which means that you measure the particle-like properties of light,
but in the latter case, the interference pattern disappears. There is a complementarity principle that always restricts you - or the uncertainty principle for the photon, if you will. You can't see its position and the momentum (which is manifested by sharp interference patterns) simultaneously.

One may also consider the intermediate cases in which we see the interference pattern with contrast or visibility V which is a number between 0 and 1 - roughly speaking, it's the difference of the intensity at the interference maxima and minima, divided by the sum of the same two intensities. And we may consider the situation in which the "which way information" is extracted with reliability K which stands for Knowledge - again between 0 and 1. The principle of complementarity then implies something like "V^2 plus K^2 is never greater than one". More concretely, it is impossible for both V and K to be greater than 0.9, for example.
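As a small numerical aside (my own toy calculation, not part of Afshar's paper): for a photon in a pure superposition of the two pinholes, the standard definitions of V and K saturate the duality relation exactly.

```python
import numpy as np

def visibility(i_max, i_min):
    """Fringe contrast V = (I_max - I_min) / (I_max + I_min)."""
    return (i_max - i_min) / (i_max + i_min)

# For a pure state a|pinhole 1> + b|pinhole 2> with |a|^2 + |b|^2 = 1,
# the which-way predictability is K = | |a|^2 - |b|^2 | and the fringe
# visibility is V = 2|a||b|, so V^2 + K^2 = 1 is saturated; mixedness
# or decoherence only pushes the sum below one.
for p in np.linspace(0.0, 1.0, 11):
    a, b = np.sqrt(p), np.sqrt(1.0 - p)
    K = abs(a**2 - b**2)
    V = 2.0 * a * b
    assert V**2 + K**2 <= 1.0 + 1e-12
```

The identity is elementary: (2ab)^2 + (a^2-b^2)^2 = (a^2+b^2)^2 = 1.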

This statement is, of course, true as long as you define the variables V,K properly.

Afshar's strategy is simply to arrange an experiment in which he argues that he can see a sharp interference pattern with V=1 (or at least, greater than 0.99), but he can also say which slit the photon chose (which means K=1). That would mean that he gets 1+1=2, which is greater than one, and the principle of complementarity would be violated maximally.

Of course, it's silly - the same silliness as saying that "x" commutes with "the derivative with respect to x". Nevertheless, this silliness was described as a "quantum bombshell"

  • in the New Scientist, July 24th, 2004
  • on NPR Science, July 30th, 2004
  • on EI Cultural, September 9th, 2004
  • in the Philosophers' Magazine, October 2nd, 2004
  • in The Independent, October 6th, 2004
  • at various seminars, including one at Harvard University (I did not see it)
On the main page, Afshar "responds" to Bill Unruh:

Afshar's FAQ

Afshar's reply says, roughly speaking, that Bill Unruh is like an engineer who designs planes without wings which cannot fly and who argues that it means that no planes can ever fly.

Unfortunately for Afshar, Unruh's description of the situation is rather transparent and it reveals some common errors in the interpretation of these experiments:

Rebel via UBC

See also the discussion on the blog of the daughter of that Cramer who invented the time machine interpretation of quantum mechanics - and she is happy about Afshar's statements because she admires her father:

Cramer about Dad

Unruh uses a more transparent experimental setup where some issues can be explained very easily, without various technical complications of Afshar's setup. The only respect in which Unruh's setup seems to be oversimplified is that his framework only contains "completely dark" and "completely bright" areas, and therefore the subtlety "the thickness of the wires is small" does not arise in Unruh's specific approach. (Thanks to William Unruh for his patient explanation of the isomorphism.)

Which photons do we consider?

However, Unruh's explanation addresses Afshar's error under the assumption that Afshar only wants to consider the events with both pinholes open.

In this context there are many ways to define the problematic notion of "contrast" of the interference picture (which you never really see directly). If you decide to define it using the "both pinholes open" situations, then you must still measure the influence of the wire grid located at the interference minima, as well as the wire grid located at the maxima. If you put the wires at the maxima, however, the direct link between the detectors and the pinholes starts to disappear. This is a point due to Unruh.

In the conventional and straightforward interpretation of the variables V,K, Afshar's error would be even more obvious.

In the inequality "V^2+K^2 is smaller than one" V is normally meant to measure the contrast of the actual picture that we see at the place where the photons under consideration are absorbed: in Afshar's case, it means the contrast of the pictures in the detectors themselves. Of course, these pictures don't show any sharp interference minima at all - the minima are the dark blue regions on the figure 1 screen below, and they're never inside the disk. That would mean that V=0, too.

Once again, the important point of the previous paragraph:

  • In the complementarity principle, we first determine which set of photons we consider, and then we calculate both V as well as K from this set of photons. The contrast is computed from the pictures created by all these photons, and we also want to determine the "which way" information of all of them if we want to claim that K=1
  • If you read the section 3 of his PDF file, Afshar seems to compute the contrast of his interference picture from a very small subset of his photons that he uses for the calculation of K - only from the photons that interact with the wire grid.
  • If true, that's of course silly. We can always arrange an experiment with 2 million photons - the first million will be used to create a perfectly sharp interference picture, and for the second million we will be exactly able to determine the pinhole. But this does not mean that we have K=1, V=1. We must consider the same set of photons if we want to determine K, V.
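The bookkeeping error described in the bullet points above can be made explicit with a trivial sketch (my own toy numbers; the linear dilution of V and K in a mixture is a simplification):

```python
# Two disjoint subsets of photons: the first shows perfect fringes but
# carries no which-way information; the second is the reverse.
V1, K1 = 1.0, 0.0   # interfering subset
V2, K2 = 0.0, 1.0   # which-way subset

# Cherry-picking V from subset 1 and K from subset 2 gives the
# meaningless pair (1, 1).  For the honest 50/50 ensemble the fringes
# sit on a flat background and the which-way guess only works for half
# of the photons, so both quantities are diluted:
V_mix = 0.5 * V1 + 0.5 * V2   # fringes on a flat background -> V = 0.5
K_mix = 0.5 * K1 + 0.5 * K2   # correct guess for half the photons -> K = 0.5
assert V_mix**2 + K_mix**2 <= 1.0
```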
Whatever reasonable way we choose to define V, we will always see that it equals zero as long as we're able to determine the "which way information", in agreement with the principle of complementarity.

Because of the differences between Afshar's and Unruh's setups, it is perhaps useful to talk directly about Afshar's experiment. The text below could very well be the first public critique of Afshar's claims in his own setup. Below, I will assume that Afshar wants to measure the wave-like character of the laser light by comparing the influence of the wire grid on the outcome of experiments with 1 or 2 pinholes open - this seems to be the remaining case in which one still needs to show that the contrast also goes to zero as the thickness of the wires approaches zero.



First we must describe his bombshell. See the picture. Laser light goes through two pinholes and through the lens. The geometry of the lens is such that if you close the upper pinhole 1, the photons coming through the lower pinhole 2 will always be detected in the upper detector on the right, and vice versa. You only need the geometric optics of the light rays to see why. Each detector records a full image. This allows us to determine which pinhole the photon chose from the detector where we see the photon.

Fine: we can measure the position - i.e., the pinhole through which the photon traveled - by looking at which detector "beeped". But we don't see any wave properties. The interference pattern at the lens is not recorded.

Afshar was probably thinking like this:

  1. What will you do if you want to see which pinhole the photon chose, and also simultaneously see the wave-like properties of the photon? Well, you put a lot of wires just before the lens (the red dots in the second and the third picture above). You put the wires in the interference minima obtained from the interference between both pinholes.
  2. If the wires are sufficiently thin, it guarantees that we will see the same images created on both detectors (see the third picture) like the images from the first picture without the wires. It's because no photons are absorbed by the wires - the wave function is essentially zero near the wires.
  3. The fact that the situations "1" and "3" give you identical images proves that we are observing some wave-like properties of the photon. The situations "1" and "3" only lead to the same outcome if the spacing between the wires agrees with the shape of the wave function near the lens, for example.
  4. Obviously, the wires will have a more significant effect if you close one of the pinholes - this is visualized on the second picture in the middle. Because there is no interference from two pinholes in this case, the wave function is never zero, not even near the wires. Therefore the image is visibly weaker and more measurably disrupted, if compared to the single-pinhole no-wire situation.
  5. OK, return to the third picture with both pinholes open and with the wires arranged before the lens. Recall that we observe the interference from both pinholes - this is displayed by the fact that the presence of the wires does not make any measurable difference.
  6. That proves that we see the interference pattern with the maximum contrast or visibility, V=1 (sic !!!)
  7. But at the same moment, we are also able to say which pinhole the photon chose - if the photon appears in the upper detector, it chose the lower pinhole, and vice versa.
  8. Therefore we have shown that Bohr's complementarity principle is violated (sic !!!).
OK, where's the problem? If you think about it for a few minutes, you will see it. Statements 1,2,3,4,5 are correct, 7 is approximately correct, but 6 and probably 8 are incorrect - even though I can't guarantee that everything Bohr ever said about complementarity was true. It's my understanding of the complementarity principle that has not been violated by this experiment, and Shahriar may have found some statement by Bohr in the libraries or archives that is proven incorrect by this experiment. (Thanks to Bert Halperin for discussions about this point.)

It is also important that 3 is not taken out of the context - 3 is only correct if you also say 4.

There is still complementarity in action, but one must express it a bit more quantitatively. Imagine that the relative thickness of the wires - the fraction of the space that they block - is T (effectively something below 0.1 in the actual experiment). At T=0, the wires don't really exist, and you can't say that you observed the wave phenomena because the image is unchanged not only in the two-pinhole interference case, but also in the single-pinhole case.

If T grows, then you can actually see that the two-pinhole interference situation is affected much less by the wires than the single-pinhole setup. This proves that you "start to see" the wave phenomena. But at the same moment, as T grows, in the second, middle picture, the photons also start to produce a weak image in the "wrong" detector. This starts to invalidate the connection between the detectors and the pinholes.

The more clearly you see some interference pattern, the less certainly you can claim that you know which pinhole was chosen. It is completely analogous to the situation in which the two pinholes are being closed randomly and independently: some photons will interfere, and some won't. Those that interfere will create a "weak" interference pattern, and those that don't interfere (coming through one pinhole only) will allow you to guess correctly which pinhole they went through.
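A back-of-the-envelope version of this analogy (my own toy model, not Afshar's numbers: each pinhole is independently open with probability 1/2, and I idealize the fringes as sinusoidal):

```python
import numpy as np

x = np.linspace(0.0, 4.0 * np.pi, 2001)   # position along the screen
I_both = (1.0 + np.cos(x)) / 2.0          # two-pinhole fringes, V = 1 on their own
I_single = np.full_like(x, 0.5)           # flat single-pinhole intensity

# Both pinholes are open 1/4 of the time; exactly one is open 1/2 of
# the time (no photons arrive when both are closed).
I_total = 0.25 * I_both + 0.5 * I_single
V = (I_total.max() - I_total.min()) / (I_total.max() + I_total.min())
# V comes out to 1/3: a "weak" pattern, and correspondingly the pinhole
# can only be guessed for the non-interfering 2/3 of the detected photons.
```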

But the main error of Afshar is that

  • he thinks that the contrast of his interference pattern is V=1 or very close to one, i.e. he observes the wave-like properties "sharply", just because his thin wires don't spoil the signal in the case in which the photons interfere near the lens.
  • The definition of V is a bit cumbersome because he really does not measure any interference pattern directly, but we can talk about the indirect V anyway. Actually, V is very close to zero. You only prove the existence of the interference pattern if you compare the effect of the wires in the case of two open pinholes with the case of one open pinhole.
  • But it's important that for thin wires, even their effect on the situation of the second, middle picture is very small. You see that the colorful picture in the middle is disrupted "just a little bit". Even here, most photons don't care about the wires. It's exactly this disruption that defines how much we observe the wave-like properties - i.e. how much we observe the difference between putting the wires at the interference minima, and putting them into places where no interference occurs.
  • Because this signal (disruption) from the second, middle picture is small (equivalently, it only affects a very small portion of the photons), the contrast V is also very small, and goes to zero for infinitely thin wires.
If the explanation is not clear, we may try to be more extreme. Imagine that the wires are so thin - and we put up just a couple of them - that it is guaranteed that we will never be able to measure their effect. Would you then claim that we observe a sharp interference pattern? Of course not. The position of the superthin wires does not affect any experimental setup, and we don't observe any interference pattern whatsoever.

The only way we observe the interference patterns with the thin but visible wires is that while they don't affect the outcome of the experiment with interference (both pinholes open), they do influence the situation with one closed pinhole. We must consider an ensemble of two different kinds of experiments. But even in the latter case, the influence is small, and it is this influence that measures the contrast or visibility of the wavelike properties. As the thickness of the wires goes to zero, the visibility goes to zero, too.

I can try to be a bit quantitative. If the relative thickness of the wires is T (effectively below 0.1 in the actual experiment), and the wires are put at every minimum, then the probability of the wires absorbing/reflecting the photon goes like T in the single-pinhole case, and like T^3 in the case of interference. However, the probability that the wires will send the photon (with one pinhole closed) into the "wrong" detector goes like T^4 or something like that (about 10^-6 in the actual experiment) - this probability will be irrelevant. The wave phenomena are observed by seeing that the single-pinhole picture is damaged much more than the double-pinhole picture. This means that T must be rather large, but then the error in the determination of the pinhole is rather large, too.
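The powers T versus T^3 quoted above can be checked with a small numerical integration (my own sketch; the prefactors depend on the geometry, which I idealize as sinusoidal fringes with one wire per fringe period):

```python
import numpy as np

def absorbed_fraction(T, interference):
    """Fraction of the light blocked by wires of relative thickness T
    centered on the interference minima (one wire per fringe period)."""
    x = np.linspace(0.0, 2.0 * np.pi, 200001)
    I = np.sin(x / 2.0) ** 2 if interference else np.full_like(x, 0.5)
    half = np.pi * T   # wire half-width; the minimum sits at x = 0 mod 2*pi
    on_wire = (x % (2.0 * np.pi) < half) | (x % (2.0 * np.pi) > 2.0 * np.pi - half)
    return I[on_wire].sum() / I.sum()

# Single-pinhole light is flat, so the wires eat a fraction ~ T of it;
# two-pinhole light vanishes quadratically at each minimum, so the wires
# only eat ~ T^3 of it (prefactor pi^2/6 in this idealization).
f_single = absorbed_fraction(0.1, interference=False)   # ~ 0.1
f_double = absorbed_fraction(0.1, interference=True)    # ~ 1.6e-3
```

Halving T should reduce `f_double` roughly eightfold, confirming the cubic scaling.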

In fact, the main error in determining the pinhole does not come from the photons that appeared in the "wrong detector". The main contribution comes from the photons that were absorbed or reflected by the wires - we are definitely unable to determine the "which way" information for these photons!

T^4 may look like a large power, but it is important to see that T itself must be comparable to one if we want to claim that we've seen the wave phenomena "with visibility near one", and then T^4 is also of order one, which means that the error of our identification of the pinhole is of order one, too.

There is absolutely nothing mysterious about Afshar's experiment. In fact, quantum mechanics allows him to predict much more than just qualitative statements about whether he can guess the pinhole. It predicts the full probability distribution for both detectors and for every setup he chooses. If he did science quantitatively, he could have simply tested that these numbers agree exactly. And if he took quantum mechanics seriously, he would not waste time with these configurations, because the physics behind such experiments has been perfectly well understood since the experiments of Thomas Young in the 19th century. And of course, conventional quantum mechanics is compatible with the principle of complementarity.

The variables V,K once again:
V=0+epsilon, not 1-epsilon

Let me use Afshar's variables V,K for a while - see the PDF file linked at the beginning of this article. V is the "visibility" or "contrast" of the interference picture (0 for no interference, 1 for perfect minima with zero probability), while K is the reliability with which you can determine "which way" the photon went (0 if you cannot, 1 if you can say it perfectly). The inequality from complementarity is something like "V^2+K^2 is not greater than one". Afshar more or less argues that "V^2+K^2" in his case is "1+1=2", which is not realistic. While K is pretty close to one, V can by no means be set to one, because he does not really see any interference picture. If the thickness T goes to zero, it's totally obvious that the information about the wave character of the photon goes away - the superthin wires affect neither the interfering photon nor the single-pinhole photon. The interference is not seen; no photons near the lens are absorbed or affected.

Because he does not directly see any interference pattern, it is not quite clear how to assign a nonzero value of V, but morally it is true that V^2 behaves like something of the form T^4 while K^2 behaves like 1-T^4, as argued above. I will have to re-check and fix the powers, but the main conclusion is not affected:

If the wires are thin, the visibility V of the wave character of the light is close to zero, not close to one as Afshar argues!

Is it a quantum experiment?

What I find most unjustified about this experiment is that it actually has nothing to do with quantum mechanics. You can describe everything about this type of experiment using classical electrodynamics, simply because the number of photons is large - he has not really measured individual photons. If he measured individual photons, the probability density would simply be proportional to the energy inflow determined from the classical theory.
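To illustrate the point: everything needed to predict the detector images is already contained in classical wave optics. Here is a minimal Fraunhofer two-pinhole pattern (toy parameters of my own choosing, not Afshar's actual geometry):

```python
import numpy as np

wavelength = 650e-9   # m (assumed red laser)
d = 0.25e-3           # m, pinhole separation (assumed)
L = 1.0               # m, distance to the plane of the wires (assumed)

x = np.linspace(-5e-3, 5e-3, 2001)          # transverse position, m
phase = np.pi * d * x / (wavelength * L)    # half of the two-path phase difference
I = np.cos(phase) ** 2                      # normalized classical intensity

# Minima (I = 0) occur where d*x/L is a half-odd-integer multiple of the
# wavelength; those are exactly the positions where the wires are placed.
```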

The classical electromagnetic waves propagate according to Maxwell's equations. Of course, standard quantum mechanics - quantum electrodynamics - reduces to classical electrodynamics in such limits. Afshar therefore

  • either argues that he has also falsified Maxwell, which I consider enough to call someone a crank; my feeling is that he does not argue that
  • or he admits that his experiment is just another trivial, unsurprising double-slit experiment that agrees with Maxwell's equations and quantum mechanics - but then I really don't understand how he can say all these big statements about "quantum bombshells" and "violation of the principle of complementarity"...

Note added later: let me mention that Kastner has submitted another paper criticizing Afshar's conclusions. In my opinion both Unruh as well as Kastner replace Afshar's experiment by a completely different experiment that does not capture the main flaw of Afshar's reasoning. The main flaw is that Afshar does not realize that for a tiny grid, only a very tiny percentage of photons is used to observe the wave-like properties of light; these are essentially the photons for which the which-way information is completely lost. Because most photons go through the lens without any interactions and interference, Afshar is not allowed to say that he observes the wave-like phenomena with visibility close to one. In fact, it is close to zero if a consistent set of photons is used to define both V and K.

Lucie Vondráčková: Strach (Fear), English lyrics

Lucie Vondráčková: Strach

Czech lyrics




3:38 Rain
3:37 when it drops from the height of clouds
3:34 and then it walks on the roofs
3:31 may become slightly confused
3:28 when it hears what I do.
3:23 Every day
3:20 my windows
3:17 hear from me...

3:15 That love's crawling on its knees
3:12 there's no more reason to fear.
3:09 I'm no longer dating you,
3:06 you don't have to invent lies.
3:01 Every day
2:57 so why I
2:54 suddenly ... have

2:51 Fear
2:49 that nothing can compare with
2:47 ideas that you have
2:44 I fear that
2:43 nothing can compare with your
2:40 beautiful dreams.
2:36 That nothing can compare with
2:34 old legends that you know
2:31 I fear that
2:29 nothing can compare with those
2:27 liars.

2:19 The girl
2:17 you didn't love is yours.
2:14 It was our destiny.
2:12 With her
2:11 you only played
2:09 and who knows
2:08 which one has heard more lies.
2:05 And I
2:03 feed the rain
2:00 with my sorrow.
1:57 That rain
1:55 when it drops from the height to us
1:52 and then it walks on the roofs
1:49 may become slightly confused
1:45 when it hears what I do.
1:41 Every day
1:37 your windows
1:34 get my mail...

1:31 With
1:30 fear
1:29 that nothing can compare with
1:27 ideas that you have
1:24 I fear that
1:22 nothing can compare with
1:20 your lovely dreams.
1:17 That nothing can compare with
1:14 old legends that you know.
1:11 I fear that nothing can compare with
1:07 liars.

0:43 I fear
0:41 that nothing can compare with
0:39 ideas that you have
0:36 I fear that
0:35 nothing can compare with
0:32 your lovely dreams.
0:29 That nothing can compare with
0:26 old legends that you know.
0:23 I fear that nothing can compare with
0:19 liars.

0:18 I fear
0:17 that nothing can compare with
0:13 ideas that you have
0:10 I fear that
0:09 nothing can compare with
0:06 liars.

Category theory and physics

John Baez's "This Week's Finds in Mathematical Physics (Week 209)" is about one of John's favorite topics, namely category theory.

http://math.ucr.edu/home/baez/week209.html

Incidentally, the previous week 208 was discussed here.

You know, category theory is the most abstract and "universal" part of rigorous mathematics, and it is also known as (generalized) abstract nonsense. Actually, the supporters of category theory don't view this name as an insult but rather as a cool compliment. Many of us have had a period in which mathematical rigor was almost irresistible.

Category theory involves various theorems about sets, categories, functors, homomorphisms, and other abstract objects. Instead of saying "we perform an analogous procedure with a different kind of theory", category theorists give you a framework that translates vague words such as "analogous" into a mathematically rigorous structure.

This approach is meant to unify all of mathematics and all of rational thinking. You would think that the result must be very deep and hard to understand. But the actual results and tools of category theory are often closer to ordinary diagrams (combinatorial graphs) with arrows between the objects - similar to the diagrams used by people in the commercial sector in their PowerPoint presentations.

The mathematically oriented part of the string theory community is mostly excited by category theory. The so-called "derived categories (of coherent sheaves)" have been proposed as a generalization of K-theory to describe D-branes of various dimensions. Unlike K-theory that only knows about the allowed conserved charges of D-branes in various backgrounds, category theory should also know - and perhaps knows - something about their dynamics, for example the points in the moduli space at which the D-branes become (un)stable or (un)bound.




If you want to see some deep papers in this direction, see e.g.

http://arxiv.org/abs/hep-th/0011017
http://arxiv.org/abs/hep-th/0104147

The role of category theory in physics can therefore be described as a "progressive direction" within string theory. That's why at sci.physics.research, Aaron Bergman and Urs Schreiber replied to John Baez's week 209 that "he should be careful because he is secretly starting to work on string theory". ;-)

Skepticism

Nevertheless I think it is still fair to say that the dominant, physically oriented part of the string community is skeptical about the importance of the methods of category theory in physics - and, of course, for most of us the reason is simply that we don't understand where this approach came from. I've asked the same elementary questions of many people who've been trying to explain derived categories to me - some of them with some success, most of them with no success whatsoever:
  • Are these notions and statements of category theory something that you can prove - or at least check in many situations - to be valid for string theory as we know it, or is it just an unproven conjecture that derived categories describe D-branes?
  • Or is category theory just a different language to describe the same physical insights that we know anyway, using other means?
  • Or is this categorical approach meant to redefine string theory? How do you prove that such a "new" string theory makes any sense?
The answers I've received were always a bit mixed. Some of the friends of category theory replied that category theory was just a different language to describe physics that we knew anyway - but "not quite". Such equivocal messages potentially make no sense, and it is hard to understand their meaning. In physics, we propose different conjectures about the real world, and it is important that we're not guaranteed in advance that these conjectures are true.

Probably the most serious conjecture of current high-energy theoretical physics is that string theory, once understood completely, describes the whole Universe around us exactly. Again, it's important that it is just a conjecture, albeit a very attractive one and one with a lot of circumstantial evidence. String theory itself is not just a conjecture, but rather a seemingly consistent mathematical framework.

Once we accept string theory as an objectively existing mathematical structure, a structure that we treat as a part of "generalized physics" - which is of course what all string theorists are doing every day - we can ask a lot of questions about its properties. But it's important that even in this case, the statements about string theory are just conjectures, and they need to be proved or supported by evidence, otherwise they're irrelevant and "wrong", in the physical sense.

I always feel very uneasy if the mathematically oriented people present their conjectures about physics, quantum gravity, or string theory as some sort of "obvious facts". One of the fascinating features of string theory is that its objects and investigations, even though they've been partially disconnected from the daily exchanges with the experimentalists, remained extremely physical in character. All of the objects that we deal with are analogous to some objects in well-known working physical theories, to say the least.

You might imagine that after 20 years and 15,000 papers, theoretical physicists who don't interact much with new experiments would end up with a completely abstract structure that does not resemble observable physics. That is not the case for string theory - which is a deep framework unifying all the good features and facts about gauge theory, gravity, elementary particles, the Higgs mechanism, confinement, and so forth. String theory is the most conservative among all the extensions of quantum field theory that go beyond the standard framework of QFT.

Well, category theory is exactly the type of science that would invalidate the previous paragraph.

Don't get me wrong: I do believe that string theory will eventually be formulated with very abstract rules - rules that will imply the features of the well-known systems and backgrounds with a geometric interpretation as an important special case. But I just don't see that this particular approach - diagrams with arrows from category theory - is getting closer to the right answer. If something is abstract, it does not have to be correct, and this particular thing just does not smell correct.

What I can imagine is that some insights of category theory will be useful to understand some things about D-branes and M5-branes - categories are possibly relevant for the "gerbes" - but it remains an open question whether category theory as we know it today will ever become a useful language for physics. No doubt, the mathematicians among us will try to impose their culture on the rest of the community. But I feel that this is the influence that would move the string theorists away from physics.

It's great if some people study string theory because of purely mathematical motivations, and I don't want to question the meaning of mathematics. What I want to question is the self-confidence of many mathematicians with which they want to use their mathematical ideas directly in physics. String theory is first of all a physical theory, and it should be studied because of physical motivations - the primary physical motivation is to locate the right ideas and equations that describe the real world.

Category theory has been used by many to reach completely wrong physical conclusions - for example, by considering the "pompously foolish" quantization functor, many people have claimed that everything that happens in a classical theory has a counterpart in the "corresponding" quantum theory. This is just a lousy idea that we know to be wrong for hundreds of different reasons - yet, the people who have a religious respect for the diagrams of category theory view this incorrect map as a "self-evident dogma". Incidentally, this dogma is one of the reasons why the category theory community and the loop quantum gravity community are correlated. It may be nice to be rigorous, but it's always more important to be correct: if a specific kind of rigor leads us to stupid conclusions in physics, we should avoid it.

Categories and "continuous vs. discrete"

Urs Schreiber on sci.physics.research describes "categorification" as "stringification" - the step from "quantum mechanics" to "conformal field theory". In other words, "categorification" is nothing else than adding dimensions to your auxiliary "worldvolume". Well, adding dimensions is not such a mysterious thing. Moreover, we know that sometimes it can be done and sometimes it cannot. Even if it can be done, some things are preserved, but others are completely changed.

Category theory often resembles linguistics (or even postmodern literary criticism): it is a science about arrows between different objects and about creating new objects from these arrows, but it does not really care too much whether the objects exist and what their real properties are. Although "categorification" is supposed to make things "more continuous" (e.g. by adding dimensions or by quantizing), the actual diagrams that category theorists eventually talk about are extremely discrete objects.

If a completely discrete diagram tries to encode an extremely continuous, infinite-dimensional object, we should always carefully ask whether the non-trivial, infinite-dimensional, highly continuous object really exists. Its existence is never guaranteed, and drawing simple-minded diagrams from otherwise "rigorous and deep" category theory can't help us decide the "hard questions" associated with quantum field theory, string theory, or any other object that requires some sort of "infinite-dimensional continuous" structure. If we create physical theories as complex as quantum field theory or more, the difficult part is always making the highly continuous, infinite-dimensional objects and expressions meaningful. On the other hand, one can invent zillions of different discrete rules - such as the rules for n-categories - and it is simply not guaranteed that a set of combinatorial, discrete rules will be relevant for anything in physics (which includes string theory).

In my opinion - and not only mine - it is important for all of us to realize that we have not yet solved the fundamental physical goal of string theory - to predict and confirm new phenomena and facts about the real world. And category theory does not seem to help us in this goal, I think. Do you disagree?

Wednesday, November 24, 2004

Ukraine and reunification of the West

The messy elections in Ukraine look pretty serious. Just to make the situation clear:

  • There were two main presidential candidates. Both of them are called Viktor, but at most one of them can be the victor at the very end.
  • Viktor Yanukovich is a pro-authoritarian, pro-Russian candidate, supported by the Eastern half of Ukraine as well as by Lukashenko and Putin (Belarus and Russia). Let me call him Viktor East because you may be confused by the similar names.
  • Viktor Yushchenko is a pro-Western, center-right candidate, who enjoys the support of the Western half of Ukraine, as well as the European Union and the United States of America. Let's call him Viktor West.
  • Exit polls indicated that Viktor West would win, although exit polls are not decisive. Near-official results indicated that Viktor East had won. However, international observers claim that the vote was flawed. A lot of concrete accusations were raised, and the EU and the US are demanding a detailed audit of the results.

Countries such as Czechia, Slovakia, Poland, Hungary, and the Baltic states are already members of the EU. Bulgaria and Romania could join the EU in a couple of years. The Baltic states are the only countries of the former Soviet Union that left the Commonwealth of Independent States. However, it is pretty unlikely that Russia will join the EU in the foreseeable future. A real open question is Ukraine. Russia wants to interpret Ukraine as a "friendly region abroad", which is of course not quite compatible with Ukraine marching to the West. It's not shocking that Ukraine is the natural battleground for any new tension between Russia and the West.

You may get the impression how serious this tension is if you read the following article:

http://english.pravda.ru/ ...

Interference in the internal affairs of the Commonwealth of Independent States

... Pro-western factions ready to sell out to the Washington camp, orchestrated by their foreign masters, sweep to power on the crest of a wave of popular revolt, hooliganism and riots. ... The reaction from Moscow was more mature and as usual, more in line with the principles of international diplomacy ...

Yushchenko - a danger to the Ukraine ... Western reaction immature and meddlesome

Well, the author of this article is Timothy Bancroft-Hinchey (born 1958). You may say that PRAVDA does not represent Russia anymore - except that it essentially does. I am pretty curious where this controversy is going.

You know, the support of the West for Yushchenko is something that both major US parties and the US human rights institutes are paying a lot of money for. See an interesting article in The Guardian.

Tuesday, November 23, 2004

Aliens' answer to the anthropic principle

Cumrun Vafa has just pointed out a new article in the Time magazine to me. The address is



http://www.time.com/ ...



The article starts by explaining who the cranks are, and why many scientists turn on their e-mail filters to avoid all contact with the crackpots. However, the article itself then becomes increasingly entertaining, frustrating, or irritating, depending on your mood.



It explains that many parameters of the world seem to be adjusted in such a way that life is possible, and roughly speaking, the text proposes four main explanations for why this is so - and the first three explanations are related:

  • God. The most obvious explanation of these coincidences is that there exists a God who created a world that allowed Him or Her to produce humans in His or Her image at the end of the week. Instead of God, the creator could also have been Zeus or Odin, as some experts propose. Unlike previous cases, in which science superseded God with another explanation, we may have reached the point where God becomes the only savior, the text explains.

  • Aliens. Although James Gardner, an attorney from Portland, Oregon, may seem like another crackpot, the text argues that there is instead a widespread consensus that Gardner is a genius: people like Martin Rees and researchers from SETI and the Santa Fe Institute support him. Gardner's theory says that our Universe (whose technical name is Biocosm TM) was manufactured by a superior race of superintelligent extraterrestrial beings, the so-called U.F.O. Übermenschen.
  • Multiverse. The author uses inflation theory, Darwin's theory, and especially string theory to argue that there are many Universes, and roughly one of them has the right properties for us to be born. Superstring theory was "renamed" to M-theory, and no one except the specialists should be interested in the reasons why. Strange as it sounds, most physicists agree that it is the most likely candidate to unify GR and QM. Shamit Kachru and his friends calculate the number of Universes according to M-theory - because string theory has been renamed - and the result is 1 followed by something like one hundred zeroes (recall that one hundred is 1 followed by two zeroes).

  • Rational arguments. At the very end, the article also offers several reasonable voices - like Brian Greene, Steven Weinberg, Alan Guth - who are not sure whether there are very many Universes around, and who still find it plausible that there will be a rational explanation for the currently unknown features of the Universe. In fact, Lenny Susskind at the very end remains a bit open-minded, too - he almost sounds like a voice of reason. The article does not mention Weinberg's anthropic prediction of the cosmological constant, and it also makes other errors in attributing various ideas to people.

Well, this popular article shows the direction in which theoretical physics will develop if we eventually accept the anthropic meta-arguments as a part of science, instead of saying "we don't know the answer yet". Just as our Universe will be viewed merely as one representative among "1 followed by 100 zeroes" universes, the physical description itself will be understood as one possible explanation among many others that include different races of aliens and gods. It's just obvious that if we can't make hard and testable (and tested) predictions, the corresponding part of physics will be reduced to "yet another religion".

Sunday, November 21, 2004

Cultures in theoretical physics

Amazon.com has included two popular books about our field in their listings - both of them should appear on April 30th or May 1st, 2005.

First of all, it is a very insightful and extremely useful book by Lisa Randall
Warped Passages : Unraveling the Mysteries of the Universe's Hidden Dimensions
by Lisa Randall




Be sure that I will write much more about this book as the publication date gets closer. Yes, I've read this book, and it certainly deserves your attention! On the other hand, I have not seen Lenny's book

An Introduction To Black Holes, Information And The String Theory Revolution: The Holographic Universe
by Leonard Susskind

Well, Lenny Susskind is a character, and it is very natural that he writes a book about all these wonderful ideas - many of them are his ideas. But I can't tell you anything about the book.

The information about these books scared Peter Woit - because every time the public learns more about particle physics and string theory, Peter Woit's struggle to kill string theory becomes more hopeless. And therefore Peter wrote an article "More hype on its way" on his blog "Not even wrong":

http://www.math.columbia.edu/ ... 000109.html


The comment section was pretty illuminating because we could learn that it is not just string theory and the anthropic principle that Peter Woit hates. He also informed us that he does not like

  • Research on the hierarchy problem. In his opinion, it is enough to believe that there is no grand unification, and therefore no GUT scale, and therefore we don't have to worry about the hierarchy problem - it does not really exist.
  • You might think that I am twisting his words and trying to humiliate Peter, that he could not have written something like that, but you can check for yourself! Just to be sure: unless we postulate very new physics, such as large or warped extra dimensions, quantum corrections to gravity only become important at the Planck scale, about 10^{19} GeV - a scale that can be calculated without any assumption of grand unification. If we want to claim that there is any field-theoretical description at energies below the Planck scale, we must explain its parameters, including the obvious gap between the electroweak scale and the Planck scale, a gap that is vulnerable to quantum corrections in the Standard Model.
  • Extra dimensions. Obviously, Peter states that the ideas related to extra dimensions are even more ridiculous than string theory itself, and this is not anything new.
  • But we also learned some time ago that he is totally unimpressed by gauge coupling unification - and of course by the very idea of supersymmetry.
That's becoming ridiculous, and therefore let me stop. There may be different cultures in particle physics - the hep-th and hep-ph cultures - but it is quite clear that the difference between them is not as large as the difference between hep-** and Peter Woit.
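Incidentally, the claim a few bullets above - that quantum corrections to gravity become important around 10^{19} GeV with no input from grand unification - follows from dimensional analysis with G, hbar, and c alone. A minimal numerical check (a sketch with rounded constants, nothing more):

```python
# Estimate the Planck scale sqrt(hbar*c/G), expressed in GeV,
# using only the three fundamental constants (rounded values).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.9979e8         # speed of light, m/s

m_planck_kg = (hbar * c / G) ** 0.5   # Planck mass in kg
joules = m_planck_kg * c**2           # its rest energy in J
GeV = joules / 1.602e-10              # 1 GeV = 1.602e-10 J

print(f"Planck scale ~ {GeV:.2e} GeV")   # ~1.22e19 GeV
```

No GUT scale, no field content, no assumptions beyond dimensional analysis - which is exactly why the hierarchy between this scale and the electroweak scale needs an explanation.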

That's a good place to say a couple of words about these cultural differences.

In the 1980s, the gap between the theorists and phenomenologists - or, alternatively, between the top-down and bottom-up approaches - was huge. String theory and conventional particle physics were two different cultures, and this also affected the separation into two arXivs in the early 1990s.

There has been at least one revolution - the second superstring revolution - in the stringy camp, but virtually no progress in the non-stringy camp. That's one of the reasons that brought these two camps much closer to each other. The other reason is less pragmatic and more important scientifically: namely the specific new discoveries.

In phenomenology, the large and warped extra dimensions were proposed as the new ways to solve the hierarchy problem. These approaches are clearly rooted in string theory; without the influence of string theory, it would seem unlikely that any approach based on extra dimensions would be studied seriously.

The case of extra dimensions in phenomenology was not the first example in which string theory gave the bottom-up model builders new ideas and guardrails to probe possible new physics beyond the Standard Model - supersymmetry was another example - and most phenomenologists started to realize that there is something about string theory if it's able to generate viable paradigms for their own field.

Of course, many phenomenologists don't care about the rest of string theory, besides these stringy-inspired ideas, and it's their legitimate attitude. On the other hand, it is conceivable that the future will show them that they should have taken many more details from string theory than just these general ideas such as extra dimensions and braneworlds. Well, in reality, many phenomenologists are trying to apply other ideas from string theory, too.

The concepts of holography and the AdS/CFT correspondence were very important for convergence of the phenomenological and stringy communities, too. The renormalization group flow looks like an extra dimension - the holographic dimension whose existence seems to be a general feature of quantum gravity. Most string theorists were trained as quantum field theorists anyway, and therefore it is not too surprising that the two communities overlap significantly in their investigation of holography - in the context of Randall-Sundrum models or the AdS/CFT correspondence.

The stringy and particle camps still work independently most of the time, but there is a significant amount of interaction. Harvard is one of the places where the interactions are very strong, and I was told that the gap between these two cultures is slightly bigger at some places in California, for example.

One may also consider a third culture, namely loop quantum gravity. From the Harvard perspective, the gap between the hep-** cultures on one side and loop quantum gravity on the other side is virtually infinite. There is a wide consensus among all visible parties at Harvard - string theorists as well as particle physicists - that the precise proposals of loop quantum gravity to modify the rules of physics (and to ignore the lessons of quantum field theory) are simply rubbish.

That's Harvard, but the people from the Perimeter Institute can tell you stories about their peaceful co-existence with the loop quantum gravity people over there. Of course, it is extremely hard for me to imagine how the string theorists can live so happily with so much anti-physical nonsense around, but it's probably just a matter of mutual brain-washing.

String theorist vs. SUVs

All of my left-wing anti-gasoline readers - and they probably constitute a majority of the readers - should consider their support for Billy Cottrell.



Billy Cottrell is a slightly eccentric :-) 23-year-old graduate student from Caltech who decided that the SUVs were evil, and therefore spray-painting (and possibly also torching) these SUVs is the right answer.



The total damages are about 2.3 million dollars, and Cottrell has been threatened with up to 40 years in prison. Well, it's not too surprising that he has probably agreed to become a police informant and his ecoterrorist friends have unfortunately stopped their support.



But why is it interesting? It's because Billy is described as a string theorist! The judge Gary Klausner, for example, mentioned that "string theory was an area of physics". Billy corrected him: "It's the area of physics." :-)



Of course, it's too bad that Billy has put himself into such problems, but he has some of my sympathies, too. The main difference between Billy and the Kyoto protocol is that the damages caused by Billy are 2.5 million, while those that the Kyoto protocol will cause in the world are 2.5 trillion - and the latter will be completely legal, unlike Billy's Molotov cocktails. :-) Moreover, Kyoto is much less focused.



http://www.spiritoffreedom.org.uk/ ...

http://www.freebillycottrell.org/

http://www.pasadenastarnews.com/ ...

http://www.google.com/ ...



At any rate, I would like to discourage all of my other fellow string theorists from torching SUVs and other types of ecoterrorism.



The newest developments (I wrote them before, but blogger.com was frozen): Billy has been convicted on virtually all the charges against him, except for the most serious one, which would have implied at least 30 years in prison. Without this particular charge, he is expected to spend roughly 5 years in federal prison. His attorney wanted to argue that Billy had Asperger's syndrome, but it won't work, it seems.



I learned that he wrote a bachelor's thesis about p-adic numbers in string theory at Chicago... I also learned about his research at Caltech, which should be kept confidential, I think.

Saturday, November 20, 2004

Ian Swanson and other talks

Because the behavior of Linux and Mozilla is mostly unpredictable, I will try to write shorter articles.

Ian Swanson from Caltech has been visiting us. He has told us a lot of interesting stuff, and was also giving a special seminar yesterday (Friday) at 3:00 pm.

He used PowerPoint with a lot of sophisticated equations, tables, and some animations. His topic was String integrability and the AdS/CFT correspondence.

Let's describe Ian's talk and the current status of the post-BMN business. Because I am writing this for the second time, let me be shorter:
  • the industry of the pp-wave limit has transmuted into the spin chains and integrable systems
  • Ian Swanson et al. (which includes John Schwarz, but also Jonathan Heckman who is now at Harvard) studied the physics of strings beyond the pp-wave - which means that AdS5 x S5 is described as the pp-wave deformed by some 1/R^2 corrections
  • the gauge theory side of the calculation more or less reduces to the spin chains and other tools of integrable systems
  • the string theoretical side is described by the Green-Schwarz string that, at least classically, becomes a sigma model on the supercoset PSU(2,2|4) / ( SO(4,1) x SO(5) ), if I remember well - and as long as you forget about stringy loops, the ghosts (like Berkovits' pure spinors) are not necessary
  • the integrable structure on both sides agree, even though it is not manifest
  • most of the calculations are done at the leading order of the 1/N expansion (or the J^2/N expansion), and the stringy loop corrections are more or less open questions
  • the results are still expanded in lambda'=lambda/J^2, and in 1/J
  • the gauge theory (spin chains) agrees with the string (supercoset) at the one-loop and the two-loop level
  • there is a discrepancy at the three-loop level, and Ian says that it is believed that the problem is a subtlety neglected by the spin chain models
  • many states are combined into various multiplets, and there is indirect evidence of the existence of an infinite number of conserved charges that make up the integrable structure
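The expansion parameter lambda' = lambda/J^2 from the bullets above has a simple origin in the BMN limit: the energy of a string mode of level n is sqrt(1 + lambda' n^2), and the "loop orders" on the gauge theory side correspond to powers of lambda'. A minimal sketch (not from Swanson's talk) that computes the exact rational coefficients of that expansion via the binomial series for sqrt(1+x):

```python
# Coefficients of sqrt(1+x) = sum_k C(1/2, k) x^k, computed exactly,
# applied to the BMN dispersion sqrt(1 + lambda' n^2).

from fractions import Fraction

def sqrt_series_coeff(k):
    """Coefficient of x^k in sqrt(1+x): the binomial C(1/2, k) as a Fraction."""
    coeff = Fraction(1)
    for j in range(k):
        coeff *= (Fraction(1, 2) - j) / (j + 1)
    return coeff

# sqrt(1 + lambda' n^2) = sum_k C(1/2, k) (lambda')^k n^(2k)
for k in range(4):
    print(f"(lambda')^{k}: {sqrt_series_coeff(k)} * n^{2*k}")
# coefficients: 1, 1/2, -1/8, 1/16
```

The one-, two-, and three-loop comparisons mentioned above test exactly these orders of lambda' (the three-loop term being where the discrepancy shows up).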




Symplectic field theory

Before Ian's talk, we also saw another talk in the Science Center C (Shing-Tung Yau invited us), namely a talk by the mathematician Helmut Hofer from NYU. It was about symplectic field theory, and even the meaning of the term is not quite clear.
  • instead of a manifold, Hofer was discussing a concept of a polyfold
  • it seems that the dimension of the polyfold may jump from point to point
  • this technology should be powerful for infinite-dimensional systems, and it should give a unified framework to study Gromov-Witten theory, Morse theory, and many other theories like that
  • nevertheless one may draw finite-dimensional examples. Hofer's example is a heart connected with Jupiter by three tubes, while Jupiter is surrounded with red Saturn's rings :-)
  • this mathematical machinery should explain how Riemann surfaces bubble off - all such phenomena have a local flavor in his formalism
  • in the second part of his lecture, Hofer discussed applications, and Sergei Gukov told us a couple of words about this second part, too
  • the first 30 minutes were spent deriving equations analogous to cubic string field theory, namely QF+F*F=0
  • then he discussed how to construct Riemann surfaces from pieces - namely from Riemann surfaces with holes p and antiholes q - where he formally defines the commutator [p,q] to be hbar, whatever exactly that means - this commutator is probably the source of the word "symplectic"
  • it's not clear what the relation is between the different things he was discussing
Let me mention that if my description of this mathematical talk makes no sense to you, it may be my fault, but it is not guaranteed that it's my fault. ;-)
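Incidentally, the equation QF+F*F=0 in the bullets above is the standard equation of motion of Witten's cubic string field theory; schematically (textbook material, with normalizations and signs convention-dependent, and not specific to Hofer's talk):

```latex
% Cubic string field theory action for the string field \Psi:
S[\Psi] \;=\; \frac{1}{2}\,\langle \Psi,\, Q\Psi\rangle
        \;+\; \frac{1}{3}\,\langle \Psi,\, \Psi * \Psi\rangle
% Setting the variation with respect to \Psi to zero yields
Q\Psi \;+\; \Psi * \Psi \;=\; 0
% i.e. exactly the structure "QF + F*F = 0" quoted above.
```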

Puppet show



The video above is from the future - from 2006 - but let us return to the present, 2004.

OK, last night, the G2 students (second year of graduate studies) performed their puppet show, much like every year. You know, the Harvard physics students are always making fun of their fate, and they humiliate their professors (as puppets). Yesterday it was a lot of fun, too, but I must be much shorter because I am still typing on Linux/Mozilla, which guarantees absolutely nothing:
  • they started with a nice video, probably prepared with Google's Keyhole, in which the camera was approaching the Earth and focusing on the USA; Massachusetts; Cambridge; Harvard; Jefferson laboratories
  • finally they switched to the elevator on the 3rd floor that did not work
  • many comments were about the person who assigns the desks in our building; in my opinion, these attacks were too frequent and I did not like this portion of their show too much
  • Shiraz Minwalla was presented as a power plant who always jumps in the class - very funny and very realistic :-)
  • Tai Wu, David Nelson, Paul Horowitz, and especially Nima Arkani-Hamed and Andy Strominger were frequent targets of their jokes
  • George W. Bush was an important puppet and he decided to study physics and eliminate the therrorists - especially the high-energy therrorists - from Massachusetts. I liked these jokes - and Nima Arkani-Hamed was, of course, very self-confident in his discussions with George W. Bush
  • in fact, Bush was also concerned about the uncertainty in Massachusetts, and he was surprised that the uncertainty was even a principle
  • they were fortunately nice to me. Well, they showed a video that I won't link right now, but it's the same video as one described at the Lubos Motl Fan Organization (LMFO)
  • otherwise they quickly displayed several slides of a PowerPoint file that I use to teach - and argued that I explained that 24+2=26 implies some complicated one-loop correlators in string theory involving the Jacobi theta function that everyone knows from kindergarten




  • one of the jokes that I really liked was their parody of Brian Greene's Loeb lectures at Harvard - it was a very realistic parody
  • Andy Strominger (the puppet) introduced Brian Greene (another puppet) by saying something along these lines: "Brian Greene became very successful by explaining our work to the stupid people. I always wondered how it's possible that a person like Brian is successful. Then I realized that it's because of his smile..."
  • Brian Greene (the puppet) was then explaining that people in the 1980s discovered 2 string theories, and then 3 more string theories, which makes 5 string theories - much like 2 apples plus 3 apples is 5 apples :-)
  • the students also showed a couple of entertaining anagrams, using the names of some professors
  • they visited classes of Hau, Wu, and Arkani-Hamed
  • their show contained commercials for various new fictitious books by Brian Greene and the physics professors at Harvard, as well as their new music CDs

Friday, November 19, 2004

Tom Banks at Harvard

Note added: this article has been updated, and two talks about two-dimensional string theory are briefly described at the bottom.

As Aaron Bergman and Jacques Distler pointed out, Google started another service:

http://scholar.google.com/

It is a fulltext search engine for scientific articles, including all articles from www.arxiv.org (the PDF files are the ones that are linked), and it knows about the number of citations. The server lists the articles matching a given query, and it prefers the cited articles.

For example, search for

You will get all the papers that contain these two words; the papers where the words appear in the title, and the papers with a large number of citations, will be listed first.

OK, let me now switch to the scientific activity at Harvard. My advisor Tom Banks (now at Rutgers, in the spring at UC Santa Cruz) is visiting us, and we are having a lot of interesting discussions. Tom says, for example,

  • the current research in string theory is not going quite in the right direction
  • the connections with the experiments must be understood better
  • one of the most important particular problems we must understand is the nature of supersymmetry breaking in string theory, and the possible interrelations between supersymmetry and cosmology
  • Tom often emphasizes his viewpoint that the different vacua in string theory really don't belong to the same theory, in the physical sense, because you can't usually create a bubble of one Universe inside another Universe
  • he gave me some useful feedback about the matrix generalization of the CFT techniques
  • he also presented some insights, questions, and criticism about things like the relation between black hole entropy and topological string theory; about the developments in the "landscape"; and many other issues.



Yesterday, Umut Gürsoy from MIT was speaking at the Duality Seminar about his recent work with Hong Liu about the big bang and big crunch in two-dimensional string theory, as understood from the old matrix models. Some points in that talk:
  • Umut talked about the partition sums in "two-dimensional critical string theory", as he called it (the word "critical" seems a bit like a contradiction, but it's just a matter of terminology)
  • their two-dimensional string theory was bosonic string theory on a Euclidean S^1/Z_2 orbifold - the two fixed points of the orbifold are the "big bang" and "big crunch"
  • although he was talking about bosonic string theory, he said that it was non-perturbatively well-defined; many of us thought that this should only be the case for the two-dimensional type 0 theories, but Umut explained that they expand in the cosmological constant (or whatever the precise expansion parameter was)
  • it seems that he claimed that they can derive the initial and final conditions of the Universe
  • many of us, including Tom Banks and me, thought that they only calculated the transition amplitude between two specific states, and by inserting some operators, they could have calculated the amplitude between any other pair of states

Davide Gaiotto delivered the family lunch talk, and at 3 PM we listened to another talk, by Ian Swanson (Caltech), which is described in one of the newer articles.

Much like Umut, Davide Gaiotto was also talking about the old matrix models

  • especially the Kontsevich models - or, more precisely, the Seiberg-Kutasov models
  • he's been working on these things with Leonardo Rastelli, and sometimes with other co-authors
  • he presented the derivation of the matrix model as a discretization of a cubic open string field theory
  • one of his main questions was how the Riemann supersurfaces, including the gravitino fields, should be discretized - and he agreed that the superdirections don't carry any local excitations, and might only influence physics through a determinant and some global constraints (after all, it's not really possible to discretize a fermionic dimension any further than it already is)
Davide also emphasized that the finite N old matrix models are meaningful. My opinion about this issue is similar to Tom Banks': it's just hard for me to imagine that such a matrix model with finite N and a rather generic potential is a part of string/M-theory. Then we would have to say that virtually everything is string theory. In our description, the finite value of N is a regulator, but the real interesting physics only appears for large N.

My general skeptical comment at the end: I don't know where this two-dimensional string theory research is going, and I am not able to recognize progress in these papers. What exactly is better about the papers on this subject written today compared to those from 10 years ago? We don't live in two dimensions, and two-dimensional gravity is different from four-dimensional and higher-dimensional gravity in many respects - it has no local gravitational degrees of freedom, for example. The matrix models don't seem to be constrained very much. Is it really a part of the same string theory as the ten-dimensional type IIA string vacuum, for example?
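The counting behind "no local gravitational degrees of freedom" is standard (it is not specific to these papers): a graviton in D dimensions has

```latex
\#\,\text{polarizations} \;=\; \frac{D(D-3)}{2}
\qquad\Longrightarrow\qquad
D=4:\ 2, \qquad D=3:\ 0, \qquad D=2:\ -1,
```

so in two (and three) dimensions there are no propagating modes; the negative value at D=2 just signals that diffeomorphisms plus the Weyl-like redundancies over-constrain the metric components.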

Tuesday, November 16, 2004

Orbifold tachyons from SUGRA and other papers

Last time I commented on all the articles, we were impressed by how interesting and serious they were. Tonight it's slightly easier to describe all the hep-th papers on the web, but some of them still look interesting:

This paper by Matthew Headrick and Joris Raeymaekers is obviously interesting. Consider the nonsupersymmetric orbifold of type II string theory - an Adams-Polchinski-Silverstein (APS) type of orbifold - on C/Z_n for large n. You know that there are many tachyons in the twisted sectors. For large n, it makes sense to T-dualize along the angular direction of C/Z_n. You get some SUGRA solution. Well, many people have definitely looked at the orbifold in this way, using T-duality. But Matt and Joris finally consider the obviously interesting limit in which n is sent to infinity while n times alpha' is kept fixed. It's some kind of zero-slope limit, but the "lightest" tachyons (= those closest to being massless), whose squared masses are comparable to (-1/alpha' n), survive this limit because these squared masses are exactly the inverse of the quantity that is kept fixed.
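A sketch of that scaling (with c an order-one constant that I am inserting only for illustration):

```latex
m^2_{\rm tachyon} \;\sim\; -\frac{c}{n\,\alpha'} ,
\qquad
n \to \infty,\quad n\,\alpha' \equiv L^2 \ \text{fixed}
\quad\Longrightarrow\quad
m^2_{\rm tachyon} \;\sim\; -\frac{c}{L^2} \ \ \text{stays finite,}
```

while the heavier twisted-sector tachyons, whose squared masses scale like -1/alpha' rather than -1/(n alpha'), decouple in the limit.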



Note that the tachyons come from twisted sectors. By interpreting the angle as a circle, they're winding strings. It means that in the T-dual picture, they are momentum modes of a field in the dual string theory which is effectively supergravity. Finally, the present authors calculate some interactions of the momentum modes in supergravity - which exist off-shell - and they show that they agree with the couplings of the different tachyons calculated from the CFT - which only exist on-shell. That's very interesting. The main thing I worry about is that the result is perhaps not too unexpected because instead of the orbifold, one might work directly with the "limiting CFT" on the thin cone and its T-dual.

This author constructs some new solutions of the modified Ricci-flatness equations, something that is necessary for a CFT to be well-defined. You know that Ricci flatness is the right equation of motion only if you have no fluxes and the dilaton is constant. If the dilaton is not constant, the Ricci tensor is nonzero - Einstein's equations acquire a source. He or she does not quite want to talk about a non-constant dilaton. Instead, he or she focuses on another generalization of the CFT and Ricci flatness - namely a CFT with an extra complex field that has an "anomalous dimension". My understanding is that it's just a matter of notation whether you say that a field has an "anomalous dimension", or whether you redefine it by a function of the dilaton and allow the dilaton to vary. If my understanding is correct, Nitta has effectively found new solutions of the combined Einstein's equations with a dilaton-gradient source. These solutions have either a U(N) isometry or an O(N) isometry. That's because the coordinates of his or her manifolds are explicitly written using U(N) or O(N) covariant coordinates, and the metric only depends on the quadratic invariants. Some of these solutions can be interpreted as generalizations or deformations of the Ricci-flat metric of the conifold with an extra linear dilaton - at least that's my impression.

We often say that the maximally supersymmetric Yang-Mills theory in four dimensions is "finite" because of powerful supersymmetric cancellations. Well, what exactly is finite? Clearly, there are operators with anomalous dimensions that must be regulated and that are cutoff-dependent, and so forth. So it's not everything that is finite. However, the effective action is something that should be finite - it sort of computes the correlators of the elementary fields, in which the divergences are supposed to cancel between the bosons and fermions. In this paper, these two guys try to prove the finiteness of the N=4 effective action in the N=1 superspace language. The effective action of N=4 can be written in this form, due to the Slavnov-Taylor identity, as long as we allow the superfields to be "dressed". They must go through some uncontroversial steps to show that the R-symmetry anomaly cancels due to the N=2 supersymmetry, and they re-check that the one-loop beta function vanishes. Nevertheless, the final result is that once you express the effective action of the N=4 gauge theory in terms of the dressed fields, all the terms become independent of the UV cutoff, and the effective theory is therefore finite.
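The vanishing one-loop beta function that they re-check is a one-line piece of standard arithmetic: the N=4 multiplet contains the gauge field, 4 adjoint Weyl fermions, and 6 adjoint real scalars, and with T(adj) = C_A the usual one-loop coefficient is

```latex
b_0 \;=\; \frac{11}{3}\,C_A \;-\; \frac{2}{3}\cdot 4\,C_A \;-\; \frac{1}{6}\cdot 6\,C_A
\;=\; \left(\frac{11}{3} - \frac{8}{3} - 1\right) C_A \;=\; 0 ,
```

i.e. the gauge-boson contribution is exactly cancelled by the adjoint fermions and scalars.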

OK, there was a Lorentz-violating paper last time, too, and many comments could be repeated. I still don't understand the motivation behind these models. The space of possible non-relativistic non-stringy quantum field theories is huge. I don't really see what constrains it. For relativistic theories, we may label all fields by their dimension - which is their dimension with respect to space as well as time. However, for non-relativistic theories, we must introduce separate spatial and temporal dimensional analyses, and I don't think that we can really distinguish "renormalizable" theories from "non-renormalizable" ones. Of course, this can be done if we write down a non-relativistic theory as a deformation of a relativistic theory, and this is what this group does, too. Nevertheless, the number of new terms, once you allow the Lorentz symmetry to be broken, is again huge. Moreover, I think that the lessons of 1905 are serious lessons, and breaking the Lorentz invariance explicitly should also be accompanied by breaking of the rotational invariance, and I see no reason to do so. Let's stop the criticism for a while.

What theory do they consider? Take the usual Maxwell action in four dimensions, and imagine that you decide to deform it in some way, by adding another action. What other action do you know for a U(1) gauge field? Well, the Chern-Simons action. There is a problem: the usual Chern-Simons term only exists in three dimensions. However, that's not a problem for those who don't care about the Lorentz invariance. Just multiply the Chern-Simons 3-form by a general vector, i.e. a 1-form, to get a 4-form, and you can integrate the product over the spacetime. The vector picks a privileged direction, which is OK with you. Because the Chern-Simons action has an epsilon in it, you will break not only the Lorentz symmetry but also the CPT symmetry - which can happen once the Lorentz symmetry is gone. To make things even more confusing, add an external current J that couples to the gauge field via the J.A term.
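Written explicitly, a term of this kind is the Carroll-Field-Jackiw form (in a common normalization; k_mu is the fixed privileged vector):

```latex
S_{\rm CS} \;=\; \frac{1}{2}\int d^4x \;\, k_\mu \,\epsilon^{\mu\nu\rho\sigma} A_\nu \,\partial_\rho A_\sigma ,
```

which is gauge invariant up to a boundary term as long as k_mu is constant, and odd under CPT precisely because of the epsilon tensor.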

That was too natural so far. Let's do something fancier. Reduce this four-dimensional Lorentz-violating theory to three spacetime dimensions (dimensional reduction). Moreover, don't reduce it along the privileged vector discussed previously, but along a more general vector. In this case, the three-dimensional Lorentz symmetry will still be violated. If you write down the terms, you will discover mass terms of various types. Just do it, derive the equations of motion, solve them, and draw several graphs. And don't forget to be excited that the results pick a privileged reference frame (even though you know that it was your starting point). It's probably a good feature to violate the Lorentz symmetry.

Finally, let me admit that I am totally lost. I have no idea why they're doing what they're doing, whether it should be a physically realistic model or a mathematically interesting one: I just don't see the meaning of it all. This type of activity is what most of us would be doing today if we had no string theory. Combining random terms that apparently follow no deeper or organizational principles - terms extracted from an infinite chaotic ocean of arbitrary terms and their combinations - terms that are much more ugly and unjustified than the theories that are known to work. Sorry for being so skeptical; I might simply be dumb.

These colleagues first repeat a lot of the commercials about "Causal Dynamical Triangulations" that they've already written in many previous papers. The starting points are very obvious and sort of naive: try to define the path integral of quantum gravity in a discretized form. (It's like spin foams in loop quantum gravity, but you don't necessarily require that the details will agree.) OK, so how can you discretize a geometry? You triangulate it into simplices, and you imagine that every simplex has a region of flat Minkowski spacetime in it.

(That's not like loop quantum gravity - the latter assumes that there is no geometry "inside" the spin foam simplices - the geometry is concentrated at the singular points and edges of the spin foam.)

Then you write down the Einstein-Hilbert action many times and you emphasize that it is discretized. There are many other differences from loop quantum gravity: while the minimal positive distance in loop quantum gravity is sort of Planckian, in the present case they want to send the size of the simplices to zero, and the regulator should be unphysical. Of course, if you do that, you formally get quantized general relativity with all of its problems: as soon as the resolution becomes strongly subPlanckian, the fluctuations of the metric tensor become large. The path integral will be dominated by heavily fluctuating configurations where the topology changes a lot and where the causal relations are totally obscured - and the results of these path integrals will be non-renormalizably divergent, at least if you expand them perturbatively. But this is simply what a correct, authentic quantization of pure gravity gives you.

These authors are doing something different in one essential aspect. They don't want to sum over all configurations, all metrics - the objects that you encounter in the foamy GR path integral above. They don't do it because they sort of know that pure GR at subPlanckian distances is rubbish. Instead, they truncate the path integral to contain "nice and smooth" configurations only. The allowed configurations they include must be not only nice, but they must have the trivial causal diagram as well as a fixed topology - namely S3 x R in their main example. Well, if you restrict your path integral to configurations that look nice, it's not surprising that your final pictures will look nice and similar to flat space, too. But it by no means implies that you have found a physical theory.

Any path integral that more or less works simply must be dominated by configurations that are non-differentiable almost everywhere, by the very nature of functional integration and by the uncertainty principle. One can often show that a path integral localizes, but that's a result of theorems and calculations. One cannot define the path integral to include smooth and causal histories only. Such a definition simply violates the uncertainty principle as well as locality, if you impose global constraints on how your 3-geometry can look. Consequently, it also violates general covariance, and you won't decouple the unphysical polarizations. If you also impose global constraints on the allowed shapes as functions of time that cannot be derived from local constraints, you will violate unitarity as well.
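A standard illustration of the first claim, from plain free-particle quantum mechanics rather than gravity: a typical path in the path integral behaves like a Brownian trajectory,

```latex
\langle (\Delta x)^2 \rangle \;\sim\; \frac{\hbar\,\Delta t}{m}
\qquad\Longrightarrow\qquad
\frac{\Delta x}{\Delta t} \;\sim\; \sqrt{\frac{\hbar}{m\,\Delta t}} \;\to\; \infty
\quad \text{as } \Delta t \to 0,
```

so the velocity of a typical path is undefined: the dominant configurations are continuous but non-differentiable, and there is no reason to expect quantum geometries to behave any better.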

As far as I know, none of these "discrete gravity" people ever asked the question whether these theories are physical, unitary, and so forth. In fact, the subset of the "discrete gravity" people called the "loop quantum gravity" people declares quite openly that they don't care about unitarity at all. Unitarity is actually one of their enemies, and they claim that it follows from time-translation symmetry, which is of course a misunderstanding of the total basics of physics: unitarity is about the hermiticity of the Hamiltonian, if one exists, while time-translation symmetry is about its time-independence. Unitarity is one of the concepts that must be destroyed by the revolution in physics that they've been planning for quite some time. ;-) They're just not getting that unitarity is about the probabilities being non-negative numbers that sum up to one, and that this rule must work in any context in quantum physics. I am afraid that the rest of the "discrete gravity" people do not bother to check these elementary physics questions either. That may be a good point to stop the criticism, because everyone knows anyway that I don't believe that this line of research will lead to any usable new physics - it simply neglects some totally essential features of quantum physics.

At any rate, they show that these strange rules of the game admit some big-bang big-crunch cosmological solution described by some collective coordinates (a nice picture animates in front of your eyes), and they construct or propose a wave function of the Universe that depends on the observable representing the "3-volume of the Universe".

15th anniversary of Velvet Revolution

Tomorrow it will be exactly 15 years from the Czechoslovak Velvet Revolution in 1989. Because it was an amazing period of the Czechoslovak history, I believe that this anniversary deserves an article. Those of you who are interested in Central and Eastern Europe might want to read it.



Although Mikhail Gorbachev started glasnost and perestroika in the Soviet Union already in the mid-1980s, these trends did not quite penetrate to Czechoslovakia. Despite the Czechoslovak communists' claims that they followed his process of democratization (namely "přestavba", which means "re-building"), the reality was quite different. Václav Havel was in prison and the economic reforms were stuck. The lives of all people were "normalized" and looked much as they had in the 1970s. Not much progress. Most people were frustrated. Socialism was a boring system, indeed.

If you open this address,
(the address would be translated as november.itoday.cz; itoday is an internet daily) you will see what the Czech internet news would look like if there had been no revolution. The actual news of 2004, as described by hypothetical socialist journalists, is so absurd, so funny, yet so incredibly realistic! :-) (And some of it is pretty similar to the comments about world events heard from some left-wing commentators throughout the world.) They describe letters about comrade Arafat's death; the struggle of the Iraqi people against the American imperialists; the help of the Czech engineers for their Syrian comrades who are being threatened by the USA; the new bummers for the dollar, and so forth. Unfortunately, you don't speak Czech, and therefore the pages won't be too entertaining for you. They have chosen authentic language as well as the typical symbols of the late years of socialism (such as the commercials with Mr. Egg - all commercials in the 1980s always started with the same logo).

For most of us, it was very hard to imagine that things could speed up. In October 1989, East Germans started to escape to West Germany through the embassies in Budapest, Prague, and other cities. History was obviously moving on in other countries, but Czechoslovakia looked frozen. The developments in Poland and Hungary were being pictured as chaos that must be avoided in Czechoslovakia. There were no signs that the Party was being forced to change anything. Some people like to say that Czechoslovakia was completely controlled from Moscow - but the years 1987-1989 were great evidence that it was not so. Unfortunately - because the Soviet Union was ahead!

However, on Friday, November 17th, 1989, the students were celebrating the "international students' day". Well, it was exactly 50 years after the Nazis had closed the Czech universities and killed or arrested the most inconvenient students. The active students of 1989 not only admired their predecessors of 1939, but they also wanted to show that they did not like the practices of the socialist totalitarian system.

As you know, the young participants of the demonstration in Prague were beaten by the socialist police. Those of us who listened to Radio Free Europe knew what happened on Friday. Most people who were interested in politics knew about these events by Saturday, and it was becoming clear that something important was going on. There were rumors that a particular student named Martin Šmíd had been killed by the police. These rumors were not true - in fact, a couple of years ago, I exchanged plenty of e-mails with this very Martin Šmíd. He advocated astrology, while I don't trust that science too much. ;-) At any rate, he definitely survived.

There were other important and confusing details such as an agent of the Communist Secret Agency that pretended he was a student, but I don't want to bother you with these details.

Nevertheless, the anti-socialist dissidents, famous actors, musicians, and other people were gradually joining this "children's revolution". Various groups of people began to write petitions and organize strikes. On Monday, the revolution was already underway. Although we were high school students, our role was more important than one would reasonably expect. We were able to elect the principal of the school, and we wrote our own petitions.

In the evening, there was always a big demonstration in Prague, and smaller demonstrations in other cities including Pilsen, my hometown. The demonstrations were frequent, and the biggest one in Prague attracted more than one million people.

New leaders started to emerge - Václav Havel, Jiří Dienstbier, Václav Klaus, Valtr Komárek, Miloš Zeman, Jiří Bartoška, and so on. Many of them had had no political experience. Many of them were actors and musicians, and so forth. Some of these early leaders are still active in politics; most of them have returned to their older jobs.

The communists started to have problems with the workers, too. An infamous boss of the Communist Party in Prague, Miroslav Štěpán, was explaining to the workers of ČKD, a big factory in Prague, that no country in the world - neither a socialist nor a capitalist nor a third-world country - can allow children, 15-year-old children, to decide who will be in the government. Thousands of workers started to chant: "We're not children! We're not children!" Obviously, he was in trouble. Such things would have been unthinkable 2 weeks earlier.

The demonstrations in the streets were incredibly lovely and peaceful. The people were very nice to each other. Everyone thought that they agreed about everything - which, of course, turned out to be false as soon as the communists were eliminated from the government and questions beyond "the same socialism yes/no" had to be answered. People jingled their keys. And the developments were extremely fast. Imagine a country where nothing had really changed for 20 years - and suddenly the leading role of the Party is eliminated, the president and the Party's chief resign, and a new president is elected although he was in prison 2 months ago - and everything happens without any violence whatsoever.

I was always proud of the velvet character of the revolution, even though I have also had many doubts about that strategy every time the communists said something outrageous after 1989. Could we have reduced their influence more significantly by replacing the velvet with a tougher material? I am not sure about the answer... Czechoslovakia was nevertheless the only country in which the communists did not return to the government after 1989 - instead, the social democratic party was revived and became strong (for a couple of years, at least - although they are the senior party in the current government, they received essentially zero votes in the last two elections).

At any rate, thousands of changes had to be made between 1989 and 1991. It was pretty easy to guarantee the freedom of speech and other basic human rights. Our first president, Václav Havel, became the symbol of all these changes and new philosophies. It was much harder to transform the most socialized economy in the world into a moderately vibrant emergent free market. Thousands of companies had to be privatized - and some of them did not have a bright future because they lost the Russian markets etc. The currency had to become convertible, controlled by free-market exchange rates. Václav Klaus, a charismatic economist who was neither a communist nor a true dissident, was ready to carry out this task, and although it was a difficult one whose realization could not avoid some problems, I think that the result was very good.

Václav Klaus symbolized the transformation process of the economy, and consequently, he became very unpopular among many people in the mid-1990s. Nevertheless, his popularity jumped well above 60 percent once he was elected the second Czech president in 2003 - of course, the main reason is that he is no longer responsible for the economy.

Although there are many "usual" and "expected" problems facing the people, the Czech Republic and Slovakia are clearly much better off today. Back in 1989, we discussed how many years we would need to catch up with some Western European countries, and even though some of our guesses were unrealistic, most of them were not that far off. Today, both parts of the former Czechoslovakia belong to the European Union. Their citizens can do (and have) virtually everything that our Western fellows, who have never lived under socialism, can.