Tuesday, May 31, 2011

Seeing D-branes at the Tevatron and the LHC

As of 2011, no sane person seriously doubts that string theory is the right framework that describes the Universe around us. However, there remain uncertainties about the inner structure of the relevant string vacuum. String/M-theory possesses many four-dimensional solutions that qualitatively resemble our world.



Those solutions are connected in the configuration space of the theory and may be related by various dualities. However, the known stringy descriptions that are semi-realistic and weakly coupled may still be divided into several major categories:
  1. Weakly coupled heterotic E8 x E8 strings
  2. Heterotic M-theory in 11 dimensions; the strongly coupled limit of (1) with two end-of-the-world domain walls
  3. M-theory on singular 7-dimensional G2-holonomy manifolds
  4. Type IIA orbifold/orientifold braneworlds
  5. F-theory on local singularities
Somewhat independently of this classification, the models may reproduce the old large dimensions of the ADD scenario - especially the braneworlds based on flat manifolds of extra dimensions in (4) and (5) - or the Randall-Sundrum warped extra dimensions - especially some types of (5) and maybe (3) and others.




The largest group (5) also includes the "generic" models - that one may describe as F-theory on Calabi-Yau four-folds - that have been used by the anthropic people. Because I don't consider the large number of models in a class to be a positive argument in favor of a scenario, this anthropic interpretation of the large class (5) will be ignored.

However, even independently of that, the group (5) contains some subgroups of "simplified" and "more specific" models that may be imagined in various ways and have various interesting properties. In particular, the group (5) includes the Vafa et al. bottom-up F-theory model building with singularities as large as the E8 singularity. Also, (5) includes the simple "moral" T-duals of the category (4) - type IIB braneworlds with D3-branes and similar things occupying a nearly flat space.

Relationships between the scenarios

While one cannot identify each model with "twin city" models in all other groups in a one-to-one fashion, it's still true that for various compactifications, there are dualities which are sometimes many-to-one equivalences. Let me mention a few of the basic ones that are enough to connect all five groups.

The group (1) is obviously related to (2) because (2) is the strong coupling limit of (1). Also, if the Calabi-Yau manifold in (1) is written as a T3-fibration, one may use the heterotic/K3 duality and obtain (3), the M-theory compactification with a K3-fibered G2-holonomy manifold. So (3) is connected to (1), too.

Locally on the compact G2-holonomy manifold, one may also try to shrink some cycles and get to a type IIA description. So (3) is related by the type IIA/M-theory duality to (4). And (4) is at least in some cases a T-dual of (5) - the usual dualities between type IIA and type IIB string theories. Let me just re-emphasize that this simple list is not exhaustive. There are other relationships between the vacua in different groups, too. Chances are that if you consider a vacuum in one group, you may learn a lot if you find a complementary perspective on it using a vacuum from a different group.

Advantages

The group (1) is the most field-theory-like scenario in string theory. It usually agrees with the conventional supersymmetric and grand unified model building in field theory even though there are some extra characteristic string effects modifying the grand unification, too.

The group (2) is similar except that the 11th dimension may be pretty large in which case one gets a higher-dimensional theory well beneath the Planck scale. At any rate, (1) and (2) are naturally exploiting the gauge coupling unification and other beautiful arguments of grand unified theories.

The group (3), much like (2), uses M-theory, the highest-dimensional description (if you don't count the fiber-like 2 extra dimensions of F-theory as dimensions). However, the group (3) doesn't contain the ends-of-the-world. Its singularities are pointlike, which some people might view as more natural.

The braneworlds (4) and (5) may break the coupling unification and other advantages but they may have natural explanations for the fermion mass hierarchy and other things.

It's also interesting to look where the spectrum of the Standard Model is located. In (1), it lives everywhere in the 10D bulk (codimension 0). In (2), it lives on the 10-dimensional end-of-the-world domain walls (codimension 1). In (3), it sits at points of the 7-dimensional manifold (codimension 7). In (4), most of the matter lives on D6-branes and their intersections (codimension 3 or 6). In (5), it's mostly D3-branes (codimension 6 but also possibly 4 and 2).

Braneworlds may have gotten some evidence

I would still say that (1) and (2) are the most motivated ones but the possible observation of a new 150 GeV particle by the CDF, which might be a new U(1) gauge boson, has surely shifted my opinions a little bit, especially after I read a couple of cool articles about realistic type IIB braneworlds. In fact, I hadn't previously read the three papers discussed below - the 2001 construction by Berenstein et al., its 2008 follow-up, and the 2011 paper by Lüst and collaborators.
There are many more articles behind these constructions but you may find some references in those papers - e.g. in the last one. The last paper has already claimed that the stringy type IIB braneworlds may naturally explain the 140 GeV Tevatron bump we discussed yesterday.

I've seen various braneworld constructions but the Berenstein et al. 2001 construction strikes me as a very natural one. They study D3-branes on a nice orbifold of C^3 - an orbifold singularity that may occur in the 6 compact dimensions.

One orbifolds C^3 by a group known as Delta_{27}, which is a "special" member - the n=3 case - of an infinite collection of finite groups Delta_{3n^2}. The group is generated by multiplying z1, z2, z3 by third roots of unity and by the cyclic permutation of z1, z2, z3: note how extremely natural this group is! And this natural orbifold in 6 extra dimensions pretty much produces the Standard Model - with 6 Higgs doublets.
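If you want to see explicitly how small and natural this group is, here is a minimal sketch of my own (not taken from the paper): it builds Delta_27 out of 3x3 matrices acting on (z1, z2, z3) - a diagonal matrix of third roots of unity with unit determinant plus the cyclic permutation - and verifies that the closure under multiplication has 27 elements.

```python
# A small check (my own illustration): the orbifold group Delta_27, generated by a
# cube-root-of-unity phase rotation of (z1, z2, z3) and by their cyclic permutation.
import numpy as np
from itertools import product

w = np.exp(2j*np.pi/3)
a = np.diag([1, w, w**2])                                       # phase rotation, det = 1
b = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)  # z1 -> z2 -> z3 -> z1

def close_under_multiplication(gens):
    elements = [np.eye(3, dtype=complex)]
    changed = True
    while changed:
        changed = False
        for g, h in product(list(elements), gens):
            gh = g @ h
            if not any(np.allclose(gh, e) for e in elements):
                elements.append(gh)
                changed = True
    return elements

group = close_under_multiplication([a, b])
print(len(group))   # 27 -- the order of Delta_27 = Delta_{3n^2} for n = 3
```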

Now, following the general description of Douglas and Moore for quiver diagrams, and particular derivations by Brian Greene et al. and others for the open string spectrum on the Delta orbifolds, Berenstein et al. have found out that one may get a very nice Standard-Model-like construction. It gives three generations - and in some sense, you may say that the number "3" is being linked to the number of complex compactified dimensions, so it is being "explained" here.

The fermionic spectrum, as described also in the 2008 paper mentioned above, looks very natural - and still "qualitatively differently" from the nice embedding in the grand unified theories. One has three stacks, morally U(3) x U(2) x U(1) gauge groups, and the charges of the fermions under them are

(1,1,0)
(2,0,0)
(-1,0,1)

(0,-1,1)
(0,2,0)
(0,0,-2)

Note that one includes all permutations of (2,0,0) and (1,1,0) - imagine that you flip the convention for the third sign - and the negatives of these vectors. Very natural, right? One may actually derive this spectrum from the non-Abelian orbifold of C^3 above.

Now, you may define the hypercharge Y as the inner product of the six 3-vectors above with (-2/3,1,0), and one gets 1/3, -4/3, 2/3; -1; 2; 0. This hypercharge allows one to interpret the spectrum - the six vectors above - as the left-handed quark doublet (3,2); anti-up-quark singlet (3bar,1); anti-down-quark singlet (3bar,1); left-handed lepton doublet (1,2); positron singlet (1,1); neutrino singlet (1,1) - a right-handed neutrino that isn't strictly needed.

Just to be sure, you may also define the right B-L quantum numbers of the spinors as the inner product of the six 3-vectors above with (-1/6,1/2,-1/2). A perfectly valid spectrum.
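Here is a trivial numerical check of those inner products (my own bookkeeping, just reproducing the numbers quoted above):

```python
# Hypercharges and B-L of the six charge vectors under U(3) x U(2) x U(1),
# obtained as inner products with the two directions quoted in the text.
import numpy as np

charges = np.array([
    (1, 1, 0),    # left-handed quark doublet
    (2, 0, 0),    # anti-up-quark singlet
    (-1, 0, 1),   # anti-down-quark singlet
    (0, -1, 1),   # left-handed lepton doublet
    (0, 2, 0),    # positron singlet
    (0, 0, -2),   # neutrino singlet (the right-handed neutrino)
])

Y_direction  = np.array([-2/3, 1, 0])        # hypercharge direction
BL_direction = np.array([-1/6, 1/2, -1/2])   # B-L direction

print(charges @ Y_direction)    # [ 1/3  -4/3  2/3  -1  2  0 ]
print(charges @ BL_direction)   # [ 1/3  -1/3 -1/3  -1  1  1 ] (B-L of the left-handed fields)
```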

New U(1) groups

Now, these models have lots of new U(1)s - something that is really natural and generic within the stringy braneworlds. In the 2011 paper, Lüst and collaborators diagonalize the mass matrix for the new Z' and Z'' bosons, imposing various constraints, and they see that they can get the new 140 GeV particle.

When they adjust the models in this way, they produce spectacular new predictions. In particular, there should be another Z'' boson at a mass slightly above 3 TeV or so, potentially still accessible to the LHC, and the string scale should be at 5-10 TeV. A new collider would probably be needed for the latter but it is imaginable.
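Just to make the phrase "diagonalize the mass matrix" concrete, here is a toy sketch of the kind of computation involved. The entries below are completely made up by me and have nothing to do with the actual Stückelberg mass matrices in the Lüst et al. paper; one simply diagonalizes a symmetric mass-squared matrix for the extra U(1) gauge bosons and reads off the eigen-masses.

```python
import numpy as np

# Completely hypothetical mass-squared matrix (units: TeV^2) for two extra U(1) gauge bosons;
# in the actual paper the matrix depends on Stueckelberg couplings and kinetic mixing.
M2 = np.array([[0.5, 1.0],
               [1.0, 9.0]])

masses_TeV = np.sqrt(np.linalg.eigvalsh(M2))   # physical masses = square roots of the eigenvalues
print(masses_TeV)                              # [~0.62, ~3.02] for these made-up entries
```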

Imagine that the Tevatron 145 GeV signal is genuine and that accumulating evidence will support a new U(1) group. I would surely think that this would significantly increase the probability that the braneworlds are right.

If some other predictions of the model above were confirmed, it would be good to build a more powerful collider that would try to search for stringy physics at tens of TeV because the possibility that the strings are almost around the corner is just fascinating. It has always been too good to be true but if the experiments provided some evidence for some characteristic signatures of the braneworlds, the model could also be good enough to be true. ;-)

Strangely enough, these braneworld models predict some of the stringy physics to be almost as observable as supersymmetry - the superpartner masses may be a few TeV here. The idea that some more "specifically stringy" phenomena would be seen before supersymmetry used to look foreign or fantastic; however, showing that the expectations have been wrong or too modest is something that the experiments have the right to occasionally do.



Off-topic, Facebook: Today, Facebook made an interesting step in its (not quite) cold war against Google when it enabled e-mail addresses on Facebook. If you have a Facebook account, you may start to use your.own.email@facebook.com; mail sent to that address will be readable on Facebook. Try to log into Facebook and check it.



Google Plus One: If you look at the bottom of any post, below "posted by", you will see the five "share buttons" and there is a new, sixth button, with "+1" in it.

Just like Facebook attacked Gmail today with its competition, Google just attacked the Facebook "like" button with its "+1" competition. See an explanation by Google. If you click on it, it will just add the number "+1" for you only, and the people who are known to Google to be your contacts - via Gmail - may see on Google search pages that the page was "plus-oned" by you. Try it if you liked this article a bit. ;-)

So far, it doesn't seem to work reliably. You're more likely to see the "+1" button at the main TRF page, beneath each article's excerpt.



Bousso-Susskind crackpottery in Nude Socialist

I kind of expected it but now I know it. Even though the Nude Socialist journalist called Justin something asked me for a discussion and called me by phone for half an hour, after which he thanked me and claimed that he had begun to understand what the many worlds of quantum mechanics etc. mean, he finally wrote a totally uncritical article promoting the Bousso-Susskind hypermultiverse crackpottery, denying that he had ever talked to me, or anyone else who realizes it is crackpottery, after all.

This time I hesitated and finally said Yes but next time I will instantly say No to any single person from this crackpot tabloid. On the other hand, I must say that this new piece by Amanda Gefter in the same Nude Socialist is just flawless for a popular summary of the 150 GeV bump - even though it's true that she may have used many blogospherical sources that contain the same insight and facts.

New Biotech Pick

On August 30, 2005 I posted a blog on a new, unknown, up-and-coming biotechnology company, Nanoviricides.  NNVC was at 8-10 cents when I ran the article and seven months later it hit $3.75.
Here is how that piece began:
In my ongoing quest for short-term trading opportunities, I have come across some services that specialize in “reverse merger” opportunities. A reverse merger stock is a stock that becomes public by buying a “shell” public company which is essentially going or already out of business. It is a quick and dirty way for a start-up to go public.
Not surprisingly, this back-door way around traditional IPO’s attracts some “substance-challenged” companies. But sometimes it attracts a gem that is just in too big a hurry to get their story out and get funding to go the traditional IPO route.
Five and one half years later, I may have uncovered another one of these reverse merger gems.  I had been researching this stock all last week.  Over the weekend I wrote up the company in my Weekend Update sent to my subscribers.  By now, they have had an opportunity to buy, as have I, so here it is on my public persona.
BIOTECHNOLOGY STOCKS
Always a soft spot in my investor's heart; when these go right they have a way of uplifting both our wallets and mankind. This is the moral soul of trading, i.e. when we win, the whole world wins with us.  NNVC will one day wipe out viral diseases.  CTIX may one day cure as many as half of the cancers known to mankind.  While we wait, NEOP is up 30% from its most recent Buy signal.
One of the attractions of NNVC was its, “one size fits all” approach to anti-virals. One basic structure works on all viruses.  It’s not that far apart from what we do here with trend trading.  One basic algorithm identifies and follows trends, with a tweak here and there to cover special situations.
Cellceutix has taken the same approach to the treatment of cancer.  They have developed a drug, Kevetrin, that triggers the p53 gene, the so called, “Guardian Angel” gene. From this linked interview with the company’s CEO:
One of the major breakthroughs that sets Cellceutix apart from other biotech companies is Kevetrin’s ability to activate p53. Can you discuss what this is and why it is so significant?
p53 is known as the “Guardian Angel” gene. It’s the p53 in more than half of the cancers that is not regulating the cell properly. When Kevetrin comes in contact with the p53, it reactivates the p53 and it gets alerted to start doing its job, whether to fix the cell or to kill the cell. This has been one of the “Holy Grails” in cancer research. There have been numerous programs at the biggest pharmas working on this, and it’s hard for us to estimate how much has been spent on these programs, but we’d have to guess it has been in the hundreds of millions of dollars of research. There’s one whole class of drugs with which they attempted to do this, called Nutlins, but those were found to be defective in that they destroyed the DNA, so that wasn’t a viable option. So, it seems that Cellceutix, at this time, and with what we know, has the only activator of p53 as a cancer drug.
So let’s get to some bottom lines here that have me pretty excited about the opportunity represented by this little known biotech:
*$75M market cap, a pittance;
*Products (mainly cancer and autism) amounting to $30B in potential market sales;
BEVERLY, MA–(Marketwire – 05/23/11) – Cellceutix Corporation (OTCQB: CTIX) (Pinksheets:CTIX – News), a biopharmaceutical company focused on discovering and developing small molecule drugs to treat unmet medical conditions, announced today that in light of recent exceptional preclinical results regarding Kevetrin™, the Company believes it would be in its shareholders’ best interest to test Kevetrin™ against a very aggressive cancer that would be more beneficial to patients and generate more useful data for its IND filing. Cellceutix has therefore initiated a study in an animal model implanted with human pancreatic cancers.
*Management: Some very prestigious names. Read carefully the resumes of management and of the scientific advisory board, including Dr. Krishna Menon, prominent in the development and progress of NanoViricides;
*Just like NNVC, a truly phenomenal risk/reward profile. 1,000's of percent to the upside against maybe 50% to the downside, but so what if it's even 100% to the downside?  A few of these in your portfolio means only half of one needs to hit and your gains are enough to fill the coffers.
*Read the above press release again.  Something happened in the lab in May, something they can't completely disclose yet that made them take a 180 and go after pancreatic cancer.  WHAT DO THEY KNOW?  Well, I've seen enough to want to own a piece of whatever it is.  Trends or not, this is an opportunity, let's run with it.

CTIX Daily Trend Model

Donald Boudreaux: I'll Take That Bet

Writing in the WSJ last week, economist Donald Boudreaux of George Mason University offers to make a bet in order to make a point about human-caused climate change, in response to Bill McKibben's silly essay from earlier in the week in the Washington Post:
I'll bet $10,000 that the average annual number of Americans killed by tornadoes, floods and hurricanes will fall over the next 20 years. Specifically, I'll bet that the average annual number of Americans killed by these violent weather events from 2011 through 2030 will be lower than it was from 1991 through 2010.
I am willing to take this bet in order to raise awareness of the fact that both sides of the climate change debate can't see the forest for the trees.  The factors that will drive loss of human life due to weather extremes in coming decades will be increasing vulnerability and exposure.

As a condition of the bet, when I win (which unfortunately will occur long before 2030) I ask that the proceeds go directly to the American Red Cross. (Should I lose the bet come 2030, I'll make out a check to the charity of Prof. Boudreaux's choice.)  A second condition is that Prof. Boudreaux agrees to write an op-ed for the WSJ (or some other venue) explaining the bet and why he lost (of course, I am willing to do the same).

Here are the technical terms of the bet that I will accept.  Prof. Boudreaux refers to three hazards: floods, tornadoes and hurricanes.  The dataset that he proposes using is the official record kept by the US National Weather Service (available here in PDF through 2009, and here in PDF for 2010).  This leads to the following summaries for the base period of 1991-2010 proposed by Prof. Boudreaux:


Year    Tornado   Flood   Hurricane
1991       39       61        19
1992       39       62        27
1993       33      103         2
1994       69       91         9
1995       30       80        17
1996       26      131        37
1997       67      118         1
1998      130      136         9
1999       94       68        19
2000       41       38         0
2001       40       48        24
2002       55       49        53
2003       54       86        14
2004       34       82        34
2005       38       43      1016
2006       67       76         0
2007       81       87         1
2008      126       82        12
2009       21       53         2
2010       45      103         0
Total    1129     1597      1296     (all hazards: 4022)

In his WSJ column Professor Boudreaux asks his readers to subtract the deaths from Hurricane Katrina because they were the result of a levee break.  It is not clear whether that was just a rhetorical move for the column or if that calculus also extends to the bet. He can clarify that for me in his response. With Katrina the total is 4,022 deaths and without Katrina the total is about 3,000 (again, Prof. Boudreaux can tell me which number he prefers).

So far 2011 has seen 518 deaths from tornadoes.  This means that from today through 2030 the United States could see only about 3,500 additional extreme weather deaths, or 180 per year (using the higher baseline that includes Katrina deaths, or 154 per year using the lower number of 3,000).  Such numbers would represent an improvement over 1991-2010, and Prof. Boudreaux would still lose the bet.  We should be so lucky, and it would take a lot of luck, to see so few deaths due to extreme weather.
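To make the arithmetic above explicit, here is a small sketch of my own bookkeeping of the National Weather Service numbers in the table above; the "roughly 19.5 years" is my reading of "from today through 2030".

```python
# A small sketch of the arithmetic behind the bet (illustrative bookkeeping only).
deaths_1991_2010 = 4022          # tornado + flood + hurricane deaths, Katrina included
deaths_2011_so_far = 518         # tornado deaths through late May 2011

baseline_average = deaths_1991_2010 / 20
print(baseline_average)          # ~201 deaths per year in the base period

# For Prof. Boudreaux to win, the 2011-2030 average must come in below that baseline,
# i.e. fewer than roughly 4022 - 518 ~ 3500 further deaths over the remaining ~19.5 years.
remaining_budget = deaths_1991_2010 - deaths_2011_so_far
print(remaining_budget, remaining_budget / 19.5)   # ~3504 deaths, ~180 per year
```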

The fact of the matter is that our vulnerability to extreme weather is increasing, due to a combination of a growing population and especially urbanization in locations prone to extreme weather events.  This means that even with the hard work by many professionals in a range of fields, which has contributed to the dramatic decrease in the number of deaths over recent decades, low death totals are unlikely to continue into the future, as this year's tragic tornado season tells us.  Of course, given expected societal trends a reversal in statistics would not necessarily mean that our disaster policies are failing.  What it means is that our responses to extreme weather require constant vigilance, investment and continued hard work.

In trying to score points in the debate over global warming, Professor Boudreaux misses what really matters most on this issue.  And that is why my response to his question, "Do I have any takers?" is "Yes."

I will email Prof. Boudreaux with this post and update with his response.

Continued Deceleration of the Decarbonization of the Global Economy

Last summer I noted a distinct trend since 1990 of a deceleration of the decarbonization of the global economy.  What does this mean in plain English?  It means that the trend of emitting less carbon per unit of economic activity -- in place for much of the 20th century -- was slowing down.  This slowdown was occurring despite intentions expressed in policies around the world to accelerate the trend, despite assumptions in virtually every major integrated assessment model that the trend would begin to accelerate even in the absence of such policies, and despite the rapid deployment of renewable energy around the world.

New data has just been released which allows me to update through 2010 the analysis that I presented last summer; the update can be seen in the graph above. The data shows that in 2010 the rate of change in the world's carbon dioxide emissions per unit of economic activity continued to decrease -- to zero.  (The data that I use are global GDP data from Angus Maddison, extended using IMF global GDP growth rates, and NEAA carbon dioxide data, extended to 2010 using the 2010 growth rate released by the IEA yesterday.)
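For concreteness, this is the quantity being tracked - the year-on-year change of the carbon intensity of GDP. The numbers below are purely illustrative placeholders chosen by me, not the actual Maddison/IMF/NEAA/IEA series mentioned above.

```python
# Illustrative sketch only: the year-on-year rate of change of CO2 emissions per unit of GDP.
co2 = {2009: 29.0, 2010: 30.45}   # hypothetical global CO2 emissions, Gt
gdp = {2009: 58.0, 2010: 60.90}   # hypothetical global GDP, trillion dollars

intensity = {year: co2[year] / gdp[year] for year in co2}
rate = intensity[2010] / intensity[2009] - 1.0
print(rate)   # 0.0 for these placeholders; a negative number would mean the economy is decarbonizing
```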

The deceleration of the decarbonization of the global economy means that the world is moving away from stabilization of concentrations of carbon dioxide in the atmosphere, and despite the various reports issued and assertions made, there is no evidence to support claims to the contrary.  For more on why this is so, I recommend this book.

Kids Haven't Changed: Kindergarten Has (must-read article)

Has anyone noticed that new standards seem to move requirements down a grade level? What was once required in 5th grade is now required in 4th. First grade standards are moving into kindergarten. What does this mean?

Are our kids suddenly smarter? More capable? More worldly? More knowledgeable due to all the technology surrounding them each day?  Parents and teachers should read this article...Kids Haven't Changed: Kindergarten Has.

Great food for thought.

How-To: Maintain a correct posture while working on your laptop



Adult or Child Laptop Use at Home, Work or Classroom.



Mobile or Smart Phone Use while Driving, Traveling or on the Move.

Link via Digital Inspiration.

Monday, May 30, 2011

CDF: Wjj 150 GeV bump grows to almost 5 sigma

Adam Jester (see also Sean Carroll, Phil Gibbs, and Tommaso Dorigo) has pointed out that the bizarre bump near 144 GeV has become much more statistically significant, according to a talk by Giovanni Punzi (pages 30-35 or so) at the 23rd Rencontres de Blois conference at a cute French chateau.



As you could read here in April 2011, there was a 3.2-sigma, or 99.9% confidence level, excess of the production of an apparent 144 GeV particle decaying to two jets that have to be accompanied by a W-boson. The data used about 4/fb of the CDF data.

The preprint about the bump has produced 46 citations in less than two months.

Could it have been a fluke? It could but it almost certainly wasn't. When 7/fb of the data were analyzed, the excess grew to 4.8 sigma or so, which is something like 99.9999%. So the shape almost certainly didn't appear by chance.
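As a side note, the translation between "sigmas" and these quoted confidence levels is just the Gaussian tail probability; here is a quick sketch of my own using the one-sided convention (conventions and rounding differ between sources).

```python
# Converting a significance in "sigmas" into a one-sided Gaussian confidence level.
from scipy.stats import norm

for n_sigma in (3.2, 4.8):
    p = norm.sf(n_sigma)            # one-sided tail probability
    print(n_sigma, p, 1 - p)        # 3.2 sigma -> ~99.93%, 4.8 sigma -> ~99.9999%
```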




So one has to look at alternative explanations. It may either be a systematic effect that's being neglected or new physics. If you look at an answer of mine at Physics Stack Exchange, you will see a discussion of a possible systematic error - an incorrect scaling of the jet energy - as well as possible theoretical explanations I will recall momentarily.



Well, even the systematic effects seem somewhat unlikely at this point. Even if you assume that the energy was incorrectly measured by up to 7%, you will still see at least a 3-sigma deviation in the data, Punzi announced. There are other reasons why people loudly scream that the bump can't be due to incorrect Standard Model calculations.

(The excess is unlikely to be due to some top-quark stuff because the excess events don't have too many bottom quarks in them and because the other measurements of the top don't look too bad - even though some other surprising claims by the Tevatron about the tops and antitops could be viewed as counter-evidence.)



This brings me to the theoretical explanations. Recall that the three leading explanations of this bump are
  1. Z' boson - a new U(1) force of Nature, see e.g. Cheung-Song and Lüst et al. (stringy)
  2. Technipion - a decaying scalar particle in the technicolor theories by K. Lane and friends
  3. Squark or slepton pairs - a superpartner of the top quark in R-parity violating theories
  4. Scalar fields breaking a new flavor symmetry - by Ann Nelson et al. - claims to explain the top-antitop asymmetry as well
  5. Many others
These were three explanations plus some bonus ones. ;-) What will happen next? We are waiting for CDF's competitors at the Tevatron, the D0 Collaboration, to announce their results about the issue. And of course, then there's the LHC. :-) So far, the LHC sees nothing there in those 33/pb or so. With the current recorded 0.5/fb that are being evaluated right now, the LHC should see it more clearly than the 7/fb of the Tevatron if it is there.

Greece: can the infusion remain permanent?

Since last year, the sick fat socialist pig called the Hellenic Republic has devoured EUR 110 billion - a package that was pumped into that country a year ago - and it's hungry again. It needs another package of EUR 100 billion or so, Fitch calculated.



So the EUR 110 billion gift has disappeared and we are where we were one year ago.

Some things have changed, however. First of all, today we are already pretty much certain that Greece hasn't fulfilled a single condition that was imposed a year ago. Populist politicians want to look nice in the eyes of their parasite voters - and they don't care about any creditors.

Also, in May 2010, people were shocked to see that the interest rate on the Greek government debt soared to nearly 9 percent. Last week, in May 2011, the interest rate soared to 17 percent - twice the value of a year ago. The two-year loans have a 26-percent annual interest rate. I don't know whether it's the rate (that should be exponentiated to get the yield) or the yield - and be sure that at numbers this high, it makes a significant difference. ;-)




Most people don't realize how stunningly high the figure EUR 110 billion that Greece has devoured is. For all practical local purposes, it's infinitely high. All predictable laws of economy and the rational behavior resulting from it break down. Greece has 11 million people, if I count the infants. That means that the average Greek, including infants, has devoured the "loan" of EUR 10,000 per year. And this is just the infusion from outside. Normal nations are supposed to work, too. For normal nations, the imbalance is supposed to be relatively low. Just to be sure, your humble correspondent has spent less money during the same year.

Equivalently speaking, all Greeks could be declared pensioners as soon as they're born. They could be put into fancy retirements homes and none of them would have to ever work. It would cost about the same to the European taxpayer. In other words, the country is pretty much economically dead. And it doesn't care. No surprise. There is apparently no threat it faces.

Now, I want to mention the crazy politically correct vocabulary that's being used in this context. We are hearing the term "restructuralization" and similar constructs instead of the proper "sovereign default" which should be governed by the laws about "bankruptcy". That's because people are supposed to believe and convince others that we're living on a civilized continent where state bankruptcy couldn't ever take place, could it? Only the third world, e.g. Latin America, could face such things.

Except that this opinion is nothing else than flagrant racism. Greece is about as screwed an economy as Argentina was before it went bankrupt some time ago. So why the hell are people trying to create a different impression? There's no difference, and if there is, chances are that Argentina was more promising. This dilution of the common terms - driven by the pathologically insane utopian belief that a default can be permanently avoided - is affecting the solution of such situations, too. People are saying that the "restructuralization" (default) would be disastrous, and so on.

But the restructuralization is already taking place. Some creditors are forced to delay the maturity of the debt, sacrifice parts of it, and so on. Guess which creditors are forced to do such things. Yes, it's the taxpayers. The taxpayers are just milking machines that fucking Greek communist politicians and their international accomplices may rob whenever they want. No one will notice, and if they do, the taxpayers can always be shat on again, can't they? The politicians who sign these deals don't pay a penny from their own pockets, so why wouldn't they sign them? They may be invited to vacations in Greece in the future.

It's the fucking parasites - the voters of the Greek socialist party and similar scum across the world - who are the focus of this whole theater and whose interests have to be protected by the politicians. That includes much of the nation of Greece. After all, a majority of the nation is composed of parasites, and because the country is a democracy, they ultimately decide. And they decide to steal the money of anyone else they encounter.

I am disgusted by the behavior of the politicians and others. There's a clear looming bankruptcy of a fucking communist government that should have bankrupted decades ago. So please let it die. Make it die. The socialist government of Greece is a nasty tumor that should be not just left to die but deliberately liquidated. The bloated Greek government sector has to be nuked, neutralized, and if thousands of people have to starve to death to achieve this goal, they should be encouraged to starve to death. This is the best thing that could happen to the Greek nation.

By the way, if you think that the previous sentence is extremist, I assure you that 90% of the Slovak citizens agree with it, among majorities in many other countries.

In the case of insolvency, the politicians from the creditor countries should insist on being paid with whatever Greece has, such as its islands, and if Greece decided to reject this offer, the creditors should declare war on that arrogant crippled country. The assumption is that the creditors would win and would get what they deserve, with some legitimate interest in the hundreds of percent.

I am exaggerating but just a little bit. The behavior of the international politicians who support the Greek nation in believing that it may devour EUR 100 billion every year - which is why the Greek government sector may continue to be a giant fattening station - is much more insane an exaggeration than anything I have proposed as a solution above.

German nuclear suicide: 2022

While the annoyances in Fukushima have only led to one death - which was unrelated to the radiation, a heart attack of an old employee (compare with the 14 deaths so far caused by the Spanish "organic" cucumbers, with dozens of extra environmental consumers on their way out) - the Luddites across the world continue their irrational and dishonest jihad against nuclear energy. Germany has become an epicenter of this struggle.

Just a year ago, Germany was planning to extend the lifetime of many nuclear power plants. The closure dates of the power plants built in the years denoted by the red numbers were extended from the white numbers to the yellow numbers.



However, after Fukushima, everything is different. Mass hysteria has affected major political parties as well as a majority of the brainwashed German public. It has been agreed today that all German nuclear power plants will be closed by 2022.




Of course, Germany may survive such an insane decision. It's seen a remarkable 5.2 percent GDP growth between Q1 of 2010 and Q1 of 2011. It can apparently afford to annually pay gazillions of euros to various Greeces, Irelands, Portugals, Spains, and maybe others.

So why couldn't it pay for the extra energy it will have to buy? At this moment, Germany is getting 23 percent of its power from the nuclei. Of course, it's risky to abolish the nuclei because they're what keeps the electrons on their quantum orbits. ;-)



The white nuclear power plants were running during the tsunami in Fukushima while the red ones were stopped right afterwards and the green ones have been closed for some time.

Well, if it is easy for you to buy international stocks, I recommend buying some ČEZ stock, the main Czech power utility, which has bold plans to extend its coal power plants as well as the nuclear ones, and to build additional ones in other countries. It's unlikely we will see any comparable plans elsewhere. Moreover, Germany is likely to suppress not only nuclear energy but also energy based on fossil fuels. At least that's how I understand their top-tier imbeciles who call themselves politicians. ;-)

By the way, it's not really a new decision. The socialist government of former Chancellor Gerhard Schröder wanted to close the nuclear power plants by 2022, too. Merkel has just confirmed that she is a social democrat, too.



Schröder's main contribution to humanity, The Tax Song. He was years ahead of his time. Around 1:13, the Chancellor decided to establish a new "weather tax", among several other taxes. As early as 2002, it was cold in Germany because God had learned that Schröder wanted to introduce the Wettersteuer (weather tax). Many contemporary politicians such as Australia's Julia Gillard are currently employed as ludicrous parodies of this parody of Mr Schröder and they are trying to make this "weather tax" a reality.

By the way, in the 1990s, while I was in college, we visited both the Isar 1 and Isar 2 nuclear power plants near Munich. I was amazed by the smooth running of the facility and by their excellent P.R. There were lots of zoologists collecting bugs around the cooling towers, and so on. It was almost touching ;-) and it's even more touching that these plants will be euthanised just because of people's bigotry.

Today, Mr Jan Rovenský, the General Secretary of Czech Greenpeace for Climate Campaigns against the Energy Industry and GDP Growth in General, debated our minister of trade and industry, Mr Martin Kocourek. Rovenský, who was climbing a chimney or something like that just a few hours earlier, is clearly living outside reality. He suggests that the Czech Republic will totally give up both coal and nuclei.

Climate Progress becomes a directory on a far left website

I want to applaud Joseph Romm. Kind of.

He has abolished the ClimateProgress.org domain and redirected all traffic to a directory hosted by the server called ThinkProgress.org owned by a radical far left political organization, The Center for American Progress.

Because of that change, he had to temporarily delete all the comments that had ever been posted to ClimateProgress.ORG.




It's much better when he admits that what actually drives him are his fanatical political beliefs and that his blog is a propaganda mouthpiece that has nothing whatsoever to do with the scientific evidence and its impartial evaluation - and that it can only be read, without existential threats to one's personal health, by other radical leftists.

I hope that RealClimate.org and others will follow Mr Romm and will finally change the status of their pathetic propagandist blogs to subdirectories on George Soros' personal web page, too. And I hope that Mr Mann will quickly be moved from the Penn State to the State Pen, as Tim Ball has cleverly recommended. ;-)

Just to be sure, Joseph Romm openly admits that he has always been paid by the Center for American Progress to spread all the predetermined lies. So it's likely that the decision to change the URL wasn't made by Romm himself - much like the content of his idiotic rants is probably not determined by himself. It's probably up to his owners.

Math Monday Blog Hop #8






If you want to share this collection on your blog, just grab this link:
get the InLinkz code

Visit past Math Monday Blog Hops.

Grab a Math Monday button for your own blog:

Tsunami: floating houses and cars, live

If you want to have an idea of how much time you have when a tsunami arrives, look at this 5-minute video of the tsunami striking Minami-Sanriku.



Warning: the video gets tough.

Not much is happening in the first two minutes or so - except for some dust above the distant beaches - even though the cameraman seems nervous from the beginning. You will understand why.




But the actual scary events eventually begin. I found it impressive how the houses remained intact yet afloat - they turned into boats. Nature is very powerful and cruel. Such events sometimes occur and they're more far-reaching than 0.008 °C of warming or cooling per year.

Thanks to Gene

Sunday, May 29, 2011

Copenhagen interpretation of quantum mechanics

I have considered myself a champion of the Consistent Histories interpretation of quantum mechanics for almost 20 years - Roland Omnes' 1992 article just erased all my doubts about the statement that the foundations of quantum mechanics have been fully understood.

However, I have always realized that the "improvements" that this interpretation brings relative to the Copenhagen interpretation are very subtle and it has been annoying to see that pretty much everyone misunderstands the basic points of the Copenhagen interpretation.



Bohr, Heisenberg, and Pauli. Am I the only one who thinks that Pauli looks like good soldier Švejk here?

There are a lot of misunderstandings being spread about the Copenhagen interpretation - and I would say that some of them should better be classified as deliberately propagated lies and propaganda. In this text, I would like to clarify some of them.




People in Copenhagen

First, let's ask which people are "fathers" of the Copenhagen interpretation. Well, the name indicates that they have something to do with the Danish capital. Clearly, we mean Niels Bohr - a natural leader of the group - and the people who worked with him and/or visited him in Copenhagen in the mid 1920s - especially Werner Heisenberg.

Max Born is of course a key co-author of the Copenhagen interpretation - after all, he supplied the probabilistic interpretation and received a Nobel prize for that. Wolfgang Pauli has also spent some time over there and he would also sign to the principles of the Copenhagen interpretation. The number of top physicists who may be considered parts of the Copenhagen school of thought is much larger - although some of them worked on topics that were further from the "foundations". Let me mention Lise Meitner and Carl Friedrich von Weizsäcker as two "nuclear" examples. I am also confident that people such as Paul Dirac would essentially subscribe to the Copenhagen interpretation.

There was no fundamental disagreement about the meaning of quantum mechanics among those people. Obviously, many other people such as Albert Einstein, Erwin Schrödinger, or Louis de Broglie didn't ever accept the Copenhagen interpretation but they didn't have any alternative. The Copenhagen interpretation works well and we don't need another hero.



Gabriela Gunčíková - Tina Turner: We Don't Need Another Hero. Czech Slovak Superstar II. Yes, all Czech girls and women between 17 and 71 years of age look just like her.

Subjectivity of the wave function

The first major confusion - or propagandistic distortion - is linked to the interpretation of the wave function. The Copenhagen folks were very carefully applying positivism. That means that they refused to talk about properties of physical systems that can't be measured unless some observations or consistency of the predictions make it necessary to talk about them - an attitude that became essential with the birth of quantum mechanics. In this sense, they uniformly rejected the assumption of "realism". If the observations are described by a framework that doesn't contain any "real and objective" things or properties before the measurement but the measurements may still be predicted, then that is how things should be.

Lots of fringe stuff, garbage, and crackpottery was later written by various people who weren't really part of the Copenhagen school of thought but who found it convenient to abuse the famous brand. That's why one can also hear that the Copenhagen school may (or even must) interpret the wave function as a real wave that collapses much like a skyscraper when it's hit by an aircraft on 9/11.

But nothing like that has ever been a part of the Copenhagen school of thought. If you open any complete enough description of the Copenhagen interpretation or if you look at Bohr's or Heisenberg's own texts, you will invariably see something like the following six principles:
  1. A system is completely described by a wave function ψ, representing an observer's subjective knowledge of the system. (Heisenberg)
  2. The description of nature is essentially probabilistic, with the probability of an event related to the square of the amplitude of the wave function related to it. (The Born rule, after Max Born)
  3. It is not possible to know the value of all the properties of the system at the same time; those properties that are not known with precision must be described by probabilities. (Heisenberg's uncertainty principle)
  4. Matter exhibits a wave–particle duality. An experiment can show the particle-like properties of matter, or the wave-like properties; in some experiments both of these complementary viewpoints must be invoked to explain the results, according to the complementarity principle of Niels Bohr.
  5. Measuring devices are essentially classical devices, and measure only classical properties such as position and momentum.
  6. The quantum mechanical description of large systems will closely approximate the classical description. (The correspondence principle of Bohr and Heisenberg)
Note that the very first point says that the wave function is a collection of numbers describing subjective knowledge. That doesn't mean that in practice, everything will always be subjective - or whatever the spiritual people have attributed to quantum mechanics. Of course, the constant interactions between parts of the world - and different people - pretty much guarantee that they have to agree about many "objective properties". But as a matter of principle, this rule is important for quantum mechanics and Werner Heisenberg has never left any doubts that this is how one has to interpret it.

Heisenberg would often describe his interpretation of the wave function using a story about a guy who fled the city and we don't know where he is but when they tell us at the airport they saw him 10 minutes ago, our wave function describing his position immediately collapses to a smaller volume, and so on. This "collapse" may occur faster than light because no real object is "collapsing": it's just a state of our knowledge in our brain.

In practice, everyone can use pretty much the same wave function. But in principle, the wave function is subjective. If the observer A looks at a quantum system S in the lab, he will use a wave function where S has a well-defined sharp spin eigenstate as soon as the spin of S is measured by A. However, B who studies the whole system A+S confined in a lab won't "make" any collapse, and he evolves both S and A into linear superpositions until B measures the system. So A and B will have different wave functions during much of the experiment. It's consistent for B to imagine that A had seen a well-defined property of S before it was measured by B - but B won't increase his knowledge in any way by this assumption, so it is useless. If he applied this "collapsed" assumption to purely coherent quantum systems, he would obtain totally wrong predictions.

So the wave function is surely subjective if one wants to obtain a universal description of the world. It's a collection of probability amplitudes that may be combined in various ways, before the squared absolute values of the combinations are interpreted as probabilities. All probabilities of physically meaningful events may be calculated in this way - as the squared absolute value of some linear combination of the probability amplitudes.

No collapse in the principles of the interpretation

A widely propagated myth is that the Copenhagen interpretation is all about the "collapse". However, if you look at the six principles above, there is not even a glimpse of a comment about a "collapse" because it's not needed. The notion of an objective collapse was introduced by John von Neumann in 1932 and he was clearly not a part of the Copenhagen school of thought anymore. Comments that Heisenberg later switched to an "objective wave function" or an "objective collapse" are untrue, and even if these legends were true, these new opinions wouldn't be a part of the Copenhagen interpretation and, more importantly, they wouldn't be valid.

Because the wave function is subjective, see rule 1, everything that happens with the wave function has to be subjective as well.

Consider a cat. You will evolve a wave function and the final state is "0.6 alive + 0.8i dead." (The fact that the actual state is not pure in any useful sense will be discussed later.) When you observe the cat, it's - unexpectedly - alive. Once you know that the cat is alive, it becomes a fact. You have to use this new knowledge in all your subsequent predictions and retrodictions if they're supposed to be at all accurate. Or valid, for that matter. I think that the previous statement is totally obvious.

However, people invent lots of nonsensical gibberish in order to obscure what's actually going on even though it is fundamentally very clear.

For example, you will hear all the time that it is so difficult to get a "collapse" and there must be some complicated gadget or mechanism that does so. But if you realize that those things are probability amplitudes rather than potatoes, you must see that absolutely nothing has to be supplemented to "get" the collapse.

The laws of physics predict that with the state above, there is a 36% probability that we will measure the cat to be alive and a 64% probability that it is dead. Just to be sure, there is a 0% probability that there will be both an alive cat and a dead cat. The last sentence, while totally obvious, is once again being obscured and crippled by pretty much everyone who has ever said that he sees a problem with the Copenhagen interpretation.

Once decoherence eliminates the off-diagonal elements, the density matrix for the cat is
rho = 0.36 |alive> <alive| + 0.64 |dead> <dead|
The diagonal entries of the density matrix, 0.36 and 0.64, are the probabilities that we will get either of the results. But the result "we will have both types of a cat" isn't among the options with nonzero probabilities at all, so it will certainly not occur. Only one of the options with the nonzero entries, "dead" or "alive", will occur, and the probabilities are 64% and 36%, respectively.

So one of them has to occur and the "symmetry" between them surely has to be broken. That's what the formulae imply. There is no possible answer of the form "half-dead, half-alive", so the latter result can't be observed. If you used another basis where a "half-dead, half-alive" state would be one of the basis vectors, the off-diagonal elements of the density matrix wouldn't ever go to zero and the interpretation of the diagonal elements of the density matrix as "probabilities" would be illegitimate.

In this sense, while the density matrix (also a subjective tool to predict and explain!) isn't an "observable", it's still true that you may compute its eigenvalues - the allowed probabilities of observable microstates - and the corresponding eigenstates of the density matrix are those that can actually be observed with well-defined probabilities. If a state is not an eigenstate of the density matrix, it's not possible to imagine that this state will be realized after a measurement. The Hamiltonian evolves the density matrix and dictates which states are "observable" in the classical sense.
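To put the cat example in numbers, here is a small sketch of my own: build the pure state, let decoherence erase the off-diagonal elements in the alive/dead basis, and read off the eigenvalues of the resulting density matrix; one can also check that a "half-dead, half-alive" vector is not an eigenstate.

```python
# Numerical version of the cat example (illustration only).
import numpy as np

alive = np.array([1, 0], dtype=complex)
dead  = np.array([0, 1], dtype=complex)

psi = 0.6*alive + 0.8j*dead                  # the "0.6 alive + 0.8i dead" state
rho = np.outer(psi, psi.conj())              # pure-state density matrix

rho_decohered = np.diag(np.diag(rho))        # decoherence kills the off-diagonal elements
print(np.diag(rho_decohered).real)           # [0.36 0.64] -- the probabilities of alive/dead
print(np.linalg.eigvalsh(rho_decohered))     # the same numbers, as eigenvalues of rho

# A "half-dead, half-alive" state is not an eigenstate of the decohered rho, so it is not
# one of the outcomes that can be perceived with a well-defined probability:
half = (alive + dead)/np.sqrt(2)
print(np.allclose(rho_decohered @ half, (half @ rho_decohered @ half) * half))  # False
```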

So when you realize that the numbers 36% and 64% are not piles of potatoes but probabilities, it's very clear that once we learn that the cat is alive, even though the chance was just 36%, the number 64% has to be replaced by 0% while 36% jumps to 100%. We know that the cat is alive so for us, it's a fact. It's nonsensical to try to "preserve" the dead option because the dead option was not realized.

It makes no sense to claim that it's "predetermined" that the cat would be seen as alive. The free-will theorem, among other, morally equivalent results, shows that the actual decision whether the cat is seen alive or dead has to be made at the very point of the spacetime where the event (measurement) takes place; it can't be a functional of the data (any data) in the past light cone.

One more comment, linked to the recent discussions about Everett's "many worlds", is the following. People often say that it is unfair that some terms disappear. And they say that the disappeared terms should exist in separate worlds, and all this crap. But it is the very meaning of probability that only one of the options occurs and the others just disappear once we know that they were not realized. The actual event we were trying to predict breaks any "democracy" between the different possible results that were "on equal footing" prior to the actual event.

Even more importantly, people often say that "rho = rho1 + rho2" decomposition of the density matrix means that there are "two worlds", one that is described by "rho1" AND one that is described by "rho2". (Similarly for "psi", but for "rho", the comments are more clear.) But this is a complete misunderstanding of what the density matrix and addition means. The density matrix is an operator encoding probabilities - its eigenvalues are predicted probabilities. And we're just adding probabilities, not potatoes.

What does it mean to add probabilities? It doesn't mean "AND" at all! If "P(A and B)" vanishes i.e. if A and B are mutually exclusive, then "P(A) + P(B) = P(A or B)". You see the "or" word over there? It's always "OR"! So whenever you ADD terms in the density matrix - and similarly the state vector - it always means "OR" rather than "AND"! So the existence of the two terms can in no way imply that there should exist both options or both worlds. It's complete rubbish. People who say that the two terms in the wave function imply that there have to exist two real objects somewhere in the "multiverse" are making an error that is as childish as confusing addition and multiplication. Literally. They're just confused about basic school-level maths. And you can't just confuse the words "and" and/or "or" because if you do, your whole framework for logic becomes flawed.

Let me rephrase a point of the "Lumo interpretation" clearly: you describe the system by a density matrix evolving according to the right equation and it is always legitimate to imagine that the world collapsed to an eigenstate of the density matrix, with the probabilities of the different eigenstates given by the corresponding eigenvalues of the density matrix. Incidentally, this also works for pure states, for which "rho = |psi><psi|". In that case, "psi" is the only eigenstate of "rho" with the eigenvalue of "1", so you may collapse into "psi" with 100% probability, which leaves "rho" completely unchanged. ;-) The only illegitimate thing to imagine is that the world has collapsed into a state which is not an eigenstate of "rho".

Uncertainty principle

The third rule is the uncertainty principle. It means that generic pairs of properties, such as "x" and "p" or "J_x" and "J_z" - but pairs of projection operators describing various Yes/No properties of a system could be even better examples - can't have well-defined values at the same moment. Especially John von Neumann, before he began to say silly things, liked to emphasize that the nonzero commutators and the Heisenberg uncertainty principle are the actual main difference between classical physics and quantum physics. If the commutators were zero, the evolution of the density matrix would be equivalent to the evolution of a classical probability distribution on the phase space.



Because the projection operators P and Q corresponding to two Yes/No questions about a physical system typically don't commute with one another, it is totally illegitimate to imagine that during its evolution, a quantum system had well-defined values of both Yes/No properties, P and Q. It just couldn't have had them because P and Q don't commute and they don't usually have any common eigenvectors (unless there are eigenvectors of the commutator [P,Q] with a vanishing eigenvalue).
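Here is a tiny numerical illustration of this point (my own, using spin-1/2 projectors): the projector onto "spin up along z" and the projector onto "spin up along x" have a commutator whose eigenvalues are ±i/2, nowhere zero, so they share no eigenvectors and the two Yes/No questions cannot both have sharp answers.

```python
import numpy as np

P = np.array([[1.0, 0.0], [0.0, 0.0]])   # projector onto |up_z>
Q = np.array([[0.5, 0.5], [0.5, 0.5]])   # projector onto |up_x> = (|up_z> + |down_z>)/sqrt(2)

C = P @ Q - Q @ P                         # the commutator [P, Q]
print(C)                                  # [[0, 0.5], [-0.5, 0]] -- nonzero
print(np.linalg.eigvals(C))               # eigenvalues +-0.5i, none of them vanishing
```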

This principle is the main "underlying reason" why the GHZM experiment or Hardy's experiment produce results that are totally incompatible with classical or pre-classical reasoning. The classical reasoning is wrong, the quantum reasoning is right - and the nonzero commutators in the real world are the main reason why the classical reasoning can't agree with the observations.

Complementarity principle

Bohr's favorite principle says that systems exhibit both particle-like and wave-like properties, but the more clearly you can observe one of them, the more obscure the other has to be, and vice versa. In some sense, this principle just follows from the uncertainty principle for "x" and "p" because particle-like behavior occurs for measurable states that are close to eigenstates of "x" while the wave-like properties are seen when the measurable states are close to eigenstates of "p" - with a long enough wavelength (more precisely, long enough features of any kind in the wave) so that the wave properties may be seen.

In some sense, the complementarity principle is totally uncontroversial - unless you are Shahriar Afshar, of course. ;-) So I won't spend more time with that.

Measurement devices are classical

The fifth rule says that the measurement devices follow the rules of classical physics. This is another source of misunderstandings.

The Copenhagen school surely didn't want to say that quantum mechanics couldn't be applied to large systems. Indeed, many people from the Copenhagen school were key researchers who helped to show that quantum mechanics works for large systems including molecules, bubble chambers, solids, and anything else you can think of.

Instead, this rule was meant as a phenomenological rule. If you measure something, you may assume that the apparatus behaves as a classical object. So in particular, you may assume that these classical objects - especially your brain, but you don't have to go up to your brain - won't ever evolve into unnatural superpositions of macroscopically distinct states.

Is that true? Is that a sign of a problem of the Copenhagen interpretation?

It is surely true. It's how the world works. However, one may also say that this was a point at which the Copenhagen interpretation was incomplete. They didn't quite understand decoherence - or at least, Bohr, who probably "morally" understood what was going on, failed in his attempts to comprehensibly and quantitatively describe what he "knew".

However, once we understand decoherence, we should view it as an explicit proof of this fifth principle of the Copenhagen interpretation. Decoherence shows that the states of macroscopic (or otherwise classical-like) objects whose probabilities are well-defined are exactly those that we could identify with the "classical states" - they're eigenstates of the density matrix. The corresponding eigenvalues - diagonal entries of the density matrix in the right basis - are the predicted probabilities.

Because the calculus of decoherence wasn't fully developed in the 1920s, the Copenhagen school couldn't have exactly identified the point at which the classical logic becomes applicable for large enough objects. However, they were saying that there is such a boundary at which the quantum subtleties may be forgotten for certain purposes and they were damn right. There is such a (fuzzy) boundary and we may calculate it with the decoherence calculus today. The loss of the information about the relative phase of the probability amplitudes between several basis vectors is the only new "thing" that occurs near the boundary.

Again, this point was the only principle of the Copenhagen interpretation that was arguably "incomplete" but their proposition was definitely right! To make this point complete, one didn't have to add anything new to quantum mechanics or distort it in any way. One only needs to make the right calculation of the evolution of the density matrix in a complicated setup. In that way, one proves that they always treated the measuring devices in the right way even though they couldn't fully formulate a fully quantum proof why it was the right way.

Correspondence principle

Both Bohr and Heisenberg also emphasized the correspondence principle, i.e. that the quantum equations reduce to the classical ones in the appropriate limit. If you study e.g. the evolution of the expectation values of "x" and "p", they will evolve according to the classical equations. It's the Ehrenfest theorem.
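To see the Ehrenfest theorem at work numerically, here is a small self-contained sketch of my own (not from the original text), in units hbar = m = omega = 1: a Gaussian packet displaced in a harmonic oscillator is evolved with the split-step Fourier method, and the quantum expectation value of x is compared with the classical trajectory x0*cos(t).

```python
# Ehrenfest theorem check: <x>(t) of a displaced Gaussian in a harmonic oscillator
# follows the classical trajectory x_cl(t) = x0*cos(t) (hbar = m = omega = 1).
import numpy as np

N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

x0 = 3.0                                   # initial displacement of the packet
psi = np.exp(-(x - x0)**2 / 2.0).astype(complex)   # ground-state-width Gaussian, at rest
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

V = 0.5*x**2                               # harmonic potential
dt, steps = 0.005, 2000                    # evolve up to t = 10

expV = np.exp(-0.5j*V*dt)                  # half-step potential propagator
expT = np.exp(-0.5j*k**2*dt)               # full-step kinetic propagator

for _ in range(steps):
    psi = expV*psi
    psi = np.fft.ifft(expT*np.fft.fft(psi))
    psi = expV*psi

t = steps*dt
x_mean = np.sum(x*np.abs(psi)**2)*dx       # quantum expectation value <x>(t)
print(x_mean, x0*np.cos(t))                # the two numbers agree to high accuracy
```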

As discussed in the previous point, it is not enough to show that the world of classical perceptions emerges in the limit: we also need to know that the right "basis" becomes relevant in the classical limit. Nevertheless, with the extra ingredients that have been demonstrated in recent decades - I mean decoherence - we know that the classical perception and choice of states does occur in the appropriate classical limit.

So all the points of the Copenhagen interpretation were right and they were only incomplete because they didn't explicitly calculate where the classical-quantum boundary really is. I need to emphasize that this boundary is fuzzy. And this boundary doesn't mean that quantum mechanical laws ever break down. They never break down. What's true is that for large enough systems, one may use - and should use - the approximate classical scheme (the word "approximate" sounds too scary but in reality, these approximations are super excellent for all practical and even most of the impractical purposes) of asking questions because one may show that it becomes legitimate.

Well, I surely don't expect that people will stop being hysterically angry about the Copenhagen interpretation and its alleged flaws - which don't exist. But at least, I would like to see the chronic Copenhagen haters acknowledge that the Copenhagen interpretation says what it says and that it clearly doesn't have any demonstrable flaws. It has no internal inconsistencies and it is not in contradiction with any observation made as of today.

And that's the memo.