
Tuesday, June 14, 2011

D0 rejects CDF's claim on top-antitop mass difference, too

First: major LHC detectors reach one inverse femtobarn!

By 21:10 Prague Summer Time tonight, the CMS and ATLAS detectors at CERN have each collected 1/fb of data: see the viXra blog.

This was the original plan for the whole year 2011 - and it has been reached in mid-June! The figure includes 47/pb from 2010, but 47/pb is now collected in 36 hours on average, so on Wednesday the 2011 run alone will probably reach a femtobarn, too. About 95% of the data have been recorded, so in a few more days the experiments will also surpass 1/fb of recorded data.

The LHC could collect 4/fb by the end of 2011 and, because of expected luminosity increases, 15/fb by the end of 2012, which should be enough to discover the Higgs boson even in the most inaccessible mass range. In fact, if the Higgs boson isn't in the data that have already been collected by now, then it's probably lighter than 135 GeV. It would follow that the vacuum of the Standard Model is unstable and needs new, SUSY-like particles to be saved.

Now, let's look at the opposite side of the Atlantic

A Japanese translation of the text below is available.

In March, Tommaso Dorigo hyped a preprint by the CDF Collaboration,
Measurement of the mass difference between top and antitop quarks
As far as I remember, I hadn't discussed that paper on this blog because I considered it, and still consider it, a very bad, offensive piece of work. At any rate, they claimed that the top quark was much lighter than the antitop antiquark:
M(top) - M(antitop) = -3.3 +- 1.4(stat) +- 1.0(syst) GeV
Using this lousy measurement, they wrote a paper phrased in such a way that it offered 2-sigma evidence that the CPT "hypothesis" (which, of course, implies that antiparticles have to have exactly the same masses as particles) was wrong.



CDF, a pretty detector that has produced lots of rubbish lately (and maybe it's the people behind it who did it)

This is just offensive. It's the kind of shoddy research with the most sensationalist claims supported by the weakest possible (and, independently of that, flawed) evidence that belongs to the climate "science" but surely not to particle physics.




Today, the CDF's competitors at D0 have released their paper on the very same question:
Direct measurement of the mass difference between top and antitop quarks
There is no sign of the mass difference, of course. In the previous D0 paper, the very same mass difference was +3.8 +- 3.4(stat) +- 1.2(syst) GeV - yes, the mass difference was going in the opposite direction from the CDF claim! The newest D0 paper says it is
M(top) - M(antitop) = +0.8 +- 1.8(stat) +- 0.5(syst) GeV
Again, it has the opposite sign from the CDF claim, but the mass difference is as compatible with zero as you could hope. The accuracy of the measurement remains disappointing, but the preposterous idea suggested by the CDF paper - that the experiments were beginning to uncover a huge, multi-GeV mass difference between the top and the antitop - has clearly been debunked.
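For the record, a minimal sketch of how such numbers translate into "sigmas", assuming the usual crude convention of adding the statistical and systematic errors in quadrature (an assumption of this sketch, not necessarily exactly what the collaborations do):

    import math

    def deviation_in_sigmas(value, stat, syst):
        # Combine stat and syst errors in quadrature and express the
        # central value in units of the total error.
        return value / math.hypot(stat, syst)

    print(deviation_in_sigmas(-3.3, 1.4, 1.0))  # CDF: about -1.9 sigma from zero
    print(deviation_in_sigmas(+0.8, 1.8, 0.5))  # new D0: about +0.4 sigma from zero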

Something is really wrong with the CDF detector and/or the CDF Collaboration and/or their methods.



Newsy Science: this is actually a fair and clear popular report on the Wjj 150 GeV events

In March, I had some exchanges with an experimenter on Dorigo's blog. The first thing I wrote was:
I think that the interpretation chosen by this [CDF] paper is shameful sensationalism. Paying attention to 2-sigma deviations is bad enough, but using 2-sigma deviations to "disprove" one of the most important principles of the discipline is really bad taste.

Whoever had the idea to interpret their inability to measure the masses of tops and antitops more accurately in this far-reaching way should be ashamed.
Sean agreed with me. Andrea Giammanco, an experimenter who is not even a CDF member, tried to defend their work:
Hi Lubos,

I suspect you didn't actually read the paper.
I didn't find a single sentence of the paper where they claim that this result "disproves" anything. If I missed it, please point me to the exact line.

The most bold sentence that I found is that it deviates at 2 sigma level from the CPT-symmetry expectation. What is your criticism, exactly? That they should have written "it agrees at 2 sigma level with the CPT-symmetry expectation"? Whomever is able to read until that point in the paper also knows that the two statements are the same.

I must also add that I can't understand the criticism in some of the comments above for looking for an effect that "cannot" be there.

I find healthy to look for deviations even when you have no reason whatsoever to expect a deviation; I understand in general the criticism that, with finite resources, these must be prioritized, but this is a particularly uncontroversial case because data are anyway available as a by-product of all the rest of the Tevatron program, and performing this study doesn't require any other cost than a few months of salary of just two people (*).

There are several instructive historical examples of important effects that could have been observed before, because the data for their discovery were already available or easy to produce, but none of the "owners" of the data had considered to look for that particular effect (or, even worse, had actually stumbled into a 2-3 sigma deviation and paid no attention, remodelling the background or inflating the systematic uncertainty to take it into account and therefore unwittingly hiding it under the carpet.)

(*) their number and identity can be seen here:
http://www-cdf.fnal.gov/physics/new/top/2010/mass/TMT_massdiff_p28_public/
So I gave him a longer explanation:
Dear Andrea, what I find offensive is their arrogant suggestion - included to the very title, and much of the abstract as well as the paper - that they have measured a nonzero difference between top and antitop masses.

To deduce what are the masses of the particles from their measurements, you effectively have to use some quantum field theory at one level or another. Quantum field theory implies that the mass difference is zero. So the most accurate measurement is to do nothing and to conclude that the difference is exactly zero.

In this sense, the most outrageous sentence about that paper is the last one, which claims that the measurement of the top-antitop mass difference - a whopping 3.3 GeV according to the CDF paper - is the "most accurate" measurement of the quantity.

This is just bullshit.

The most accurate measurement of the mass difference, done with a clever definition of the mass, is 0.000000 +- 0.000000 GeV, and neither they nor you have presented any real evidence that this is not the case. The CDF result is a striking sign of the immense inaccuracy with which they measure the mass of the heaviest quark, so it's really disingenuous for them to claim that they've just produced the most accurate result.

They have produced one of the most *inaccurate* results for this quantity ever written down.

It is healthy to look for whatever effects but it is totally unhealthy to publish papers claiming to have measured effects that almost certainly don't exist, without having any real evidence for such claims.

CDF should have verified that they don't have any 5-sigma deviation, and because they don't, they should use the obvious identity mass(top) = mass(antitop) as a method to make their other measurements more accurate. In other words, they should have used the top-antitop average mass, 172.5 GeV, as the actual input to use for other measurements. They failed to do all such things which really indicates that most of the mass measurements depending on the matter-antimatter difference are likely to have similar 3.4-GeV-like errors, too.

They have surely nothing to boast about in this context, so it is irritating that they apparently do.

Cheers
LM
Obviously, they just don't feel that way. They produced the worst (largest) bullshit number ever describing the top-antitop difference - over 3.3 GeV, even though it is obviously 0.000 GeV and many others have come really close to that - and they don't see any problem with claiming that the 3.3 GeV result for the mass difference is the "most accurate measurement". Holy crap.

Obviously, the conversations were completely unproductive. Maybe it's because the experimenters don't actually understand the laws of physics. Maybe they're being honest but they just don't know that suggesting they have measured a nonzero mass difference between a particle and its antiparticle because of a 2-sigma bump is just idiotic. It's like claiming that the right triangle with legs 3 and 4 has a hypotenuse of 5.05 plus or minus 0.025 meters, which deviates from the Pythagorean "hypothesis" by 2 sigma.

Hopefully, they at least understand the experiments, so they should be able to understand that D0 has just confirmed that I have been right all along: the CDF paper actually made the most inaccurate measurement of the m(top)-m(antitop) mass difference ever, the collaboration should be ashamed of that paper, and it had absolutely no moral right to attach far-reaching interpretations to its shoddy measurement.

Claims about the CPT violation (or, equivalently, about the nonzero matter-antimatter mass difference) are extraordinary claims and if you're a serious person, you simply shouldn't make them based on some illusions or 2-sigma signals.

I think that the CPT-symmetry is exact, a direct consequence of the Lorentz symmetry and analyticity. Just Wick-rotate your scattering problem to the Euclidean spacetime, rotate the "tz" plane by 180 degrees, and Wick-rotate back to the Minkowski space. You will get the CPT image of the original process. Because the analytic continuation and the "tz" rotation - which is a kind of the Lorentz boost - were the only operations and they're symmetries, it follows that CPT has to hold.

(You may be confused why the reversal of the "tz" plane gives you CPT rather than PT. While the obvious geometric transformation induced by this operation is PT, the particles also start to move backwards in time after this PT, and a particle moving backwards in time has to be interpreted as an antiparticle. So the full transformation is CPT, not PT, and one has to be careful to properly derive that the non-geometric "C" factor is included there as well.)
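Written schematically - with conventions kept loose, so take it as a sketch of the argument rather than a careful proof - the chain of operations is

    \begin{aligned}
    t \to -i\tau &\qquad \text{(Wick rotation to Euclidean signature)}\\
    (\tau, z) \to (-\tau, -z) &\qquad \text{(an ordinary rotation by $\pi$ in the Euclidean $\tau z$ plane)}\\
    \tau \to i t &\qquad \text{(rotation back to Minkowski signature)}
    \end{aligned}
    \qquad \Longrightarrow \qquad (t, z) \to (-t, -z),

which is geometrically the PT reflection; reinterpreting the backward-in-time lines as antiparticles supplies the non-geometric C. One immediate consequence of the resulting CPT invariance is the exact equality m(X) = m(anti-X) for every particle species - the very relation the CDF paper pretended to test.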

But even if you had doubts about whether the CPT symmetry will remain exact forever, it has been exact according to all measurements done so far. So a claim that it doesn't hold - and that it's even violated at the 2-percent level by ordinary top quarks - is an extraordinary claim, and 2-sigma deviations are surely not good enough to support it.

Friday, June 10, 2011

D0 denounces CDF for 4-5 sigma claim on Wjj 150 GeV bump

The claim of the CDF Collaboration that they had observed a new particle similar to a Z'-boson near 150 GeV, decaying to two jets and produced together with a W-boson, has just been dismissed by their colleagues and competitors at the same Tevatron, the D0 Collaboration, which works with the rather ugly and non-compact detector in the picture:
Study of the dijet invariant mass distribution in
pp̄ → W(→lν) + jj final states at √(s) = 1.96 TeV
That's despite the fact that the significance of the CDF claim grew to 4.8 standard deviations - formally approximately a 2-parts-per-million risk of a false positive - before it dropped to about 4.2 standard deviations (a 27-parts-per-million risk of a false positive). Many people were almost certain that the signal had to be real.
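Just to check those quoted figures (assuming the usual two-sided Gaussian convention for converting "sigmas" into a false-positive risk), one may compute:

    # Converts a number of standard deviations into a two-sided p-value.
    from scipy.stats import norm

    for sigma in (4.8, 4.2):
        p = 2 * norm.sf(sigma)   # two-sided Gaussian tail probability
        print(f"{sigma} sigma -> {p:.1e} (~{p * 1e6:.0f} per million)")

    # 4.8 sigma -> 1.6e-06 (~2 per million)
    # 4.2 sigma -> 2.7e-05 (~27 per million)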




D0 has analyzed 4.3 inverse femtobarns of their data. They formulated their opposition to CDF in this way: the probability that their (D0) data are compatible with CDF's claim of a dijet resonance near 145 GeV with a cross section of 4 picobarns is just 0.000008, i.e. 8 parts per million.

Because D0 has obtained an agreement with the Standard Model, using a comparable amount of data to the original CDF paper, and because I believe that it's more nontrivial to get an exact agreement than to introduce a random error and a spurious bump, I am almost completely convinced that D0 is right, CDF is wrong, and there is no bump. As a jury member, I say: Abazov beats Aaltonen! ;-) This expectation - that conservative physicists are almost always right and the "progressive" ones are almost always wrong - holds for the same reason that most big mutations of an organism are detrimental.

If you hear someone saying that near-5-sigma "proofs" are incontrovertible, tell him or her about this story!



The CDF Bactrian (2-hump) camel is actually a D0 Arabian (one-hump) camel i.e. a dromedary. The 150 GeV hump was a Fata Morgana. Picture of D0 via Dorigo: click to get to his blog.

It's hard to figure out what CDF is doing wrong. But the evaluation of the jets and leptons is very complicated. Ideally, the D0 people would try to apply their algorithms to the CDF data, although that isn't quite possible because even the raw data differ, owing to the different detectors.

Needless to say, the likely CDF debacle makes their top quark-antiquark asymmetry doubtful as well, and I have just increased my subjective probability that this 3+ sigma result is also wrong to 90 percent.

CMS at 190/pb: LHC avoids black hole production

The CMS Collaboration has published a new paper that also answers the question whether the Earth is going to be swallowed by a man-made black hole.




You can be sure that everything is fine and the Earth isn't being eaten yet. To be sure, here are the webcams near the CMS experiment and the LHC building. You may watch them to reassure yourself that our planet is safe. ;-)



If you want to check that the LHC is safe with some music...

Indeed, even after 190 inverse picobarns, there's no black hole:
Search for black holes in pp collisions at sqrt(s) = 7 TeV
They studied final states with many - e.g. 10 - objects such as jets, leptons, and photons that carry a high total energy, e.g. 1 TeV or more. Such events would be a sign of an evaporating black hole. An agreement with the Standard Model background is found. I choose not to get excited by the occasional 2-sigma-like excess etc.

If there were low-energy string theory in Nature around us, I would still expect the string scale to be of order 3 TeV but the quantum gravity scale - the mass of the lightest black holes worth the name - would probably be significantly greater than that (an order of magnitude or more) and inaccessible at 7 TeV and maybe even at 14 TeV. So I personally view this experiment as a formality. But surprises may come at unexpected times...

Saturday, June 4, 2011

Indian SUSY island barely survived ATLAS' visit to 165/pb

On their conference notes page at CERN, the ATLAS Collaboration began to produce many new papers that use the 2011 data. At this moment, the fifth one is
Search for squarks and gluinos using final states with jets and missing transverse momentum with the ATLAS detector in sqrt(s) = 7 TeV proton-proton collisions (PDF)
They studied lepton-free events - dijet, three-jet, four-jet, with possible missing transverse energy - in order to find supersymmetry's gluinos. They didn't find any significant evidence for them, which raised the lower bound on the gluino mass to 725 GeV in general, and to about 1 TeV if the gluino mass is equal to the squarks' mass.



Miss Gluino, a vampire alchemist, is looking pale in the new paper. She will have to eat some blood, gore, and al-chemical food and become more massive in order to survive.




Clearly, the LHC is starting to penetrate into the territory of the parameter space that was still allowed by all the pre-LHC data. As an example, the Indian supersymmetric island I was discussing in December 2010 survived once again, despite its having pretty low masses. But it had to be lucky.

The light stop is at 390 GeV, the light sbottom is at 720 GeV, other squarks are at 800-900 GeV, while the gluinos are at 934 GeV, which differs from the squark masses (so only the weaker 725 GeV bound applies). As you can see, the conditions of the new ATLAS paper are satisfied. But a single improvement that happens to produce the same negative outcome will eliminate the Indian supersymmetric island, too.

And maybe it will not and something more fascinating will take place.

The required amount of data has actually already been accumulated by the LHC: each of the two main detectors has collected over 650/pb at this point, about 4 times the dataset used in the paper we are discussing now, and the pile of data is just waiting to be processed by the experimenters.

The latest gluino paper from ATLAS reports three events (two three-jet and one one-jet) close to an effective mass of 1.5 TeV, but they're not enough to excite me as much as the 3.34 TeV bump in the dijet spectrum.

Thursday, June 2, 2011

ATLAS: Standard Model passes 3 tests even at 205/pb

A rather remarkable triplet of similar events with the dijet energy near 3.33 TeV is the most spectacular hint of a new particle

Note that two days ago, I discussed the string theory Z'-boson explanation of the 150 GeV CDF Wjj bump. The newest paper predicted another Z''-boson with the mass of 3.* TeV - page 7 top. With a modest dose of optimism, ATLAS has just confirmed a stringy prediction.

But not so fast...



Phil Gibbs was the first man who spotted three radically new ATLAS preprints on a CERN page that at least two of us constantly watch. ;-)
Update of the Search for New Physics in the Dijet Mass Distribution in 163/pb of pp Collisions at sqrt(s) = 7 TeV Measured with the ATLAS Detector (PDF)

Search for high-mass states with one muon plus missing transverse momentum in proton-proton collisions at sqrt(s) = 7 TeV with the ATLAS detector (PDF)

Search for high-mass dilepton resonances in pp collisions at sqrt(s) = 7 TeV with the ATLAS experiment (PDF)
Until now, the papers by CMS and ATLAS had only used at most 43 inverse picobarns of the 2010 data. Suddenly, they jumped to 205/pb, 163/pb of which were collected in 2011.




The 43/pb of luminosity collected in 2010 was small enough that the chances of getting new exciting results were a matter of wishful thinking - a prize in a lottery (not necessarily the highest one) that physicists could win. Papers predicting new physics to be seen after 43/pb were "contrived" - the authors resembled gamblers who just wanted to imagine that sensations and Nobel prizes were waiting for them right around the corner.

While most authors who predict and discuss new physics at the LHC usually talk about 1/fb of the 7 TeV data or more, I think that those 200/pb = 0.2/fb are already starting to be significant, and the three new papers that have found nothing are starting to substantially increase the probability that the LHC will see (almost) nothing - except for some kind of Higgs sector.

I think that the first papers with a qualitatively higher luminosity contain much more information than the generic papers that were published "routinely". It's pretty clear that none of the standard methods can find anything new in 43/pb of the data and the new papers based on the 43/pb data quickly became redundant.

With the 205/pb of data, it's different. The signal-to-noise ratio (the number of "sigmas") would more than double between 43/pb and 205/pb, so 2-sigma hints would have grown into roughly 4.5-sigma signals in the new ensemble. That didn't happen.
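The scaling assumed here is the standard rule of thumb that a statistical significance grows like the square root of the luminosity:

    import math

    L_old, L_new = 43.0, 205.0            # integrated luminosities in 1/pb
    boost = math.sqrt(L_new / L_old)      # significance scales like sqrt(L)
    print(round(boost, 2))                # ~2.18, i.e. "more than double"
    print(round(2.0 * boost, 1))          # a 2-sigma hint would become ~4.4 sigma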

However, three papers are still too few. It can't be excluded that these particular tests - the dijet mass distribution and muon+MET states - were published first exactly because they don't see anything, so no further checks of big and novel claims were needed (because there aren't any). However, the papers above already surely eliminate some models that were relatively close to the previous "boundary of our knowledge".

Some detailed impact of the three papers

The first paper on the dijet spectrum, extending this March 2011 preprint, mostly eliminates excited quarks and axigluons (and some RH black holes and perhaps other animals, but less clearly so). If you care about small hints, the first paper shows a 2-sigma excess at a dijet energy of 3.333 TeV :-) plus or minus 33 GeV or so, and they also show a picture of several events near this point. Because the expected number of events at even higher energies is lower, one event with a dijet energy of 4.040 TeV (the winner of the highest-energy championship) is enough to constitute a 1.8-sigma excess, too.

I would actually bet that the three events at 3.333 TeV could be real - not because it's a number that is guaranteed to excite crackpots but because the agreement in energy between the three events is just striking. They show the events at 3.32, 3.35, and 3.36 TeV. The spread is just 40 GeV - even though the bin into which they have been thrown is something like 200 GeV wide! The probability that all the differences between 3 numbers in a 200-GeV-wide bin are at most 40 GeV is small - about 10% - so it seems unlikely that it is a coincidence.

Well, I can make the case much stronger. There were exactly three events with the dijet energy between 3 TeV and 4 TeV. The probability that all 3 differences between the energies in the triplet are at most 40 GeV (which was the case with the 3.32, 3.35, 3.36 TeV) is just 0.5%. You might say that at a 99.5% confidence level, I showed that there is something real near 3.33 TeV. (I used a uniform distribution between 3 and 4 TeV but it's a good estimate because 3.33 TeV is not far from the center of the real distribution.) This 2.5-sigma argument of mine, if real, will grow to nearly 5 sigma if you use the 650/pb that have been accumulated as of today. Maybe, a 3.33 TeV particle rules. :-)
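A sanity check of these two probabilities: for three independent energies distributed uniformly over an interval of width W, the chance that their total spread stays below d is 3(d/W)^2 - 2(d/W)^3, and a quick Monte Carlo (a sketch, using the same uniform assumption as above) reproduces both numbers:

    import random

    def p_spread_below(d, W):
        # probability that the range of 3 uniform draws on [0, W] is <= d
        x = d / W
        return 3 * x**2 - 2 * x**3

    print(p_spread_below(40, 200))    # ~0.104 -> the "10%" figure for a 200 GeV bin
    print(p_spread_below(40, 1000))   # ~0.0047 -> the "0.5%" figure for the 3-4 TeV window

    # Monte Carlo cross-check of the second number
    trials = 200_000
    hits = 0
    for _ in range(trials):
        e = sorted(random.uniform(3.0, 4.0) for _ in range(3))
        hits += (e[-1] - e[0]) <= 0.040
    print(hits / trials)              # ~0.005, consistent with the formula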

The second paper, updating this March 2011 paper, eliminates new charged massive gauge bosons - W-prime bosons - because they would decay to missing energy (a neutrino) plus the charged lepton, electron or muon, that they were searching for (and perhaps it rules out other animals, too, but less clearly so). I find W-prime bosons just silly and marginally impossible even in theory (sequential standard model: WTF? SSM was the Czechoslovak Komsomol), so this particular conclusion is not a big deal, anyway. It requires some thinking to realize why these three papers don't exclude many models that are much more sensible.

The second paper shows the highest missing-transverse-energy event, which has 1.35 TeV of MET. The whole paper reveals some excess at many energies but it's always within 2 sigma. There is no clumping of similar events, so I can't use the argument that worked for the first paper.

Imminent flood of new results based on 2011 data

Within a few days, we should see a higher number of papers that use hundreds of inverse picobarns of the data and that will be more directly relevant for the search for more realistic new physics such as garden-variety supersymmetry. It's likely that in this case, ATLAS (the larger collaboration) will be faster in announcing their new results than the CMS (the smaller one). Note that after the 2010 run, it was the other way around.

Some of the potentially exciting results could be presented next Monday, in 3 days from now. Others may wait for the end of July, the EPS-HEP 2011 conference in Grenoble, the largest looming event in the field. For example, if the 3.33 TeV signal is real, each of the detectors may have 12 events strikingly accumulated near 3.32 TeV +- 20 GeV, while just a few between 3 and 4 TeV are elsewhere. This would already be a remarkable 7-sigma evidence for a new particle.

There may be many more signals that they already know. After the confusing recent leaks, omerta could have been imposed upon all the members and the announcement may be more grandiose as a result.

Low-energy excited strings at the LHC

Dieter Lüst, who was also excited by the possibility of a 3 TeV Z'' boson seen by ATLAS, has pointed out to me the following paper of theirs:
Dijet signals for low mass strings at the LHC
If string theory is realized in the real world as a braneworld with old large dimensions, this fact will be easily discoverable by the LHC. The paper shows that the textbook Veneziano-like amplitudes could be independent of many details of the compactifications, at the leading level, and could show up with huge cross sections, potentially right now.

If you forget that the 3.34 TeV particle is a Z''-boson, you might also say that it is a stringy harmonic. To make things numerologically funnier, the squared mass is 11.1 TeV^2. If you identify it with the N=2 excited mode of a string, the N=3 mode is at 11.1 x 3/2 = 16.65 TeV^2 which is the square of 4.08 TeV, very close to the other highest-energy event that was seen. ;-)

Whoever likes this argument will conclude that m(string) is equal to 2.35 TeV = 3.32 TeV / sqrt(2). Nice. Shmoits and Shwolins are already planning to buy lots of new underwear when they're reading these sentences and looking at the graphs above. ;-) Just to be sure, I am not certain that such signals will materialize in more complete data. Any of them. But they could. After many decades, a collider is running that is bringing us into a qualitatively new territory.
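For the record, the arithmetic behind this numerology uses the relation M_N^2 = N * M_s^2 for the string resonances, which is the convention implied in the text; identifying the 3.3 TeV events with the N = 2 level is, of course, just a guess:

    import math

    M2_level2 = 3.33**2                  # ~11.1 TeV^2, interpreted as the N = 2 level
    M_string = math.sqrt(M2_level2 / 2)  # string scale, ~2.35 TeV
    for N in (1, 2, 3):
        print(N, round(math.sqrt(N) * M_string, 2), "TeV")
    # N = 2 reproduces ~3.33 TeV; N = 3 gives ~4.08 TeV, near the 4.04 TeV event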

2.35 TeV is an excellent value for a string scale. Crackpots-slash-geniuses have already predicted that yablonons and magnetic monopoles exist exactly at the mass of 2.35 TeV. The former particle, named after a crackpot-slash-genius, could even be the first excited massive string mode. Moreover, the LHC was running for a while at the 2.36 TeV center-of-mass energy without realizing that it was fine-tuned to the string scale! :-) 2.35 TeV was also the mass of a fermion-triplet dark-matter candidate, while another paper quoted 2.35 TeV as the lowest bound on some dark matter mass. Good company! :-)

Tuesday, May 31, 2011

Seeing D-branes at the Tevatron and the LHC

As of 2011, no sane person seriously doubts that string theory is the right framework that describes the Universe around us. However, there remain uncertainties about the inner structure of the relevant string vacuum. String/M-theory possesses many four-dimensional solutions that qualitatively resemble our world.



Those solutions are connected in the configuration space of the theory and may be related by various dualities. However, the known stringy descriptions that are semi-realistic and weakly coupled may still be divided into several major categories:
  1. Weakly coupled heterotic E8 x E8 strings
  2. Heterotic M-theory in 11 dimensions; the strongly coupled limit of (1) with two end-of-the-world domain walls
  3. M-theory on singular 7-dimensional G2-holonomy manifolds
  4. Type IIA orbifold/orientifold braneworlds
  5. F-theory on local singularities
Somewhat independently of this classification, the models may reproduce the ADD old large dimensions - especially the braneworlds based on flat manifolds of extra dimensions in (4) and (5) - or Randall-Sundrum warped extra dimensions - especially some types of (5), and maybe (3) and others.




The largest group (5) also includes the "generic" models - that one may describe as F-theory on Calabi-Yau four-folds - that have been used by the anthropic people. Because I don't consider the large number of models in a class to be a positive argument in favor of a scenario, this anthropic interpretation of the large class (5) will be ignored.

However, even independently of that, the group (5) contains some subgroups of "simplified" and "more specific" models that may be imagined in various ways and have various interesting properties. In particular, the group (5) includes the Vafa et al. bottom-up F-theory model building with singularities as large as the E8 singularity. Also, (5) includes the simple "moral" T-duals of the category (4) - type IIB braneworlds with D3-branes and similar things occupying a nearly flat space.

Relationships between the scenarios

While one cannot identify each model with "twin city" models in all other groups in a one-to-one fashion, it's still true that for various compactifications, there are dualities which are sometimes many-to-one equivalences. Let me mention a few of the basic ones that are enough to connect all five groups.

The group (1) is obviously related to (2) because (2) is the strong coupling limit of (1). Also, if the Calabi-Yau manifold in (1) is written as a T3-fibration, one may use the heterotic/K3 duality and obtain (3), the M-theory compactification with a K3-fibered G2-holonomy manifold. So (3) is connected to (1), too.

Locally on the compact G2-holonomy manifold, one may also try to shrink some cycles and get to a type IIA description. So (3) is related by the type IIA/M-theory duality to (4). And (4) is at least in some cases a T-dual of (5) - the usual dualities between type IIA and type IIB string theories. Let me just re-emphasize that this simple list is not exhaustive. There are other relationships between the vacua in different groups, too. Chances are that if you consider a vacuum in one group, you may learn a lot if you find a complementary perspective on it using a vacuum from a different group.

Advantages

The group (1) is the most field-theory-like scenario in string theory. It usually agrees with the conventional supersymmetric and grand unified model building in field theory even though there are some extra characteristic string effects modifying the grand unification, too.

The group (2) is similar except that the 11th dimension may be pretty large, in which case one gets a higher-dimensional theory well beneath the Planck scale. At any rate, (1) and (2) naturally exploit the gauge coupling unification and other beautiful arguments of grand unified theories.

The group (3), much like (2), uses M-theory, the highest-dimensional description (if you don't count the two fiber-like extra dimensions in F-theory as dimensions). However, the group (3) doesn't contain the ends-of-the-world. Its singularities are pointlike, which some people might view as more natural.

The braneworlds (4) and (5) may break the coupling unification and other advantages but they may have natural explanations for the fermion mass hierarchy and other things.

It's also interesting to look at where the spectrum of the Standard Model is located. In (1), it lives everywhere in the 10D bulk (codimension 0). In (2), it lives on the 10-dimensional end-of-the-world domain walls of the 11D bulk (codimension 1). In (3), it sits at points of a 7-dimensional manifold (codimension 7). In (4), most of the matter lives on D6-branes and their intersections (codimension 3 or 6). In (5), it's mostly on D3-branes (codimension 6 but also possibly 4 and 2).

Braneworlds may have gotten some evidence

I would still say that (1) and (2) are the most motivated ones, but the possible observation of the 150 GeV new particle by the CDF, which might be a new U(1) gauge boson, has surely shifted my opinions a little bit, especially after I read a couple of cool articles about the realistic type IIB braneworlds. In fact, I hadn't previously read the following 3 papers:
There are many more articles that were needed for the list above but you may find some references in the papers above - e.g. in the last one. The last paper has already claimed that the stringy type IIB braneworlds may naturally explain the 140 GeV Tevatron bump we discussed yesterday.

I've seen various braneworld constructions but the Berenstein et al. 2001 construction strikes me as a very natural one. They study D3-branes on a nice orbifold of C^3 - an orbifold singularity that may occur in the 6 compact dimensions.

One orbifolds C^3 by a group known as Delta_{27}, the n=3 member of an infinite collection of finite groups Delta_{3n^2}. The group is generated by multiplications of z1, z2, z3 by third roots of unity and by the cyclic permutation of z1, z2, z3: note how extremely natural this group is! And this natural orbifold of the 6 extra dimensions pretty much produces the Standard Model - with 6 Higgs doublets.

Now, following the general description of Douglas and Moore for quiver diagrams, and particular derivations by Brian Greene et al. and others for the open string spectrum on the Delta orbifolds, Berenstein et al. have found out that one may get a very nice Standard-Model-like construction. It gives three generations - and in some sense, you may say that the number "3" is being linked to the number of complex compactified dimensions, so it is being "explained" here.

The fermionic spectrum, as described also in the 2008 paper mentioned above, looks very natural - and still "qualitatively differently" from the nice embedding in the grand unified theories. One has three stacks, morally U(3) x U(2) x U(1) gauge groups, and the charges of the fermions under them are

(1,1,0)
(2,0,0)
(-1,0,1)

(0,-1,1)
(0,2,0)
(0,0,-2)

Note that one includes all permutations of (2,0,0) and (1,1,0) - imagine that you flip the convention for the third sign - and the negatives of these vectors. Very natural, right? One may actually derive this spectrum from the non-Abelian orbifold of C^3 above.

However, you may define the hypercharge Y as the inner product of the six 3-vectors above with (-2/3,1,0) and one gets 1/3, -4/3, 2/3; -1; 2; 0. This hypercharge allows one to interpret the spectrum - six vectors above - as the left-handed quark doublet (3,2); anti-up-quark singlet (3bar,1); anti-down-quark singlet (3bar,1); left-handed lepton doublet (1,2); positron singlet (1,1); neutrino singlet (1,1) - an unnecessary right-handed neutrino.

Just to be sure, you may also define the right B-L quantum numbers of the spinors as the inner product of the six 3-vectors above with (-1/6,1/2,-1/2). A perfectly valid spectrum.
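A quick numerical check of these assignments - just reproducing the inner products quoted above - can be done like this:

    import numpy as np

    charges = np.array([
        ( 1,  1,  0),   # left-handed quark doublet
        ( 2,  0,  0),   # anti-up-quark singlet
        (-1,  0,  1),   # anti-down-quark singlet
        ( 0, -1,  1),   # left-handed lepton doublet
        ( 0,  2,  0),   # positron singlet
        ( 0,  0, -2),   # right-handed neutrino singlet
    ], dtype=float)

    Y = charges @ np.array([-2/3, 1.0, 0.0])     # hypercharges: 1/3, -4/3, 2/3, -1, 2, 0
    BL = charges @ np.array([-1/6, 1/2, -1/2])   # B-L: 1/3, -1/3, -1/3, -1, 1, 1
    print(Y)
    print(BL)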

New U(1) groups

Now, these models have lots of new U(1)s - something that is really natural and generic within the stringy braneworlds. In the 2011 paper, Lüst and collaborators diagonalize the mass matrix for the new Z' and Z'' bosons, imposing various constraints, and they see that they can get the new 140 GeV particle.

When they adjust the models in this way, they produce spectacular new predictions. In particular, there should be another Z'' boson at a mass slightly above 3 TeV or so, potentially still accessible to the LHC, and the string scale should be at 5-10 TeV. A new collider would probably be needed for that, but it is imaginable.

Imagine that the Tevatron 145 GeV signal is genuine and that evidence supporting a new U(1) group keeps accumulating. I would surely think that this would significantly increase the probability that the braneworlds are right.

If some other predictions of the model above were confirmed, it would be good to build a more powerful collider that would search for stringy physics at tens of TeV, because the possibility that the strings are almost around the corner is just fascinating. It has always been too good to be true, but if the experiments provided some evidence for characteristic signatures of the braneworlds, the model could also be good enough to be true. ;-)

Strangely enough, these braneworld models predict some of the stringy physics to be almost as observable as supersymmetry - the superpartner masses may be a few TeV here. The idea that some more "specifically stringy" phenomena would be seen before supersymmetry used to look foreign or fantastic; however, showing that the expectations have been wrong or too modest is something that the experiments have the right to occasionally do.



Off-topic, Facebook: Today, Facebook made an interesting step in its (not quite) cold war against Google when it enabled the e-mail addresses on Facebook. If you have a Facebook account, you may start to use your.own.email@facebook.com that will be sent and readable on Facebook. Try to log into Facebook and check it.



Google Plus One: If you look at the bottom of any post, below "posted by", you will see the five "share buttons" and there is a new, sixth button, with "+1" in it.

Just like Facebook attacked Gmail today with its competition, Google has just attacked the Facebook "like" button with its "+1" competitor. See an explanation by Google. If you click on it, it will just add a "+1" for you only, and the people whom Google knows to be your contacts - via Gmail - may see on Google search pages that the page was "plus-oned" by you. Try it if you liked this article a bit. ;-)

So far, it doesn't seem to work reliably. You're more likely to see the "+1" button at the main TRF page, beneath each article's excerpt.



Bousso-Susskind crackpottery in Nude Socialist

I kind of expected it but now I know it. Even though the Nude Socialist journalist called Justin something asked me for a discussion and talked to me by phone for half an hour - after which he thanked me and claimed that he had begun to understand what the many worlds of quantum mechanics etc. mean - he finally wrote a totally uncritical article promoting the Bousso-Susskind hypermultiverse crackpottery, denying that he had ever talked to me or to anyone else who realizes it is crackpottery, after all.

This time I hesitated and finally said yes, but next time I will instantly say no to any person from this crackpot tabloid. On the other hand, I must say that this new piece by Amanda Gefter in the same Nude Socialist is just flawless as a popular summary of the 150 GeV bump - even though it's true that she may have used many blogospherical sources that contain the same insights and facts.

Monday, May 30, 2011

CDF: Wjj 150 GeV bump grows to almost 5 sigma

Adam Jester (see also Sean Carroll, Phil Gibbs, and Tommaso Dorigo) has pointed out that the bizarre bump near 144 GeV has become much more statistically significant, according to a talk by Giovanni Punzi (pages 30-35 or so) at the 23rd Rencontres de Blois conference at a cute French chateau.



As you could read here in April 2011, there was a 3.2-sigma, or 99.9% confidence level, excess in the production of an apparent 144 GeV particle decaying to two jets that had to be accompanied by a W-boson. The analysis used about 4/fb of CDF data.

The preprint about the bump has produced 46 citations in less than two months.

Could it have been a fluke? It could have been, but it almost certainly wasn't. When 7/fb of the data were analyzed, the excess grew to 4.8 sigma or so, which is something like 99.9998%. So the shape almost certainly didn't appear by chance.




So one has to look at alternative explanations. It may either be a systematic effect that's being neglected or new physics. If you look at an answer of mine at Physics Stack Exchange, you will see a discussion of a possible systematic error - an incorrect scaling of the jet energy - as well as possible theoretical explanations I will recall momentarily.



Well, even the systematic effects seem somewhat unlikely at this point. Even if you assume that the jet energy was mismeasured by up to 7%, you will still see at least a 3-sigma deviation in the data, Punzi announced. There are other reasons why people loudly scream that the bump can't be due to incorrect Standard Model calculations.

(The excess is unlikely to be due to some top-quark stuff because the excess events don't have too many bottom quarks in them and because the other measurements of the top don't look too bad - even though some other surprising claims by the Tevatron about the tops and antitops could be viewed as counter-evidence.)



This brings me to the theoretical explanations. Recall that the three leading explanations of this bump are
  1. Z' boson - a new U(1) force of Nature, see e.g. Cheung-Song and Lüst et al. (stringy)
  2. Technipion - a decaying scalar particle in the technicolor theories by K. Lane and friends
  3. Squark or slepton pairs - a superpartner of the top quark in R-parity violating theories
  4. Scalar fields breaking a new flavor symmetry - by Ann Nelson et al. - claims to explain the top-antitop asymmetry as well
  5. Many others
These were three explanations plus some bonus ones. ;-) What will happen next? We are waiting for CDF's competitors at the Tevatron, the D0 Collaboration, to announce their results on the issue. And of course, then there's the LHC. :-) So far, the LHC sees nothing there in its 33/pb or so. With the currently recorded 0.5/fb that are being evaluated right now, the LHC should see the bump more clearly than the Tevatron's 7/fb does, if it is there.

Wednesday, May 25, 2011

Electron dipole moment below 1e-27 e.cm at 90% C.L.

Nude Socialist discusses whether the electron is egg-shaped. I honestly couldn't figure out what they were really talking about before I looked at the paper in Nature,
Improved measurement of the shape of the electron
by Hudson, Kara, Smallman, Sauer, Tarbutt, and Hinds. The title still resembles some popular science (see a popular review in Nature) but the abstract already contains useful information.




At the 90% confidence level, the size of the electron dipole moment d_e is smaller than
|d_e| < 1.05 x 10^-27 e.cm
This experiment significantly (by a factor of a few) - but not "qualitatively" - improves the previous upper bounds. Note that as early as 2005, this blog was talking about planned experiments that wanted to measure the electric dipole moments with an accuracy 100 times better than the figure above.

Recall that CP-violation is needed to produce these electric dipole moments. The CP-violation by the Standard Model's CKM matrix (the phase in it) is too small to be seen. It's about 1e-30 e.cm, or 1,000 times smaller than the newest upper bound. New physics such as supersymmetry may predict much larger and potentially observable EDMs (which may often be observable enough to kill the model). But it doesn't have to be so.

For example, Brhlík, Everett, Kane, and Lykken (1999) have shown that a D-brane world based on type I string theory naturally implies the vanishing of new contributions to the EDMs without any fine-tuning even though the new CP-violating phases may be large!



Lumo on monstrous moonshine in Japanese

I received thousands of requests from the people in the Fukushima region who wanted to understand my 2007 text about the monstrous moonshine in string theory more intimately. So I have finally learned Japanese today and translated the article to the Fukushima dialect of Japanese. Enjoy and don't get discouraged by the obscene words on the first line beneath the title! ;-)

Via K.N. Yokoyama



The Milky Way is gay

Our Galaxy has neighbors of the same sex. Risa Wechsler of Stanford has studied how many galaxies like to intimately share their place with other galaxies of the same sex. She found out that it is 4% of the galaxies. I guess that some of you may have expected the result. ;-)

Saturday, May 21, 2011

The Bousso-Susskind hypermultiverse

Leonard Susskind and Raphael Bousso are creative guys and famous physicists. Both of them are well-known for some papers about holography, too. Of course, the first scientist is still a bit more famous. They have just released a preprint to show that they're on crack and they are greatly enjoying it:
The Multiverse Interpretation of Quantum Mechanics
The ordinary multiverse with its infinitely many bubbles whose possible vacuum states are located in 10^{500} different stationary points of the stringy configuration space was way too small for them. So they invented a better and bigger multiverse, one that unifies the "inflationary multiverse", the "quantum multiverse", and the "holographic multiverse" from Brian Greene's newest popular book, The Hidden Reality.

Yes, their very first bold statement is that parallel universes in an inflating universe are the same thing as Everett's many worlds in quantum mechanics! ;-)

Sorry to say, but the paper looks like the authors want to stand next to Lee Smolin, whose recent paper - as crackpottish as any paper he has written in his life so far - is about "a real ensemble interpretation" of quantum mechanics. Bousso and Susskind don't cite Smolin - but maybe they should! And in their next paper, they should acknowledge me for pointing out an equally sensible and similar paper by Smolin to them. ;-)




While your humble correspondent has always emphasized that the "many worlds" in Everett's interpretation of quantum mechanics are completely different "parallel worlds" from those in eternal inflation or those in the braneworlds, these famous physicists say: on the contrary, they're the same thing!

However, at least after a quick review of the paper, the drugs seem to be the only tool that you can find in the paper or in between its lines to convince you that it's the case. ;-)

It's a modern paper involving conceptual issues of quantum mechanics, so it treats decoherence as the main mechanism to address many questions that used to be considered puzzles. Good. However, everything that they actually say about decoherence is a little bit wrong, so their attempts to combine those new "insights" with similar "insights" resulting from similar misunderstandings of the multiverse - and especially of the way outcomes of measurements should be statistically treated in a multiverse - inevitably end up being double gibberish, cooked from two totally unrelated components such as stinky fish and rotten strawberries.

In what sense decoherence is subjective

One of the first starting points for them to unify the "inflationary multiverse" and the "many worlds" of quantum mechanics is the following thesis about decoherence:
Decoherence - the modern version of wave-function collapse - is subjective in that it depends on the choice of a set of unmonitored degrees of freedom, the "environment".
That's a loaded statement, for many reasons. First of all, decoherence isn't really a version of the collapse. Decoherence is an approximate description of the disappearing "purity" of a state in macroscopic setups with various consequences; one of them is that there is no collapse. The probabilities corresponding to different outcomes continue to be nonzero so nothing collapses. They're nonzero up to the moment when we actually learn - experimentally - what the outcome is. At that point, we must update the probabilities according to the measurement. Decoherence restricts which properties may be included in well-defined questions - for example, insane linear superpositions of macroscopically different states are not good "basis vectors" to create Yes/No questions.

As first emphasized by Werner Heisenberg and then by anyone who understood the basic meaning of proper quantum mechanics, this "collapse" is just about the change of our knowledge, not a real process "anywhere in the reality". Even in classical physics, dice may have probabilities 1/6 for each number, but once we see "6", we update the probabilities to (0,0,0,0,0,1). No real object has "collapsed". The only difference in quantum physics is that the probabilities are not "elementary" but they're constructed as squared absolute values of complex amplitudes - which may interfere etc.; and in classical physics, we may imagine that the dice had the state before we learned it - in quantum physics, this assumption is invalid.

It may help many people confused by the foundations of quantum mechanics to formulate quantum mechanics in terms of a density matrix "rho" instead of the state vector "psi". Such a "rho" is a direct generalization of the classical distribution function on the phase space "rho" - it only receives the extra off-diagonal elements (many of which go quickly to zero because of decoherence), so that it's promoted to a Hermitian matrix (and the opposite side of the coin is that the indices of "psi" may only involve positions or only momenta but not both - the complementary information is included in some phases). But otherwise the interpretation of "rho" in quantum mechanics and "rho" in classical statistical physics is analogous. They're just gadgets that summarize our knowledge about the system via probabilities. Now, "psi" is just a kind of a square root of "rho" so you should give it the same qualitative interpretation as to "rho" which is similar to "rho" in classical statistical physics.
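A toy numerical illustration of this "rho versus psi" point (my sketch, not anything from the paper under discussion): the diagonal of rho carries the probabilities, and decoherence - modeled here by a crude exponential damping of the off-diagonal elements, standing in for the effect of tracing out an environment - erases the interference terms while leaving those probabilities untouched.

    import numpy as np

    psi = np.array([0.6, 0.8j])               # the "0.6 dead + 0.8i alive" state
    rho = np.outer(psi, psi.conj())           # rho = |psi><psi|, a pure state

    def decohere(rho, damping):
        # Suppress off-diagonal ("coherence") elements by exp(-damping);
        # a schematic stand-in for interaction with an environment.
        out = rho.copy()
        off_diagonal = ~np.eye(len(rho), dtype=bool)
        out[off_diagonal] *= np.exp(-damping)
        return out

    print(np.real(np.diag(rho)))              # [0.36 0.64] - the probabilities
    print(np.round(decohere(rho, 10.0), 4))   # nearly diagonal, same diagonal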

Second, is decoherence "subjective"? This is a question totally equivalent to asking whether "friction" or "viscosity" (or other processes that dissipate energy) is subjective. In fact, both of these phenomena involve a large number of degrees of freedom, and in both of them, it's important that many interactions occur and lead to many consequences that quickly become de facto irreversible. So both of these processes (or their classes) share the same arrow of time, which is ultimately derived from the logical arrow of time, too.

First, let's ask: Is friction or viscosity subjective?

Well, a sliding object on a flat floor or quickly circulating tea in a teacup will ultimately stop. Everyone will see it. So in practice, it's surely objective. But is it subjective "in principle"? Do the details depend on some subjective choices? You bet.

Focusing on the tea, there will always be some thermal motion of the individual molecules in the tea. But what ultimately stops is the uniform motion of bigger chunks of the fluid. Obviously, to decide "when" it stops, we need to divide the degrees of freedom in the tea to those that we consider a part of the macroscopic motion of the fluid and those that are just some microscopic details.

The separation into these two groups isn't God-given. This calculation always involves some choices that depend on the intuition. The dependence is weak. After all, everyone agrees that the macroscopic motion of the tea ultimately stops. In the same way, the information about the relative phase "dissipates" into a bigger system, a larger collection of degrees of freedom - the environment - during decoherence. The qualitative analogy between the two processes is very tight, indeed.

But the punch line I want to make is that decoherence, much like viscosity, isn't an extra mechanism or an additional term that we have to add to quantum mechanics in order to reproduce the observations. Instead, decoherence is an approximate method to calculate the evolution in many situations, one that ultimately boils down to ordinary quantum mechanics and nothing else. It's meant to simplify our life, not to add extra complications. Decoherence justifies the "classical intuition" about some degrees of freedom - what it really means is that interference phenomena may be forgotten - much like the derivation of the equations of hydrodynamics justifies a "continuum description" of the molecules of the fluid.

Clearly, the same comment would be true about friction or viscosity. While the deceleration of the car or the tea is usefully described by a simplified macroscopic model with a few degrees of freedom, in principle, we could do the full calculation involving all the atoms etc. if we wanted to answer any particular question about the atoms or their collective properties. However, we should still ask the right questions.

When Bousso and Susskind say that there is an ambiguity in the choice of the environment, they misunderstand one key thing: the removal of this ambiguity is a part of a well-defined question! The person who asks the question must make sure that it is well-defined; it's not a job for the laws of physics. Returning to the teacup example, I may ask when the macroscopic motion of the fluid reduces to 1/2 of its speed but I must define which degrees of freedom are considered macroscopic. When I do so, and I don't have to explain that there are lots of subtleties to be refined, the question will become a fully calculable, well-defined question about all the molecules in the teacup and quantum mechanics offers a prescription to calculate the probabilities.

The case of decoherence is completely analogous. We treat certain degrees of freedom as the environment because the state of these degrees of freedom isn't included in the precise wording of our question! So when Bousso and Susskind say that "decoherence is subjective", it is true in some sense but this sense is totally self-evident and vacuous. The correct interpretation of this statement is that "the precise calculation [of decoherence] depends on the exact question". What a surprise!

In practice, the exact choice of the degrees of freedom we're interested in - the rest being the environment - doesn't matter much. However, we must obviously choose properties whose values don't change frantically because of the interactions with the environment. That's why the amplitude in front of the state "0.6 dead + 0.8i alive" isn't a good observable to measure - the interactions with the environment make the relative phase evolve terribly wildly. Decoherence thus also helps to tell us which questions are meaningful. Only questions about properties that are able to "copy themselves to the environment" may be asked. This effectively chooses a preferred basis of the Hilbert space, one that depends on the Hamiltonian - because decoherence does.

To summarize this discussion, at least in this particular paper, Bousso and Susskind suffer from the same misconceptions as the typical people who deny quantum mechanics and want to reduce it to some classical physics. In this paper's case, this fact is reflected by the authors' desire to interpret decoherence as a version of the "nice good classical collapse" that used to be added in the QM framework as an extra building block. But decoherence is nothing like that. Decoherence doesn't add anything. It's just a simplifying approximate calculation that properly neglects lots of the irrelevant microscopic stuff and tells us which parts of classical thinking (namely the vanishing of the interference between 2 outcomes) become approximately OK in a certain context.

Let's move on. They also write:
In fact decoherence is absent in the complete description of any region larger than the future light-cone of a measurement event.
If you think about it, the purpose of this statement is inevitably elusive, too. Decoherence is not just "the decoherence" without adjectives. Decoherence is the separation of some particular eigenstates of a particular variable and to specify it, one must determine which variable and which outcomes we expect to decohere. In the real world which is approximately local at low energies, particular variables are connected with points or regions in spacetime. What decoheres are the individual possible eigenvalues of such a chosen observable.

But the observable really has to live in "one region" of spacetime only - it's the same observable. The metric in this region may be dynamical and have different shapes as well but as long as we talk about eigenvalues of a single variable, and in the case of decoherence, we have to, it's clear that we also talk about one region only. Decoherence between the different outcomes will only occur if there's enough interactions, space, and time in the region for all the processes that dissipate the information about the relative phase to occur.

So it's completely meaningless to talk about "decoherence in spacelike separated regions". Decoherence is a process in spacetime and it is linked to a single observable that is defined from the fundamental degrees of freedom in a particular region. Of course, the region B of spacetime may only be helpful for the decoherence of different eigenvalues of another quantity in region A if it is causally connected with A. What a surprise. The information and matter can't propagate faster than light.
However, if one restricts to the causal diamond - the largest region that can be causally probed - then the boundary of the diamond acts as a one-way membrane and thus provides a preferred choice of environment.
This is just nonsense. Even inside a solid light cone, some degrees of freedom are the interesting non-environmental degrees of freedom we're trying to study - if there were no such degrees of freedom, we wouldn't be talking about the solid light cone at all. We're only talking about a region because we want to say something about the observables in that region.

At the same moment, for the decoherence to run, there must be some environmental degrees of freedom in the very same region, too. Also, as argued a minute ago - by me and by the very authors, too - the spatially separated pieces of spacetime are completely useless when it comes to decoherence. It's because the measurement event won't affect the degrees of freedom in those causally inaccessible regions of spacetime. Clearly, this means that those regions can't affect decoherence.

(A special discussion would be needed for the tiny nonlocalities that exist e.g. to preserve the black hole information.)

If you look at the light sheet surrounding the solid light cone and decode the hologram, you will find out that the separation of the bulk degrees of freedom into the interesting and environmental ones doesn't follow any pattern: they're totally mixed up in the hologram. It's nontrivial to extract the values of the "interesting" degrees of freedom from a hologram where they're mixed with all the irrelevant Planckian microscopic "environmental" degrees of freedom.

They seem to link decoherence with the "holographic" degrees of freedom that live on the light sheets - and a huge black-hole-like entropy of A/4G may be associated with these light sheets. But those numerous Planckian degrees of freedom don't interact with the observables we're able to study inside the light cone, so they can't possibly contribute to decoherence. Indeed, if 10^{70} degrees of freedom were contributing to decoherence, everything, including the position of an electron in an atom, would be decohering all the time. This is of course not happening. If you want to associate many degrees of freedom with the light sheets, be my guest - it's probably true at some moral level that local physics can be embedded into the physics of the huge Bekenstein-Hawking-like entropy on the light sheet. But you must still accept (more precisely, prove) that the detailed Planckian degrees of freedom won't affect the nicely coherent approximate local physics that may be described by a local effective field theory - otherwise your picture is just wrong.
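By the way, where does a figure like 10^{70} come from? It's just my own back-of-the-envelope estimate of the holographic bound A/4G for a light sheet of roughly laboratory size - I am not quoting any number from their paper:

```python
import math

# Back-of-the-envelope estimate of the holographic entropy A/(4*l_Planck^2)
# for a light sheet of roughly laboratory size (a ~1 m sphere).
l_planck = 1.616e-35            # Planck length in meters
radius = 1.0                    # radius of the sphere in meters
area = 4 * math.pi * radius**2  # area of the sphere in m^2

S = area / (4 * l_planck**2)    # entropy in natural (k_B = 1) units
print(f"S ~ {S:.1e}")           # ~1.2e70, hence "10^70 degrees of freedom"
```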

The abstract - and correspondingly the paper - is getting increasingly crazy.
We argue that the global multiverse is a representation of the many-worlds (all possible decoherent causal diamond histories) in a single geometry.
This is a huge unification claim. Unfortunately, there isn't any evidence, as far as I can see, that the many worlds may be "geometrized" in this way. Even Brian Greene in his popular book admits that there is no "cloning machine". You can't imagine that the new "many worlds" have a particular position "out there". The alternative histories are totally disconnected from ours geometrically. They live in a totally separate "gedanken" space of possible histories. By construction, the other alternative histories can't affect ours, so they're unphysical. All these things are very different from ordinary "branes" in the same universe and even from other "bubbles" in an inflating one. I don't know why many people feel any urge to imagine that these - by construction - unphysical regions (Everett's many worlds) are "real" but at any rate, I think that they agree that they cannot influence physics in our history.
We propose that it must be possible in principle to verify quantum-mechanical predictions exactly.
Nice but it's surely not possible. We can only repeat the same measurement a finite number of times and in a few googols of years, or much earlier, our civilization will find out it's dying. We won't be able to tunnel our knowledge elsewhere. The number of repetitions of any experiment is finite and it is not just a technical limitation.

There are many things we only observe once. Nature can't guarantee that everything may be tested infinitely many times - and it doesn't guarantee that.
This requires not only the existence of exact observables but two additional postulates: a single observer within the universe can access infinitely many identical experiments; and the outcome of each experiment must be completely definite.
In de Sitter space, exact observables probably don't exist at all, and the same is true in many other contexts. Observers can't survive their own death, or the thermal death of the surrounding Universe, and the outcomes of most experiments can't be completely definite. Our accuracy will always remain finite, much like the number of repetitions and our lifetimes.

In the next sentence, they agree that the assumptions fail - but because of the holographic principle. One doesn't need a holographic principle to show such things. After all, the holographic principle is an equivalence of a bulk description and the boundary description so any physically meaningful statement holds on both sides.

At the end, they define "hats" - flat regions with unbroken supersymmetry - and link their exact observables to some approximate observables elsewhere. Except that this new "complementarity principle" isn't supported by any evidence I could find in the paper and it isn't well-defined, not even partially. In quantum mechanics, complementarity means something specific - it ultimately allows you to write "p" as "-i.hbar.d/dx", a very specific construction that is well-defined and established. In the black hole case, complementarity allows you to explain why there's no xeroxing; the map between the degrees of freedom isn't expressed by a formula but there is evidence. But what about this complementarity involving hats? There's neither a definition nor evidence nor justification (unless you view the satisfaction of manifestly invalid and surely unjustified, ad hoc assumptions to be a justification).
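Just to show what I mean by "well-defined": the statement that the momentum acts as -i.hbar.d/dx is something you can check line by line - the canonical commutator pops out automatically. This is textbook quantum mechanics, nothing from their paper:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)

# The momentum operator acting on a wavefunction: p f = -i*hbar*df/dx
p = lambda psi: -sp.I * hbar * sp.diff(psi, x)

# The commutator [x, p] acting on an arbitrary test function f(x)
commutator = x * p(f) - p(x * f)
print(sp.simplify(commutator))   # -> I*hbar*f(x), i.e. [x, p] = i*hbar
```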

If you read the paper, it is unfortunately motivated by misunderstandings of the conceptual foundations of quantum mechanics. In the introduction, they ask:
But at what point, precisely, do the virtual realities described by a quantum mechanical wave function turn into objective realities?
Well, when we measure the observables. Things that we haven't measured will never become "realities" in any sense. If the question is about the classical-quantum boundary, there is obviously no sharp boundary. Classical physics is just a limit of quantum physics but quantum physics fundamentally works everywhere in the multiverse. The numerical (and qualitative) errors we make if we use a particular "classical scheme" to discuss a situation may be quantified - decoherence is one of the calculations that quantifies such things. But classical physics never fully takes over.
This question is not about philosophy. Without a precise form of decoherence, one cannot claim that anything really "happened", including the specific outcomes of experiments.
Oh, really? When I say that it's mostly sunny today, it's not because I preach a precise form of decoherence. It's because I have made the measurement. Of course, the observation can't be 100% accurate because "sunny" and "cloudy" haven't "fully" decohered from each other - but their overlap is just insanely negligible. Nevertheless, the overlap never becomes exactly zero. It can't. For more subtle questions - about electrons etc. - the measurements are more subtle, and indeed, if no measurement has been done, one cannot talk about any "reality" of the property because no particular value could have existed. The very assumption that observables - especially non-commuting ones - had some well-defined values leads to contradictions and wrong predictions.

Decoherence cannot be precise. Decoherence, by its very definition, is an approximate description of reality that becomes arbitrarily good as the number of the environmental degrees of freedom, their interaction strength, and the time I wait become arbitrarily large. I think that none of the things I say are speculative in any way; they concern the very basic content and meaning of decoherence and I think that whoever disagrees has just fundamentally misunderstood what decoherence is and is not. But the accuracy of this emergent macroscopic description of what's happening with the probabilities is never perfect, just like the macroscopic equations of hydrodynamics never exactly describe the molecules of tea in a teacup.
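To make the "arbitrarily good but never perfect" point concrete, here is the simplest toy model I can think of - my own illustration with an assumed coupling angle, not anything from their paper. A qubit in a superposition gets entangled with N environment qubits whose conditional states overlap by cos(theta); the interference term of its reduced density matrix is then suppressed by cos^N(theta) - exponentially small for large N, but never exactly zero:

```python
import numpy as np

# Toy decoherence model: the system qubit starts in (|0> + |1>)/sqrt(2) and
# kicks each of N environment qubits into |e0> or |e1> with overlap
# <e0|e1> = cos(theta). The off-diagonal ("interference") element of the
# reduced density matrix is then (1/2) * cos(theta)^N.
theta = 0.1                     # assumed weak coupling per environment qubit
for N in (1, 10, 100, 1000, 10000):
    off_diag = 0.5 * np.cos(theta) ** N
    print(f"N = {N:6d}   |rho_01| = {off_diag:.3e}")
# For N = 10000 the interference term is ~1e-22: utterly negligible, not zero.
```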
And without the ability to causally access an infinite number of precisely decohered outcomes, one cannot reliably verify the probabilistic predictions of a quantum-mechanical theory.
Indeed, one can't verify many predictions of quantum mechanical properties, especially about cosmological-size properties that we can only measure once. If you don't like the fact that our multiverse denies you this basic "human right" to know everything totally accurately, you will have to apply for asylum in a totally different multiverse, one that isn't constrained by logic and science.
The purpose of this paper is to argue that these questions may be resolved by cosmology.
You know, I think that there are deep questions about the information linked between causally inaccessible regions - whether black hole complementarity tells you something about the multiverse etc. But this paper seems to address none of it. It seems to claim that the cosmological issues influence even basic facts about low-energy quantum mechanics and the information that is moving in it. That's surely not possible. It's just a generic paper based on misunderstandings of quantum mechanics and on desperate attempts to return the world under the umbrella of classical physics where there was a well-defined reality where everything was in principle 100% accurate.

But the people who are not on crack will never return to the era before the 1920s because the insights of quantum mechanics, the most revolutionary insights of the 20th century, are irreversible. Classical physics, despite its successes as an approximate theory, was ruled out many decades ago.

I have only read a few pages that I considered relevant and quickly looked at the remaining ones. It seems like they haven't found or calculated anything that makes any sense. The paper just defends the abstract and the introduction that they have apparently pre-decided to be true. But the abstract and the introduction are wrong.

You see that those would-be "revolutionary" papers start to share lots of bad yet fashionable features - such as the misunderstanding of the conceptual issues of quantum mechanics and the flawed idea that all such general and basic misunderstandings of quantum physics (or statistical physics and thermodynamics) must be linked to cosmology if not the multiverse.

However, cosmology has nothing to do with these issues. If you haven't understood the double-slit experiment in your lab or the observation of Schrödinger's cat in your living room - what science actually predicts about these things, using the degrees of freedom in that room only - or if you haven't understood why eggs break but don't unbreak, using the degrees of freedom of the egg only, then be sure that the huge multiverse, regardless of its giant size, won't help you cure your misunderstanding of the basics of quantum mechanics and statistical physics.

The right degrees of freedom and concepts that are linked to the proper understanding of a breaking egg or decohering tea are simply not located far away in the multiverse. They're here and a sensible scientist shouldn't escape to distant realms that are manifestly irrelevant for these particular questions.

And that's the memo.

Thursday, May 12, 2011

Type IIB large extra dimensions



Calabi-Yau manifolds are so hot that they may make your LCD monitor or your eyes vibrate.

M. Cicoli, C.P. Burgess, and F. Quevedo have presented their interesting new scenario for string phenomenology:
Anisotropic Modulus Stabilisation: Strings at LHC Scales with Micron-sized Extra Dimensions
Their picture has
  • extra dimensions accessible at the LHC
  • no MSSM superpartners - a non-linear realization of SUSY in the SM sector
  • two extra dimensions may be multimicron-sized, much larger than the remaining four
  • the remaining four have a K3 shape
  • the large 6-volume, and/or the hierarchy between the 2 and 4 extra dimensions, is achieved by some new poly-instanton corrections to the superpotential

Their picture strikingly differs from most scenarios we're used to - where at least some of the superpartners are surely more accessible than extra dimensions.




However, many of the assumptions that are usually made about the models may be just some lore that is subject to intellectual inertia. Many qualitative things could be very different.

The new technical step they use, the poly-instanton corrections, is a method to generate new hierarchies in the size of the extra dimensions. So the whole 6-dimensional volume may be made large - which solves the hierarchy problem.

But the size of the K3 dimensions may be much smaller than the size of the remaining two dimensions of the base: the latter may be of a sub-millimeter size. In that case, SUSY may be broken at a sub-eV scale.
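For orientation, here is a generic sanity check of how micron-sized dimensions go together with a low gravity scale. This is just the standard ADD-type dimensional estimate M_Pl^2 ~ M_*^{2+n} R^n with an assumed fundamental scale of about 10 TeV - not the authors' detailed computation, whose volume is fixed by the poly-instanton corrections:

```python
# Generic ADD-type estimate M_Pl^2 ~ M_*^(2+n) * R^n -- a rough sanity check,
# not the authors' poly-instanton construction. M_star = 10 TeV is an assumed
# illustrative value.
M_Pl = 1.22e19        # Planck mass in GeV
M_star = 1.0e4        # assumed fundamental gravity scale in GeV (~10 TeV)
n = 2                 # the two "large" dimensions; the K3 is kept much smaller

R_GeV_inv = (M_Pl**2 / M_star**(2 + n)) ** (1.0 / n)   # radius in GeV^-1
R_meters = R_GeV_inv * 1.973e-16                       # 1 GeV^-1 ~ 1.97e-16 m
print(f"R ~ {R_meters:.1e} m")   # ~2e-5 m, i.e. tens of microns
```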

I still haven't understood how they avoid light superpartners if SUSY is broken at this super low, sub-eV scale. At any rate, if this super low SUSY breaking scale makes any sense, it could also be a promising way to solve the cosmological constant problem because the observed cosmological constant is not far from a millielectronvolt to the fourth power.
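If you want to check that last claim yourself, here is my quick back-of-the-envelope estimate (using rough WMAP-era values of the Hubble constant and the dark-energy fraction, nothing from their paper): writing the observed vacuum energy density as an energy scale to the fourth power, the scale comes out near 2 meV:

```python
# Quick check that the observed vacuum energy density corresponds to an
# energy scale of a couple of millielectronvolts. Rough input values.
H0 = 71.0e3 / 3.086e22        # Hubble constant in 1/s (71 km/s/Mpc)
hbar_eV_s = 6.582e-16         # hbar in eV*s, converts 1/s to eV
H0_eV = H0 * hbar_eV_s        # H0 in natural units (eV)
M_Pl_red = 2.435e27           # reduced Planck mass in eV
Omega_Lambda = 0.73           # dark-energy fraction of the critical density

rho_Lambda = Omega_Lambda * 3 * H0_eV**2 * M_Pl_red**2   # in eV^4
print(f"rho_Lambda^(1/4) ~ {rho_Lambda**0.25:.2e} eV")   # ~2.3e-3 eV
```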

Amusingly enough, they tend to predict LHC-accessible Kaluza-Klein modes - signs of extra dimensions - even if all six dimensions are equally large. That would usually not be the case but in their model, the predicted modes live on some small cycles of their manifold.

Aside from the TeV-scale signs of extra dimensions from the previous paragraph, they also predict lots of light moduli.

Well, I am not able to prove right now that the model is impossible. That may change rather quickly - a bright reader may even change it instantly. But I still feel that it is a very bold claim that such a radically different scenario may be made consistent with the known physics and constraints on new physics.

Tuesday, May 10, 2011

CoGeNT sees seasons (and maybe dark matter)

Should we be getting used to light supersymmetric 7 GeV photinos?

A fascinating, full-fledged war has erupted between the teams trying to detect the dark matter particles here on Earth. The two opposing armies of experimenters are called "Dark Matter Is Seen" (DMIS) and "Dark Matter Is Not Seen" (DMINS).



XENON100's mid-April negative results (fresh preprint) instantly came under fire (fresh preprint). Via Physics World

Each of the camps may be right.

Of course, the first team has a more interesting message. ;-) One year ago, I discussed observations by CoGeNT (of the DMIS coalition) that may have detected hundreds of bino-like 10 GeV dark matter particles. Such claims are directly contradicted by the DMINS axis - CDMS, XENON10, XENON100, and maybe others - which claims that there is nothing to be seen and that the DMIS coalition - including CoGeNT, DAMA, PAMELA, and others who have suggested the existence of a signal - are just conspiracy theorists. After all, DMINS may measure things much more accurately and there's nothing to be seen! ;-)

The CoGeNT-CDMS (sado-maso) war is particularly violent because both opposing teams use germanium and they're located in the same mine.

Who is right?

As Cosmic Variance and Resonaances (also: Phys Org, Discovery News, WIRED, Science News, IO9, Nude Socialist) freshly reported, CoGeNT has added an amazing weapon to their claims. They have measured their signal more accurately.




Now, they may say that the dark matter particle should have a mass of 7-8 GeV - which is immensely accurate - and the cross section on nucleons should be 10^{-44} meter^2, i.e. about 0.1 femtobarns. So they may say that the parameters live in the tiny bottle at the center of this image:



Even more impressively, they have acquired 2.8-sigma evidence that the signal is actually annually modulated - it depends on the seasons. The existence and significance of this modulation are in striking agreement with the prediction by Chris Kelso and Dan Hooper from November 2010.



In the dark matter context, this modulation may be explained by the Earth's changing speed relative to the dark matter halo around us. You may see a fresh 30-minute talk (48 minutes in total) about the newest CoGeNT result by its main boss, Dr Juan Collar.
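The expected signal shape is simple: the Earth's orbital velocity adds most constructively to the Sun's motion through the halo around June 2nd, so the rate is modulated roughly sinusoidally with a one-year period. Here is a cartoon of the kind of fit one has to perform - all the numbers below are assumed for illustration, none of them are CoGeNT's actual rates or amplitudes:

```python
import numpy as np

# Cartoon of an annual-modulation fit. The rate is modeled as
#   R(t) = R0 * (1 + A * cos(2*pi*(t - t_peak)/365.25)),
# with t_peak near June 2nd. R0, A, and t_peak below are assumed toy values.
rng = np.random.default_rng(0)
R0, A, t_peak = 100.0, 0.15, 152.0       # mean rate, modulation, phase (days)

t = np.arange(0.0, 2 * 365.25, 15.0)     # two years of 15-day bins
expected = R0 * (1 + A * np.cos(2 * np.pi * (t - t_peak) / 365.25))
observed = rng.poisson(expected)         # add counting noise

# Least-squares fit via the linear model R(t) = c0 + a*cos(w*t) + b*sin(w*t);
# the modulation amplitude is then sqrt(a^2 + b^2) / c0.
w = 2 * np.pi / 365.25
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(X, observed, rcond=None)
A_fit = np.hypot(coef[1], coef[2]) / coef[0]
print(f"fitted modulation amplitude: {A_fit:.2f} (input was 0.15)")
```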

I agree with Jester that one of the coalitions at war is likely to be essentially right, so the other one has to be wrong. Their methods differ a little bit - in what kinds of events are being removed. So it's likely that the DMINS axis is filtering out some events that resemble "non-dark-matter collisions" even though they're actually caused by the dark matter.

It is conceivable that this dispute will be solved very soon. Meanwhile, all theorists should morally prepare for the possibility that there is a dark matter particle whose mass is 7-8 GeV - it is far from impossible given the contradictory facts. Various peace negotiators have proposed a resolution that essentially concludes that the 7-8 GeV signal is genuine.



She fell from the skies: a theme song of the day



There are children on the Earth; the Earth is flying through the space; it's not being occupied by orbiting atoms only. The stars are also hiding children who are playing, maybe in the same way as we do. Chorus: Full of kids are those worlds that we're dreaming about every evening. We will sit in a spacecraft and we will fly fly fly fly to visit them. ... A rocket of those children is probably already flying here, maybe it will land tomorrow in the morning. On the border of the forest, they say it will be dropping towards the Sun, and it will stay on Earth forever. Chorus: Full of children are those worlds that we're dreaming about every evening. Once they will sit in an aircraft and we may see see see see see see them when they will arrive to pay us a visit.

"Spadla z oblakov" is a 13-part 1978 Slovak (well, Czechoslovak recorded in the Slovak language) TV soap opera for children. Miss Majka of planet Guru lands in the High Tatras, Slovakia. She starts to speak German but reloads her Slovak localization software. She can walk on the water, walls, speed up time, and so on, but otherwise is a cutely naive girl that is learning many things with the Slovak kids.



Armies

DMIS: DAMA/LIBRA, PAMELA, CoGeNT
DMINS: CDMS, XENON10, XENON100

Some time ago, CoGeNT wanted to disprove the annual modulation previously seen by DAMA. Instead, it was forced to join the coalition. We may really add HEAT, CAPRICE, MASS, and AMS-01 to the DMIS coalition because they've seen some excess of positrons above 7 GeV, which could also be produced by pair annihilation of the 7 GeV dark matter particles, although the typical interpretation used to be different.

CRESST-II is also in the DMIS coalition and is likely to publish a 4.6-sigma excess of O(10 GeV) WIMPs, too.

Monday, May 9, 2011

Robert Brout: 1928-2011

Sadly, Robert Brout, an American-Belgian physicist who was born in New York City on June 14th, 1928, passed away on May 3rd, 2011.

Together with François Englert, he was the first to publish a 1964 paper discovering and explaining the Higgs mechanism, or the Englert-Brout-Higgs-Guralnik-Hagen-Kibble mechanism for short. They managed to include some arguments that a theory with massive gauge bosons produced in this way would be renormalizable, something that would later be proved by Gerardus 't Hooft, who shared the 1999 Nobel prize for this discovery with his advisor Martinus Veltman - who was bold enough to ask his student to solve the problem.

Together with the five other authors, Brout did at least receive the 2010 Sakurai Prize.




It's much less known that his article written with Englert and Edgard Gunzig not only won the 1978 Gravity Research Foundation essay award but also made some preliminary steps towards inflation.

Via Laurent Sacco



Ivo Pešák RIP

Czech clarinetist and comedian Mr Ivo Pešák died today, after complications following a bypass surgery he underwent 4 years ago. He had been a multi-dimensional patient; his collection included a serious form of diabetes, among many other things. His toes had to be amputated some time ago. Updated news reports say he died of pneumonia.



In the video above, Mr Pešák - the fatter guy - is seen as a member of DýzaBOYS, the "Czech Depeche Mode" - a band that was created as a parody of disco music but could actually have competed with the real thing.

Throughout much of his life, Mr Pešák was a part of the Banjo Band of Ivan Mládek. Together with Mr Ivan Mládek, he also recorded his globally most famous video, Jožin z bažin, my favorite song when I was 5 years old. Mr Pešák became an unlikely dancing hero in 2008 when the video clip suddenly became, a whopping 30 years after it was recorded, a top viral video in neighboring Poland.

Sunday, May 8, 2011

ATLAS at 94/pb: diphoton hysteria goes away

This is just a single link: in a list of ATLAS' notes, the last one is
Update of Background Studies in the Search for the Higgs Boson in the Diphoton Channel with the ATLAS Detector at sqrt(s) = 7 TeV
and it was released yesterday. Click on the images to see that after 94 inverse picobarns, there is absolutely no excess of diphoton events at 115 GeV.




This is a more or less official confirmation of my previous report about the invalidity of the 115 GeV Higgs with a 30-fold enhancement of the diphoton decays.

Tuesday, May 3, 2011

Why Frank Wilczek likes SUSY

Matthew Tamsett of the U.S. LHC writes about Frank Wilczek's recent talk in Texas that focused on supersymmetry.

If you have 80 spare minutes, here is a similar talk he gave a year ago - his J. Robert Oppenheimer lecture at UC Berkeley:



Based on Anticipating a New Golden Age (hep-ph).

The speaker comes to the podium around 11:00 and talks about the LHC and the Standard Model. By the way, the name of the Standard Model is "ridiculously modest". It should be called the Super Duper Heavenly Orgasmic Quantitative Holy Scripture by Wilczek and Friends. ;-)




Around 54:30, he switches to the topic of supersymmetry. Gauge coupling unification is his preferred argument for SUSY. The lecture is filled with animations, raps, and similar stuff. ;-)
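Gauge coupling unification is something you can check yourself at one loop: the inverse couplings run linearly in the logarithm of the energy, with slopes that differ between the Standard Model and the MSSM. A minimal sketch with standard one-loop coefficients (two-loop running and superpartner thresholds are ignored - I simply pretend the superpartners are already light at the Z mass):

```python
import numpy as np

# One-loop running: alpha_i^-1(mu) = alpha_i^-1(M_Z) - b_i/(2*pi) * ln(mu/M_Z).
# Couplings at M_Z in the order (U(1)_Y in GUT normalization, SU(2), SU(3));
# the input values are approximate.
alpha_inv_MZ = np.array([59.0, 29.6, 8.4])
b_SM   = np.array([41.0 / 10, -19.0 / 6, -7.0])   # Standard Model slopes
b_MSSM = np.array([33.0 / 5, 1.0, -3.0])          # MSSM slopes (light superpartners)

MZ, mu = 91.19, 2.0e16                            # run up to a ~GUT scale
t = np.log(mu / MZ)
for name, b in (("SM  ", b_SM), ("MSSM", b_MSSM)):
    alpha_inv = alpha_inv_MZ - b / (2 * np.pi) * t
    print(name, np.round(alpha_inv, 1))
# With the MSSM slopes the three values nearly coincide (~24); with the SM
# slopes they miss one another.
```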



In other supersymmetric news, Symmetry Breaking Magazine promotes a new method by Alves, Izaguirre, and Wacker to look for SUSY in the collider data.

They proposed a simple phenomenological method to search through various corners of the data, including unexpected ones, and experimenters have become fond of it.