Tuesday, March 8, 2005

Littlest Higgs model & deconstruction

The main article about Nima Arkani-Hamed on this blog: click on his name...
This note is gonna be about the concept of deconstruction and its different applications. Itay Yavin, a grad student from Harvard, just spoke about his and Jesse Thaler's littlest Higgs model, and I will mention some features of their model at the end of this article.

What is deconstruction?

In philosophy, deconstruction is one of the characteristic methods associated with postmodernism and with the name of Jacques Derrida. Deconstruction does not attempt to read a text and judge it by its content; instead, it tries to interpret every sentence as the result of social and political conflicts between the author and her or his cultural environment. The goal of this kind of "critical" thinking is to show that the content does not make any sense that could be permanent; the categories and terms are neither objectively well-defined nor separable, and they only exist within the given context.

Those who are interested in Feynman and deconstruction (in philosophy) may want to open a very recent article at Mormon Philosophy and Theology. My father and sister were baptized by Mormons several years ago ;-) although their current "Mormonity" is very limited - but nevertheless I hope that the link is not inappropriate.

Because The Reference Frame does not believe that all methods of using the human brain are equally valuable, the paragraph above is everything we're gonna say about deconstruction in philosophy - which we will treat, from now on, as a generic example of flawed thinking. We're going to use the term "deconstruction" in the particle physics sense. ;-) In this context, deconstruction is a particular method to construct new physical theories. It was pioneered by the following influential paper:
  • N. Arkani-Hamed, A. Cohen, H. Georgi: (De)constructing dimensions (Phys. Rev. Lett. 86 (2001) 4757, hep-th/0104005)
Because Andrew Cohen is from Boston University, Sheldon Glashow rightfully identified deconstruction as an important contribution of Boston University to particle physics in the last 5 years - and as an argument that Boston University's approach to theoretical particle physics may be superior to Harvard University's approach. That's an excellent argument - because deconstruction is definitely one of the most influential ideas in phenomenology of the last 5 years - except that Andrew did the work with Nima Arkani-Hamed and Howard Georgi, both from Harvard, while he was a visiting scholar at Harvard. ;-)

OK, so let's ask again, what is deconstruction? It is a procedure to construct theories that behave as theories with extra dimensions (something that the authors considered mostly because of the inspiration coming from string theory), but these dimensions are fake or discrete ones. In some very rough sense, deconstruction is similar to latticization. Imagine that you latticize three dimensions. At every link of the lattice - an edge connecting two nodes - there is an object U taking values in the group manifold - for example a unitary matrix of determinant 1 in the case in which we latticize SU(3) QCD.

You see that the manifold of values of U is a non-linear manifold, and correspondingly, the action will have a non-linear, "non-renormalizable" form. It will contain, for example, the plaquettes
  • Tr (U1 U2 U3 U4)
where U1, U2, U3, U4 are the unitary matrices associated with the four edges of a minimal square (plaquette) inside the lattice. These plaquette terms represent the contribution to the action from the magnetic field F_{ij} associated with two latticized dimensions - assuming that there are at least two of them.
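In formulas, this is Wilson's standard lattice expansion (a sketch in my conventions: lattice spacing a, coupling g, N colors; the link variable is the exponential of the gauge field along the edge):

```latex
\[
U_i(x) = e^{\,i a g A_i(x)}, \qquad
U_1 U_2 U_3 U_4 = e^{\,i a^2 g F_{ij}(x) + \mathcal{O}(a^3)},
\]
\[
\mathrm{Re}\,\mathrm{Tr}\left(U_1 U_2 U_3 U_4\right)
= N - \tfrac{1}{2}\, a^4 g^2\, \mathrm{Tr}\, F_{ij} F_{ij} + \mathcal{O}(a^6),
\]
```

so summing the plaquettes over the lattice reproduces the Yang-Mills action in the continuum limit.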

Whatever coordinates on the SU(3) group manifold you choose, you will see that the trace above is a highly non-linear function of those coordinates. In the case of deconstruction, you want to replace the SU(3) group manifold by its Lie algebra, and the corresponding terms in the action are replaced by polynomial terms.

There are several more changes you must make. The first one is that you don't want to call the picture containing the lattice "new dimensions". Instead, you call the counterpart of the lattice a "theory space" - this is partly a terminological change only. The theory space is something that is capable of behaving as a continuous space in the continuum limit. However, you only count the truly continuous dimensions as "dimensions". Consequently, you must also modify your interpretation of the gauge group. It's no longer one SU(3) group you have: the group living at each node (formerly known as a lattice site) is treated as an independent SU(3) group.

What about the links U? They used to live in the group manifold. Deconstruction linearizes this space, and so its counterparts of U live in the Lie algebra generating the given group. The Lie algebra of SU(M) coincides with the adjoint representation, which is the tensor product of the fundamental and the anti-fundamental representation (with the trace removed, which is a comment I had to add because of a picky reader who forced me to write SU properly instead of U). However, the degrees of freedom U live on the links that connect two different SU(M) groups, as we said, and therefore the counterpart of U in deconstructed theories lives in the tensor product of the fundamental representation M of the SU(M) group at the first node and the anti-fundamental representation Mbar of the SU(M) group at the second node. This tensor product is called the bi-fundamental representation.
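In formulas (a minimal sketch; the index conventions are mine), a link field U_j pointing from node j to node j+1 transforms under the two neighboring gauge groups and enters the Lagrangian through a covariant derivative involving both gauge fields:

```latex
\[
U_j \;\to\; g_j\, U_j\, g_{j+1}^\dagger, \qquad g_j \in SU(M)_j,
\]
\[
\mathcal{L} \;\supset\; \mathrm{Tr}\,\bigl|D_\mu U_j\bigr|^2, \qquad
D_\mu U_j = \partial_\mu U_j - i A_\mu^{(j)} U_j + i\, U_j A_\mu^{(j+1)}.
\]
```

Giving each U_j a vev proportional to f times the identity breaks the product group to the diagonal subgroup and gives mass to the off-diagonal combinations of gauge fields - this is where the Kaluza-Klein-like spectrum discussed below comes from.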

So the kind of fields we want to work with include the gauge fields living at the nodes only (they can still be defined in additional continuous dimensions), and various fields transforming in the bi-fundamental representations associated with various links in your "theory space". The "theory space" including the links is a good diagrammatic description of the field content. This diagram - being identified with the theory space - is called a "moose diagram" by the phenomenologists and a "quiver diagram" by the string theorists and mathematicians. It includes
  • nodes, each of them representing a factor SU(M) in the gauge group that lives in the continuous dimensions. This gauge group contributes the gauge field and perhaps its superpartners to the field content. In supersymmetric theories, the nodes "carry" a vector multiplet...
  • links, each of them representing a field transforming in the bi-fundamental representation (M,Mbar) under the two SU(M) groups that the link connects. These links contribute new matter fields to the field content - in a supersymmetric theory they're typically chiral multiplets (for N=1) or hypermultiplets (for N=2)...
A more general quiver diagram can also describe factors SU(M_i) with different values of the rank M_i. We also add various potential terms for the matter fields - and construct a kind of generic or less generic theory with a gauge group SU(M)^k where "k" is the number of nodes in the quiver diagram. OK, what's the physics of these models?
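Before we answer, here is a small numerical check of the statement above (my own illustration with arbitrary values of N, g, f - not from any specific paper): for a circular moose of N identical SU(M) nodes with common coupling g and link vevs f, the gauge boson mass matrix is (gf)^2 times the discrete Laplacian of the cycle graph, and its low-lying eigenvalues reproduce a Kaluza-Klein tower on a circle of circumference L = N/(gf):

```python
import numpy as np

# Circular moose with N nodes: each link vev <U_j> = f * identity breaks
# SU(M)^N to the diagonal SU(M); the gauge boson mass^2 matrix is
# (g f)^2 times the Laplacian of the N-node cycle graph.
N, g, f = 20, 1.0, 1.0

lap = 2.0 * np.eye(N)
for j in range(N):
    lap[j, (j + 1) % N] -= 1.0
    lap[(j + 1) % N, j] -= 1.0

masses = np.sqrt(np.clip(np.linalg.eigvalsh((g * f) ** 2 * lap), 0.0, None))

# Closed form: m_n = 2 g f |sin(pi n / N)|, which for n << N approaches the
# KK tower m_n = 2 pi n / L on a circle of circumference L = N / (g f).
n = np.arange(N)
closed = np.sort(2.0 * g * f * np.abs(np.sin(np.pi * n / N)))
kk = np.sort(2.0 * np.pi * g * f * np.minimum(n, N - n) / N)

print(np.allclose(np.sort(masses), closed))  # True
print(closed[:4])  # [0.      0.3129  0.3129  0.6180]
print(kk[:4])      # [0.      0.3142  0.3142  0.6283]
```

The agreement of the lowest modes is the quantitative sense in which the quiver "deconstructs" an extra circular dimension.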

Because of the analogy with latticized gauge theories, the physics of these models can look like a higher-dimensional gauge theory, and in many cases one can prove that it is so. Nima, Andrew, and Howard originally also constructed models whose quiver diagrams contained two alternating types of nodes, SU(k1) and SU(k2). The two types of groups become strongly coupled at different scales, and one of them can be forgotten (it confines).

Is there some relation of these deconstructed models to string theory? Yes! Gauge theories are low-energy limits of the worldvolume theories defined on D-branes. And when you put the D-branes on orbifolds, you obtain exactly this type of quiver theories with product gauge groups, as analyzed by
  • M. Douglas, G. Moore: D-branes, quivers, and ALE instantons (hep-th/9603167)
For the simplest and most supersymmetric quiver diagrams, one can prove by T-duality that deconstruction leads, in the continuum limit, to the higher-dimensional gauge theories. In fact, you can start with 3+1 continuous dimensions and add 1 or 2 "discrete" dimensions with a particular quiver diagram (lattice), and you can show that you obtain the six-dimensional (2,0) superconformal theory or the little string theories
in the continuum limit. Although these six-dimensional theories have always been very mysterious and no simple "Lagrangian" description is available, we can define them as pretty ordinary four-dimensional gauge theories with a lattice-like quiver diagram, in the limit in which the vevs and the density of the quiver diagram go to infinity in the right ways. (These theories can also be studied using their holographic duals, e.g. the 11D supergravity in AdS7, and by matrix models.) The proof that the stringy six-dimensional theories may be obtained in this way is based on
  • T-duality
  • and the fact that a cylinder can locally be obtained from a cone whose opening angle is sent to zero (the orbifold C^2/Z_N for large N, which produces a quiver diagram with N nodes, to mention the basic example), while the distance of the D3-branes from the tip of the cone is sent to infinity at the same time (the distance equals the vev)
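The second ingredient is elementary geometry (a sketch in my notation): on a cone with opening angle 2 pi / N, the circle at distance d from the tip has circumference

```latex
\[
C(d) = \frac{2\pi d}{N}
\;\;\xrightarrow{\;\; d,\, N \,\to\, \infty,\;\; d/N \text{ fixed} \;\;}\;\;
2\pi R, \qquad R = \frac{d}{N},
\]
```

so far from the tip, the cone becomes locally indistinguishable from a cylinder of radius d/N - which is the circular extra dimension that emerges in the continuum limit.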
Deconstruction can therefore define many of the nice theories we like as particular limits of lower-dimensional theories that are easier to understand. This has led to various ramifications.

Deconstruction in phenomenology

However, the most popular application of deconstruction is in particle phenomenology. What do we gain if we build models based on deconstruction? There are two major things we get:
  • The models can have many nice properties associated with extra dimensions without really having them (you can have your cake and eat it too, or however the proverb goes) - and these properties already appear for a small number of lattice points
  • We may solve the small hierarchy problem
Concerning the first point, let me say that there have always been many positive features of superstringy models that looked "purely stringy". For example, one of the nice technical tools that string theory gives us is the symmetry breaking via the Wilson lines. In field theory, you normally need a lot of Higgs fields to break a large gauge symmetry, such as SO(10), to a smaller realistic group, such as the Standard Model. (More generally, string theory can give Standard-Model-like models with a small gauge group whose fermionic spectrum nevertheless organizes into representations of a grand unified group - something that the experiments definitely want us to want.)

In string theory, you can break the grand unified symmetry by Wilson lines: you identify some "cycle" - a topologically non-trivial closed contour inside your manifold of extra dimensions (typically a torsion cycle, i.e. one whose multiple is contractible) - and you postulate that the gauge field has a monodromy around this closed contour; the monodromy M is an element of the gauge group. The n-th power of this monodromy must be the identity if the n-th multiple of the contour is topologically trivial.

This breaks the gauge group to the subgroup of it that commutes with M. Note that this breaking of grand unified symmetries is a very popular tool. It was, for example, used in the recent heterotic standard model where the Wilson lines realize a particular Z_3 x Z_3 subgroup of the grand unified group. This tool is also efficient: you don't need many Higgses. For large grand unified groups, the required representations for the Higgses would typically be huge, and full mechanisms to break the symmetry are hard to find. Note that the symmetry is only broken because of the existence of at least one extra dimension.
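The simplest concrete example (my choice of monodromy; any analogous element works): a Z_2 monodromy inside SU(5) whose commutant is exactly the Standard Model group:

```latex
\[
M = \mathrm{diag}(1,\,1,\,1,\,-1,\,-1) \in SU(5), \qquad M^2 = 1, \quad \det M = 1,
\]
\[
\{\, g \in SU(5) \;:\; gM = Mg \,\} \;=\; S\bigl(U(3)\times U(2)\bigr) \;\cong\; SU(3)\times SU(2)\times U(1).
\]
```

A single diagonal matrix does the job of what would otherwise require a Higgs in a large representation such as the adjoint 24.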

If we want to use deconstruction, we want to imagine that the number of dimensions continues to be 3+1. Is there some way to obtain an efficient breaking of the grand unified group via the Wilson lines, even though there are no Wilson lines without extra dimensions? Yes, there is. We create artificial, discrete dimensions via deconstruction. See for example the paper
  • E. Witten: Deconstruction, G2 holonomy, and doublet-triplet splitting (hep-ph/0201018)
in which the gauge group is taken to be SU(5) x SU(5). These are two copies of a grand unified group, and you may imagine that it is the same group defined at two points - and these two points are the simplest approximation of an extra dimension. (Witten also finds an analogous model in terms of M-theory on a G_2 manifold.) Consequently, such an SU(5) x SU(5) model in 3+1 dimensions may behave much like an SU(5) grand unified model, which allows one to break the gauge symmetry down to the Standard Model in a way that looks almost like Wilson lines!
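Here is a heuristic sketch of how the link fields mimic the Wilson line (my notation; this is the general mechanism, not a transcription of Witten's precise model). Take a two-node circular moose with link fields Sigma_1 in (5, 5bar) and Sigma_2 in (5bar, 5):

```latex
\[
\Sigma_1 \;\to\; g_1\, \Sigma_1\, g_2^\dagger, \qquad
\Sigma_2 \;\to\; g_2\, \Sigma_2\, g_1^\dagger .
\]
```

A vev <Sigma_1> = f times the identity breaks SU(5) x SU(5) to the diagonal SU(5). The gauge-invariant combination of the links around the closed loop of the moose,

```latex
\[
W \;=\; \frac{1}{f^2}\,\bigl\langle \Sigma_1 \Sigma_2 \bigr\rangle ,
\]
```

transforms under the surviving diagonal SU(5) as W -> g W g^dagger and is the discrete analogue of the Wilson-line monodromy: choosing the vevs so that W = diag(1,1,1,-1,-1) leaves only the commutant SU(3) x SU(2) x U(1) unbroken.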

As you can see, one can almost always imitate string theory by a sufficiently sophisticated effective field theory. It's not shocking: string theory is supposed to be, at low energies, a kind of field theory anyway. So one should not be surprised that we're able to imitate string-theoretical tricks with a sufficiently powerful quantum field theory - one that we can consider a "truncated string field theory" which is almost in the same "universality class" as string theory, in the sense that some of the purely stringy effects may be mimicked.

This is why deconstruction in quantum field theory may lead to many models that share the nice properties of models that used to rely on stringy physics. Deconstruction is, together with holography, another example of the recent reconciliation of the phenomenological and string-theoretical cultures. Some ideas in string theory are obviously good and one can apply their counterparts in field theory - but it's hard to imagine that these ideas would have been found without string theory.

Now we want to focus on the little Higgs models, keeping the paper by
  • J. Thaler, I. Yavin: The littlest Higgs in anti-de Sitter space (hep-ph/0501036)
as an example. Deconstruction may be relevant for the hierarchy problem. What's this problem? The Higgs boson is the ultimate God particle that gives masses to all other massive elementary particles (much like God herself, it has not been seen yet). By dimensional analysis, there must exist new physics at the Planck scale, 10^19 GeV (quantum gravity), or at least at some energy scale higher than those available at the current collider(s).

If you want to study physical phenomena at these higher energies, you need to allow the particles in the loops of your Feynman diagrams to have comparably high energies. For the computation of the loop corrections to the Higgs boson mass (self-energy), this leads to quadratically divergent contributions: the result of the integrals goes like
  • Lambda^2
where Lambda is the cutoff - the maximal energy we allow - which can be as large as the Planck energy. But the total Higgs mass should be much smaller, below 1 TeV, and therefore the bare mass must be very finely tuned so that it cancels the loop corrections with an amazing accuracy: terms like +-(10^19 GeV)^2 are cancelled and the remainder is as small as (115 GeV)^2. (A rough one-loop estimate is sketched right after the list below.) Who is responsible for this amazing and almost exact cancellation that looks so unnatural? There have been several dominant answers proposed:
  • God or extraterrestrial superintelligent civilizations that engineered our world
  • The anthropic principle that does not allow life to exist unless the cancellation occurs; some technicalities are similar to those in the first solution, some of them are different
  • Technicolor in which the Higgs is not an elementary particle, but is composed from techniquarks in a theory analogous to QCD (disfavored by high-precision experiments as well as the observed absence of flavor-changing neutral currents)
  • Supersymmetry that automatically guarantees cancellation between bosons and their superpartner fermions, and therefore the Higgs mass is tied to the supersymmetry breaking scale - it remains the technically and aesthetically preferred solution for most physicists
  • Randall and Sundrum who are able to create exponentially large hierarchies from the "warp factor" of anti de Sitter space (well, actually only the first of their two famous papers, the one with two branes, addresses the hierarchy problem)
  • Deconstruction
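The advertised estimate (standard one-loop numbers; the coefficient is the usual top-quark-loop contribution):

```latex
\[
\delta m_h^2 \;\simeq\; -\frac{3\,\lambda_t^2}{8\pi^2}\,\Lambda^2
\;\sim\; -10^{36}\ \mathrm{GeV}^2
\qquad \text{for } \lambda_t \approx 1,\ \Lambda \approx 10^{19}\ \mathrm{GeV},
\]
```

which has to cancel against the bare mass to one part in 10^32 or so if the physical value is supposed to be (115 GeV)^2, i.e. about 1.3 x 10^4 GeV^2.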
Back to the list: deconstruction is the last option. We should be more accurate: the models based on deconstruction, the little Higgs models, usually don't quite solve the "full" hierarchy problem - the gap between 115 GeV and 10^19 GeV. Usually they only solve the "little hierarchy problem". What is it?

Even with the nice solutions to the hierarchy problem listed above - which means supersymmetry in particular - we already know that some "reasonable" amount of tuning is taking place. If the Higgs mass were completely naturally explained by supersymmetry, the superpartners would already have been seen (directly or indirectly). The experimental bounds show that the superpartners must be a bit heavier, and therefore the "truly" natural, protected value of the Higgs mass comes out higher than what is needed for a weakly-coupled Standard Model and what is expected from experiment.

Consequently, the true value of the Higgs mass is a bit smaller anyway - like 10 percent of the "natural" value - and therefore it is slightly tuned. It's a purely philosophical question whether you worry about this modest fine-tuning. I personally don't. Numbers of order 10 or 0.1 are fine with me: 10 is less than 4 pi, for example. But let's now assume that even this minor tuning is a problem. What does it mean to solve this small hierarchy problem?
  • It means to find a quantum field theory that is well-behaved up to 10 TeV or so - perhaps even perturbatively calculable - in which the Higgs mass is naturally gonna be much smaller than 10 TeV even though all coupling constants are chosen to be "of order one"
This task is exactly solved by the little Higgs models.
These models have the property that the little Higgs boson is a pseudo-Goldstone boson associated with an approximate global symmetry. The symmetry breaking is collective: no single coupling breaks the symmetry by itself, so several couplings must participate. Quadratic divergences in the Higgs self-energy only occur at higher loops because the relevant Feynman diagrams must contain several types of vertices, while each type of vertex separately would preserve the masslessness. See the paper of Itay and Jesse for more comments and references.
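Schematically (standard little-Higgs power counting; g_1 and g_2 are my labels for two couplings that each preserve a shift symmetry of the Higgs when the other is switched off):

```latex
\[
\delta m_h^2 \;\sim\; \frac{g_1^2\, g_2^2}{16\pi^2}\, f^2 \,\log\frac{\Lambda^2}{f^2}
\qquad \text{(one loop: only logarithmically divergent)},
\]
\[
\delta m_h^2 \;\sim\; \frac{g_1^2\, g_2^2}{(16\pi^2)^2}\, \Lambda^2
\qquad \text{(the quadratic divergence is postponed to two loops)} .
\]
```

With f ~ 1 TeV and Lambda ~ 4 pi f ~ 10 TeV, both estimates are of order (100 GeV)^2, which is why no fine-tuning is needed below 10 TeV.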

Tuning to a 10% accuracy is a slightly ugly thing, but adding lots of new fields is ugly, too. So we want to look at the simplest possible little Higgs model that already has the good features of the little Higgs models - namely the littlest Higgs model. ;-) This model starts with a global SU(5) symmetry. It's broken to SO(5) in the infrared (which is geometrically identified with the position of the IR brane in the AdS context of Itay and Jesse - in analogy with the RS model but with different boundary conditions). The Goldstone bosons parameterizing the coset SU(5) / SO(5) include the Higgs boson.
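The counting is simple (standard littlest-Higgs bookkeeping; in the original littlest Higgs model of Arkani-Hamed, Cohen, Katz, and Nelson the gauged subgroup is [SU(2) x U(1)]^2, while variants like the one below gauge only SU(2) x SU(2)):

```latex
\[
\dim \frac{SU(5)}{SO(5)} \;=\; 24 - 10 \;=\; 14,
\]
\[
14 \;=\; \underbrace{3+1}_{\text{eaten: } [SU(2)\times U(1)]^2 \,\to\, SU(2)\times U(1)}
\;+\; \underbrace{4}_{\text{complex doublet } h}
\;+\; \underbrace{6}_{\text{complex triplet } \phi}\,.
\]
```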

I don't plan to copy their whole paper but nevertheless, this kind of model is expected to become
  • a conformal field theory at very high energies, above 10 TeV, which means that the holographically dual geometry approaches anti de Sitter space at infinity. Only an SU(2) x SU(2) subgroup of SU(5) is equipped with gauge bosons, but SU(5) is nevertheless a pretty decent approximate global symmetry. The diagonal SU(2) group becomes the electroweak SU(2) factor - again, a deconstruction involving two nodes, much like in Witten's SU(5) x SU(5) case discussed above
  • a theory, in the window between 1 TeV and 10 TeV, in which SU(5) is broken down to SO(5) and the Higgs lives in the coset. The Higgs also has a lot of partners etc.
  • the Standard Model below 1 TeV with a Higgs whose mass is naturally light...

Also, Jesse Thaler gave another nice and related talk on Wednesday (postdoc journal club) about "little technicolor".

A technical note: I strongly encourage the readers who are not interested in a particular article - or a class of articles - whatever their reason is - to ignore it instead of writing negative feedback in the "comments". There exists a limited feedback mechanism - the topics and formats that have their happy readers and that lead to a meaningful or at least lively discussion are most likely to be repeated in a mutated edition. However, I also have other independent criteria that decide the composition of the articles, and please be aware that the feedback "I am not interested in it" is considered unconstructive - it is absolutely obvious that the readers can't be interested in everything or appreciate everything. Thanks for your consideration. ...