Then, I describe my intuitions flowing from this discussion and other readings. The pith of them is that I suspect something is missing from the renormalization equations of quantum field theory, and that it is the absence of these terms, rather than the absence of a Higgs field and Higgs boson (which address these issues in the Standard Model), that causes the Standard Model equations to blow up at the 1 TeV to 10 TeV energy level without a Higgs field and Higgs boson at an appropriate mass scale.
I further speculate that these problems in the equations may have something to do with the omission of terms related to additional generations of fermions (perhaps even an infinite number at sufficiently high energies, if there is no physical limit on maximum energy), and/or with the omission of quantum gravity considerations. Quantum gravity might matter either by imposing quanta of distance and time, or because the otherwise Abelian equations of quantum electrodynamics (QED) are operating on a fundamentally non-Abelian space-time, whose effects become meaningful at the relevant energy scale by breaking symmetries of the current Abelian QED equations to an extent that matters relative to the other terms in those equations.
I also consider the less dramatic possibility that the infinite series approximated in quantum field probability calculations, by summing a path integral over all possible Feynman diagrams, might have an (as yet undiscovered) exact finite series reduction. Such a reduction would lack the mathematical issues of the existing method of doing the calculations and would eliminate the need for a Higgs boson and Higgs field to make the equations stable at higher energies. I briefly describe the considerations that drive that intuition.
Any of these resolutions would imply that an almost invisible tweak on the technical side of the approximations inherent in the equations of quantum mechanics used to make calculations today, one virtually impossible to detect outside of atom smashers and Big Bang conditions, could solve almost all of the ills of the Standard Model: no new classes of fermions or bosons (although possibly additional generations of them), no Higgs boson or Higgs field, and no extra dimensions or branes.
Now, I would be the first to tell you that I am no more than an educated layman, with no formal instruction in physics beyond the undergraduate math major and intermediate physics class level. All my reading of the physics literature, and of popularizations of the field for educated laymen, in the far too long period since I graduated from college probably brings me only to the level of a first-year graduate student in physics (at best) in terms of my knowledge of this part of the field.
I also don't claim to have a theory that solves all of these problems. I merely suggest that I have some intuitions about what form the answer may take, and about the gist of its heuristic meaning and motivation, at the level at which it might be communicated in blogs and science journalism if it were discovered. This is a hypothesis-generating post, positing a conjecture, not a conclusion in the form of a theory that flows from the hypothesis. Put this post at the point in the scientific process where Einstein starts noodling around, wondering what would happen if the speed of light is fixed and the equivalence principle and background independence still hold, rather than at the point at which he actually formulates special relativity and general relativity, and before data points are in that could confirm that his thought experiments were physical.
So, with no further ado, on to the discussion and the analysis.
ohwilleke says:
July 26, 2011 at 10:18 pm
People have been doing quantum field theory without a Higgs mass for a generation. Calculations have been made, predictions have been verified. Apart from indicating that the method used may lack a fully rigorous foundation, is the Higgs mass a bit like a Laplace transform or a complex value for current that only matters in an intermediate step and doesn’t matter in the final conclusion?
In other words, what are the phenomenological consequences of the Higgs boson mass having one value v. another value, apart from the fact that a few of them should be spit out in high energy collider experiments and that doesn’t happen?
The Discussion
Lawrence B. Crowell says:
July 26, 2011 at 10:30 pm
The simple fact is that something has to happen at around 1-10TeV in energy. The standard model of SU(2)xU(1) electroweak interactions has some experimental backing, at least with the massive W and Z bosons. At much higher energy than 10 TeV it is not possible to compute Feynman processes. In effect QFT becomes sick, and something must “happen.” The Higgs field is a form of potential which induces a change in the phase of the vacuum. It is similar to a statistical mechanics phase transition. So “something” does happen, but the basic Higgs theory appears to be in trouble. . . .
Ray Munroe says:
July 26, 2011 at 11:51 pm
The purpose of the Higgs boson is to supply the couplings that provide for mass, and to explain the longitudinal degrees-of-freedom (parallel to motion dgf’s required for massive particles with intrinsic spin of 1, 2, …) for the W and Z bosons (with respective masses of 80.4 and 91.2 GeV) while simultaneously breaking Electroweak symmetry and explaining the massless photon. There are enough constraining conditions here that a simple Higgs boson cannot have ‘just any mass’.
However SUSY requires two complex scalar doublets (8 degrees-of-freedom) to properly provide for fermionic mass (there is a substantial mass difference between the top and bottom quarks). Of these 8 dgf’s, 3 yield the longitudinal modes for the W and Z bosons while the other 5 dgf’s SHOULD yield physical scalar bosons – the MSSM Higgs sector with Light, Heavy, Pseudoscalar, and plus/minus Charged Higgs. We have more non-constrained degrees-of-freedom, and more open parameter space. . . .
As Lawrence pointed out, new TeV scale Physics must exist, or else Feynman diagrams and the Renormalization Group blow up. . . .
ohwilleke says:
July 27, 2011 at 9:00 pm
It is certainly obvious that a missing SM or light SUSY Higgs has all sorts of implications for which theoretical framework makes sense to explain particle mass.
My question was much more narrow. What are the phenomenological implications, for example, of a 116 GeV Higgs v. a 325 GeV Higgs, aside from the fact that we can discern a particle resonance at that mass?
Crowell seems to be saying that it doesn’t matter much in the low energy calculations but dramatically screws up the business of doing QFT calculations somewhere around 1 TeV to 10 TeV (forgive me if this is an inaccurate paraphrase) in a phase transition-like manner. Are there any other implications? What is driving the blow up at 1 TeV in the math?
Ray Munroe says:
July 28, 2011 at 1:34 pm
Hi Ohwilleke,
The Standard Model Higgs is highly enough constrained that it could not have a mass of 325 GeV, but we could certainly dream up more complex Higgs sectors that would be consistent with such a mass – for instance the less-constrained Minimal Supersymmetric Standard Model Higgs sector that I described above could have a Heavy Higgs in that mass range.
Radiative corrections (Feynman diagrams and the Renormalization Group Equations) SHOULD drive the Weak Scale mass (W, Z, Higgs? of ~100 GeV) up to the Planck Scale mass of 10^19 GeV. This is called the Hierarchy Problem, and the most generally accepted theoretical fix is Supersymmetry. This extra factor of 10^17-squared might as well be infinity when we are using perturbation theory to try to make accurate experimental predictions. Basically, Radiative corrections will consistently get more and more ‘incorrect’ around the TeV Scale, and will inevitably diverge without new physics at the TeV Scale.
I would agree with your paraphrase of Lawrence Crowell’s comment “it doesn’t matter much in the low energy calculations but dramatically screws up the business of doing QFT calculations somewhere around 1TeV to 10TeV”.
Lawrence B. Crowell says:
July 28, 2011 at 5:03 pm
Something does have to change at around this energy scale. The data so far are lackluster, but we are at about 1/1000 of the total data expected, so there is lots more to come. Luminosities will improve and in another year or two the picture should be much clearer. The 2-σ results in the 120-150 GeV Higgs mass range are not a Hindenburg event for the standard model, but they are a Lead Zeppelin. However, Led Zeppelin was always one of my favorite rock bands. It should also be pointed out that the INTEGRAL result on the polarization of light at different wavelengths from a Gamma Ray Burster indicates there is no quantum graininess to spacetime far below the Planck scale. So a vast archive of physics theory and phenomenology appears to be headed for the trash can. However, at the TeV scale of energy it is obvious that something does have to change in physics, so nature is likely to tell us something. We may find that our ideas about the Higgs are naïve in some way, or maybe that the entire foundations of physics suffer from some sort of fundamental dysfunction.
The Higgs particle is a form of Landau-Ginsburg potential theory used in phase transitions. Phase transitions are a collective phenomenon. With the Higgs field the thing which transitions is really the vacuum. This leads to two possible things to think about. Even if this transition of the vacuum takes place, we expect there to be a corresponding transition with QFT physics of single or a few particles. We might then have a problem that some people are familiar with. In a clean flask you can heat distilled water to above the boiling point with no phase change. If you then drop a grain of salt into the flask the water rather violently bumps. By doing single particle on particle scattering we may not have enough degrees of freedom to initiate the phase transition. The phase transition needs a measure of “noise” we are not providing. It might then be that the Higgs field will turn up in “messier” heavy ion experiments. The second possibility, which frankly I think might turn out to be the case, is that QFT has a problem with the vacuum. The Higgs field occurs in a large vacuum energy density, which in the light of matters such as the cosmological constant seems fictitious. It is the case QFT becomes a mess at 1-10 TeV, where the Higgs field becomes a sort of regulator which prevents divergences. However, the problem might in fact be that QFT is sick period, and the fix might involve something completely different from anything on the archives of theory.
If we are to stay at least somewhat in line with established physical theory, Technicolor is one option for a Higgs-less world. Technicolor is a sort of “transformation” of T-T-bar condensates into another form. Sugawara did this with u-d quarks as a way of constructing meson physics in the .1-1 GeV range back in the 1970s. This is really a similar idea. In the technicolor theory the “meson” is the Higgs boson. The mechanism for Higgs production most often looked for is T T-bar — > H, or equivalently H — > T T-bar, where the latter gives the decay channels one searches for as a Higgs signature. This sounds like a small change, one where the field that induces the symmetry breaking has dynamics, and the symmetry breaking process is not spontaneous.
However, Technicolor might lead to something. Suppose there is some momentum scale horizon, which is due to the end of conformal RG flow. This might also have something to do with the AdS ~ CFT, where gluon chains are dual to the quantum gravitation sector (graviton) on the AdS interior. We live on the boundary of the AdS, where there are no gravitons. We may find that attempting to exceed 10TeV in energy only gives more of the particles we know in the conformal broken phase. However, with the conformal breaking comes mass, and from mass we have classical gravity. So there may still be signatures of this sort of physics. The technicolor condensate might be a form of gluon chain dual to the graviton. If Technicolor leads to this type of physics, we may then have to search for different observables.
Bill K says:
July 28, 2011 at 11:50 pm
“In the technicolor theory the “meson” is the Higgs boson.”
Lawrence, I have a question about this. People often make the offhand comment, “Well, maybe the Higgs boson is composite.” But if you try to build one out of fermion-antifermion pairs, as they do in technicolor, you don’t get scalars, the mesons you get are technipions (pseudoscalar) and technirhos (vector). So it seems to me that a composite Higgs boson is going to be quite a different thing from an elementary one, and there would not be much chance of confusing the two.
Analysis
My intuition is that if indeed there is no Standard Model Higgs boson, and both SUSY and Technicolor are also unsupported, the problem may be in the fine details of how renormalization is done.
Feynman himself had a strong intuition that something was wrong with the renormalization process that he developed: it lacked rigor and hence might be inaccurate in some sets of circumstances. One possibility he considered was that the cutoff scale in the infinite series approximation, which simply needs to be applied consistently and produces very nearly the same result regardless of the actual scale chosen (within reason), might actually be a product of a discrete rather than continuous nature of space-time that could prevent infinities from cropping up where they otherwise could in theory. The latest experiments, tending to show that space-time is continuous well below the Planck scale, are discouraging on that front, but they don't necessarily kill that intuition as a viable theory.
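In rough symbols (my own schematic gloss, not Feynman's notation), a typical logarithmically divergent loop integral cut off at a momentum scale Λ looks like

```latex
I(\Lambda) \;=\; \int^{\Lambda}\! \frac{d^{4}k}{(2\pi)^{4}}\,
   \frac{1}{\left(k^{2} + m^{2}\right)^{2}}
   \;\sim\; \frac{1}{16\pi^{2}}\,\ln\frac{\Lambda^{2}}{m^{2}}
   \;+\; \text{finite terms}.
```

Because Λ enters only logarithmically, and its contribution is absorbed into the renormalized couplings, any sufficiently large cutoff yields essentially the same predictions; a genuinely discrete space-time would turn Λ from an arbitrary bookkeeping device into a physical quantity.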
Reading Penrose on quantum field theory, and considering the fundamental disconnect between quantum mechanics and general relativity at a theoretical level, suggests another quantum gravity motivated issue with the renormalization process at high energy levels, such as the 1 TeV to 10 TeV level that the Standard Model renormalization calculations encounter in the absence of a low mass Higgs.
Penrose makes a major thematic point that quantum mechanics is fundamentally a linearized approximation. The magic of calculus, which allows us to analyze equations at infinitesimal distances, permits us to ignore higher order terms in the infinitesimal, and in practice we toss them out of the equations we work from altogether (because the square or higher power of a very small number is much, much smaller than its lower powers, and generally can be discarded).
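In symbols, the linearization at issue is just the first step of a Taylor expansion:

```latex
f(x + \varepsilon) \;=\; f(x) + f'(x)\,\varepsilon
   + \tfrac{1}{2} f''(x)\,\varepsilon^{2} + \cdots
   \;\approx\; f(x) + f'(x)\,\varepsilon .
```

For ε = 10⁻⁶, say, ε² = 10⁻¹² is utterly negligible next to ε, so the higher order terms are dropped. The question is whether, under extreme conditions, the dropped terms stop being negligible.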
It could be that a truly rigorous statement of the renormalization process, if it were worked out from first principles, would contain a non-linear term that becomes material at energy levels on the order of 1-10 TeV, and that the omission of this non-linear term from the standard issue renormalization process, which was not worked out rigorously from first principles, is what causes the mathematical glitches that make the Standard Model equations fail without a low mass Higgs.
Perhaps, for example, if you squeeze 1 TeV to 10 TeV of energy into the extremely tiny physical space involved in electroweak processes, general relativistic corrections to the spaces and times involved from a quantum gravity generalization of GR are necessary to keep the equations from blowing up, or perhaps there is some sort of self-interaction term that should be there and is normally irrelevant but is necessary to keep the equations on track at high energies.
Or, perhaps there are one or more undiscovered generations of Standard Model fermions, with masses in the 1 TeV to 10 TeV range for the heaviest ones, whose production is prohibited by conservation of mass-energy below that energy level, and which add a set of loops to the renormalization equations that stabilize them. That, of course, merely kicks the can up to some higher energy phase state point. But perhaps there are actually a theoretically infinite number of generations, with our current three generation Standard Model as a mere practical approximation, and each new phase state point in any given finite-generation model is an indicator of the point at which the next generation of Standard Model fermions should be discovered.
The truth of the matter is that the Standard Model provides no intuition about what the masses of almost all of its particles should be without putting in physical constants by hand. The Higgs process is the way that those physical constants are implemented in a theory that is fundamentally massless in its construction. But it wouldn't be too surprising if there were some other way, one that more accurately represents the real world, to generalize the massless QFT equations in a manner more fundamental than the "afterthought" method of Professor Higgs, although probably not a terribly different one, since his method works so well up to the 1 TeV phase change region.
The current renormalization equations are analogous (more or less directly) to perturbative QCD, which involves much more difficult calculations than QED because it has a self-interaction term, is chiral, and is non-Abelian (these statements aren't entirely independent of each other).* To overcome these difficulties, the alternative approach is to use the exact, rather than perturbative, QCD equations on a lattice, which shifts the approximation issues to a place in the calculations where they do less harm.
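A toy model makes the perturbative-versus-exact distinction concrete. The sketch below is entirely my own illustration (the function names and the zero-dimensional "path integral" Z(g) = ∫ exp(-x²/2 - g·x⁴) dx are assumptions for demonstration, not anything from the discussion above). The perturbative expansion of Z(g) is an asymptotic series: its partial sums first approach the exact answer and then diverge, no matter how small the self-interaction coupling g is.

```python
import math

def exact_z(g, xmax=10.0, n_steps=20000):
    """'Exact' Z(g) = integral of exp(-x^2/2 - g*x^4) dx, by the trapezoid rule.

    The integrand decays so fast that truncating at |x| = xmax is harmless.
    """
    h = 2.0 * xmax / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        x = -xmax + i * h
        weight = 0.5 if i in (0, n_steps) else 1.0
        total += weight * math.exp(-x * x / 2.0 - g * x ** 4)
    return total * h

def double_factorial(k):
    """k!! for odd k >= -1, with the convention (-1)!! = 1."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def perturbative_z(g, order):
    """Partial sum of the series Z(g) ~ sqrt(2*pi) * sum_n (-g)^n (4n-1)!!/n!.

    Each term comes from the Gaussian moment
    integral of x^(4n) * exp(-x^2/2) dx = sqrt(2*pi) * (4n-1)!!.
    """
    s = sum((-g) ** n * double_factorial(4 * n - 1) / math.factorial(n)
            for n in range(order + 1))
    return math.sqrt(2.0 * math.pi) * s

if __name__ == "__main__":
    g = 0.01
    target = exact_z(g)
    print(f"exact Z({g}) ~ {target:.6f}")
    for order in (1, 3, 6, 20, 40):
        approx = perturbative_z(g, order)
        print(f"order {order:2d}: {approx:+.6e}  (error {approx - target:+.3e})")
    # Low orders converge toward the exact value; by order ~40 the
    # alternating series has blown up, even at this tiny coupling.
```

The exact integral is perfectly finite for every g > 0, yet the series built from it diverges: the failure lives in the approximation scheme, not the theory, which is the kind of distinction the lattice shift exploits for QCD.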
It could be that the "true" equations of QED have a similarly confounding term of some sort (perhaps even a chiral self-interaction term precisely analogous to QCD's that simply doesn't manifest in measurable quantities at lower energy levels), and that its omission, together with our failure to shift from a perturbative regime to an exact regime at the appropriate point, is what makes it look like we need a Higgs particle. Indeed, there is a certain elegance and unity in the notion that QCD breaks down and requires lattice rather than perturbative approximations at low energy levels, while QED might require the same calculation method shift at high energy levels.
A quick Google Scholar search for "non-Abelian quantum electrodynamics" shows that efforts to formulate quantum electrodynamics as a non-Abelian gauge theory were a hot area of inquiry in the 1970s (for example, Sidney D. Drell, Helen R. Quinn, Benjamin Svetitsky, and Marvin Weinstein, "Quantum electrodynamics on a lattice: A Hamiltonian variational approach to the physics of the weak-coupling region" (1979)), with a few pioneering papers as early as the 1960s. But apparently the success of renormalization in QED (and, no doubt, the lack of the kind of computational power needed to do those kinds of lattice calculations in realistic models, which didn't become affordable until the 21st century) starved the subject of interest, and publishing on the topic appears to have been more sporadic since then.
There have, however, been some papers on the subject since then, such as Stephen L. Adler, "A new embedding of quantum electrodynamics in a non-abelian gauge structure" (1989), and this paper (2005).
Another hint that this might be what is going on is the absence of CP violation in the strong force, despite the fact that the Yang-Mills equation that governs it has an expressly chiral term. Even more surprisingly, the equations of general relativity, which also show no CP violation, are expressly chiral when written in the Ashtekar formulation. The weak force, where we do see CP violation, is of course also chiral, as is the combined electroweak unification.
Given all of this, would it be so remarkable to find that QED, too, has an omitted chiral and non-Abelian term that screws up the perturbative approximation at high energies? After all, one of the well known features of general relativity, repeatedly confirmed and never contradicted, is that the geometry of space-time is fundamentally non-Abelian. Even if there is nothing about QED itself that is non-Abelian, it is really operating in a non-Abelian geometry rather than a Minkowski background (which incorporates special relativity but not the non-Abelian geometries of general relativity). And in a non-Abelian geometry, at some point, the CP reversibility and symmetries that QED relies upon to provide highly accurate results with relatively simple quantum mechanical equations may break down to an extent material enough to screw things up.
An effort to explore that line of reasoning (in a variation on the Standard Model with a Higgs boson) can be found at X. Calmet, B. Jurčo, P. Schupp, J. Wess and M. Wohlgenannt, "The standard model on non-commutative space-time" (2002):
We consider the standard model on a non-commutative space and expand the action in the non-commutativity parameter θ. No new particles are introduced . . . . We derive the leading order action. At zeroth order the action coincides with the ordinary standard model. At leading order in θ we find new vertices which are absent in the standard model on commutative space-time. The most striking features are couplings between quarks, gluons and electroweak bosons and many new vertices in the charged and neutral currents. We find that parity is violated in non-commutative QCD. The Higgs mechanism can be applied. QED is not deformed in the minimal version of the NCSM to the order considered.
(Calmet published a follow up paper along the same lines in 2007.)
Similarly, one can consider D.J. Toms, "Quantum gravitational contributions to quantum electrodynamics" (2010), arguing that "quantum gravity corrections to quantum electrodynamics have a quadratic energy dependence that result in the reduction of the electric charge at high energies, a result known as asymptotic freedom."
Indeed, it wouldn't be too surprising to me if all CP violations in all of quantum physics are ultimately at some fundamental level a necessary phenomenological implication of the non-Abelian geometry of the space-time upon which the processes operate.
Another similar Higgs boson free formulation that flows from somewhat different intuitions but ends up in more or less the same place is Theodore J. Allen, Mark J. Bowick and Amitabha Lahiri, "Topological Mass Generation in 3+1 Dimensions" (1991).
Another Less Dramatic Possibility
I have seen others speculate that the "brute force" Feynman infinite series that is summed to calculate quantum fields in the Standard Model, which involves truly vast numbers of terms that must be added together to get the final result, yet can be matched with great accuracy by dramatically simpler classical electromagnetic laws in the vast majority of circumstances because almost all of those terms cancel out, may be susceptible to being written in a more compact, non-infinite form that omits the vast number of intermediate terms that cancel out in the end.
Finding a finite series that is exactly equal to an infinite series is a non-trivial and non-obvious undertaking of creative genius, and it has been done in only a modest number of cases in the entire history of mathematics.
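Two elementary examples give the flavor of such a reduction: the geometric series collapses to a closed form, and a telescoping sum collapses because its intermediate terms cancel in adjacent pairs, much as the vast majority of Feynman diagram contributions cancel against each other:

```latex
\sum_{n=0}^{\infty} x^{n} \;=\; \frac{1}{1-x} \quad (|x| < 1),
\qquad
\sum_{n=1}^{N} \frac{1}{n(n+1)}
   \;=\; \sum_{n=1}^{N} \left( \frac{1}{n} - \frac{1}{n+1} \right)
   \;=\; 1 - \frac{1}{N+1}.
```

The hoped-for reduction would do for the Feynman expansion what the right-hand sides do here: express the same quantity with the canceling middle terms already removed.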
But, if one found the right finite series of terms that was exactly equal to the infinite series of the Feynman diagram based approach to calculating the quantum field probability distribution, one might be able to remove the intermediate terms that rely on the Higgs boson/Higgs field mass entirely, allowing calculations to be done at arbitrarily high energy levels without a hitch.
Implications
Suppose that my intuition is right and that there is a term missing from the electroweak renormalization equations that would prevent them from blowing up in the absence of a Higgs boson. If this is true, it might be the case that there are no fundamental particles left to be discovered other than possible higher generation variations of Standard Model fermions (and perhaps right handed neutrinos), contrary to the new particles that Technicolor, Supersymmetry, Supergravity, and String Theory generically predict.
Suppose too that there are only four space-time dimensions, and that the branes and Kaluza-Klein dimensions suggested by String Theory (a.k.a. M-theory) are not part of the laws of nature, something that experiment has never given us a reason to doubt.
In that case, the only thing missing from the Standard Model is some means to reduce the number of parameters needed to assign masses to particles, fill in the CKM/PMNS matrix, and set the coupling constants of the three Standard Model forces. And, since each of these things can be empirically determined to the degree of precision necessary for any real world application, those missing pieces are essentially aesthetic considerations in a theory that is actually a complete description of the fundamental laws of nature at any less reductionist level.
Put another way, we could be as close as two or three equations, and one or two missing terms in the existing QED equations, away from the Standard Model being a true Grand Unified Theory, and realistically, it might take only a couple more equations to get quantum gravity and a theory of everything as well. Indeed, quantum gravity, properly formulated, might well be the key to filling in the remaining blanks that prevent the Standard Model from being not only a Grand Unified Theory, but also a Theory of Everything.
* Self-interacting means that the boson that carries the force also interacts via the force (e.g., photons have no electrical charge and hence aren't self-interacting, while gluons carry a color charge and thus have strong force interactions with each other as well as with fermions); chiral means that left handed and right handed particles (in the intrinsic spin sense) are treated separately in the mathematics; non-Abelian means that the way the force works is path dependent, because the equations involved don't obey the commutative law of ordinary algebra. The distinctions between them are explored at some depth in this 2006 paper.
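In the language of this footnote, the Abelian/non-Abelian distinction is that the generator of QED's U(1) gauge group commutes with itself, while the generators of the weak SU(2) and strong SU(3) groups do not:

```latex
\text{U(1):}\quad [\,T,\,T\,] = 0,
\qquad
\text{SU(N):}\quad [\,T^{a},\,T^{b}\,] = i f^{abc}\, T^{c} \neq 0 ,
```

and the non-vanishing structure constants f^{abc} are precisely what make parallel transport, and hence the action of the force, path dependent.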