Alexandre Deur is a physicist at Jefferson Lab whom I have previously praised for his work on a graviton self-interaction based alternative to dark matter, analogous to the effects of gluon self-interaction in QCD (the part of the Standard Model of Particle Physics describing strong force interactions between quarks), which is his primary research area.
But he's no slouch at his day job either, as a recent paper puts him on the path to being a major innovator in the field in his own right, rather than one cog among the hundreds or thousands of physicists collaborating on Big Science experiments (where he has most of his publications).
Deur and several colleagues posted a groundbreaking preprint this week that analytically (i.e. with equations) links the fundamental scale constant of perturbative QCD (applied mostly to high energy collisions, a.k.a. ultraviolet QCD) to the fundamental scale constant of non-perturbative QCD, which is used mostly to study the properties of quarks confined in hadrons that are not interacting at such high energies, a.k.a. infrared QCD. This is a big deal because, in practice, ultraviolet QCD research is much more expensive than infrared QCD research. The link allows comparatively cheap low energy research (often costing millions or tens of millions of dollars for a related set of experiments, and including lots of data that has already been collected) to boost expensive high energy research (costing billions or tens of billions of dollars for each sustained experimental program, each of which is forging into unknown territory about which we have no prior data).
Both the strength of high energy strong force interactions, described by perturbative QCD, and the lion's share of the masses of hadrons comprised of lighter quarks (which in turn provide the lion's share of the mass of the ordinary matter, as opposed to dark matter and dark energy, in the universe) ultimately flow from the strong force coupling constant and the strength of the strong force color charge of quarks (which is identical in magnitude for all quarks).
But, in practice, physicists use the physical constant "lambda_s" to do many of the strong force calculations in perturbative QCD, and use the physical constant "kappa" to do many of the strong force calculations involved in establishing hadron masses from first principles in non-perturbative QCD. Both of these approaches are, in their respective domains of applicability, phenomenological approximations of the exact equations of QCD, which are believed to be known but are too mathematically intractable to calculate with directly.
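Schematically, each constant sets the scale of the coupling in its own regime. A hedged sketch, using the standard one-loop perturbative form and the Gaussian infrared form used in the light-front holographic line of work (the specific forms here are my illustration of the framework, not equations quoted from the new paper):

```latex
% Ultraviolet (perturbative) regime: lambda_s sets where the
% logarithmically running coupling blows up.
\alpha_s^{\mathrm{pQCD}}(Q^2) \simeq \frac{4\pi}{\beta_0 \ln\!\left(Q^2/\Lambda_s^2\right)},
\qquad \beta_0 = 11 - \tfrac{2}{3} n_f .

% Infrared (non-perturbative) regime: kappa sets the confinement
% scale, e.g. via the light-front holographic coupling
\alpha_s^{\mathrm{LFH}}(Q^2) = \pi\, e^{-Q^2/4\kappa^2},
% with kappa fixed by hadron spectroscopy (in that framework,
% m_\rho = \sqrt{2}\,\kappa).
```

Matching the two forms and their derivatives at a transition scale between the regimes is the kind of procedure that yields an analytic relation between lambda_s and kappa.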
This breakthrough means that experiments from perturbative QCD can now be used to provide the key QCD physical constants used to calculate hadron masses, while hadron mass measurements, in turn, can be used to determine the key QCD physical constant for making calculations in perturbative QCD. Previously, the two constants had to be measured separately in practice, even though everyone knew that there must be some analytical relationship between them.
The results are largely consistent with experimental data from both regimes (although one calculation shows a two sigma tension between the theoretical prediction determined in this manner and the experimental data), and the uncertainties are predominantly due to issues involved in numerically approximating equations with an infinite number of terms.
The actual accuracy with which the physical constants involved are known, roughly 3%-6%, isn't terribly impressive. But, in the long run, there is a clear path to reducing the theoretical uncertainty by simply devoting more powerful supercomputers to the problem, so that the calculations can include far, far more terms than anyone has been able to manage to date with limited resources. And, since the light hadron masses are known far more accurately than the extremely high energy measurements of perturbative QCD, this connection could ultimately use constants determined in the precisely measured low energy QCD regime to dramatically improve the accuracy of calculations in the high energy perturbative QCD regime that applies, for example, to the high energy particle accelerator collisions conducted at the Large Hadron Collider (LHC) by the ATLAS and CMS experiments.
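To see how the scale constant feeds into high energy predictions, here is a minimal sketch using the standard one-loop running coupling formula. The numbers are purely illustrative (real extractions use four- or five-loop running with flavor threshold matching, and this is not the paper's calculation):

```python
import math

def alpha_s_one_loop(Q, Lambda_s, n_f=5):
    """One-loop running strong coupling at momentum scale Q (GeV).
    Illustrative only: real determinations use higher-loop running."""
    beta0 = 11 - 2 * n_f / 3
    return 4 * math.pi / (beta0 * math.log(Q**2 / Lambda_s**2))

# Illustrative values (not from the paper): see how a 3% shift in the
# scale constant moves the coupling at an LHC-relevant scale.
Q = 1000.0    # GeV, a typical hard-scattering scale
Lam = 0.21    # GeV, an illustrative five-flavor scale constant
a0 = alpha_s_one_loop(Q, Lam)
a1 = alpha_s_one_loop(Q, Lam * 1.03)
print(f"alpha_s({Q:.0f} GeV) = {a0:.4f}; a 3% shift in Lambda_s "
      f"moves it by {100 * abs(a1 - a0) / a0:.2f}%")
```

The logarithm damps the sensitivity, which is part of why a precise low energy determination of the scale constant can translate into a usefully precise coupling at collider energies.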
This paper is also a critical intermediate step in linking both the perturbative and non-perturbative QCD calculations done in the real world to the exact equations of QCD and to the fundamental physical constant of the Standard Model from which they are both in principle derived, the strong force coupling constant, thereby helping to make it possible to calculate real world quantities from first principles using the actual exact equations of QCD. The equations and constants of perturbative QCD and non-perturbative QCD are both informed by knowledge of what the exact equations of QCD look like, but ultimately they are phenomenological approximations of the exact equations, rather than being rigorously and exactly derived mathematically from them.
Popular accounts of QCD and the Standard Model often read as if this is a solved problem. But while we believe that we know the exact equations of QCD, the quark masses, and the strong force coupling constant with sufficient precision to make these calculations in principle, in fact no one has yet managed to do it in practice without major approximations.
The most precisely measured hadron masses (the proton, neutron and pion) are known to at least six significant digits, and even the least precisely determined ones (heavy hadrons containing bottom quarks) are known to six significant digits. But the strong force coupling constant is known only to about 0.5% precision. Theoretically, though, it should be possible, using only the light quark masses known to their current accuracy, a handful of the most precisely measured hadron masses, and the known exact equations of QCD, to calculate the strong force coupling constant to roughly 200 times the accuracy with which it is known today, without conducting another experiment, if one has sufficient computational capacity. This paper is a major intermediate step in that direction.
Moreover, a significant amount of the uncertainty in the experimentally determined quark masses of the Standard Model is due to the uncertainty in the strong force coupling constant, together with the accuracy lost in numerical approximations of the true equations of QCD. So, the improved measurement of the strong force coupling constant facilitated by this research has the potential to greatly improve the accuracy with which six other Standard Model fundamental constants are known, using existing experimental data. And knowing both the strong force coupling constant and the quark masses with more precision, in turn, makes it possible to greatly improve the statistical power of experiments done to determine the four CKM mixing matrix parameters. This is because uncertainties regarding the Standard Model background predictions from QCD greatly reduce the statistical power of experiments measuring other Standard Model constants.
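The mechanism is ordinary error propagation: the uncertainty on an extracted quark mass combines an experimental component with a component inherited from the coupling. A toy quadrature sketch, with made-up sensitivity coefficients chosen only to show the mechanism (not values from any real extraction):

```python
import math

def propagated_mass_uncertainty(rel_exp, rel_alpha, sensitivity):
    """Toy quadrature combination of the relative uncertainty on an
    extracted quark mass: an experimental piece plus an alpha_s piece,
    scaled by d(ln m)/d(ln alpha_s) ~ sensitivity.
    All coefficients here are illustrative, not from the paper."""
    return math.sqrt(rel_exp**2 + (sensitivity * rel_alpha)**2)

# Illustrative: alpha_s known to 0.5% today vs. a hypothetical 0.1%.
before = propagated_mass_uncertainty(rel_exp=0.003, rel_alpha=0.005, sensitivity=2.0)
after = propagated_mass_uncertainty(rel_exp=0.003, rel_alpha=0.001, sensitivity=2.0)
print(f"relative quark mass uncertainty: {before:.4f} -> {after:.4f}")
```

When the alpha_s term dominates the quadrature sum, shrinking it passes almost the full improvement through to the extracted mass.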
Finally, greater precision in all of the physical constants going into the QCD calculations used to determine Standard Model backgrounds in high energy particle accelerator experiments, in turn, greatly improves the statistical power of experiments setting out to identify beyond the Standard Model physics.
For example, the primary decay path of the Standard Model Higgs boson is to bottom quark-antiquark pairs. But lots of other Standard Model processes also produce bottom quark-antiquark pairs. The Higgs boson signal in the bottom quark decay channel is measured by using perturbative QCD to estimate the Standard Model bottom quark backgrounds from other processes, which have quite significant error bars of their own, and then looking at the total number of observed bottom quark decays to estimate how many of them came from Higgs bosons. But since quantum mechanics is stochastic, the number of Higgs boson bottom quark decays expected, even with perfect backgrounds, is a Gaussian distribution around a most likely number for any given Higgs boson mass; the expected number of Higgs bosons produced is subject to further statistical variation; and the backgrounds come with error bars (only some of which reflect irreducible statistical variation) that are large compared to the expected signal. So it is hard to see the Higgs boson in its main decay channel, even when there are lots of Higgs boson bottom quark decays out there to be seen, and even at fairly low Tevatron energies. But if you could dramatically reduce the non-statistical errors in the Standard Model background prediction, it would be much easier to distinguish the signal of bottom quark decays from Higgs bosons from other Standard Model backgrounds, even with Tevatron data, which is far inferior to the LHC in energy scale and total number of events observed.
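The effect of background systematics on a counting search can be sketched with the common approximate significance formula S / sqrt(B + sigma_B^2), where sigma_B is the systematic uncertainty on the background prediction. The event counts below are invented for illustration, not real Tevatron or LHC yields:

```python
import math

def significance(S, B, rel_bkg_syst):
    """Approximate significance of a counting experiment:
    Poisson fluctuation of the background (sqrt(B)) combined in
    quadrature with a systematic uncertainty on the background
    prediction. A common back-of-the-envelope formula, not a full
    profile-likelihood treatment."""
    sigma_B = rel_bkg_syst * B
    return S / math.sqrt(B + sigma_B**2)

S, B = 100.0, 10000.0              # hypothetical b-bbar signal/background counts
loose = significance(S, B, 0.05)   # 5% background systematic
tight = significance(S, B, 0.005)  # 0.5% background systematic
print(f"{loose:.2f} sigma -> {tight:.2f} sigma")
```

With a large background, the systematic term sigma_B = rel * B quickly dominates the sqrt(B) statistical term, which is why sharpening the QCD prediction can buy a large gain in significance without a single additional collision.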
Going forward, reducing error bar noise in Standard Model backgrounds in the current LHC experiments would significantly improve the ability of ATLAS and CMS to confirm that the Higgs boson seen at the LHC at 125 GeV or so has all of the decays expected, at the frequencies expected, for a Standard Model Higgs boson of that mass, or, in the alternative, to see statistically significant differences from the Standard Model Higgs boson expectation, even if those differences are quite subtle.
Similarly, if new physics manifests at some characteristic energy scale "lambda_BSM", the energy scale up to which that new physics can be probed experimentally could be extended by an order of magnitude or two, if we were able to leverage our existing precision knowledge of light hadron masses into more precise values of the Standard Model fundamental constants of QCD, using improved mathematical approximations of the exact equations of QCD that papers like this one are bringing closer to reality.
If BSM physics exists at some scale between the electroweak scale of O(100 GeV) and the GUT scale of O(10^16 GeV), we might be able to find it with current experiments at scales of O(1,000-10,000 GeV), using the LHC with currently available technology and current perturbative QCD calculation accuracies. But the kind of improvements that may be possible in QCD with much more accurately known QCD physical constants could stretch our experimental reach to revealing or ruling out new physics up to scales of O(100,000-10,000,000 GeV) (i.e. 100-10,000 TeV).
These estimates may be a bit optimistic (because some Standard Model backgrounds have inherent statistical variation that is large relative to the expected signal even if the background is calculated perfectly, requiring experimenters to look at signals with little or no Standard Model background instead). But testing new physics up to the several hundred TeV scale, with technology not involving any major breakthroughs beyond what is present at the LHC today, is not unthinkable if we can make better progress on the math of QCD, which may be achievable to a significant extent with nothing more than a big investment in the supercomputers available to QCD physicists (without any advances in supercomputing technology itself from current levels).
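One common way such reach estimates are framed: if heavy new physics enters as a dimension-6 effective operator, its deviation from the Standard Model scales like (E/lambda_BSM)^2, so the reach in lambda_BSM grows with the inverse square root of the smallest detectable fractional deviation. A back-of-the-envelope sketch under that assumption (the starting reach and precision numbers are invented, and this scaling is my framing, not the paper's):

```python
import math

def reach_scale(Lambda_now, delta_now, delta_improved):
    """If a dimension-6 effective operator shifts an observable by
    ~ (E/Lambda)^2, the reach in Lambda scales as the inverse square
    root of the smallest detectable fractional deviation delta.
    A rough scaling argument, not a real sensitivity study."""
    return Lambda_now * math.sqrt(delta_now / delta_improved)

# Illustrative: a current reach of ~10 TeV with 5% background precision,
# versus the same experiment with 100x better background precision.
print(f"{reach_scale(10.0, 0.05, 0.0005):.0f} TeV")
```

On this crude scaling, each factor of 100 in precision buys a factor of 10 in energy reach, which is the arithmetic behind hoping to stretch O(10 TeV) sensitivity toward O(100 TeV) and beyond.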
This paper is an important step toward making these advances a function of our willingness to spend the money to let our scientists make steady, all but inevitable progress, as opposed to a gamble on whether physics geniuses can devise conceptual breakthroughs, if those breakthroughs are even out there waiting to be discovered, which at some point they might not be.
Right now, most new physics scenarios have firm minimum energy scales, but their maximum energy scales are far in excess of the minimum energy scales at which they can be ruled out. But if the parameter space in which new physics can be sought expands enough, it may become possible to confirm at particular values, or rule out, the entire parameter space of many BSM theories, simply by improving the statistical power of present day LHC technology experiments through more precise knowledge of the Standard Model fundamental constants used to predict the expected Standard Model backgrounds.
For example, the non-detection of proton decay and of neutrinoless double beta decay places an energy scale ceiling on many kinds of supersymmetry (SUSY) theories. But this ceiling is much higher than the minimum energy scales at which new physics from SUSY theories can be excluded using the LHC and other available experimental data. Increased experimental power from a more precise knowledge of the fundamental constants of QCD, however, might make it possible to close that gap for many kinds of SUSY theories. And since string theory almost universally assumes that its low energy approximation resembles fairly generic versions of SUSY, this could even make it possible to experimentally rule out immense swaths of the string theory landscape.
Lest I overhype too much, I do need to provide some perspective. Physicists have known for half a century that what Deur and his colleagues did was possible in principle. We already knew that this was a problem with a correct solution out there waiting to be found. But the fact that it took half a century to get from knowing that the answer to this intermediate result existed to actually discovering it is also a testament to how non-trivial an effort this very lucid paper really is, even if it seems deceptively simple. The authors have not only reached an important intermediate result, but have also artfully made it look much more elementary and obvious than it actually was (much of the really hard work is hidden in results from QCD methods, such as the light front method, which are described only by bottom line results and citations in this paper).
This paper works within the framework of "light-front holographic QCD". It seems to explain bound states and confinement through a combination of an extremely simplified inter-parton potential and a constriction in the geometry of the extra dimension, which (since the extra dimension corresponds to energy/length scales) somehow translates into an enforcement of confinement. I'm not clear on the details.
But "kappa" is, so far, a purely phenomenological parameter specific to this kind of bottom-up holographic model - it describes the warping of the fifth dimension and the resulting deviation from scale invariance. The LF school claims to connect it with a rather obscure non-holographic ansatz (dAFF), but I see handwaving.
Also, I don't see other people jumping on the bandwagon. This paper is a year old and has gotten very little attention in the literature. And for years, LF hQCD seems to have been the same few people writing all the papers. If this is a breakthrough, it hasn't been recognized as such by the authors' peers.
Therefore, I take a cautious attitude. I note the quantitative relationship that has been proposed between lambda_s and certain hadron masses (especially that of the rho meson?), but I bear in mind that it might have some other explanation.