Showing posts with label predictions. Show all posts

Monday, August 11, 2025

Tau Lepton g-2 And Electric Dipole Moment

The substance of a new paper considering new physics that could arise in the tau lepton anomalous magnetic moment (g-2) and electric dipole moment (EDM) is purely ill-motivated speculation and doesn't deserve any discussion here. 

But the introduction to the paper (PDF) conveniently recaps the current state of experimental measurements of these properties (that are established to high precision in electrons and muons), and the Standard Model predictions for these quantities (which have been confirmed for electrons and muons):

The SM contributions to the electron and muon g−2 are precisely calculated and compared with experiments. On the other hand, a nonzero EDM requires CP violation which arises predominantly from the phase in the Cabibbo-Kobayashi-Maskawa (CKM) matrix within the SM, resulting in extremely suppressed predictions for charged lepton EDMs. Therefore, the discovery of a nonzero EDM indicates the existence of physics beyond the SM. 

In fact, precise measurements of the electron EDM [ed. the absolute value of which is currently experimentally limited to not more than 4.1 * 10^-30 ecm] already put stringent constraints on a wide range of new physics models [ed. especially supersymmetry]. While the current upper limit on the muon EDM [ed. the absolute value of which is currently experimentally limited to not more than 1 * 10^-19 ecm] hardly gives a constraint by itself, the projected experiments will reach the sensitivity to explore new physics at the electroweak scale. 

The dipole moments of the tau lepton are currently much less constrained than those of the electron and muon due to the tau’s short lifetime, but they will provide a valuable window into new physics effects that scale with lepton mass. 

The SM contribution to the tau g−2 is precisely calculated and found to be a(SM)(τ) = (117717.1 ± 3.9) × 10^−8 [ed. i.e. 0.001177171(39)], including updates of the hadronic vacuum polarization contributions. On the other hand, its measurements at the Large Hadron Collider (LHC) are still not as precise: 

ATLAS: −0.057 < a(τ) < 0.024 (95% C.L.), (1.4) 

CMS: −0.0042 < a(τ) < 0.0062 (95% C.L.). (1.5) 

The SM contribution to the tau EDM is tiny. At the quark level, the leading contribution is given at the four-loop level, d(SM)(τ) = O(10^−47) ecm, while the hadron level long distance effect enhances the contribution to d(SM)(τ) ≃ −7.32 × 10^−38 ecm. 

The current experimental upper limits are:

−1.85 × 10^−17 ecm < Re(dτ) < 0.61 × 10^−17 ecm (95% C.L.), (1.6)

−1.03 ×10^−17 ecm < Im(dτ) < 0.23 × 10^−17 ecm (95% C.L.), (1.7) 

|dτ| < 2.9 × 10^−17 ecm (95% C.L.).  (1.8) 

Note that the complex form of the limits is due to the off-shell photon in the e+e− → τ+τ− process. 

There is also an indirect limit on |dτ| from the electron EDM constraint via the three-loop light-by-light mechanism, which is

 |dτ| < 1.1 ×10^−18 ecm for |de| < 1.1 × 10^−29 ecm, 

 |dτ| < 4.1 ×10^−19 ecm for |de| < 4.1 × 10^−30 ecm (1.9) 

providing a stronger constraint compared with the direct bounds. 

Although measurements of the tau g − 2 and EDM remain experimentally challenging, we can expect improved sensitivities for these observables by ongoing and projected experiments such as the Belle II experiment, Beijing Electron-Positron Collider (BEPCII) and Circular Electron-Positron Collider (CEPC). They will reach the sensitivities of |aτ| ∼ 10^−5 and |dτ| ∼ 10^−19 ecm.

Thus, the ongoing and projected experiments, which improve upon the status quo by about two orders of magnitude each, should show a tau g-2 consistent with 0.00118(1) and a tau EDM that is experimentally indistinguishable from zero.

A different result would suggest new physics, which there is no good reason to suspect, or serious systematic errors in the experiments.

Previous discussion of the tau lepton's properties can be found in posts at this blog on May 28, 2024 and May 23, 2023.

The Koide's rule predicted value for the tau lepton mass is 1776.96894(7) MeV.

The Particle Data Group's world average of the tau lepton mass is 1776.93 ± 0.09 MeV, which is a precision of roughly one part in 20,000. 

The experimentally measured mass of the tau lepton is consistent with the Koide's rule prediction, made in 1981, at the 0.4 sigma level, even though the relevant masses were known much less precisely in 1981, and the experimental value has grown closer to the predicted value over time. 
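Koide's rule states that (me + mμ + mτ) divided by (√me + √mμ + √mτ)² equals exactly 2/3. As a minimal sketch, assuming current PDG central values for the electron and muon masses as inputs, solving that relation for the tau mass reproduces the prediction quoted above:

```python
import math

# PDG central values in MeV for the electron and muon masses (assumed inputs)
m_e = 0.51099895
m_mu = 105.6583755

# Koide's rule: m_e + m_mu + m_tau = (2/3) * (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2
# Treating s = sqrt(m_tau) as the unknown gives a quadratic in s:
#   s^2 - 4*(a + b)*s + 3*(a^2 + b^2) - 2*(a + b)^2 = 0, with a = sqrt(m_e), b = sqrt(m_mu)
a, b = math.sqrt(m_e), math.sqrt(m_mu)
disc = 6 * (a + b) ** 2 - 3 * (a ** 2 + b ** 2)
s = 2 * (a + b) + math.sqrt(disc)  # the larger root is the physical tau mass
m_tau_koide = s ** 2

print(f"Koide prediction for the tau mass: {m_tau_koide:.5f} MeV")  # ~1776.97 MeV

# Check the Koide ratio using the measured tau mass (PDG world average)
m_tau_meas = 1776.93
Q = (m_e + m_mu + m_tau_meas) / (a + b + math.sqrt(m_tau_meas)) ** 2
print(f"Koide ratio with measured masses: {Q:.6f} (rule predicts 2/3 = 0.666667)")
```

The quadratic has two roots because the square roots of the masses can enter with either sign; the larger root corresponds to the observed tau mass.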

There are dozens of tau lepton decay modes with a branching fraction of more than 1%, in addition to many more less common decay modes. About 85% of those decay modes involve a decay to an electron or muon plus one or more neutral particles (such as neutrinos, neutral pions, and/or neutral kaons).

Thursday, July 31, 2025

Predictions About Planet Nine

A new preprint sums up some of the expected properties of a hypothetical Planet Nine, which has been inferred from the orbits of other solar system objects.
Evidence suggests the existence of a large planet in the outer Solar System, Planet Nine, with a predicted mass of 6.6 +2.6 / -1.7 Earth masses (Brown et al., 2024). Based on mass radius composition models, planet formation theory, and confirmed exoplanets with low mass and radius uncertainty and equilibrium temperature less than 600 K, we determine the most likely composition for Planet Nine is a mini-Neptune with a radius in the range 2.0 to 2.6 Earth radii and a H-He envelope fraction in the range of 0.6 percent to 3.5 percent by mass. Using albedo estimates for a mini-Neptune extrapolated from V-band data for the Solar System's giant planets gives albedo values for Planet Nine in the range of 0.47 to 0.33. Using the most likely orbit and aphelion estimates from the Planet Nine Reference Population 3.0, we estimate Planet Nine's absolute magnitude in the range of -6.1 to -5.2 and apparent magnitude in the range of +21.9 to +22.7. Finally, we estimate that, if the hypothetical Planet Nine exists and is detected by upcoming surveys, it will have a resolvable disk using some higher resolution world class telescopes.
David G. Russell, Terry L. White, "The Radius, Composition, Albedo, and Absolute Magnitude of Planet Nine Based on Exoplanets with Te(q) less than 600 K and the Planet Nine Reference Population 3.0" arXiv:2507.22297 (July 30, 2025).

Thursday, May 23, 2024

The Past, Present, And Future Of Modern Physics

The Past

Modern physics started a little more than a century ago in the early 1900s. 

The state of physics immediately before that point, often called "classical physics," is still taught in universities as a classical approximation of the more fundamental physical laws found in modern physics.

Classical physics consists of Newtonian mechanics, Newtonian gravity, and calculus, dating to the late 1600s; Maxwell's equations of electromagnetism and classical optics; the laws of thermodynamics and their derivation from statistical mechanics; fluid mechanics; and the proton-neutron-electron model of the atom exemplified in the Periodic Table of the Elements.

Modern physics starts with a scientific understanding of radioactive decay, special relativity, general relativity, and quantum mechanics, which was developed between about 1896 and 1930. It also includes nuclear physics and neutrino physics, which began in earnest in the 1930s, and the Standard Model of Particle Physics and hadron physics, which were formulated in the modern sense in the early 1970s, with the three generations of fermions established by 1975. Various beyond the Standard Model extensions of that model mostly date to the 1980s or later, even though the first hints of some of them were considered earlier. The top quark was experimentally confirmed in 1995, and the fact that neutrinos are massive and oscillate was confirmed by 1998. All particles predicted by the Standard Model of Particle Physics had been discovered by 2012, when the Higgs boson was discovered. Subsequent research has confirmed that the particle discovered is a good match to the Standard Model Higgs boson.

Some of the notable developments in Standard Model physics since 2012 have been the experimental exclusion of many extensions of the Standard Model over ever increasing ranges of energies and parameter spaces, the discovery of many hadrons predicted in the Standard Model, including tetraquarks and pentaquarks, together with measurements of their properties, progress in calculating parton distribution functions from first principles, and refinement of our measurements of the couple of dozen experimentally determined physical constants of the Standard Model.

Modern physics also includes modern astrophysics and cosmology, including the Big Bang theory and the concept of black holes, which arose together with general relativity. Dark matter phenomena were first observed, and neutron stars were first proposed, in the 1930s. Neutron stars were first observed in 1967, and the first observation of a black hole was in 1971. Cosmological inflation hypotheses date to 1980. The possibility of dark energy phenomena was part of general relativity from the start, but it wasn't confirmed until 1998. The LambdaCDM model of cosmology was proposed in the mid-1990s and became the paradigm when dark energy was observationally confirmed in 1998. Quantum gravity hypotheses and hypotheses to explain baryogenesis and leptogenesis have seen serious development mostly since 1980, although early hypotheses along these lines have been around since the inception of modern physics.

The Present

As of 2024, modern physics is dominated by the "core theories" of special relativity, general relativity, and the Standard Model of Particle Physics. None of these have been clearly contradicted by observational evidence, after more than a century of looking in the case of relativity and half a century in the case of the Standard Model, which has been refined since its original formulation only to expand it to exactly three generations of fermions and to attempt to integrate massive neutrinos and neutrino oscillation.

On one hand, there are many areas of wide consensus in modern physics. Special relativity has been exhaustively confirmed experimentally and observationally. The predictions of General Relativity, including the Big Bang, black holes, strong field behavior in contexts like the dynamics of massive binary systems and compact objects, and solar system effects like the precession of Mercury's orbit and frame dragging, have been observationally confirmed to high precision. Half a century of high energy physics experiments and cosmic ray and neutron star observations have never definitively contradicted the Standard Model (apart from expanding it to exactly three generations of fermions and adding massive neutrinos), and its myriad predictions have been confirmed to exquisite precision.

There are a variety of open questions and matters of ongoing investigation in modern physics, however.

Two important phenomena predicted by the Standard Model, sphaleron interactions and glueballs (i.e. hadrons made entirely of gluons), have not yet been observed. We still don't know the absolute masses of the neutrino mass eigenstates, or even whether they have a "normal" or "inverted" mass hierarchy, the octant of one of the neutrino oscillation parameters, more than the vaguest estimate of the CP violating phase among the seven potentially experimentally observable neutrino-related physical constants, or the mechanism by which neutrino mass arises. We've seen patterns in the experimentally measured parameters of the Standard Model but have no solid theory to explain their values. While we can, for the most part, predict the "spectrum" of pseudoscalar and vector mesons and of three quark baryons with their properties, we are still struggling to explain the observed spectrum of scalar and axial vector mesons, and we are in the early stages of working out the spectrum of possible four and five quark hadrons, including both true four and five quark bound systems and "hadron molecules." We can't even really predict, a priori, why the handful of light pseudoscalar and vector mesons are blends of valence quark combinations rather than individual particles, although we can explain their structures with post-dictions. And while in principle parton distribution functions can be calculated from first principles in the Standard Model, we've only managed to actually do that, somewhat crudely and in only a few special cases, in the last few years.

Investigation of just what triggers wave function collapse, how quantum entanglement works, the extent to which virtual particles and quantum tunneling must obey special and general relativity, and the correct "interpretation" of quantum mechanics is ongoing. At an engineering level, we are in the very early days of developing quantum computers.

We have a good working model of the residual strong force that binds nucleons in an atomic nucleus, but have not derived all of nuclear physics, or even the residual strong force itself, from the first principles of quantum chromodynamics (QCD), the Standard Model theory of the strong force. While we understand the principles behind sustainable nuclear fusion power generation, we don't quite have the engineering realization of it worked out. We have mapped out the periodic table of the elements and isotopes all of the way to quite ephemeral elements and isotopes that are only created synthetically, but we can't confidently predict whether or not there are as yet undiscovered elements in an island of stability. We are on the brink of mastering condensed matter physics issues like how to create high temperature superconductors and the structure and equation of state of neutron stars.

The search for deviations from the Standard Model in high energy physics has been relentless, particularly since the mid-1990s, by which time all Standard Model particles except the top quark, the tau neutrino, and the Higgs boson had been discovered. Mostly this has been a tale of crushed dreams. Experiments have ruled out huge portions of the parameter space of supersymmetry, multiple Higgs doublet theories, technicolor, leptoquarks, preon theories, fourth or greater generation fermion theories, all manner of grand unified theories of particle physics, proton decay, neutron-antineutron oscillation, flavor changing neutral currents at the tree level, neutrinoless double beta decay or other affirmative evidence for Majorana neutrinos, lepton flavor violation, lepton unitarity violations, non-standard neutrino interactions, and sterile neutrinos. No dark matter candidates have been observed experimentally. We've even observationally ruled out changes in many of the Standard Model's fundamental constants over a period looking back many billions of years.

There have been some statistically significant, but tiny, discrepancies between the experimentally measured value of the anomalous magnetic moment of the muon and the value predicted by the Standard Model, although this increasingly looks like a function of erroneous calculations of the predicted value rather than evidence of new physics. While most experimental anomalies suggesting new particles have been ruled out, there have been some very weak experimental hints of a 17 MeV particle (X17), whose alleged experimental hints may have other explanations, and of an electromagnetically neutral second scalar Higgs boson at about 95 GeV; neither has been fully ruled out, but neither is likely to amount to anything. There have also been weak experimental hints, to some extent mutually inconsistent between different experiments, of one or two possible "sterile neutrinos" (which could also be dark matter candidates). The anomalous magnetic moment of the electron measured experimentally isn't a perfect fit to its theoretically predicted value, although it is very close. The experimentally measured couplings of the Standard Model Higgs boson aren't a perfect fit to the theoretical predictions either, although they are reasonably close to within experimental uncertainties. While early hints of lepton universality violations were ruled out when the experimental data improved, there are still some minor lepton flavor ratio anomalies out there. Still, there is every reason to think that these anomalies won't last and that the particle content of the Standard Model is complete, with the possible exceptions of one or more particles involved in the mechanism that generates neutrino masses, a massless or nearly massless graviton, and one or more dark matter candidates with properties that make them almost impossible to detect in a particle collider.

Thus, while there are a few details and implementations left to work out, the Standard Model, high energy physics, and nuclear physics are close to being complete, and we don't expect any new discoveries in this area to identify anything deeply wrong with what we know now, though if we are really lucky, more research and greater precision might grant us a deeper understanding of why the Standard Model has the properties that it does.

The situation in astrophysics and cosmology is much less settled, and while the handful of particle colliders on the planet provide us with a trickle of new high energy physics data in some very narrow extensions of existing high energy physics parameter space, a host of new "telescopes" in the broad sense of the term is providing torrents of new astronomy observations that our existing modern physics theories struggle to explain. The leading paradigm in astrophysics and cosmology, the LambdaCDM "Standard Model of Cosmology" is a dead man walking, mostly for lack of a consensus on what to replace it with.

"Telescopes" aren't just visual light telescopes on the surface of the Earth anymore, and even those are vastly improved. Modern "telescopes" see the entire range of the electromagnetic spectrum, from ultra-low frequency radio waves to ultra-high energy gamma rays, not just from Earth but also from space, with extreme resolutions. We have neutrino "telescopes". We have "cosmic ray detectors" which observe non-photon particles that rain down on Earth from space. We have gravitational wave "telescopes" that have made many important discoveries, including the observation of many intermediate size black holes. In some cases we can do "multi-messenger astronomy", which combines signals from multiple kinds of telescopes coming from a single event (something that has placed strong bounds on both the speed of neutrinos and the speed of gravitational waves, both of which have been as predicted by special and general relativity).

We have overwhelming observational evidence of dark matter phenomena. But we have no dark matter particle candidate and no gravity based explanation for dark matter phenomena that fits all of the data and has secured wide acceptance, although we have an ever filling cemetery of ruled out explanations for dark matter, like MACHOs, primordial black holes, supersymmetric WIMPs, and cold dark matter particles that form NFW halos. Attempts to explain dark matter through the gravitomagnetic effects of general relativity in galaxies have also failed. And there are dozens of distinct types of observations contrary to the predictions of LambdaCDM. Fairly sophisticated comparisons of predictions with observations strongly disfavor warm dark matter candidates and thermal relic GeV mass scale self-interacting dark matter candidates. QCD axions and QCD bound exotic hadrons are also largely ruled out. Toy model MOND can't be the exclusive explanation for dark matter phenomena, and neither can some of its relativistic generalizations like TeVeS, although MOND is a very simple theory that explains almost all dark matter phenomena observed up to the galaxy scale with a single new parameter, and, with some mild generalizations, has explained the cosmic microwave background (CMB) observations and the impossible early galaxies problem. But while MOND may be on the right track, it doesn't quite get many features of galaxy clusters right (although a similar scaling law can work there) and has trouble with some out of disk plane features of spiral galaxies. The data on wide binaries, which would distinguish between a variety of dark matter particle theories and gravitationally based theories to explain dark matter phenomena, are currently inconclusive and have received contradictory interpretations.

A variety of gravitational explanations of dark matter phenomena are promising, however, as are dark matter particle theories with extremely low mass dark matter candidates, like axion-like particles, which have wave-like properties.

We have inconsistent measurements of the Hubble constant, which is one of two observations that contribute to our estimate of the amount of dark energy in a simple LambdaCDM model where dark energy is simply a specific constant value of the cosmological constant. This suggests that the cosmological constant, lambda, may not actually be constant.

The allowed parameter space for cosmological inflation continues to narrow with the non-detection of primordial gravitational waves at ever greater precisions. There are strong, but not conclusive suggestions that the universe may be anisotropic and inhomogeneous even at the largest scales, contrary to the prevailing cosmology paradigm.

There is no widely accepted, satisfactory answer to the questions of baryogenesis and leptogenesis, although improvements in the precision and energy scale reach of the Standard Model and improved astronomy observations increasingly push any meaningful baryogenesis and leptogenesis closer to the Big Bang, with any significant changes in the aggregate baryon number and lepton number of the universe pretty much confined to the first microsecond after the Big Bang at this point.

None of the unknowns in astrophysics and cosmology have any real practical engineering implications. But the Overton window of possibilities that are being seriously investigated in these fields is vastly broader than in high energy physics or quantum mechanics.

The Future

The good news is that the torrent of astronomy data that is pouring in from many independent research groups is providing us with the data we need to more definitively rule out or confirm various competing hypotheses in astrophysics and cosmology. We have a lot of shiny new tools, both in the form of many different kinds of vastly improved "telescopes" and in the form of profoundly improved computational power and artificial intelligence tools to analyze this vast amount of new data. These should allow scientific advances in these fields to be driven by observations rather than by group think or the sociology of the discipline, even though it may take the deaths of a generation or two of astrophysicists for the field to fully free itself from outdated ideas that are no longer supported by the data.

Even if we don't reach a consensus around a new observationally supported and theoretically consistent paradigm in my lifetime of two or three more decades, I am very hopeful that we will do so within the lives of my children and of my grandchildren to be.

I have strong suspicions about what the new paradigm will look like.

Dark matter phenomena will most likely be explained by a gravitational explanation with strong similarities to the work of Alexandre Deur, whether or not his precise attempt to derive his conclusions from non-perturbative general relativity effects holds up. The consensus will probably also be that quantum gravity does exist, although it may take quite a while to prove that with observations.

Also, like Deur, and unlike the vast majority of other explanations, I think that the ultimate explanation for dark energy phenomena will conserve mass-energy both globally and locally and will not be a true physical constant. The cosmological constant of general relativity will probably ultimately be abandoned even though it is a reasonable first order approximation of what we observe. This will also mean that the aggregate mass-energy of the universe at any given time will be finite and conserved; it will be a non-zero boundary condition at t=0 of the Big Bang.

Thus, I suspect that a century from now, the consensus will be that there are no dark matter particles and there is no dark energy substance, and that these were just products of theoretical misconceptions akin to the epicycles used to explain celestial mechanics before it was discovered that the motions of celestial bodies could be explained, to all precision available at the time, with Newtonian gravity.

It will take time, but I expect that cosmological inflation will ultimately be ruled out.

I expect that new advances in astrophysics will rule out faster than light travel or information transfer and wormholes.

I don't think that we will have a conclusive explanation of baryogenesis and leptogenesis, even within my grandchildren's lives, but I do think that a mirror universe hypothesis, in which an antimatter universe exists before the Big Bang with time running in the opposite direction, will become one of the leading explanations. This is because I think that we will make significantly more progress towards ruling out new high energy physics and any possibility of matter creation in periods closer and closer to the Big Bang. This will make a non-zero net baryon number and lepton number impossible to generate in increasingly short time frames immediately after, although not at, the Big Bang.

I think that there are even odds that we will discover that sphaleron interactions are actually physically impossible in any physically possible scenario, despite their theoretical possibility in the Standard Model, possibly due to a maximum local mass-energy density.

In the area of high energy physics, I don't expect any new particles to be discovered (apart from possible evidence for the existence of a massless graviton), and I expect the Standard Model to stand the test of time. I do expect some refinements of our theories of wave function collapse and quantum interpretations. I don't expect new high energy physics at higher energies. I don't expect that we will find new heavy elements in islands of stability.

I expect that neutrinos will be found to have Dirac rather than Majorana mass that is somehow made possible without new particles, despite the current lack of a clear path to do so. Most likely, our understanding of neutrino mass and of the Higgs field Yukawas of the Standard Model particles will involve a dynamical balancing of the Higgs vev between particles that can transform into each other via W boson interactions, as opposed to giving such a central role to the coupling of these particles to a Higgs boson. Self-interactions with the fields to which the various fundamental particles couple will also play a role in generating their masses.

I expect that, as our measurements of the fundamental particle masses get more precise, the sum of the squares of the masses of the fundamental particles will indeed be found to equal the square of the Higgs vev. Vast amounts of ink will be spilled, once this is confirmed, over why fundamental boson masses give rise to slightly more than half of the square of the Higgs vev, while fundamental fermions give rise to slightly less than half of it. This could reduce the number of free, independently experimentally measured mass and coupling constant parameters of the Standard Model from eighteen (fifteen masses and three coupling constants), minus one for the electroweak relationship between the W and Z boson masses and coupling constants, to eight or fewer (three coupling constants, the W boson mass, two quark masses, and two lepton masses).
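As a rough check of where this conjecture stands today, here is a minimal sketch using approximate current PDG central values for the masses and vev (assumed here; the uncertainties, especially in the top quark mass, dominate the comparison):

```python
# Fundamental particle masses in GeV (approximate PDG central values, assumed here)
boson_masses = {"Higgs": 125.25, "Z": 91.1876, "W": 80.377}
fermion_masses = {"top": 172.69, "bottom": 4.18, "charm": 1.27,
                  "tau": 1.77686, "muon": 0.1056584, "electron": 0.000511,
                  "strange": 0.0934, "down": 0.00467, "up": 0.00216}

vev = 246.22  # Higgs vacuum expectation value in GeV

boson_sq = sum(m ** 2 for m in boson_masses.values())
fermion_sq = sum(m ** 2 for m in fermion_masses.values())
total_sq = boson_sq + fermion_sq

print(f"sum of squared masses / vev^2 = {total_sq / vev ** 2:.4f}")   # close to 1
print(f"boson share of vev^2:   {boson_sq / vev ** 2:.4f}")   # slightly more than 0.5
print(f"fermion share of vev^2: {fermion_sq / vev ** 2:.4f}")  # slightly less than 0.5
```

With these central values the total comes out within about half a percent of vev², with the bosons contributing slightly more than half and the fermions slightly less, as described above; whether the small shortfall survives more precise top quark mass measurements is exactly the open question.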

I expect that we will have a good first principles explanation of the full spectrum of observed hadrons and that we will have first principles calculations of all of their parton distribution functions.

I expect that we will be able to work out the residual strong force that binds nucleons exactly from first principles, and that we will be able to calculate with quantum computers that there are no undiscovered islands of stability in heavy elements or isotopes.

There are even odds that we will discover a deeper explanation for the values of the parameters in the CKM and PMNS matrices by sometime in my grandchildren's lives, and better than even odds that we will be able to reduce the number of free parameters in those two matrices to fewer than the current eight. I wouldn't be surprised if the CP violating parameters of the CKM and PMNS matrices were found to have an independent source from the other six parameters of those matrices.

I don't think that we will develop a Lie group Grand Unified Theory or Theory of Everything, or that string theory will ever work, although I do think that we are more likely than not to develop a theory of quantum gravity that can be integrated into the Standard Model more cleanly. Indeed, gravitationally based dark matter phenomena may turn out to be a quantum gravity effect that is actually absent from classical general relativity.

Monday, February 19, 2024

The XENONnT Dark Matter Experiment

The XENONnT dark matter experiment will have about ten times more sensitivity to WIMPs (a type of dark matter candidate) than the previous XENON1T. 

I fully expect that it will produce a null result without detecting any traces of dark matter that don't turn out to be false positives in the end.

Friday, March 17, 2023

When Will We Hit Major Neutrinoless Double Beta Decay Thresholds?

We don't have an absolute neutrino mass measurement. 

But due to the observed oscillations between neutrino mass eigenstates, we know that the sum of the three neutrino masses can't be less than about 60 meV if neutrinos have a "normal mass ordering" and can't be less than about 100 meV if neutrinos have an "inverted mass ordering."

The sum of the three neutrino masses could be greater than these minimums. In the quasi-degenerate limit, where the lightest neutrino mass is large compared to the mass splittings, each meV added to the lightest mass adds about 3 meV to the sum; closer to the minimum, the sum grows more slowly, because the two heavier masses are dominated by the mass splittings.

So, for example, if the lightest of the three neutrino masses is 10 meV, then the sum of the three neutrino masses is about 74 meV in a normal mass ordering and about 111 meV in an inverted mass ordering.
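These sums can be computed exactly from the measured mass splittings (the simple linear rule is only exact in the quasi-degenerate limit, so the exact sums come out somewhat smaller when the lightest mass is small). A minimal sketch, assuming approximate global-fit values for the splittings:

```python
import math

# Approximate neutrino mass-squared splittings in eV^2 (assumed global-fit values)
DM21_SQ = 7.5e-5   # "solar" splitting
DM31_SQ = 2.5e-3   # "atmospheric" splitting (magnitude)

def mass_sum_normal(m_lightest):
    """Sum of the three masses (eV) for a normal ordering (m1 lightest)."""
    m1 = m_lightest
    m2 = math.sqrt(m1 ** 2 + DM21_SQ)
    m3 = math.sqrt(m1 ** 2 + DM31_SQ)
    return m1 + m2 + m3

def mass_sum_inverted(m_lightest):
    """Sum of the three masses (eV) for an inverted ordering (m3 lightest)."""
    m3 = m_lightest
    m2 = math.sqrt(m3 ** 2 + DM31_SQ)
    m1 = math.sqrt(m3 ** 2 + DM31_SQ - DM21_SQ)
    return m1 + m2 + m3

print(f"NO, lightest = 0:      {1000 * mass_sum_normal(0.0):.0f} meV")    # ~59 meV
print(f"IO, lightest = 0:      {1000 * mass_sum_inverted(0.0):.0f} meV")  # ~99 meV
print(f"NO, lightest = 10 meV: {1000 * mass_sum_normal(0.010):.0f} meV")   # ~74 meV
print(f"IO, lightest = 10 meV: {1000 * mass_sum_inverted(0.010):.0f} meV")  # ~111 meV
```

The zero-lightest-mass cases reproduce the roughly 60 meV and 100 meV floors quoted above.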

If neutrinos have Majorana mass, the Majorana neutrino masses are related to the rate at which neutrinoless double beta decay can occur. The greater the Majorana mass, the more frequent neutrinoless double beta decay should be.

Right now, the non-detection of neutrinoless double beta decay so far puts a cap on the maximum Majorana mass of the neutrinos that is larger than the minimum mass of the inverted neutrino mass ordering. But, it is starting to get close.

As of July of 2022, the non-detection of neutrinoless double beta decay in a state of the art experiment established, at a 90% confidence level, a minimum half-life for the process of 8.3 * 10^25 years.

As illustrated by the chart below (from this source), an inverted mass hierarchy for neutrinos is ruled out at a half life of about 10^29 years (an improvement by a factor of 1200 in the excluded neutrinoless double beta decay half life over the current state of the art measurement). 

Majorana mass of any kind becomes problematic, even in a normal mass hierarchy, at a half-life of about 10^32 to 10^33 years (an improvement by a factor of 1.2 million to 12 million over the current state of the art). 
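The improvement factors quoted above follow directly from the ratios of the target half-life sensitivities to the current 8.3 * 10^25 year limit:

```python
current_limit = 8.3e25  # years; current 90% C.L. half-life limit

# Target half-life sensitivities (years) and what reaching them would accomplish
targets = {
    "rule out inverted hierarchy": 1e29,
    "problematic for any Majorana mass (low end)": 1e32,
    "problematic for any Majorana mass (high end)": 1e33,
}

for label, half_life in targets.items():
    print(f"{label}: needs a {half_life / current_limit:,.0f}x improvement")
```

The first ratio is about 1,200, and the latter two are about 1.2 million and 12 million, matching the factors in the text.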

We aren't there yet, but the likelihood that scientists will have experiments that will either detect neutrinoless double beta decay or rule out Majorana mass neutrinos, even if they have a normal Majorana neutrino mass hierarchy, in perhaps 15-30 years, is quite good. 

Ruling out an inverted Majorana neutrino mass hierarchy based upon the non-observation of neutrinoless double beta decay is something that can probably be achieved in half that amount of time, perhaps as soon as the year 2030.

If anything, estimates of how long it will take to achieve this goal are more likely to be overestimates than underestimates.

Cosmology bounds on the neutrino mass, and hints from neutrino oscillation studies, both favor a normal neutrino mass hierarchy over an inverted one. Since the expected neutrinoless double beta decay rate is far lower in a normal hierarchy, the discovery of Majorana mass neutrinos, if they do exist, is probably not just around the corner.

For what it is worth, there seem to be deep problems with both of the two main kinds of neutrino mass that have been proposed: Dirac mass and Majorana mass.

The biggest problem with Majorana mass seems to be that we can clearly distinguish neutrinos from antineutrinos experimentally, which should not be possible if they are really the same particle.

But, according to the same analysis, the biggest problem with Dirac mass is that it would seem to imply the existence of sterile neutrino counterparts to the three "active" neutrinos and three "active" antineutrinos, even though there is really no evidence to suggest that they exist at all.

I've asked about, and not received a compelling answer to, the question of why these two theoretical proposals are the only possible ways for neutrinos to acquire rest mass. 

But, it seems to me that if your scientific analysis rules out (or at least strongly disfavors) both of the theoretical choices you have considered to explain something, then it is likely that both possibilities are wrong and that the true answer is some other approach that is not yet among the choices.

It is certainly notable that neutrino masses are on the order of a billion (10^9) times smaller than their counterpart charged lepton masses. 

Could that be somehow related to the ratio of the strength of the weak force that dominates the dynamics of neutrinos to the strength of the electromagnetic force that dominates the dynamics of electrons, muons, and tau leptons, which is about 10^11?

What if every kind of Standard Model fermion had its electroweak self-interaction as one source of rest mass, and the Higgs mechanism as an additional source of rest mass in all charged Standard Model fermions but not in neutrinos?

Could neutrinos have a third kind of mass component, perhaps derived from particle self-interactions, that might exist in every particle, but which would only be measurable in neutrinos?

It is hard to think sensibly about that possibility because it is such untried ground.

Thursday, January 12, 2023

Calculating The Proton and Neutron Electric Dipole Moment

The Standard Model assumes, and there are multiple theoretical arguments to support, that there is no charge parity (CP) violation (which is equivalent to dependence upon the direction of time) in the strong force. 

There is an obvious place in the Standard Model equations of the strong force to insert a CP violation parameter, however, which is called the θ term. The θ term is zero if there is no CP violation in the strong force. 

But, while the theory assumes that the θ term is zero, experiments can never directly rule out a very small non-zero value for the θ term. 

A non-zero value for the θ term would have important qualitative implications, especially as a possible source of matter-antimatter asymmetry in the universe. The Standard Model and available observational evidence strongly support that this matter-antimatter asymmetry is an "initial condition" of the universe, contrary to the naive expectation that there should be equal amounts of matter and antimatter at the moment of the Big Bang. As a new paper discussed below explains:
Symmetries and their breaking are essential topics in modern physics, among which the discrete symmetries C (charge conjugation), P (parity), and T (time reversal) are of special importance. This is partially because the violation of the combined C and P symmetries is one of the three Sakharov conditions that are necessary to give rise to the baryon asymmetry of the universe (BAU). However, despite the great success of the standard model (SM), the weak baryogenesis mechanism from the CP violation within the SM contributes negligibly (∼ 16 orders of magnitude smaller than the observed BAU). This poses a hint that, besides the possible θ term in QCD, there could exist beyond-standard-model (BSM) sources of CP violation and thus the study of CP violation plays an important role in the efforts of searching for BSM physics.
One of the main ways to probe the magnitude of the θ term is to measure the electric dipole moments of the proton and the neutron, which are measurements that can be made with exquisite precision. No non-zero electric dipole moment has been observed for either the proton or the neutron. But strict upper bounds on this electromagnetic property of the nucleons have been established and those bounds can be incrementally improved over time.

A new paper uses Lattice quantum chromodynamics (QCD) methods to calculate from first principles the relationship between the observable quantities of the proton and neutron electric dipole moments, and the theoretical Standard Model parameter which is the θ term. 

The paper concludes that the electric dipole moment of the neutron is −0.00148(35)θ¯ e⋅fm and that the electric dipole moment of the proton is 0.0038(14)θ¯ e⋅fm. 

Thus, the θ term is about 676 times the magnitude of the neutron electric dipole moment and about 263 times the magnitude of the proton electric dipole moment, although both are zero if the θ term is zero (apart from a weak force contribution about five orders of magnitude smaller than the current experimental limit). This also implies that the ratio of the electric dipole moment of the proton to the electric dipole moment of the neutron should be about -2.6.
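These ratios follow directly from the paper's central values; a quick check using the numbers quoted above:

```python
d_n = -0.00148  # neutron EDM per unit theta-bar, in e*fm (central value)
d_p = 0.0038    # proton EDM per unit theta-bar, in e*fm (central value)

theta_per_neutron_edm = abs(1 / d_n)  # theta-bar is ~676x the neutron EDM
theta_per_proton_edm = abs(1 / d_p)   # ~263x the proton EDM
edm_ratio = d_p / d_n                 # predicted proton/neutron EDM ratio, ~-2.6

print(theta_per_neutron_edm, theta_per_proton_edm, edm_ratio)
```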

The body text of the paper explains that:
The first experimental upper limit on the neutron EDM (nEDM) was given in 1957  as ∼ 10^−20 e·cm. During the past 60 years of experiments, this upper limit has been improved by 6 orders of magnitude. The most recent experimental result of the nEDM is 0.0(1.1)(0.2) × 10^−26 e·cm, which is still around 5 orders of magnitude larger than the contribution that can be offered by the weak CP violating phase. Currently, several experiments are aiming at improving the limit down to 10^−28 e·cm in the next ∼10 years. 
. . .

By using the most recent experimental upper limit of dn, our results indicate that θ¯ < 10^−10. 

This limit is equivalent to less than about 1.2 x 10^-13 e·fm, which implies that the magnitude of the θ term must be less than about 10^-10, a constraint that will improve by about two orders of magnitude in the next decade.
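The unit conversion behind this bound is straightforward; this sketch uses the combined experimental uncertainty quoted above:

```python
d_n_limit_ecm = 1.2e-26               # experimental |d_n| bound, in e*cm
d_n_limit_efm = d_n_limit_ecm * 1e13  # 1 cm = 10^13 fm, so ~1.2e-13 e*fm

coeff = 0.00148                       # |d_n| = 0.00148 * theta-bar, in e*fm
theta_bar_limit = d_n_limit_efm / coeff

print(theta_bar_limit)                # ~8e-11, i.e. theta-bar < 10^-10
```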

This is too small by more than ten orders of magnitude to make a meaningful dent in the Sakharov conditions. The θ term would have to be roughly O(1) after running to extremely high energy scales to explain the matter-antimatter asymmetry of the universe. But the strong force becomes weaker, not stronger, at higher energy scales, so CP violation in the strong force should be less important at those scales, not more important.

Personally, I'm confident that the θ term is exactly zero, and that there is no new CP violating physics at higher energies, at least up to about the GUT scale, that explains the matter-antimatter asymmetry of the universe. 

Neither the zero value of the θ term, nor the existence of matter-antimatter asymmetry in the universe at an infinitesimal time after the Big Bang, is a "problem" in physics to be solved. They are simply descriptive features of our reality.

The paper and its abstract are as follows:
We calculate the nucleon electric dipole moment (EDM) from the θ term with overlap fermions on three domain wall lattices with different sea pion masses at lattice spacing 0.11 fm. Due to the chiral symmetry conserved by the overlap fermions, we have well defined topological charge and chiral limit for the EDM. Thus, the chiral extrapolation can be carried out reliably at nonzero lattice spacings. We use three to four different partially quenched valence pion masses for each sea pion mass and find that the EDM dependence on the valence and sea pion masses behaves oppositely, which can be described by partially quenched chiral perturbation theory. With the help of the cluster decomposition error reduction (CDER) technique, we determine the neutron and proton EDM at the physical pion mass to be dn=−0.00148(14)(31)θ¯ e⋅fm and dp=0.0038(11)(8)θ¯ e⋅fm. This work is a clear demonstration of the advantages of using chiral fermions in the nucleon EDM calculation and paves the road to future precise studies of the strong CP violation effects.
Jian Liang, et al., "Nucleon Electric Dipole Moment from the θ Term with Lattice Chiral Fermions" arXiv:2301.04331 (January 11, 2023).

Wednesday, January 11, 2023

More Doubt Cast On Reactor Neutrino Anomalies

Yet another apparent experimentally observed discrepancy from the Standard Model bites the dust, although this particular one had already been in doubt anyway. I am personally confident that there are no beyond the Standard Model sterile neutrinos.

Discrepancies between reactor neutrino experiments and theory may be the result of errors in the analysis of electron data that form the basis of the neutrino predictions.
From here. A synopsis of the Letter in the journal publishing it explains that:
Several experiments have been set up outside nuclear reactors to record escaping antineutrinos. The data generally agrees with theory, but at certain energies, the antineutrino flux is 6–10% above or below predictions. These so-called reactor antineutrino anomalies have excited the neutrino community, as they could be signatures of a hypothetical sterile neutrino (see Viewpoint: Getting to the Bottom of an Antineutrino Anomaly). But a new analysis by Alain Letourneau from the French Atomic Energy Commission (CEA-Saclay) and colleagues has shown that the discrepancies may come from experimental biases in associated electron measurements.

The source of reactor antineutrinos is beta decay, which occurs in a wide variety of nuclei (more than 800 species in a typical fission reactor). To predict the antineutrino flux, researchers have typically used previously recorded data on electrons, which are also produced in the same beta decays. This traditional method takes the observed electron spectra from nuclei, such as uranium-235 and plutonium-239, and converts them into predicted antineutrino spectra. But Letourneau and colleagues have found reason to doubt the electron measurements.

The team calculated antineutrino spectra—as well as the corresponding electron spectra—using a fundamental theory of beta decay. This method works for some nuclei, but not all, so the researchers plugged the gaps using a phenomenological model. They were able to treat all 800-plus reactor beta decays, finding “bumps” in the antineutrino flux that agree with observations. Similar features are predicted for electron spectra, but they don’t show up in the data. The results suggest that an experimental bias in electron observations causes the reactor antineutrino anomalies. To confirm this hypothesis, the researchers call for new precision measurements of the fission electrons.
The Letter and its abstract are as follows:
We investigate the possible origins of the reactor antineutrino anomalies in norm and shape within the framework of a summation model where β− transitions are simulated by a phenomenological model of Gamow-Teller decay strength. The general trends of divergence from the Huber-Mueller model on the antineutrino side can be reproduced in both norm and shape. From the exact electron-antineutrino correspondence of the summation model, we predict similar distortions in the electron spectra, suggesting that biases on the reference spectra of fission electrons could be the cause of the anomalies.

A science article aimed at the general public ends its story on this paper with this quote from a neutrino physicist:
“We still have other anomalies in neutrino physics that we cannot explain,” she says. But taking all neutrino studies together, Huber says, the evidence for the sterile neutrino isn’t very strong: “It’s not a good global fit to the data.”

The preprint of this Letter was previously blogged in this post.

Monday, December 5, 2022

The State Of Neutrino Physics

Scientists are making steady progress in quantifying the properties of the neutrinos. There are seven experiments probing the neutrino oscillation parameters, another seeking to directly measure the absolute value of the lightest neutrino mass, and multiple astronomy collaborations using indirect means to measure neutrino properties, including those like IceCube that measure incoming neutrinos from space directly. As a result of these experiments, we are steadily closing the gaps in what we know. 

The prospects look very good for a precisely known full set of Standard Model neutrino parameters over the next ten to fifteen years, with significant improvements even in the next five years.

In absolute terms, the neutrino masses are already the most precisely known Standard Model parameters, and three of the four mixing parameters are also known with decent precision. But the relative precision with which we know these parameters is the lowest in the Standard Model, although this state of affairs may not last much longer.

A new Snowmass 2021 paper has a nice six color chart showing how much progress has been made since 1998-2002, when the fact that neutrinos have mass was first discovered.

What we know and don't know is recapped in the executive summary from the Snowmass 2021 paper, the balance of which reviews the various experimental efforts that are underway to answer those questions.

The discovery of neutrino oscillations in 1998 and 2002 added at least seven new parameters to our model of particle physics, and oscillation experiments can probe six of them. 
To date, three of those parameters are fairly well measured: the reactor mixing angle θ(13), the solar mixing angle θ(12), and the solar mass splitting ∆m^2(21), although there is only one good measurement of the last parameter. Of the remaining three oscillation parameters, we have some information on two of them: we know the absolute value of the atmospheric mass splitting ∆m^2(31) fairly well, but we do not know its sign, and we know that the atmospheric mixing angle θ(23) is close to maximal ∼ 45º, but we do not know how close, nor on which side of maximal it is. Finally, the sixth parameter is the complex phase δ related to charge-parity (CP) violation, which is largely unconstrained. 
Determining these remaining three unknowns, the sign of ∆m^2(31), the octant of θ(23), and the value of the complex phase δ, is of the utmost priority for particle physics. In addition to the absolute neutrino mass scale which can be probed with cosmological data sets, they represent the only known unknown parameters in our picture of particle physics. 
It is our job as physicists to determine the parameters of our model. The values of these parameters have important implications in many other areas of particle physics and cosmology, as well as providing insights into the flavor puzzle. To measure these parameters, a mature experimental program is underway with some experiments running now and others under construction. In the current generation we have NOvA, T2K, and Super-Kamiokande (SK) which each have some sensitivity to the three remaining unknowns, but are unlikely to get to the required statistical thresholds. Next generation experiments, notably DUNE and Hyper-Kamiokande (HK) are expected to get to the desired thresholds to answer all three oscillation unknowns. Additional important oscillation results will come from JUNO, IceCube, and KM3NeT. This broad experimental program reflects the fact that there are many inter-connected parameters in the three-flavor oscillation picture that need to be simultaneously disentangled and independently confirmed to ensure that we truly understand these parameters. 
To achieve these ambitious goals, DUNE and HK will need to become the most sophisticated neutrino experiments constructed to date. Each requires extremely powerful neutrino beams, as many measurements are statistics limited. Each will require a very sophisticated near detector facility to measure that beam, as well as to constrain neutrino interactions and detector modeling uncertainties, which are notoriously difficult in the energy ranges needed for oscillations. To augment the near detectors, additional measurements and theory work are crucial to understand the interactions properly, see NF06. Finally, large highly sophisticated far detectors are required to be able to reconstruct the events in a large enough volume to accumulate enough statistics. DUNE will use liquid argon time projection chamber (LArTPC) technology most recently demonstrated with MicroBooNE. LArTPCs provide unparalleled event reconstruction capabilities and can be scaled to large enough size to accumulate the necessary statistics. HK will expand upon the success of SK’s large water Cherenkov tank and build a new larger tank using improved photosensor technology. 
It is fully expected that with the combination of experiments described above, a clear picture of three-flavor neutrino oscillations should emerge, or, if there is new physics in neutrino oscillations (see NF02 and NF03), that should fall into stark contrast in coming years. . . .

The body text continues to make some useful observations: 

The Jarlskog invariant, which usefully quantifies the “amount” of CP violation, for the quark matrix is J(CKM) = +3 × 10^−4 J(max) while for leptons it could be much larger, |J(PMNS)| < 0.34 J(max) where J(max) ≡ 1/(6√3) ≈ 0.096. Understanding this mystery of CP violation is a top priority in particle physics. . . .
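The quoted numbers can be checked directly. This is a quick sketch; J(CKM) and the J(PMNS) cap are taken as the fractions of J(max) given in the quote:

```python
from math import sqrt

j_max = 1 / (6 * sqrt(3))      # maximum possible Jarlskog invariant, ~0.096
j_ckm = 3e-4 * j_max           # quark sector value quoted above, ~2.9e-5
j_pmns_cap = 0.34 * j_max      # current upper bound on the leptonic value

print(j_max)
print(j_ckm)
print(j_pmns_cap / j_ckm)      # leptonic CP violation could be ~1000x larger
```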

There are various other means of probing the six oscillation parameters. In particular, measurements of the absolute mass scale can provide information about the mass ordering in some cases. 

Kinematic end-point experiments such as KATRIN, ECHo, HOLMES, and Project-8 are sensitive to m(β) = √(Σi |U(ei)|² m(i)²), which is greater than or equal to 10 meV in the NO and greater than or equal to 50 meV in the IO, although these experiments may not have sensitivity to the mass ordering before oscillation experiments do. 

Cosmological measurements of the cosmic microwave background temperature and polarization information, baryon acoustic oscillations, and local distance ladder measurements lead to an estimate that Σi m(i) < 90 meV at 90% CL, which mildly disfavors the inverted ordering over the normal ordering, since Σi m(i) is greater than or equal to 60 meV in the NO and greater than or equal to 110 meV in the IO; although these results depend on one's choice of prior for the absolute neutrino mass scale. Significant improvements are expected to reach the σ(Σ m(ν)) ∼ 0.04 eV level with upcoming data from DESI and VRO, see the CF7 report, which should be sufficient to test the results of local oscillation data in the early universe at high significance, depending on the true values. 

• If lepton number is violated via an effective operator related to neutrino mass, then we expect neutrinoless double beta decay to occur at a rate proportional to m(ββ) = |Σi U(ei)² m(i)|, which could be as low as zero in the NO but is expected to be > 1 meV in the IO; thus a detection below 1 meV would imply the mass ordering is normal. The latest data from KamLAND-Zen disfavors some fraction of the inverted hierarchy for favorable nuclear matrix element calculations, which are fairly uncertain. 

• Finally, a measurement of the cosmic neutrino background is sensitive, in principle, to a combination of the absolute mass scale, whether neutrinos are Majorana or Dirac, and the mass ordering. 

Among these non-oscillation measurements, only the cosmological sum of the neutrino masses is likely to be sensitive to the atmospheric mass ordering within the next decade. 

Consensus is starting to build around (1) a normal mass ordering, (2) a second quadrant (i.e. greater than 45º) value for θ(23), (3) a near minimal value of the lowest absolute neutrino mass, and (4) a complex phase δ for neutrino oscillation CP violation that is non-zero and is close to, but not exactly, maximal.

Neutrinoless double beta decay also remains elusive, if it exists at all. It is undoubtedly already constrained to be very rare at a minimum, and the constraints will continue to grow more strict in the coming years (unless it is finally discovered).

Tuesday, August 23, 2022

A Hypothetical Pre-Big Bang Universe And More Conjectures

There are at least three plausible solutions to the question of why we live in a matter dominated universe when almost all processes experimentally observed conserve the number of matter particles minus the number of antimatter particles. 

One is that the initial Big Bang conditions were matter dominated (as our post-Big Bang universe almost surely was a mere fraction of a second after the Big Bang). There is no scientific requirement that the universe had any particular initial conditions.

A second is that there is new, non-equilibrium physics beyond the Standard Model that does not conserve baryon and lepton number and is strongly CP violating at extremely high energies. No new physics is necessary for the Standard Model to continue to make mathematical sense up to more than 10^16 GeV, known as the grand unified theory (GUT) scale. And there is no evidence of such physics yet. But the most powerful particle colliders, and the natural experiments that function as particle colliders, have interaction energies far below 10^16 GeV. The most powerful man-made collider, the Large Hadron Collider (LHC), is probing energies on the order of 10^4 GeV, about a trillion times lower than those in the immediate temporal vicinity of the Big Bang.

A third is that matter, which can be conceived of as particles moving forward in time, dominates our post-Big Bang universe, while there is a parallel pre-Big Bang mirror universe dominated by antimatter, which can be conceived of as particles moving backward in time. To the extent that this calls for beyond the Standard Model physics, the extensions required are very subtle and apply only at the Big Bang singularity itself. 

I tend to favor this quite elegant approach, although the evidence is hardly unequivocal in favoring it over the alternatives, and it may never be possible to definitively resolve the question.

The introduction to a new paper and its conclusion, below, explain the features and virtues of this third scenario. 

The paper argues that the primary arrow of time (since fundamental physics observes CPT symmetry to high precision) is the growth of entropy as one gets more distant in time from the Big Bang; that cosmological inflation and primordial gravitational waves are not necessary in this scenario; and that, in this scenario, it makes sense that the strong force would not violate CP symmetry, despite the fact that there is an obvious way to insert strong force CP violation into the Standard Model Lagrangian. 

In contrast, cosmological inflation is quite an ugly theory, with hundreds of variants, many of which can't be distinguished from each other with existing observations.

In a series of recent papers, we have argued that the Big Bang can be described as a mirror separating two sheets of spacetime. Let us briefly recap some of the observational and theoretical motivations for this idea. 

Observations indicate that the early Universe was strikingly simple: a fraction of a second after the Big Bang, the Universe was radiation-dominated, almost perfectly homogeneous, isotropic, and spatially flat; with tiny (around 10^−5 ) deviations from perfect symmetry also taking a highly economical form: random, statistically gaussian, nearly scale-invariant, adiabatic, growing mode density perturbations. Although we cannot see all the way back to the bang, we have this essential observational hint: the further back we look (all the way back to a fraction of a second), the simpler and more regular the Universe gets. This is the central clue in early Universe cosmology: the question is what it is trying to tell us. 

In the standard (inflationary) theory of the early Universe one regards this observed trend as illusory: one imagines that, if one could look back even further, one would find a messy, disordered state, requiring a period of inflation to transform it into the cosmos we observe. 

An alternative approach is to take the fundamental clue at face value and imagine that, as we follow it back to the bang, the Universe really does approach the ultra-simple radiation-dominated state described above (as all observations so far seem to indicate). Then, although we have a singularity in our past, it is extremely special. Denoting the conformal time by τ , the scale factor a(τ) is ∝ τ at small τ so the metric gµν ∼ a(τ)^2ηµν has an analytic, conformal zero through which it may be extended to a “mirror-reflected” universe at negative τ. 

[W]e point out that, by taking seriously the symmetries and complex analytic properties of this extended two-sheeted spacetime, we are led to elegant and testable new explanations for many of the observed features of our Universe including: (i) the dark matter; (ii) the absence of primordial gravitational waves, vorticity, or decaying mode density perturbations; (iii) the thermodynamic arrow of time (i.e. the fact that entropy increases away from the bang); and (iv) the homogeneity, isotropy and flatness of the Universe, among others. 

In a forthcoming paper, we show that, with our new mechanism for ensuring conformal symmetry at the bang, this picture can also explain the observed primordial density perturbations. 

In this Letter, we show that: (i) there is a crucial distinction, for spinors, between spatial and temporal mirrors; (ii) the reflecting boundary conditions (b.c.’s) at the bang for spinors and higher spin fields are fixed by local Lorentz invariance and gauge invariance; (iii) they explain an observed pattern in the Standard Model (SM) relating left- and right-handed spinors; and (iv) they provide a new solution of the strong CP problem. . . . 

In this paper, we have seen how the requirement that the Big Bang is a surface of quantum CT symmetry yields a new solution to the strong CP problem. It also gives rise to classical solutions that are symmetric under time reversal, and satisfy appropriate reflecting boundary conditions at the bang. 

The classical solutions we describe are stationary points of the action and are analytic in the conformal time τ. Hence they are natural saddle points to a path integral over fields and four-geometries. The full quantum theory is presumably based on a path integral between boundary conditions at future and past infinity that are related by CT-symmetry. The cosmologically relevant classical saddles inherit their analytic, time-reversal symmetry from this path integral, although the individual paths are not required to be time-symmetric in the same sense (and, moreover may, in general, be highly jagged and non-analytic). 

We will describe in more detail the quantum CT-symmetric ensemble which implements (12), including the question of whether all of the analytic saddles are necessarily time-symmetric, and the calculation of the associated gravitational entanglement entropy, elsewhere.

The paper and its abstract are as follows:
We argue that the Big Bang can be understood as a type of mirror. We show how reflecting boundary conditions for spinors and higher spin fields are fixed by local Lorentz and gauge symmetry, and how a temporal mirror (like the Bang) differs from a spatial mirror (like the AdS boundary), providing a possible explanation for the observed pattern of left- and right-handed fermions. By regarding the Standard Model as the limit of a minimal left-right symmetric theory, we obtain a new, cosmological solution of the strong CP problem, without an axion.
Latham Boyle, Martin Teuscher, Neil Turok, "The Big Bang as a Mirror: a Solution of the Strong CP Problem" arXiv:2208.10396 (August 22, 2022).

Some key earlier papers by some of these authors (which I haven't yet read and don't necessarily endorse) are: "Gravitational entropy and the flatness, homogeneity and isotropy puzzles" arXiv:2201.07279, "Cancelling the vacuum energy and Weyl anomaly in the standard model with dimension-zero scalar fields" arXiv:2110.06258, "Two-Sheeted Universe, Analyticity and the Arrow of Time" arXiv:2109.06204, "The Big Bang, CPT, and neutrino dark matter" arXiv:1803.08930, and "CPT-Symmetric Universe" arXiv:1803.08928.

Moreover, if Deur's evaluation of gravitational field self-interactions is correct (an approach most intuitive from a quantum gravity perspective, but one he claims can be derived from purely classical general relativity), then observations attributed to dark matter and dark energy (or equivalently, a cosmological constant) in the LambdaCDM Standard Model of Cosmology can be explained by these non-Newtonian general relativity effects in weak gravitational fields, predominantly involving galaxy and galaxy cluster scale agglomerations of matter. 

This would dispense with the need for any new particle content in a theory of everything beyond the almost universally predicted, standard, plain vanilla, massless, spin-2 graviton giving rise to a quantum gravity theory that could be theoretically consistent with the Standard Model. 

It would also imply that there is no new high energy physics that needs to be discovered, apart from the physics at the Big Bang singularity itself, where matter and antimatter pairs created according to Standard Model rules segregate between the post-Big Bang and pre-Big Bang universes at this point of minimum entropy, to explain all of our current observations. 

We would need no dark matter particles, no quintessence, no inflatons, no axions, no supersymmetric particles, no sterile neutrinos, no additional Higgs bosons, and no new forces.

We aren't quite there. We have some final details about neutrino physics to pin down. Our measurements of the fundamental particle masses, CKM matrix elements, and PMNS matrix elements need greater precision to really decisively support a theory behind the source of these physical constants. We have QCD to explain hadrons but can't really do calculations sufficient to derive the spectrum of all possible hadrons and all of their properties yet, even though it is theoretically possible to do so. And, of course, there are lots of non-fundamental physics questions in both atomic and larger scale lab physics and in the formation of the universe that are almost certainly emergent from these basic laws of physics in complex circumstances, which we haven't yet fully explained.

There would also be room for further "within the Standard Model" physics to derive its three forces (plus gravity) and its couple of dozen physical constants from a more reductionist core, but that is all. And there is even some room, in the form of conjectures about variants on an extended Koide's rule and the relationship between the Higgs vacuum expectation value and the Standard Model fundamental particle masses, to take that further.

It is also worth noting that even if Deur's treatment of gravitational field self-interactions is not, as claimed, possible to derive from ordinary classical General Relativity, either because it is a subtle modification of Einstein's field equations, or because it is actually a quantum gravity effect, there is still every reason to prefer his gravitational approach, that explains all dark matter and dark energy phenomena and is consistent with astronomy observations pertinent to cosmology (for example, explaining the CMB peaks and resolving the impossible early galaxy problem) with a simple and elegant theory, neatly paralleling QCD, that has at least two fewer degrees of freedom than the LambdaCDM Standard Model of Cosmology despite fitting the observational data better at the galaxy and galaxy cluster scales.

And, Deur's approach is pretty much the only one that can explain the data attributed to dark energy in a manner that does not violate conservation of mass-energy (a nice complement to a mirror universe cosmology that is time symmetric, since conservation of mass-energy is deeply related to time symmetry).

Deur's paradigm has the potential to blow away completely the Standard Model of Cosmology, and the half century or so of astronomy work driven by it and modest variations upon it, in addition to depriving lots of beyond the Standard Model particle physics concepts of any strong motivation.

Milgrom's Modified Newtonian Dynamics (MOND) has actually done a lot of the heavy lifting in showing that observational data for galaxies can be explained, for observations within this toy model theory's limited domain of applicability, without dark matter. 

But Deur's theory unifies and glows up MOND's conclusions: it provides a deeper theoretical justification for the MOND effects that it reproduces, extends them to galaxy clusters and cosmology scale phenomena, is naturally relativistic in a manner fully consistent with all experimental confirmations of General Relativity, and provides an elegant solution to observations seemingly consistent with dark energy or a cosmological constant. This makes a gravitational explanation of dark matter far more digestible and attractive to astrophysicists who have so far clung to the increasingly ruled out dark matter particle hypotheses.

A mirror universe cosmology, likewise, has the potential to stamp out the theoretical motivation for all sorts of new physics proposals that simply aren't necessary to explain what we observe as part of a new paradigm of the immediate Big Bang era cosmology.

We are now in a position where physicists can see fairly clearly what the metaphorical promised land of a world in which the laws of physics are completely known would look like, even if the scientific consensus hasn't yet caught up with this vision.

It turns out that many of the dominant topics of theoretical physics discussion over the last half-century, from dark matter, to dark energy, to cosmological inflation, to supersymmetry, to string theory, to the multiverse, to cyclic cosmologies, to the anthropic principle, to technicolor, to multiple Higgs doublets, to a grand unified theory or theory of everything uniting physics into a single master Lie group or Lie algebra, do not play an important role in that future vision. Likewise, this would dispense with the need for the many heavily analyzed variations on Einstein's field equations as conventionally applied, which are less subtle than Deur's approach but are the subject of regular research.

If the scientific method manages to prevail over scientific community sociology, in a generation or two, all of these speculative beyond the Standard Model physics proposals will be discarded, and we will be left with a moderately complicated explanation for the universe that nonetheless explains pretty much everything. 

I may not live to see that day come, but I have great hope that my grandchildren or great-grandchildren might live in this not so distant future when humanity has grandly figured out all of the laws of physics in a metaphysically naturalist world.

Tuesday, July 12, 2022

Another Paper Supports the BMW Calculation of Muon g-2

Several recent papers have confirmed the BMW calculation of the hadronic contribution to muon g-2, the anomalous magnetic moment of the muon, whose Standard Model prediction is consistent with the experimental result, and have shown strong tensions with the Theory Initiative calculation of the Standard Model predicted value of the hadronic contribution to muon g-2, which is based on electron-positron collision data.

A new preprint today continues that trend.

Why Care?

For those of you who haven't been paying attention, this is a big deal because muon g-2 is an observable which is a global measure of the consistency of the lower energy Standard Model physics with reality at extreme precision. It implicates all three of the Standard Model forces, although, predictably (since QCD calculations, which involve the strong force, are always the least precise), the greatest uncertainty is in the QCD part of the calculation even though it is responsible for only a very small part of the aggregate value of muon g-2.

If the correct Standard Model prediction matches the experimental result, which it would if the BMW calculation is correct, then beyond the Standard Model physics are extremely tightly constrained, and possibilities like electroweak scale supersymmetry are ruled out. In this case, most beyond the Standard Model physics have to be at energy scales sufficiently in excess of those implicated in the material terms of the muon g-2 calculation to allow these new physics to "decouple" from this calculation. So, even if there are new physics out there to be discovered, they aren't likely to appear at the next generation particle collider.

On the other hand, if the correct Standard Model prediction differs significantly from the experimental result, which it would if the Theory Initiative calculation were correct, then new physics beyond the Standard Model have to be just around the corner and would likely be visible at a next generation particle collider and hinted at in the highest energy data from the current Large Hadron Collider (LHC), since it hasn't been definitively discovered yet. Moreover, the magnitude of the deviation from the Standard Model would be quantified quite precisely, further narrowing the range of possible BSM theories.

The Theory Initiative paper's data driven approach, if it is flawed, is most likely to be wrong either because the data were inserted into an otherwise theoretical calculation incorrectly in some subtle respect, or because the systematic uncertainty in the data it is relying upon was understated.

It also bears noting that the Theory Initiative's discrepancy with the BMW value in the Standard Model prediction is confined to the QCD portion of the calculation. But despite the fact that discrepancies between theory and experiment are most common in the various methods of operationalizing QCD calculations, there are very few theories out there that propose that the QCD portion of the Standard Model is what needs to be tweaked with new physics. Instead, almost all of the scientific debate is over how best to operationalize calculations in a profoundly challenging theory.

Most of the theoretical proposals to reconcile the experimentally measured value of muon g-2 with the Theory Initiative's calculation involve particles and forces beyond any of the three Standard Model components, and assume that any new physics, if present, don't contaminate the electron-positron collision data used to make its data-based estimate of the hadronic part of the muon g-2 calculation.

But, instead, the replications of the BMW calculation of the hadronic component of muon g-2 produce a strong tension between the estimate based upon the electron-positron collision data (which the Theory Initiative assumes is free of new physics) and the first-principles lattice QCD calculations done by BMW and, increasingly, by multiple other groups confirming BMW's result. If the Theory Initiative's assumptions about the data driven methods it is using are correct, this is an apples to apples comparison.

On the other hand, if the BMW group has done the calculations right, and the Theory Initiative is relying on data with a correctly estimated magnitude of systematic error and has integrated this data correctly into the overall calculation, then it would seem that the electron-positron collision data is what is defying the Standard Model, but in a way that somehow doesn't manifest with muons.

Increasingly, conventional wisdom is starting to conclude that one of those things (or both) is true, and that the BMW calculation of the expected Standard Model value of muon g-2 is correct, in which case, we are in a "physics desert."
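The stakes can be put in numbers with the published central values. The figures below (in units of 10^-11) are quoted from memory for the 2021 BNL+FNAL combined experimental average, the 2020 Theory Initiative white paper prediction, and the BMW lattice result, and should be checked against the primary sources:

```python
from math import hypot

# All values in units of 10^-11, as (central value, uncertainty).
# Central values quoted from memory; verify against the primary papers:
#   experiment: BNL + Fermilab combined world average (2021)
#   theory_initiative: 2020 muon g-2 Theory Initiative white paper
#   bmw: BMW collaboration lattice QCD result
experiment = (116_592_061, 41)
theory_initiative = (116_591_810, 43)
bmw = (116_591_954, 55)

def tension(a, b):
    """Discrepancy between two results, in combined standard deviations."""
    return abs(a[0] - b[0]) / hypot(a[1], b[1])

print(f"Theory Initiative vs. experiment: {tension(theory_initiative, experiment):.1f} sigma")
print(f"BMW lattice vs. experiment:       {tension(bmw, experiment):.1f} sigma")
```

On these numbers the Theory Initiative prediction is in roughly 4.2 sigma tension with experiment, while the BMW prediction is consistent with it at well under 2 sigma, which is why the choice between the two calculations decides whether there is any anomaly at all.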

Other Tests Of The Standard Model

The Particle Data Group's comprehensive assembly and organization of particle physics data directly rules out all manner of specific possible deviations from the Standard Model and beyond the Standard Model theories experimentally. Usually, these exclusions aren't absolute, but the parameter space for new physics below the 1 TeV energy scale (and sometimes beyond it) has been profoundly narrowed by direct exclusions in the LHC data.

Muon g-2 isn't the only global test of the Standard Model that is reinforcing this conclusion either.

The branching fractions of Higgs boson decays are another global test of the Standard Model, since any beyond the Standard Model particle that acquires its mass via the Higgs mechanism, and that is not greatly above the energy scale of a top quark-antiquark pair (about 350 GeV), would greatly change all of the branching fractions. But the more measurements the LHC does, the closer the properties of the Higgs boson are to those predicted by the Standard Model, and the less room there is for novel decays not predicted by the Standard Model.
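The logic of this global test can be sketched with rough numbers. Using approximate Standard Model branching fractions for a 125 GeV Higgs boson (values from memory, good to a few percent, with the smallest channels omitted), any hypothetical new decay channel would dilute every measured branching fraction by the same factor:

```python
# Approximate SM branching fractions for a 125 GeV Higgs boson
# (rough values from memory; smallest channels omitted).
sm_br = {
    "bb": 0.58, "WW": 0.21, "gg": 0.082, "tautau": 0.063,
    "cc": 0.029, "ZZ": 0.026, "gammagamma": 0.0023,
}

# A hypothetical new decay channel taking 10% of the total width
# would rescale every SM branching fraction by the same (1 - f) factor,
# shifting all of them away from their measured values at once.
new_physics_fraction = 0.10
diluted = {k: v * (1 - new_physics_fraction) for k, v in sm_br.items()}

print(f"SM bb branching fraction:      {sm_br['bb']:.3f}")
print(f"Diluted bb branching fraction: {diluted['bb']:.3f}")
```

Since the measured branching fractions now match the Standard Model values at roughly the ten percent level, a sizable exotic or invisible width of this kind is already disfavored across all channels simultaneously.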

The decays of the W and Z bosons, likewise, have long closely confirmed the Standard Model's predictions and are a global test of the set of Standard Model particles that interact via the weak force (i.e. all of the massive fundamental particles of the Standard Model, but not gluons and photons).

The deviations between the experimental values of the CKM matrix entries and the theoretical expectation that the probabilities of any given quark transitioning via a W boson to each of the three possible quarks should sum to 100% are manageable, although there are some mild tensions.
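That unitarity test is most precise for the first row of the CKM matrix. A minimal sketch, using approximate PDG central values quoted from memory (check the current PDG review before relying on the last digit):

```python
# Approximate magnitudes of the first row of the CKM matrix
# (central values from memory; check the current PDG review).
V_ud, V_us, V_ub = 0.9737, 0.2245, 0.0038

# In the three-generation Standard Model, an up quark emitting a W boson
# must become a down, strange, or bottom quark, so these squared
# magnitudes (transition probabilities) should sum to exactly 1.
row_sum = V_ud**2 + V_us**2 + V_ub**2
print(f"|V_ud|^2 + |V_us|^2 + |V_ub|^2 = {row_sum:.4f}")
```

The roughly 0.15% deficit from exact unitarity, sometimes called the Cabibbo angle anomaly, is one of the "mild tensions" mentioned above: a roughly two-to-three sigma effect given current uncertainties, not a discovery.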

Among other things, all of these global measures strongly disfavor a fourth generation of Standard Model particles, additional fundamental gauge bosons like a W' or Z', and extra Higgs bosons.

As we better understand how mass is generated by gluons in QCD, we are finding that QCD's methods are sound. See, e.g., here (a global review) and here (noting that omitting third-loop terms from QCD calculations produces smaller errors than had previously been expected in calculations where energy scales are low enough to allow bottom quark contributions to be ignored). We are also finding that QCD bound structures more complex than two-valence-quark mesons and three-valence-quark baryons, which are theoretically possible in QCD but had not been definitively identified until the last decade or so, really do exist.

There are some remaining anomalies in particle physics that are obvious measurement errors, like the outlier recalculated W boson mass from old CDF experiment data collected at Fermilab (even if we can't yet tell exactly what the source of this error is), the discrepancy in the neutron lifetime between two measurement methods, and Russian false alarms of neutrinoless double beta decay detections that have been repeatedly contradicted by experiments everywhere else.

Limits on Lorentz invariance and CPT violations also continue to be very strict. See also here.

There Is Only One Credible Anomaly Left

So, we are left with only one really credible remaining anomaly, which is the apparent violation of charged lepton universality in semi-leptonic B meson decays. Figuring out what is going on there is challenging. 

It could be that there are flaws in the Standard Model prediction calculation, which may ignore a mass-dependent source of differences between decays to tau leptons, muons, and electrons (and their antiparticles), perhaps overlooking some theoretically sound but neglected process that, once calculated, turns out to be more important than believed.

The hypothesis that omitted processes are responsible for the apparent violation of lepton universality is further supported by the fact that when lepton universality is apparently violated, the excess decays always involve the lighter charged leptons. This suggests an additional omitted process that generates enough mass-energy in the end state to produce lighter, but not heavier, charged leptons.

For example, if the process produces fewer tau leptons than muons, the muons produced up to roughly the number of tau leptons probably come from the main W boson decay considered in the predicted ratio, while the excess of muons over the number of tau leptons produced may come from some other process that doesn't have the 1.78 GeV of mass-energy needed to produce a final state tau lepton.

It could be that the experiments aren't actually distinguishing between the backgrounds they are trying to exclude and the signals that they are looking for in the way that they believe that they are. It could also be that the results are simply due to some other experimental systematic error, or to statistical flukes expected given the look elsewhere effect.

The biggest issue, in my view, with this anomaly, is that in the Standard Model, the leptons in a semi-leptonic decay of a hadron are always produced via the production and subsequent decay of a W boson, and W bosons should decay in the same way no matter how they are produced. But, in every other circumstance in which we observe semi-leptonic or fully leptonic decays of hadrons via an intermediate production of W bosons, of which there are perhaps half a dozen, lepton universality is observed.

So, any explanation of lepton universality violations in semi-leptonic B meson decays has to preserve the lepton universality observed in all other W boson mediated processes, and shouldn't rest on some new property of the W boson, which sits at the root of so many other processes in the Standard Model.

My Bayesian priors strongly favor the prediction that, somehow or other, these lepton universality violations will go away, leaving the Standard Model a complete success in all circumstances.

Over my decade and a half of carefully following the field (which is itself younger than I am), all sorts of anomalies have been touted only to be resolved without new physics: the muonic proton radius, the superluminal neutrino, the 750 GeV particle, the 20something GeV particle, the muon g-2 anomaly, and the reactor anomalies in neutrino physics leading to the hypothesis that there are sterile neutrinos. I'm sure that I've omitted half a dozen less notable ones.

Really the only truly "new physics" that has been discovered in the last forty years has been the discovery that neutrinos have mass and oscillate (which, of course, has largely been accepted as part of the core theory of fundamental physics by now, even though not all of the details have been worked out completely).

Similarly, "dark energy," while it seems mysterious, is the Lambda in the LambdaCDM Standard Model of Cosmology, and the cosmological constant in the equations of general relativity. It doesn't require new physics because it is already part of gravity.

I'd be remiss, of course, if I didn't mention the biggest anomaly of all, which hasn't shown up at any particle collider, which is dark matter phenomena.

As I haven't been shy in mentioning, I think that Alexandre Deur has it right: dark matter phenomena (and indeed dark energy phenomena too) are due to the self-interaction of gravitational fields already present in plain old classical General Relativity without a cosmological constant, which has been around for more than a century. This effect has been widely overlooked, but no one has rebutted it in more than a decade, because everyone else is using a Newtonian approximation in astronomy and cosmological applications where it is inappropriate to do so.

This is quite an out-on-a-limb position. It has only one fundamental experimentally measured physical constant (Newton's constant G), which has already been measured to roughly the parts per hundred thousand level. You are stuck with one set of equations (although they can be expressed in more than one way) that have been set in stone for more than a century.

Nobody disputes that gravitational field self-interaction is a thing and leads to non-linear effects in general relativity, as is routinely considered in strong gravitational fields. But the justification for the Newtonian approximation in the physics of galaxies and larger structures is very shallow and back-of-napkin in character in the case of mass-energy distributions that aren't spherically symmetric, which much of the universe is not at the scale of galaxies, galaxy clusters, and the cosmic web.

But, amazingly, it seems to work, replicating the results of MOND in spiral galaxies, but doing it one better in the case of galaxy clusters, and refining it slightly in the case of elliptical galaxies.

Certainly, the cold dark matter that is the CDM of the LambdaCDM model is wrong. Not every single other option has been ruled out, but all of the others posit either a fifth force (whether a self-interaction term or a very weak interaction with ordinary matter), or quantum behavior associated with exceedingly light dark matter particles (which, in the case of axion-like particle theories, converges onto a quantum gravity variation of General Relativity in which ultra-low energy graviton self-interactions produce the same results). So, all of the dark matter particle alternatives eventually start looking like gravitational modifications anyway, and still struggle to reproduce what we observe.

But, if I'm wrong, I think it is more likely that General Relativity needs actual slight modification in weak fields than it is that dark matter particles exist, even though that is the dominant paradigm right now. MOND works too well and too consistently for vanilla CDM to be right.