Wednesday, April 7, 2021

Muon g-2 Experiment Results Confirm Previous Measurement UPDATED


Either Fermilab has confirmed that new physics is extremely likely, based on what was, until today, the gold standard calculation of the Standard Model prediction, or this new result, combined with two new calculations released today of the parts of the Standard Model prediction that account for most of its theoretical error, shows that the Standard Model is complete and there is no new physics out there. The no-new-physics conclusion is very likely to prevail when the dust settles, further undermining the motivation for building a next generation particle collider.

Run-1 of a new Fermilab measurement of muon g-2 (which is an indirect global measurement of deviations from the Standard Model of Particle Physics) in the E989 experiment has confirmed the previous Brookhaven National Laboratory measurement done twenty years ago. 

The combined Brookhaven and Fermilab results show a 4.2 sigma deviation from the Standard Model of Particle Physics prediction. The statistical significance of this discrepancy will almost surely increase as the results from Runs 2, 3 and 4 are announced, eventually reducing the margin of error in the experimental measurement by a factor of four. 

Also today, a Lattice QCD collaboration known as BMW released a new paper in Nature that concludes, contrary to the consensus calculation of the Standard Model predicted value of muon g-2 released last summer, that the leading order hadronic vacuum polarization contribution, which is the dominant source of theoretical error in the Standard Model prediction, should be calculated in a different manner. The resulting prediction turns out to be consistent to within 1.6 sigma of the combined muon g-2 measurement (and to within 1.1 sigma of the Fermilab measurement), and suggests that the Standard Model is complete and requires no new physics. Meanwhile, another preprint announced an improved calculation of the hadronic light-by-light contribution to the Standard Model prediction, which also moves the prediction closer to the experimental value, although not enough by itself to alleviate the large discrepancy between the old Standard Model prediction and the experimental result. 

The results (multiplied by 10^11 for easier reading, with the one standard deviation uncertainty in the final digits shown in parentheses after each result, followed by the statistical significance of the deviation from the old Standard Model prediction) are as follows:

Fermilab (2021): 116,592,040(54) - 3.3 sigma
Brookhaven's E821 (2006): 116,592,089(63) - 3.7 sigma
Combined measurement: 116,592,061(41)  - 4.2 sigma
Old Standard Model prediction: 116,591,810(43)

BMW Standard Model prediction: 116,591,954(55)

The BMW prediction is 144 x 10^-11 higher than the old Standard Model prediction.

Compared to the old Standard Model prediction, the new measurement would tend to support the conclusion that there are undiscovered new physics beyond the Standard Model which give rise to the discrepancy of a specific magnitude (about 50% more than the electroweak contribution to muon g-2, or 3.3% of the QCD contribution to muon g-2), from an unknown source.

But it is also very plausible that the error estimate in the quantum chromodynamics (QCD) component of the Standard Model theoretical prediction, especially the hadronic vacuum polarization component, which is the dominant source of error in that prediction, is understated by approximately a factor of three (i.e. it is closer to the 2% error margin of purely theoretical Lattice QCD calculations than to the estimated 0.6% error derived from using other experiments involving electron-positron collisions as a proxy for much of the QCD calculation). The BMW paper revises that component in a way that seems to resolve the discrepancy.

And a new calculation of the hadronic light-by-light contribution to the muon g-2 calculation was also released on arXiv today (and doesn't seem to be part of the BMW calculation). This increases that component's contribution from 92(18) x 10^-11 in the old calculation to 106.8(14.7) x 10^-11. 

This increase of 14.8 x 10^-11 is in the same direction as the BMW adjustment. If the two are independent, the Standard Model prediction rises by a combined 158.8 x 10^-11 (the 144 x 10^-11 BMW shift plus this 14.8 x 10^-11), a substantial share of the gap between the old prediction and the measurements.

Fermilab's Muon g-2 Measurement Announcement

Fermilab announced its measurement of the muon anomalous magnetic moment (usually called muon g-2, but actually (g-2)/2) this morning in a Zoom web seminar. The result was unblinded on February 25, 2021, and 170 people have kept the secret since then. There were 5,000 Zoom participants at the announcement and more watching on YouTube (where the recording is still available). 

Multiple papers will be released on arXiv this evening. The lead paper is here. Technical papers are here and here and here.

The last time this quantity was measured was in 2001, with the final results announced in 2006 by Brookhaven National Laboratory. Some of the equipment used in that experiment was moved from Brookhaven to Fermilab near Chicago and then upgraded to conduct this experiment.

This is a preliminary first announced result from Run-1 and will be refined with more data collection and analysis. 

More results will be announced in the coming years from Fermilab as more data is collected and the systematic error is reduced with refinements to the adjustments made for various experimental factors. Another independent experiment will also produce a measurement in a few years.

The Result

The tension with the Standard Model is 4.2 sigma.

The last measurement was made by Brookhaven National Labs (BNL) in its E821 experiment in data collection completed in 2001 with the final results released in 2006.

E821:  116,592,089(63) x 10^−11

This result: 116,592,040(54) x 10^−11

The two results (conducted with some of the same equipment, upgraded for this experiment) are consistent with each other to within less than one standard deviation.

Combined result: 116,592,061(41) x 10^−11

Standard Model prediction: 116,591,810(43) × 10^−11

Expressed as a two sigma confidence interval, the Standard Model prediction times 10^11 is: 116,591,724 to 116,591,896.
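The combined value can be sanity-checked with an inverse-variance weighted average of the two measurements. This is my own back-of-the-envelope Python sketch, assuming independent Gaussian errors, not the collaborations' actual combination procedure:

```python
# Inverse-variance weighted average of the two muon g-2 measurements.
# All values are in the 1e-11 units used throughout this post.
bnl, bnl_err = 116_592_089.0, 63.0    # Brookhaven E821 (2006)
fnal, fnal_err = 116_592_040.0, 54.0  # Fermilab Run-1 (2021)

w_bnl, w_fnal = 1 / bnl_err**2, 1 / fnal_err**2
combined = (w_bnl * bnl + w_fnal * fnal) / (w_bnl + w_fnal)
combined_err = (w_bnl + w_fnal) ** -0.5  # quadrature of weighted errors

print(round(combined), round(combined_err))  # 116592061 41
```

The weighted mean lands on 116,592,061 with a one sigma uncertainty of 41, matching the quoted combination.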

E821 v. SM prediction: 279 x 10^-11
Magnitude of 1 sigma: 76.3 x 10^−11
Difference in sigma: 3.657 sigma

Fermilab v. SM prediction: 230 x 10^-11
Magnitude of 1 sigma: 69.0 x 10^−11
Difference in sigma: 3.333 sigma

Combined v. SM prediction: 251 x 10^−11
Magnitude of 1 sigma: 59.4 x 10^−11
Difference in sigma: 4.225 sigma
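The tension figures above follow from dividing each difference by the experimental and theoretical errors added in quadrature. A quick sketch of that arithmetic (my own check, assuming independent Gaussian errors):

```python
from math import hypot

# Tension of each measurement with the old SM prediction (1e-11 units).
sm, sm_err = 116_591_810.0, 43.0  # old Standard Model prediction

for name, value, err in [("E821",     116_592_089.0, 63.0),
                         ("Fermilab", 116_592_040.0, 54.0),
                         ("Combined", 116_592_061.0, 41.0)]:
    diff = value - sm
    sigma = hypot(err, sm_err)  # errors added in quadrature
    # prints 3.66, 3.33, and 4.22 sigma, matching the figures above to rounding
    print(f"{name}: diff = {diff:.0f}, 1 sigma = {sigma:.1f}, "
          f"tension = {diff / sigma:.2f}")
```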

The final experimental error is expected to be reduced to 16 x 10^−11. If the best fit value stays relatively stable, the result should be a 5 sigma discovery of new physics by Run-2 or Run-3. Data collection is currently underway in Run-4, while analysis of the previous runs continues.


We care about this measurement so much because it is a global measurement of the consistency of the Standard Model with experimental reality that is impacted by every part of the Standard Model. Any deviation from the Standard Model shows up in muon g-2. 

If muon g-2 is actually different from the Standard Model prediction then there is something wrong with the Standard Model, although we don't know what is wrong, just how big the deviation is from the Standard Model.

In absolute terms, the measured value is still very close to the Standard Model prediction. The discrepancy between the new experimentally measured value announced today and the Standard Model prediction is roughly 2 parts per million.
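A one-line check of the parts-per-million figure:

```python
# Relative size of the discrepancy, in parts per million (1e-11 units).
measured = 116_592_040.0   # Fermilab Run-1
predicted = 116_591_810.0  # old SM prediction
ppm = 1e6 * (measured - predicted) / measured
print(round(ppm, 1))  # 2.0
```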

Still, the experimental value is in strong tension with the Standard Model prediction. The new discrepancy is about 17% smaller in absolute terms than the Brookhaven one, but only 9% weaker in statistical significance, and the combined result is more statistically significant than the Brookhaven result alone.

How Could This Evidence Of New Physics Be Wrong?

If this is not "new physics", then realistically, the discrepancy has to be due to a significant underestimate of the QCD contribution to the theoretical prediction, flawed in some way that is not reflected in the estimated theory error. 

For the experimental result and the theoretical prediction to be consistent at the two sigma level that is customarily considered untroubling, the QCD error would have to be 124 * 10^-11 rather than about 43 * 10^-11, i.e. about 1.8% v. 0.6% estimated. 
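The required error inflation can be back-calculated from the E821 numbers; here is a sketch under the same quadrature assumptions (my own arithmetic, not from any of the papers):

```python
from math import sqrt

# How big would the theory error need to be for the old SM prediction and
# the E821 measurement to agree at the 2 sigma level? (1e-11 units)
diff, exp_err = 279.0, 63.0  # E821 minus old SM prediction, and E821's error
needed_total = diff / 2      # 2 sigma agreement means a total sigma of diff/2
needed_theory = sqrt(needed_total**2 - exp_err**2)  # remove the experimental part

print(round(needed_theory))                  # 124, vs. the quoted ~43
print(round(100 * needed_theory / 6845, 1))  # 1.8 (% of the HVP contribution)
```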

This is certainly not outside the realm of possibility, and indeed is reasonably probable. The error in the pure lattice QCD measurement is about 2% and the improvement is due to using experimental results from electron-positron collisions in lieu of lattice QCD calculations to determine part of the QCD part of the theoretical estimate. 

There is a precedent for a similar kind of issue leading to an apparent discrepancy between the Standard Model and experimental results. The muonic hydrogen proton radius puzzle (which appeared to show that protons "orbited" by muons in an atom were significantly smaller than protons "orbited" by an electron in ordinary hydrogen) was ultimately resolved because the ordinary hydrogen proton radius determined in old experiments was less accurate than the papers measuring it had estimated, while the new muonic hydrogen measurement of the proton radius was correct for both ordinary and muonic hydrogen, as new measurements of the proton radius in ordinary hydrogen confirmed. 

The fact that the hadronic vacuum polarization experimental data used in the theoretical calculation of muon g-2 were derived from rather old electron-positron collisions, rather than from recent muon-antimuon collisions could easily be the source of the discrepancy.

A new paper published in Nature today, which I wasn't aware of when I wrote the analysis above, makes precisely this argument, and does a more precise HVP calculation that significantly increases the HVP contribution to muon g-2, bringing it into line with the experimental results. The pertinent part of the abstract states:
The most precise, model-independent determinations [of the leading order HVP contribution to muon g-2] so far rely on dispersive techniques, combined with measurements of the cross-section of electron–positron annihilation into hadrons. To eliminate our reliance on these experiments, here we use ab initio quantum chromodynamics (QCD) and quantum electrodynamics simulations to compute the LO-HVP contribution. 
We reach sufficient precision to discriminate between the measurement of the anomalous magnetic moment of the muon and the predictions of dispersive methods. Our result favours the experimentally measured value over those obtained using the dispersion relation. Moreover, the methods used and developed in this work will enable further increased precision as more powerful computers become available.

Jester, a physicist who has access to the paper and runs the Resonaances blog in the sidebar, tweets this about the conclusion of the new paper (called the BMW paper):

The paper of the BMW lattice collaboration published today in Nature claims (g-2)_SM = 0.00116591954(55), if my additions are correct. This is only 1.6 sigma away from the experimental value announced today.

I calculate a 1.1 sigma discrepancy from the new Fermilab result, which is a difference of 86(77) x 10^-11, and a 1.6 sigma discrepancy from the combined result, which is a difference of 107(68.6) x 10^-11. Both are consistent with the BMW paper calculation of the Standard Model prediction. 
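The same quadrature arithmetic, this time against the BMW prediction (my own sketch, assuming independent Gaussian errors):

```python
from math import hypot

# Tension of the measurements with the BMW lattice prediction (1e-11 units).
bmw, bmw_err = 116_591_954.0, 55.0

for name, value, err in [("Fermilab", 116_592_040.0, 54.0),
                         ("Combined", 116_592_061.0, 41.0)]:
    diff = value - bmw
    # prints 86(77.1) -> 1.1 sigma, and 107(68.6) -> 1.6 sigma
    print(f"{name} vs BMW: {diff:.0f}({hypot(err, bmw_err):.1f}), "
          f"{diff / hypot(err, bmw_err):.1f} sigma")
```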

Quanta magazine nicely discusses the BMW group's calculation and how it differs from the one from last summer that Fermilab used as a benchmark. Their pre-print was posted February 27, 2020, and last revised on August 18, 2020. (The 14-person BMW team is named after Budapest, Marseille and Wuppertal, the three European cities where most team members were originally based.)

They made four chief innovations. First they reduced random noise. They also devised a way of very precisely determining scale in their lattice. At the same time, they more than doubled their lattice’s size compared to earlier efforts, so that they could study hadrons’ behavior near the center of the lattice without worrying about edge effects. Finally, they included in the calculation a family of complicating details that are often neglected, like mass differences between types of quarks. “All four [changes] needed a lot of computing power,” said Fodor.

The researchers then commandeered supercomputers in Jülich, Munich, Stuttgart, Orsay, Rome, Wuppertal and Budapest and put them to work on a new and better calculation. After several hundred million core hours of crunching, the supercomputers spat out a value for the hadronic vacuum polarization term. Their total, when combined with all other quantum contributions to the muon’s g-factor, yielded 2.00233183908. This is “in fairly good agreement” with the Brookhaven experiment, Fodor said. “We cross-checked it a million times because we were very much surprised.” 

With the new calculation of the hadronic light-by-light contribution (which reduces the relative margin of error in that calculation from 20% to 14%) as well, mentioned above, the differences fall to 71.2(77) x 10^-11 (which is less than 1 sigma) and 92.2(68.6) x 10^-11 (which is 1.3 sigma), respectively, which would be excellent agreement between theory and experiment indeed. 

Also, even if the experimental error falls to 17 x 10^-11 as expected, if the BMW theoretical error remains at 55 x 10^-11, the one sigma magnitude for comparing them will still be 57.6 x 10^-11, because the combined uncertainty falls much more slowly than the precision of the experimental result improves. Basically, the Fermilab Run-2 to Run-4 measurements are almost sure to remain consistent with the BMW theoretical prediction, despite their much improved accuracy.
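The reason is simple quadrature: once the theory error dominates, shrinking the experimental error barely moves the combined uncertainty. A quick illustration (my own arithmetic):

```python
from math import hypot

# Combined 1-sigma for comparing experiment with the BMW prediction,
# before and after the projected experimental improvement (1e-11 units).
bmw_err = 55.0
for exp_err in (41.0, 17.0):  # current combined error vs. projected final error
    print(f"exp_err = {exp_err:.0f} -> comparison sigma = "
          f"{hypot(exp_err, bmw_err):.1f}")  # 68.6, then only 57.6
```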

If The Discrepancy Is Real, What Does This Tell Us About New Physics?

But it is also plausible that there is new physics out there to be discovered that the muon g-2 measurement is revealing. The new BMW collaboration result announced in Nature today, however, strongly disfavors that conclusion.

The muon g-2 measurement doesn't tell us what we are missing, but it does tell us how big of an effect we are missing.

Conservatively, using the smaller absolute difference between the new Fermilab result and the Standard Model prediction as a reference point, the discrepancy is 50% bigger than the entire electroweak contribution to the calculation. It is 3.3% of the size of the entire QCD contribution.
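Checking those percentages against the contribution sizes listed in the theory section below:

```python
# Scale of the discrepancy relative to the SM sub-contributions (1e-11 units).
diff = 230.0          # Fermilab minus old SM prediction
ew = 153.6            # electroweak contribution
qcd = 6845.0 + 92.0   # HVP plus HLbL

print(round(100 * (diff - ew) / ew))  # 50 (% larger than the EW contribution)
print(round(100 * diff / qcd, 1))     # 3.3 (% of the QCD contribution)
```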

Given other LHC results excluding most light beyond the Standard Model particles that could interact directly or indirectly with muons, any new particles contributing to muon g-2 would probably have to be quite heavy, and would probably interact with the muon not directly at tree level, but via some sort of indirect loop effect involving a virtual particle.

But really, aside from knowing that something new is likely to be out there, the range of potential tweaks that could produce the observed discrepancy is great.

A host of arXiv pre-prints released today propose BSM explanations for the anomaly (assuming that it is real) including the following thirty-one preprints: here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, and here.

Every single one of them is rubbish, unconvincingly renews a flawed and tired theory that is already disfavored by other data, and isn't worth reading or even mentioning individually. 

The utter worthlessness of the proposed explanations for a large muon g-2 anomaly, if there is one, is one more reason to suspect that the BMW calculation of the muon g-2 anomaly is correct and that this consistency largely rules out muon g-2 related new physics.

Real World Significance For Future Experiments

If the discrepancy is real, this experimental result is, by far, the strongest justification for building a new bigger particle collider to see if it can observe directly the new physics that the muon g-2 discrepancy seems to point towards.

It also provides excellent motivation to redo at higher precision the electron-positron collisions (last done at the Large Electron-Positron collider which was dismantled in 2001), to determine if this key experimental contribution to the Standard Model prediction is flawed and is the real source of the discrepancy.

Experimental Error Analysis

There is a 462 parts per billion total error in Run-1. Error analysis is a big part of the reason for the two-year wait between data collection and today's announcement.

The experiment is on track to 100 ppb systematic and 100 ppb statistical errors, for a combined 140 ppb total error in the final result.

Theory Error Analysis

The actual muon magnetic moment g(µ) is approximately 2.00233184(1). 

This would be exactly 2 in the relativistic quantum mechanical Dirac model, and would be exactly 1 if classical electromagnetism were correct. The Standard Model's "anomalous" contribution is due to virtual interactions of muons with other Standard Model particles. Mostly these interactions involve emitting and absorbing virtual photons (the QED contribution). But they can also involve other kinds of virtual particles, which give rise to the electroweak and QCD contributions to the muon anomalous magnetic moment. The heavier the particle, the smaller the contribution it makes to muon g-2, all other things being equal.

Muon g-2 isolates the quantum corrections beyond the Dirac model value of the muon magnetic moment, which is why it is called the muon anomalous magnetic moment. It is (g-2)/2.

The state of the art theoretical calculation of muon g-2 is found at T. Aoyama, et al., "The anomalous magnetic moment of the muon in the Standard Model" arXiv (June 8, 2020).

Muon g-2 = QED+EW+QCD

QED = 116 584 718.931(104) × 10^−11

EW = 153.6(1.0) × 10^−11


QCD = HVP + HLbL, where:

* HVP = 6845(40) × 10^−11 (0.6% relative error)

* HLbL (phenomenology + lattice QCD) = 92(18) × 10^−11 (20% relative error)

Theory error is dominated by the QCD contribution (the hadronic vacuum polarization and the hadronic light by light part of the calculation).

Other Coverage

Woit discusses the growing consensus that the experimental results that have been confirmed are solid, but that the irreconcilable theoretical predictions of the Standard Model value will take time to resolve. As he explains:
The problem is that while the situation with the experimental value is pretty clear (and uncertainties should drop further in coming years as new data is analyzed), the theoretical calculation is a different story. It involves hard to calculate strong-interaction contributions, and the muon g-2 Theory Initiative number quoted above is not the full story. The issues involved are quite technical and I certainly lack the expertise to evaluate the competing claims. To find out more, I’d suggest watching the first talk from the FNAL seminar today, by Aida El-Khadra, who lays out the justification for the muon g-2 Theory Initiative number, but then looking at a new paper out today in Nature from the BMW collaboration. They have a competing calculation, which gives a number quite consistent with the experimental result. . . .

So, the situation today is that unfortunately we still don’t have a completely clear conflict between the SM and experiment. In future years the experimental result will get better, but the crucial question will be whether the theoretical situation can be clarified, resolving the current issue of two quite different competing theory values.
Lubos Motl has a post on the topic which refreshes our memories about the ultra-precise calculation and measurement of the electron anomalous magnetic moment, which has been measured at values confirming theory to 13 significant digits (compared to 6 significant digits for muon g-2). The value of the electron magnetic moment g (not g-2) most precisely measured in 2006 by Gerry Gabrielse, et al., is:
g/2 = 1.001 159 652 180 85 (76)
This translates to an electron g-2 of:
(g-2)/2 = 0.001 159 652 180 85 (76), i.e. 115,965,218.085(76) x 10^-11
The experimentally measured muon g-2 is greater than the electron g-2 by a factor of approximately:
1.0054
This value is consistent with the theoretical prediction (which has less contamination from non-electroweak physics terms because the electron is less massive, making it easier to calculate).

Tommaso Dorigo also has a post at his blog (which also reminds readers of his March 29, 2021 post betting that lepton universality will prevail over anomalies to the contrary, notwithstanding some fairly high sigma tensions with that hypothesis discussed at Moriond 2021, one of the big fundamental physics conferences, a position I agree with, though not quite so confidently).

The New York Times story has nice color coverage and background but doesn't add much of substance in its reporting, and fails to emphasize the key issue (the conflict in the theoretical predictions) that Woit and Jester recognized.

A discussion at Physical Review Letters helpfully reminds us of other work on the topic in the pipeline:
Fermilab’s new results provide compelling evidence that the answer obtained at Brookhaven was not an artifact of some unexamined systematics but a first glimpse of beyond-SM physics. While the results announced today are based on the 2018 run, data taken in 2019 and 2020 are already under analysis. We can look forward to a series of higher-precision results involving both positive and negative muons, whose comparison will provide new insights on other fundamental questions, from CPT violation to Lorentz invariance [10]. This future muon g−2 campaign will lead to a fourfold improvement in the experimental accuracy, with the potential of achieving a 7-sigma significance of the SM deficit.

Other planned experiments will weigh in over the next decade, such as the E34 experiment at J-PARC, which employs a radically different technique for measuring g−2 [11]. E34 will also measure the muon electric dipole moment, offering a complementary window into SM deviations.
[10] R. Bluhm et al., “CPT and Lorentz Tests with Muons,” Phys. Rev. Lett. 84, 1098 (2000); “Testing CPT with Anomalous Magnetic Moments,” 79, 1432 (1997). 
[11] M. Abe et al., “A new approach for measuring the muon anomalous magnetic moment and electric dipole moment,” Prog. Theor. Exp. Phys. 2019 (2019).

Sabine Hossenfelder's tweet on the topic, while short, as the genre dictates, is apt:

Re the muon g-2, let me just say the obvious: 3.3 < 3.7, 4.2 < 5, and the suspected murderer has for a long time been hadronic contributions (ie, "old" physics). Of course the possibility exists that it's new physics. But I wouldn't bet on it.

See also my answer to a related question at Physics Stack Exchange.

UPDATE April 9, 2021:

The new muon g-2 paper count has surpassed 47.

Jester has some good follow-up analysis in this post and its comments. Particularly useful are his comments on what kind of missing physics could explain the 4.2 sigma anomaly, if there is one. 

But let us assume for a moment that the white paper value is correct. This would be huge, as it would mean that the Standard Model does not fully capture how muons interact with light. The correct interaction Lagrangian would have to be (pardon my Greek)

The first term is the renormalizable minimal coupling present in the Standard Model, which gives the Coulomb force and all the usual electromagnetic phenomena. 
The second term is called the magnetic dipole. It leads to a small shift of the muon g-factor, so as to explain the Brookhaven and Fermilab measurements. This is a non-renormalizable interaction, and so it must be an effective description of virtual effects of some new particle from beyond the Standard Model. 
. . . 
For now, let us just crunch some numbers to highlight one general feature. Even though the scale suppressing the effective dipole operator is in the EeV range, there are indications that the culprit particle is much lighter than that. 
First, electroweak gauge invariance forces it to be less than ~100 TeV in a rather model-independent way.
Next, in many models contributions to muon g-2 come with the chiral suppression proportional to the muon mass. Moreover, they typically appear at one loop, so the operator will pick up a loop suppression factor unless the new particle is strongly coupled. The same dipole operator as above can be more suggestively recast as

The scale 300 GeV appearing in the denominator indicates that the new particle should be around the corner! Indeed, the discrepancy between the theory and experiment is larger than the contribution of the W and Z bosons to the muon g-2, so it seems logical to put the new particle near the electroweak scale.
He continues in a comment stating:

the g-2 anomaly can be explained by a 300 GeV particle with order one coupling to muons, or a 3 TeV particle with order 10 coupling to muons, or a 30 GeV particle with order 0.1 coupling to muons 

He also quips that:

I think it has a very good chance to be real. But it's hard to pick a model - nothing strikes me as attractive

Jester doesn't take the next natural step in that analysis, which is to note that the LHC has already scoured the parameter space and come up empty in the vast majority of the possibilities in the energy scales from 30-300 GeV (or at the electroweak scale more generally). 

A measurement suggesting a major new particle "just around the corner", when exhaustive searches for such a particle by other means have come up empty, favors the BMW result. Many other commentators have similarly urged that experiment should be the judge of which of two competing and divergent theoretical predictions is most likely to be correct. In that debate, Jester also comments on the tweaks that have been made to the BMW paper as it has evolved:

the BMW value has changed by one sigma after one year. More precisely, in units of the 10^-11 they quote a_µ{L0-HVP} = 7124(45), 7087(53), 7075(55) in V1 (Feb'20), V2 (Aug'20), and Nature (Apr'21), respectively. Not that I think that there is anything wrong with that - it is completely healthy that preprint values evolve slightly after feedback and criticism from the community.

UPDATE (April 12, 2021):

Another worthwhile comment from Jester on the same thread:
The point is that the dipole operator displayed in the blog post is not gauge invariant under the full SU(3)xSU(2)xU(1) symmetry of the Standard Model. To make it gauge invariant you need to include the Higgs field H, and the operator becomes ~ 1/(100 TeV)^2 H (\bar L \sigma_\mu\nu \mu_R) B_\mu\nu where L is the lepton doublet containing the left-hand muon and the muon neutrino. If you replace the Higgs with its vacuum expectation value, <H> = (0,v), v \sim 200 GeV, you obtain the muon dipole operator from the blog post. Because the scale appearing in the denominator of the gauge invariant operator is 100 TeV, the maximum mass of the particle that generates it is 100 TeV. Sorry if it's too technical but I have no good idea how to explain it better.

And also:

This paper discusses the difficulties for the axion explanation of muon g-2.

And, also:
In the Standard Model the role of the Higgs field is indeed to give masses, not only to W and Z, but also to fermions. In this latter case you can also think of the Higgs as the agent of chirality violation. Without the Higgs field chirality would be conserved, that is to say, left-handed polarized fermions would always stay left-handed, and idem for the right-handed ones. Why is this relevant? Because the dipole term I wrote violates chirality, flipping left-handed muons into right-handed ones, and vice-versa. This is a heuristic argument why the Higgs is involved in this story.

4Gravitons has a well-done post on the subject:

the upshot is that there are two different calculations on the market that attempt to predict the magnetic moment of the muon. One of them, using older methods, disagrees with the experiment. The other, with a new approach, agrees. The question then becomes, which calculation was wrong? And why? . . .

the two dueling predictions for the muon’s magnetic moment both estimate some amount of statistical uncertainty. It’s possible that the two calculations just disagree due to chance, and that better measurements or a tighter simulation grid would make them agree. Given their estimates, though, that’s unlikely. That takes us from the realm of theoretical uncertainty, and into uncertainty about the theoretical. The two calculations use very different approaches. 
The new calculation tries to compute things from first principles, using the Standard Model directly. The risk is that such a calculation needs to make assumptions, ignoring some effects that are too difficult to calculate, and one of those assumptions may be wrong. 
The older calculation is based more on experimental results, using different experiments to estimate effects that are hard to calculate but that should be similar between different situations. The risk is that the situations may be less similar than expected, their assumptions breaking down in a way that the bottom-up calculation could catch.

None of these risks are easy to estimate. They’re “unknown unknowns”, or rather, “uncertain uncertainties”. And until some of them are resolved, it won’t be clear whether Fermilab’s new measurement is a sign of undiscovered particles, or just a (challenging!) confirmation of the Standard Model.

Matt Strassler's eventual post "Physics is Broken!" is less insightful, although he does pull back from his overstated headline.

Ethan Siegel ("Starts With A Bang" blogger and Forbes columnist) meanwhile, counsels caution:

Ideally, we’d want to calculate all the possible quantum field theory contributions — what we call “higher loop-order corrections” — that make a difference. This would include from the electromagnetic, weak-and-Higgs, and strong force contributions. We can calculate those first two, but because of the particular properties of the strong nuclear force and the odd behavior of its coupling strength, we don’t calculate these contributions directly. Instead, we estimate them from cross-section ratios in electron-positron collisions: something particle physicists have named “the R-ratio.” There is always the concern, in doing this, that we might suffer from what I think of as the “Google translate effect.” If you translate from one language to another and then back again to the original, you never quite get back the same thing you began with. . . . 

But another group — which calculated what’s known to be the dominant strong-force contribution to the muon’s magnetic moment — found a significant discrepancy. As the above graph shows, the R-ratio method and the Lattice QCD methods disagree, and they disagree at levels that are significantly greater than the uncertainties between them. The advantage of Lattice QCD is that it’s a purely theory-and-simulation-driven approach to the problem, rather than leveraging experimental inputs to derive a secondary theoretical prediction; the disadvantage is that the errors are still quite large.

What’s remarkable, compelling, and troubling, however, is that the latest Lattice QCD results favor the experimentally measured value and not the theoretical R-ratio value. As Zoltan Fodor, professor of physics at Penn State and leader of the team that did the latest Lattice QCD research, put it, “the prospect of new physics is always enticing, it’s also exciting to see theory and experiment align. It demonstrates the depth of our understanding and opens up new opportunities for exploration.”
While the Muon g-2 team is justifiably celebrating this momentous result, this discrepancy between two different methods of predicting the Standard Model’s expected value — one of which agrees with experiment and one of which does not — needs to be resolved before any conclusions about “new physics” can responsibly be drawn.


Mitchell said...

At first I assumed that BMW is correct and the muon g-2 measurements have in fact been confirming the Standard Model all along.

But there is another possibility. Lattice calculations are complicated and many choices of method must be made along the way. What if those choices have been biased towards reproducing the already known muon g-2 measurements?

Mitchell said...

With two major BSM claims (muon g-2 and B-meson decay), this is an interesting time for those of us who favor neo-minimalist models. Do we spend any time thinking about how to most economically account for these effects, if they are truly real and due to new physics?

New scalars seem to be the 'least disruptive' addition to the standard model. But these effects seem to require something that interacts differently with different flavors; one wonders how that would be motivated...

By the way, are you aware that Marni Sheppeard is probably dead? Tommaso Dorigo blogged about it. She had been missing for months, and human remains were found in the New Zealand mountains last month. I mention her because she was (as I define it) a neo-minimalist too, and we could do with her input at a time like this.

andrew said...

"Marni Sheppeard is probably dead? Tommaso Dorigo blogged about it. She had been missing for months, and human remains were found in the New Zealand mountains last month. I mention her because she was (as I define it) a neo-minimalist too, and we could do with her input at a time like this."

Thanks for letting me know.

While I have only known her via the Internet, I have corresponded with her a number of times, considered her a friend, and I have always supported her efforts. She made a few visits to the U.S. and I had hoped to meet her in person one of those times.

At one point, a few years ago, when her posts were indicating that she was in a moment of personal crisis, I mobilized people I knew in New Zealand (where I lived for a year in high school, and where a former girlfriend of mine from my U.S. hometown now lives as well as someone I know from an Internet legal forum) to assist her, although things got resolved before they were able to act.

Marni's life was one of a tragic genius, akin in many respects to that of John Nash. It fills me with sadness and mourning to hear the news. Do you have a link to the post?

andrew said...


All collaborations have known the Brookhaven numbers for twenty years, so that doesn't distinguish between the results. The corresponding author for the BMW papers stated in an interview that they were surprised, even shocked, to learn that their result matched the Brookhaven measurement when the calculation was finished, so I think that a claim of bias is a bit of a stretch.

I've read both sides' papers and I have to say (as an educated layman but admittedly not a specialist) that the BMW side of the discussion appears more credible and sound to me on the merits. So, I strongly lean towards BMW being correct.

andrew said...

I'm not inclined to consider what BSM physics would explain these effects, because I think that the muon g-2 anomaly is not real, and I've posted previously at this blog about the strong lines of reasoning suggesting that the lepton non-universality seen in certain B meson decays is not real either. The core argument is that weak force decay is always mediated by W bosons, and there is no reason for non-universality to appear in these particular W boson decays from B mesons but not in myriad other W boson decays. So, I assume that it is probably a systematic and/or theory error, aggravated by a statistical fluke in one of the lowest event count categories of decays where it is possible to observe decays to more than one flavor of lepton.

Still, the possibility that B-meson decays are not lepton universal is much more plausible than the possibility that there is a real muon g-2 anomaly at this point.

If it is real, I would presume that it is a two component process. One universal, and another not and probably more indirect.

andrew said...

If it is a two component process, you'd expect that it was mediated through some intermediary with a mass significantly greater than 1.4 GeV (i.e. well above the charm quark mass), and probably well above the W boson mass of about 80.4 GeV, since the ratio is about 87%, with an implicit 87%ish (or a bit less) for the primary lepton-universal W boson decay and about 13%ish+ for the additional decay. Both the W boson and the hypothetical mediator would presumably be virtual, with the BSM mediator being more massive than the W boson, causing it to be more highly suppressed relative to the W boson process. The fact that there is an association with B meson decays, but not with D meson decays that don't arise from B meson decays, would suggest that the virtual process is enhanced in B meson decays relative to D meson decays because b quarks are closer in mass to the BSM mediator than charm quarks are.

The fact that the muons appear less often than electrons would suggest that the secondary process is mass-energy conservation limited in many B meson decays, with enough energy to produce an electron end product but not always enough to produce a muon end product.

The B->sll process is tricky. The primary path should be: b --> W- + c --> W- + W+ + s --> l- + anti-v + l+ + v + s, with the v and anti-v being invisible products. I could imagine that some of the time the l- and l+ annihilate into something not observed, or are only virtual, leaving a v and anti-v that are converted via a virtual Z to e- and e+ but are often energy constrained so as not to produce a µ- and µ+ pair or a tau- and tau+ pair.

It could very well be that some sort of back reaction process like that happens at one or two loops beyond whatever level of depth (NLO, NNLO, NNNLO) the calculations are worked out to, but remains significant because far fewer of the v and anti-v via virtual Z to e- and e+ reactions are energy-permitted than any of the other possibilities at that remote loop. The triple W-, W+ and Z boson chains, all of which are suppressed by the extent to which the virtual masses exceed the on-shell energy in the system, would rarely have enough oomph to push the final Z boson back reaction in any circumstance other than a B meson decay. The fact that it is neutral B mesons rather than charged ones also suggests to me a likelihood of a Z boson back reaction.

andrew said...

Link found: