The most precise, model-independent determinations [of the leading order HVP contribution to muon g-2] so far rely on dispersive techniques, combined with measurements of the cross-section of electron–positron annihilation into hadrons. To eliminate our reliance on these experiments, here we use ab initio quantum chromodynamics (QCD) and quantum electrodynamics simulations to compute the LO-HVP contribution.
We reach sufficient precision to discriminate between the measurement of the anomalous magnetic moment of the muon and the predictions of dispersive methods. Our result favours the experimentally measured value over those obtained using the dispersion relation. Moreover, the methods used and developed in this work will enable further increased precision as more powerful computers become available.
Jester, a physicist who has access to the paper and who runs the Resonaances blog linked in the sidebar, tweets this about the conclusion of the new paper (called the BMW paper):
The paper of the BMW lattice collaboration published today in Nature claims (g-2)_SM = 0.00116591954(55), if my additions are correct. This is only 1.6 sigma away from the experimental value announced today.
I calculate a 1.1 sigma discrepancy between the BMW prediction and the new Fermilab result (a difference of 86(77) x 10^-11), and a 1.6 sigma discrepancy with the combined experimental result (a difference of 107(68.6) x 10^-11). Both experimental results are consistent with the BMW paper's calculation of the Standard Model prediction.
Quanta Magazine nicely discusses the BMW group's calculation and how it differs from the one from last summer that Fermilab used as a benchmark. The BMW pre-print was posted on February 27, 2020, and last revised on August 18, 2020. (The 14-person BMW team is named after Budapest, Marseille and Wuppertal, the three European cities where most team members were originally based.)
They made four chief innovations. First they reduced random noise. They also devised a way of very precisely determining scale in their lattice. At the same time, they more than doubled their lattice’s size compared to earlier efforts, so that they could study hadrons’ behavior near the center of the lattice without worrying about edge effects. Finally, they included in the calculation a family of complicating details that are often neglected, like mass differences between types of quarks. “All four [changes] needed a lot of computing power,” said Fodor.
The researchers then commandeered supercomputers in Jülich, Munich, Stuttgart, Orsay, Rome, Wuppertal and Budapest and put them to work on a new and better calculation. After several hundred million core hours of crunching, the supercomputers spat out a value for the hadronic vacuum polarization term. Their total, when combined with all other quantum contributions to the muon’s g-factor, yielded 2.00233183908. This is “in fairly good agreement” with the Brookhaven experiment, Fodor said. “We cross-checked it a million times because we were very much surprised.”
With the new calculation of the hadronic light-by-light contribution mentioned above (which reduces the relative margin of error in that calculation from 20% to 14%), the differences fall to 73.7(77) x 10^-11 (less than 1 sigma) and 92.3(68.6) x 10^-11 (about 1.3 sigma), respectively, which would be excellent agreement between theory and experiment indeed.
Also, even if the experimental error falls to 17 x 10^-11 as expected, so long as the BMW theoretical error remains at 55 x 10^-11, the combined uncertainty used to compare them will still be 57.6 x 10^-11, which falls much more slowly than the precision of the experimental result. Basically, the Fermilab Run-2 to Run-4 measurements are almost certain to remain consistent with the BMW theoretical prediction, despite their much improved accuracy.
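As a sanity check on the arithmetic in this post, here is a minimal Python sketch of how such tension estimates are formed: take the difference of central values and divide by the two uncertainties added in quadrature. The experimental uncertainties used below (54 for the Fermilab-only result, 41 for the Fermilab + Brookhaven average) are the ones implied by the combined errors of 77 and 68.6 quoted above; nothing here comes from either paper's own code.

```python
from math import hypot

def tension(diff, err_theory, err_exp):
    """Significance of a difference between a theory value and an
    experimental value, with the errors combined in quadrature."""
    combined = hypot(err_theory, err_exp)
    return diff / combined, combined

# All numbers in units of 10^-11.
print(tension(86, 55, 54))    # -> (~1.1 sigma, combined error ~77)
print(tension(107, 55, 41))   # -> (~1.6 sigma, combined error ~68.6)

# Projection: even with the experimental error down to 17, a BMW error of
# 55 keeps the combined uncertainty near 57.6, i.e. theory-dominated.
print(hypot(55, 17))          # -> ~57.6
```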
On track to 100 ppb systematic and statistical errors, for a combined 140 ppb total error in the final result.
* HVP = 6845(40) × 10^-11 (0.6% relative error)
* HLbL (phenomenology + lattice QCD) = 92(18) × 10^-11 (20% relative error)
The problem is that while the situation with the experimental value is pretty clear (and uncertainties should drop further in coming years as new data is analyzed), the theoretical calculation is a different story. It involves hard to calculate strong-interaction contributions, and the muon g-2 Theory Initiative number quoted above is not the full story. The issues involved are quite technical and I certainly lack the expertise to evaluate the competing claims. To find out more, I’d suggest watching the first talk from the FNAL seminar today, by Aida El-Khadra, who lays out the justification for the muon g-2 Theory Initiative number, but then looking at a new paper out today in Nature from the BMW collaboration. They have a competing calculation, which gives a number quite consistent with the experimental result. . . .

So, the situation today is that unfortunately we still don’t have a completely clear conflict between the SM and experiment. In future years the experimental result will get better, but the crucial question will be whether the theoretical situation can be clarified, resolving the current issue of two quite different competing theory values.
2.00231930436182(52)
This translates to an electron g-2 of:
0.00115965218091(26)
The experimentally measured muon g-2 is greater than the electron g-2 by approximately:
0.00000626843
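Written out explicitly, with a = (g − 2)/2 denoting the anomalous part of each g-factor, and using the combined experimental world average for the muon (about 0.00116592061, which is not stated above and is my addition), the arithmetic is:

\[
a_e = \frac{g_e - 2}{2} = \frac{2.00231930436182 - 2}{2} = 0.00115965218091,
\]
\[
a_\mu^{\rm exp} - a_e \approx 0.00116592061 - 0.00115965218 \approx 0.00000626843 .
\]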
Fermilab’s new results provide compelling evidence that the answer obtained at Brookhaven was not an artifact of some unexamined systematics but a first glimpse of beyond-SM physics. While the results announced today are based on the 2018 run, data taken in 2019 and 2020 are already under analysis. We can look forward to a series of higher-precision results involving both positive and negative muons, whose comparison will provide new insights on other fundamental questions, from CPT violation to Lorentz invariance [10]. This future muon g−2 campaign will lead to a fourfold improvement in the experimental accuracy, with the potential of achieving a 7-sigma significance of the SM deficit. Other planned experiments will weigh in over the next decade, such as the E34 experiment at J-PARC, which employs a radically different technique for measuring g−2 [11]. E34 will also measure the muon electric dipole moment, offering a complementary window into SM deviations.
[10] R. Bluhm et al., “CPT and Lorentz Tests with Muons,” Phys. Rev. Lett. 84, 1098 (2000); R. Bluhm et al., “Testing CPT with Anomalous Magnetic Moments,” Phys. Rev. Lett. 79, 1432 (1997).
[11] M. Abe et al., “A new approach for measuring the muon anomalous magnetic moment and electric dipole moment,” Prog. Theor. Exp. Phys. 2019 (2019).
Sabine Hossenfelder's tweet on the topic, while short (as the genre dictates), is apt:
Re the muon g-2, let me just say the obvious: 3.3 < 3.7, 4.2 < 5, and the suspected murderer has for a long time been hadronic contributions (ie, "old" physics). Of course the possibility exists that it's new physics. But I wouldn't bet on it.
See also my answer to a related question at Physics Stack Exchange.
UPDATE April 9, 2021:
The new muon g-2 paper count has surpassed 47.
Jester has some good follow-up analysis in this post and its comments. Particularly useful are his comments on what kind of missing physics could explain the 4.2 sigma anomaly, if there is one.
But let us assume for a moment that the white paper value is correct. This would be huge, as it would mean that the Standard Model does not fully capture how muons interact with light. The correct interaction Lagrangian would have to be (pardon my Greek) [equation in the original post]. The first term is the renormalizable minimal coupling present in the Standard Model, which gives the Coulomb force and all the usual electromagnetic phenomena.
The second term is called the magnetic dipole. It leads to a small shift of the muon g-factor, so as to explain the Brookhaven and Fermilab measurements. This is a non-renormalizable interaction, and so it must be an effective description of virtual effects of some new particle from beyond the Standard Model.
. . .
For now, let us just crunch some numbers to highlight one general feature. Even though the scale suppressing the effective dipole operator is in the EeV range, there are indications that the culprit particle is much lighter than that.
First, electroweak gauge invariance forces it to be less than ~100 TeV in a rather model-independent way.
Next, in many models contributions to muon g-2 come with the chiral suppression proportional to the muon mass. Moreover, they typically appear at one loop, so the operator will pick up a loop suppression factor unless the new particle is strongly coupled. The same dipole operator as above can be more suggestively recast as [equation in the original post]. The scale 300 GeV appearing in the denominator indicates that the new particle should be around the corner! Indeed, the discrepancy between the theory and experiment is larger than the contribution of the W and Z bosons to the muon g-2, so it seems logical to put the new particle near the electroweak scale.
the g-2 anomaly can be explained by a 300 GeV particle with order one coupling to muons, or a 3 TeV particle with order 10 coupling to muons, or a 30 GeV particle with order 0.1 coupling to muons
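Jester's three benchmark points all follow from a single scaling: if the shift goes like the generic one-loop estimate Δa_µ ∼ (g²/16π²)(m_µ²/M²) (my assumption here; the exact prefactor is model-dependent), then only the ratio g/M matters. A minimal Python sketch of that scaling, useful only for the relative comparison:

```python
from math import pi

M_MU = 0.1057  # muon mass in GeV

def delta_a_mu(coupling, mass_gev):
    """Generic one-loop estimate of a new particle's shift to the muon
    anomalous magnetic moment: (g^2 / 16 pi^2) * (m_mu^2 / M^2)."""
    return (coupling**2 / (16 * pi**2)) * (M_MU / mass_gev)**2

# Jester's three benchmark points share the same g/M ratio, hence the same shift.
for g, M in [(1.0, 300.0), (10.0, 3000.0), (0.1, 30.0)]:
    print(f"g = {g:>4}, M = {M:>6} GeV  ->  delta a_mu ~ {delta_a_mu(g, M):.1e}")
# Each prints ~7.9e-10, the same order of magnitude as the ~2.5e-9 white-paper
# deficit, which is why order-one couplings near the electroweak scale suffice.
```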
He also quips that:
I think it has a very good chance to be real. But it's hard to pick a model - nothing strikes me as attractive
Jester doesn't take the next natural step in that analysis, which is to note that the LHC has already scoured the parameter space and come up empty in the vast majority of the possibilities at energy scales from 30 to 300 GeV (or at the electroweak scale more generally).
A measurement suggesting a major new particle "just around the corner", when exhaustive searches for such a particle by other means have come up empty, favors the BMW result. Many other commentators have similarly urged that experiment should be the judge of which of two competing and divergent theoretical predictions is most likely to be correct. In that debate, Jester also comments on the tweaks that have been made to the BMW paper as it has evolved:
the BMW value has changed by one sigma after one year. More precisely, in units of the 10^-11 they quote a_µ{L0-HVP} = 7124(45), 7087(53), 7075(55) in V1 (Feb'20), V2 (Aug'20), and Nature (Apr'21), respectively. Not that I think that there is anything wrong with that - it is completely healthy that preprint values evolve slightly after feedback and criticism from the community.
UPDATE (April 12, 2021):
The point is that the dipole operator displayed in the blog post is not gauge invariant under the full SU(3)xSU(2)xU(1) symmetry of the Standard Model. To make it gauge invariant you need to include the Higgs field H, and the operator becomes ~ 1/(100 TeV)^2 H (\bar L \sigma_{\mu\nu} \mu_R) B_{\mu\nu}, where L is the lepton doublet containing the left-handed muon and the muon neutrino. If you replace the Higgs with its vacuum expectation value, ⟨H⟩ = (0, v), v ~ 200 GeV, you obtain the muon dipole operator from the blog post. Because the scale appearing in the denominator of the gauge invariant operator is 100 TeV, the maximum mass of the particle that generates it is 100 TeV. Sorry if it's too technical but I have no good idea how to explain it better.
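Rendered as a display equation (my transcription of the operator Jester writes inline above, before and after substituting the Higgs vacuum expectation value):

\[
\mathcal{L}_{\rm dipole} \;\sim\; \frac{1}{(100\ \mathrm{TeV})^2}\, H \left(\bar L\, \sigma_{\mu\nu}\, \mu_R\right) B^{\mu\nu}
\;\;\xrightarrow{\ \langle H\rangle = (0,\,v)\ }\;\;
\frac{v}{(100\ \mathrm{TeV})^2}\, \left(\bar\mu_L\, \sigma_{\mu\nu}\, \mu_R\right) B^{\mu\nu}.
\]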
And also:
This paper discusses the difficulties for the axion explanation of muon g-2: http://arxiv.org/abs/2104.03267
In the Standard Model the role of the Higgs field is indeed to give masses, not only to W and Z, but also to fermions. In this latter case you can also think of the Higgs as the agent of chirality violation. Without the Higgs field chirality would be conserved, that is to say, left-handed polarized fermions would always stay left-handed, and idem for the right-handed ones. Why is this relevant? Because the dipole term I wrote violates chirality, flipping left-handed muons into right-handed ones, and vice-versa. This is a heuristic argument why the Higgs is involved in this story.
4Gravitons has a well-done post on the subject:
the upshot is that there are two different calculations on the market that attempt to predict the magnetic moment of the muon. One of them, using older methods, disagrees with the experiment. The other, with a new approach, agrees. The question then becomes, which calculation was wrong? And why? . . . the two dueling predictions for the muon’s magnetic moment both estimate some amount of statistical uncertainty. It’s possible that the two calculations just disagree due to chance, and that better measurements or a tighter simulation grid would make them agree. Given their estimates, though, that’s unlikely. That takes us from the realm of theoretical uncertainty, and into uncertainty about the theoretical. The two calculations use very different approaches.
The new calculation tries to compute things from first principles, using the Standard Model directly. The risk is that such a calculation needs to make assumptions, ignoring some effects that are too difficult to calculate, and one of those assumptions may be wrong.
The older calculation is based more on experimental results, using different experiments to estimate effects that are hard to calculate but that should be similar between different situations. The risk is that the situations may be less similar than expected, their assumptions breaking down in a way that the bottom-up calculation could catch.

None of these risks are easy to estimate. They’re “unknown unknowns”, or rather, “uncertain uncertainties”. And until some of them are resolved, it won’t be clear whether Fermilab’s new measurement is a sign of undiscovered particles, or just a (challenging!) confirmation of the Standard Model.
Matt Strassler's eventual post "Physics is Broken!" is less insightful, although he does pull back from his overstated headline.
Ethan Siegel ("Starts With A Bang" blogger and Forbes columnist) meanwhile, counsels caution:
Ideally, we’d want to calculate all the possible quantum field theory contributions — what we call “higher loop-order corrections” — that make a difference. This would include the electromagnetic, weak-and-Higgs, and strong force contributions. We can calculate those first two, but because of the particular properties of the strong nuclear force and the odd behavior of its coupling strength, we don’t calculate these contributions directly. Instead, we estimate them from cross-section ratios in electron-positron collisions: something particle physicists have named “the R-ratio.” There is always the concern, in doing this, that we might suffer from what I think of as the “Google translate effect.” If you translate from one language to another and then back again to the original, you never quite get back the same thing you began with. . . .
But another group — which calculated what’s known to be the dominant strong-force contribution to the muon’s magnetic moment — found a significant discrepancy. As the above graph shows, the R-ratio method and the Lattice QCD methods disagree, and they disagree at levels that are significantly greater than the uncertainties between them. The advantage of Lattice QCD is that it’s a purely theory-and-simulation-driven approach to the problem, rather than leveraging experimental inputs to derive a secondary theoretical prediction; the disadvantage is that the errors are still quite large. What’s remarkable, compelling, and troubling, however, is that the latest Lattice QCD results favor the experimentally measured value and not the theoretical R-ratio value. As Zoltan Fodor, professor of physics at Penn State and leader of the team that did the latest Lattice QCD research, put it, “the prospect of new physics is always enticing, it’s also exciting to see theory and experiment align. It demonstrates the depth of our understanding and opens up new opportunities for exploration.”
While the Muon g-2 team is justifiably celebrating this momentous result, this discrepancy between two different methods of predicting the Standard Model’s expected value — one of which agrees with experiment and one of which does not — needs to be resolved before any conclusions about “new physics” can responsibly be drawn.
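For reference, the "R-ratio" determination that Siegel describes evaluates the leading-order HVP contribution from measured e+e- → hadrons cross sections through a dispersion integral. A standard textbook form of it (my addition, not taken verbatim from either paper) is:

\[
a_\mu^{\rm LO\text{-}HVP} \;=\; \left(\frac{\alpha\, m_\mu}{3\pi}\right)^{2} \int_{s_{\rm th}}^{\infty} ds\; \frac{\hat K(s)\, R(s)}{s^{2}},
\qquad
R(s) \;=\; \frac{\sigma\!\left(e^+e^- \to \mathrm{hadrons}\right)}{\sigma\!\left(e^+e^- \to \mu^+\mu^-\right)},
\]

where \(\hat K(s)\) is a known, smooth kernel of order one. The lattice approach instead computes the same quantity directly from a QCD vacuum-polarization correlator, with no experimental hadronic input, which is why the two methods can disagree.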
At first I assumed that BMW is correct and that the muon g-2 measurements have in fact been confirming the Standard Model all along.
But there is another possibility. Lattice calculations are complicated and many choices of method must be made along the way. What if those choices have been biased towards reproducing the already known muon g-2 measurements?
With two major BSM claims (muon g-2 and B-meson decay), this is an interesting time for those of us who favor neo-minimalist models. Do we spend any time thinking about how to most economically account for these effects, if they are truly real and due to new physics?
New scalars seem to be the 'least disruptive' addition to the standard model. But these effects seem to require something that interacts differently with different flavors; one wonders how that would be motivated...
By the way, are you aware that Marni Sheppeard is probably dead? Tommaso Dorigo blogged about it. She had been missing for months, and human remains were found in the New Zealand mountains last month. I mention her because she was (as I define it) a neo-minimalist too, and we could do with her input at a time like this.
"Marni Sheppeard is probably dead? Tommaso Dorigo blogged about it. She had been missing for months, and human remains were found in the New Zealand mountains last month. I mention her because she was (as I define it) a neo-minimalist too, and we could do with her input at a time like this."
Thanks for letting me know.
While I have only known her via the Internet, I have corresponded with her a number of times, considered her a friend, and I have always supported her efforts. She made a few visits to the U.S. and I had hoped to meet her in person one of those times.
At one point, a few years ago, when her posts were indicating that she was in a moment of personal crisis, I mobilized people I knew in New Zealand (where I lived for a year in high school, and where a former girlfriend of mine from my U.S. hometown now lives as well as someone I know from an Internet legal forum) to assist her, although things got resolved before they were able to act.
Marni's life was that of a tragic genius, akin in many respects to that of John Nash. It fills me with sadness and mourning to hear the news. Do you have a link to the post?
@Mitchell
All collaborations have known the Brookhaven numbers for twenty years so that doesn't distinguish between the results. The corresponding author for the BMW papers stated in an interview that they were surprised, even shocked, to learn that their results matched the Brookhaven result when it was finished, so I think that a claim of bias is a bit of a stretch.
I've read both sides' papers, and I have to say (as an educated layman, but admittedly not a specialist) that the BMW side of the discussion appears more credible and sound to me on the merits. So, I strongly lean towards BMW being correct.
I'm not inclined to consider what BSM physics would explain these effects, because I think that the muon g-2 anomaly is not real, and I've posted previously at this blog about the strong lines of reasoning suggesting that the lepton non-universality seen in certain B meson decays is not real either. The core argument is that weak force decay is always mediated by W bosons, and there is no reason for non-universality to exist in these particular W boson decays from B mesons but not in myriad other W boson decays. So, I assume that it is probably systematic and/or theory error, aggravated by a statistical fluke in one of the lowest event count categories of decays where it is possible to observe decays to more than one flavor of lepton.
Still, the possibility that B-meson decays are not lepton universal is much more plausible at this point than the possibility that there is a real muon g-2 anomaly.
If it is real, I would presume that it is a two component process: one universal, and another that is not, and probably more indirect.
If it is a two component process, you'd expect that it was mediated through some intermediary with a mass significantly greater than 1.4 GeV (i.e. well above the charm quark mass), and probably well above the W boson mass of about 80.4 GeV, since the ratio is about 87%, with an implicit 87%ish (or a bit less) for the primary lepton universal W boson decay and about 13%ish+ for the additional decay. Both the W boson and the hypothetical mediator would presumably be virtual, with the BSM mediator being more massive than the W boson, causing it to be more highly suppressed relative to the W boson process. The fact that there is an association with B meson decays, but not with D meson decays that don't arise from B meson decays, would suggest that the virtual process is enhanced in B meson decays relative to D meson decays because b quarks are closer in mass to the BSM mediator than charm quarks are.
The fact that the muons appear less often than electrons would suggest that the secondary process is mass-energy conservation limited in many B meson decays, with enough energy to produce an electron end product but not always enough to produce a muon end product.
The B->sll process is tricky. The primary path should be: b --> W- + c --> W- + W+ + s --> l- + anti-v + l+ + v + s, with the v and anti-v being invisible. I could imagine that some of the time the l- and l+ annihilate into something not observed, or are only virtual, leaving v and anti-v that are converted via a virtual Z to e- and e+, but are often energy constrained so as not to produce a µ- and µ+ pair or a tau+ and tau- pair.
It could very well be that some sort of back reaction process like that happens at one or two loops beyond the NLO or NNLO or NNNLO or whatever level of depth the calculations are worked out to, but remains significant because far fewer v and anti-v to e- and e+ (via virtual Z) reactions are energy permitted than any of the other possibilities at that remote loop, while the triple W-, W+ and Z boson chains, all of which are suppressed because of the extent to which the virtual masses exceed the on-shell energy in the system, rarely have enough oomph to push the final Z boson back reaction in any circumstance other than a B meson decay. The fact that it is neutral B mesons rather than charged ones also suggests to me a likelihood of a Z boson back reaction.
Link found: https://www.science20.com/tommaso_dorigo/remembering_marni-253758