Wednesday, April 28, 2021

No Evidence Of Higgs Channel Dark Matter

The Results

The ATLAS experiment, using the full Run-2 dataset of 13 TeV LHC collisions, has looked for Higgs portal dark matter and reports in a new preprint that its search has come up empty.

The experimental ATLAS results were almost a perfect match to the predicted Standard Model backgrounds in a search that used four different event selection criteria to make the result robust. All of the experimental results were within one sigma of the prediction, after rounding to the nearest discrete measurable outcome.

This leaves no meaningful signal of beyond the Standard Model Higgs portal dark matter decays. ATLAS compared its results to several different Higgs portal dark matter models that it considered as benchmarks.

The exclusion is stronger than that from direct dark matter detection experiments for dark matter particle masses under 2 GeV in the dark matter model it examined.

Additionally, the results for the Z'B model are interpreted in terms of 90% CL limits on the DM–nucleon scattering cross section, as a function of the DM particle mass, for a spin-independent scenario. For a DM mass lower than 2 GeV, the constraint with couplings sin θ = 0.3, g_q = 1/3, and g_χ = 1 placed on the DM–nucleon cross section is more stringent than limits from direct detection experiments at low DM mass, showing the complementarity between the several approaches trying to unveil the microscopic nature of DM.

As decisively as this study ruled out Higgs portal dark matter, the analysis is actually generous toward the possibility of Higgs portal dark matter.

If there really were Higgs portal dark matter, it would also significantly throw off the fraction of all Higgs boson decays that go to Standard Model predicted decay modes. That effect isn't analyzed in this paper, but it is also not observed in the manner that this kind of tweak to the Standard Model would require.

The Paper

The paper and its abstract are as follows:

A search for dark-matter particles in events with large missing transverse momentum and a Higgs boson candidate decaying into two photons is reported. The search uses 139 fb⁻¹ of proton-proton collision data collected at √s = 13 TeV with the ATLAS detector at the CERN LHC between 2015 and 2018. No significant excess of events over the Standard Model predictions is observed. The results are interpreted by extracting limits on three simplified models that include either vector or pseudoscalar mediators and predict a final state with a pair of dark-matter candidates and a Higgs boson decaying into two photons.

ATLAS Collaboration, "Search for dark matter in events with missing transverse momentum and a Higgs boson decaying into two photons in pp collisions at √s = 13 TeV with the ATLAS detector" arXiv:2104.13240 (April 27, 2021).

Implications For Dark Matter Particle Searches

This study doesn't entirely rule out Higgs portal dark matter. Higgs portal dark matter could conceivably appear only in decays that don't involve photon pairs, if you twist your model to have that property. But it continues the relentless march of the LHC, which is ruling out on multiple fronts the existence of new particles with masses of several TeV or less that could be dark matter candidates.

Between the LHC and direct dark matter detection experiments, there are fewer and fewer places in the dark matter parameter space where a dark matter particle candidate that has any interactions whatsoever with Standard Model matter could be hiding.

Yet truly "sterile" dark matter particle candidates that interact with ordinary matter only via gravity, whether or not it has dark sector self-interactions, are very hard to square with the observed tight alignment between inferred dark matter halo distributions and ordinary matter (as previous posts at this blog have explained). 

Furthermore, as previous posts at this blog have explained, dark matter candidates with masses above the several TeV range that the LHC can search for are also disfavored.

These candidates would be too "cold" (i.e. these particles would have too low a mean velocity) to fit the astronomy observations of galaxy dynamics and large scale structure if they were produced as thermal relics (for which dark matter mean velocity and dark matter particle mass are related via the virial theorem).

But conventional lambda CDM cosmology demands a nearly constant amount of dark matter for almost the entire history of the universe (and observations place significant model independent bounds on variations in the amount of inferred dark matter in the universe), even if dark matter particles are not produced as thermal relics. So, any dark matter candidate that is not a thermal relic has to be produced by some non-thermal process in equilibrium that operates in the current universe. There are really no plausible candidates for such a process that aren't ruled out observationally, however.

What Was ATLAS Looking For And Why?

As discussed below, one promising kind of dark matter candidate is Higgs portal dark matter. If Higgs portal dark matter exists, there should be Higgs boson decays producing a pair of photons plus something invisible, which should produce (for the reasons discussed below) a very clean experimental signal.

Some kinds of Higgs portal dark matter models also involve decays of beyond the Standard Model electrically neutral bosons, such as an additional heavy Z boson, called a Z', predicted in a variety of beyond the Standard Model theories, or a pseudo-scalar Higgs boson predicted in "two Higgs doublet model" (2HDM) extensions of the Standard Model, including essentially all supersymmetry theories.

By looking for missing energy (which is a typical way to search for new particles, as discussed below) in diphoton decays of Higgs bosons, the Large Hadron Collider (LHC) can search for Higgs portal dark matter, and the ATLAS Collaboration at the LHC did just that.

Why Search For Higgs Portal Dark Matter Candidates?

One class of dark matter candidates which has attracted a great deal of attention is "Higgs portal" dark matter. These dark matter candidates would have no electromagnetic charge, no strong force color charge, and no weak force interactions, and hence would be nearly "sterile" dark matter candidates. But they would couple to the Higgs boson like all other massive Standard Model particles, which would provide a process by which these dark matter particles are created and would allow for very slight interactions (indirectly, through virtual Higgs boson loops) with ordinary matter. These are properties that we would generically like dark matter candidates to have, and they don't preclude the interaction of dark matter particles with each other via some force that acts only on particles in the dark sector.

Thus, it could be "self-interacting dark matter" with a means of formation shortly after the Big Bang and a slight interaction with baryonic matter via virtual Higgs boson exchange, all of which are properties that a particle dark matter candidate would need in order to come close to reproducing reality.

Another desirable feature of Higgs portal dark matter candidates is that if they exist, they would be observable at sufficiently powerful particle colliders like the LHC.

Searches For Beyond The Standard Model "Invisible" Decay Products

The LHC has searched at the highest available energies for the production of particles that its detectors aren't able to see (e.g. because the decay products are long lived enough to exit the detector before decaying and/or are not electrically charged) in a wide variety of processes.

This is done by looking for "missing transverse momentum": looking at a decay of a known Standard Model particle, adding up the mass-energy of the observed decay products, adding to that the predicted mass-energy carried away by neutrinos (which are "invisible" to collider detectors because they are electrically neutral and long lived), and adjusting the result for the fact that the detectors are known to miss a well quantified percentage of detectable particles produced in ordinary decays, because no detector is 100% perfect.
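
As a toy illustration of that bookkeeping (not the actual ATLAS reconstruction, and with made-up object momenta), the missing transverse momentum of an event is just the negative of the vector sum of the transverse momenta of everything the detector does see:

```python
import math

def missing_transverse_momentum(visible_objects):
    """Toy calculation: the missing transverse momentum vector is the
    negative of the vector sum of the transverse momenta of everything
    the detector reconstructed (photons, leptons, jets, ...)."""
    sum_px = sum(pt * math.cos(phi) for pt, phi in visible_objects)
    sum_py = sum(pt * math.sin(phi) for pt, phi in visible_objects)
    met_x, met_y = -sum_px, -sum_py
    return math.hypot(met_x, met_y)  # magnitude in GeV

# Hypothetical event: two photons plus a recoiling jet, (pt in GeV, phi in radians).
event = [(62.0, 0.10), (58.0, -0.15), (45.0, 2.90)]
print(f"MET ~ {missing_transverse_momentum(event):.1f} GeV")
```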

Missing transverse momentum is a generic signal of long lived, electrically neutral beyond the Standard Model particles, including dark matter candidates, so the LHC experiments are constantly vigilant for statistically significant amounts of missing transverse momentum in their collisions.

Why Look At High Energy Photon Pairs?

One of the less common decays of the Higgs boson predicted by the Standard Model is a decay to a pair of photons. It is an attractive one to study because it produces a very "clean" signal, and it was one of the Higgs boson decays central to the discovery of the Higgs boson in 2012 in the first place, even though other kinds of Higgs boson decays are much more common.

The signal is "clean" because there are very few "background" processes that generate photon pairs. The main other Standard Model fundamental or composite particles that do are Z boson at about 91.19 GeV and the neutral pion at about 135 MeV.  The frequency and character of the Standard Model predicted decays are very well characterized and understood. Putting a floor on the energy of the photon pairs considered in one's event selection easily screens out these decays.

The Standard Model Higgs boson has a mass of about 125 GeV, all of which is translated into photon energy in its Standard Model decay into photon pairs, so focusing on decays producing photon pairs with a combined (invariant) mass of 120 GeV to 130 GeV screens out all of the non-Higgs backgrounds, as this study did.
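
As a sketch of what such a selection window looks like in practice (illustrative numbers only, not the actual ATLAS selection), the invariant mass of a pair of massless photons depends only on their energies and opening angle:

```python
import math

def diphoton_mass(e1, e2, opening_angle):
    """Invariant mass of two (massless) photons with energies e1, e2 in GeV,
    separated by opening_angle in radians: m_yy = sqrt(2*e1*e2*(1 - cos(angle)))."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

def in_higgs_window(m_yy, low=120.0, high=130.0):
    """Keep only photon pairs whose invariant mass falls in the Higgs window."""
    return low <= m_yy <= high

# Hypothetical photon pairs: (E1, E2, opening angle).
pairs = [(70.0, 65.0, 1.5), (80.0, 60.0, 2.4), (20.0, 15.0, 0.4)]
for e1, e2, angle in pairs:
    m = diphoton_mass(e1, e2, angle)
    print(f"m_yy = {m:6.1f} GeV  -> {'keep' if in_higgs_window(m) else 'reject'}")
```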

Thursday, April 22, 2021

The Search For Intelligent Life Other Than On Earth

It would be a worldview and Copernican perspective revolution if we discovered intelligent life somewhere other than Earth. The Drake equation is the leading way of trying to estimate how common such a possibility might be.

One related question that receives less attention is the likelihood that, if we encounter evidence of intelligent life outside Earth, that form of life will be extinct when we encounter it.

Without doing formal math, my intuition is that the probability of us encountering evidence of an extinct form of intelligent life before we encounter intelligent life that is not extinct is rather high.
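
For reference, the Drake equation multiplies a chain of factors, the last of which, L (how long a civilization remains detectable), is the one most relevant to the extinct-versus-extant question. The parameter values in the sketch below are purely illustrative placeholders, not estimates I'm endorsing:

```python
def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Drake equation: expected number of detectable civilizations in the galaxy.
    r_star: star formation rate (stars/year); f_p: fraction of stars with planets;
    n_e: habitable planets per such star; f_l: fraction developing life;
    f_i: fraction developing intelligence; f_c: fraction releasing detectable
    signals; lifetime: years such signals remain detectable."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Purely illustrative inputs.
print(drake_equation(r_star=2.0, f_p=0.5, n_e=1.0, f_l=0.5, f_i=0.1, f_c=0.1, lifetime=10_000))
```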

Wednesday, April 21, 2021

A Short Guide To Beyond The Standard Model Physics Proposals That Are Wrong

A June 4, 2019 paper from the LHC reviews a plethora of beyond the Standard Model theories, most of which are motivated by supersymmetry or grand unified theory (GUT) models. It is one of the more comprehensive and up to date available reviews of such models.

Every single one of these theories is completely wrong.

These theories are the Lotus eater traps of modern physics that should be dead on arrival and summarily rejected in any serious effort to understand our reality.

Not a single one of them can even reproduce all of the particles of the Standard Model without also predicting particles that are not observed, or phenomena that have been experimentally ruled out.

The more compliant supersymmetry (SUSY) models are with experimental constraints, the less well they achieve the purposes for which they were designed.

It shocks me, to some extent, that it seems as if almost no one has even seriously tried to produce a model with no right handed neutrinos, no extra Higgs bosons, no other new fermions or bosons, no dark matter candidates, three generations of fundamental fermions, and that preserves baryon number and lepton number. 

But perhaps this is simply because the entire enterprise is doomed and ill motivated by observational evidence in the first place, and it simply can't be done.

Some of these models explain charged lepton flavor universality violations (usually with leptoquarks), which is pretty much the only significant anomaly between the Standard Model and observation that hasn't been ruled out (or that doesn't seem likely to be ruled out soon based upon what we know). But even if the anomaly turns out to be real, my own sense is that the solutions proposed to explain it are almost certainly not the correct ones.

Friday, April 16, 2021

Gravity and Locality

This post is a restatement of a Physics Forums post answering the question "is gravity action at a distance or is it a local force" and a specific subquestion related to string theory and loop quantum gravity, which I include in my answer:

Gravitational Waves and Hypothetical Gravitons Propagate At The Speed Of Light

The affirmative detection of gravitational waves by LIGO and other gravitational wave detectors, coinciding to within error bars with the photons from a neutron star merger, strongly supports the view that gravity is a local effect that propagates at the speed of light, rather than an instantaneous "at a distance" effect.

General relativity, and every graviton based quantum gravity theory adopts this position.

Localization Issue In Classical General Relativity And The Issues It Poses For Quantum Gravity

This said, however, there are issues like the localization of gravitational energy, and the self-interactions arising from gravitational fields, which general relativity as conventionally applied does not recognize.

Similarly, while individual gravitational interactions in general relativity conserve mass-energy, in general relativity with a cosmological constant the aggregate amount of mass-energy in the universe (apart from gravitational potential energy) increases with time at a global level. It is arguably possible to define a gravitational potential energy concept to address this, which a minority of GR theorists say overcomes the mass-energy conservation issue, but which IMHO doesn't really work.

The theoretical aspects of GR that disfavor localization or don't conserve mass-energy are particularly problematic when trying to quantize gravity, because a quantum particle based theory pretty much has to be a bottom up theory that derives all global properties from the individual local interactions that the theory permits.

Entanglement and Locality

In quantum gravity theories, it is conceivable that gravitons could become entangled with each other leading to correlations in the properties of the entangled particles analogous to those seen in quantum mechanics in other contexts.

But entanglement effects always involve particles that share a light-cone in space-time, and as a practical matter, while we can measure the correlated properties of entangled photons or other Standard Model particles, we do not have the capacity to measure the properties of individual gravitons, either individually, or statistically, so there is no way to resolve entanglement questions related to quantum gravity in the straightforward direct way that we do with Standard Model particles.

The gist of entanglement is that the correlations observed require the sacrifice of at least one of three axioms that we can usually resort to in physics: causality, locality, or realism. So, while sacrificing locality is one way to get entanglement-like effects, it is not the only one. Arguably, they are all equivalent ways of expressing the same concept and the fact that this is not manifestly obvious simply indicates that our language is not aligned with underlying concepts of how Nature really works.

In the case of interactions involving photons, gravitons and gluons, I personally often find it convenient, and tend to favor, sacrificing "causality" (i.e. the arrow of time) rather than locality. Fundamental massless bosons always move at the speed of light, and hence do not experience the passage of time in their own reference frame, so it makes sense that interactions mediated by them should not experience an arrow of time either. This disallows CP violation (which empirically is not observed) in interactions mediated by these massless bosons, and essentially says that the line in space-time that entangled massless bosons follow basically amounts to simultaneous points in the space-time coordinates that are best suited to judging causality. But to some extent the decision regarding which axiom to sacrifice in entanglement situations is a stylistic one with no measurable real world effects.

But lots of loop quantum gravity oriented quantum gravity theories adopt causality as a bedrock axiom and treat the dimension of time as in some sense distinct from the space dimensions, so one must sacrifice either locality or realism to some degree in these causation affirming LQG theories.

Similarly, setting up a graviton entanglement experiment (or even a "natural experiment" that would entangle gravitons somehow so that we could measure these effects) is beyond our practical experimental and observational capacity.

Decoherence

Another possible angle to get at this issue, which is attracting attention, is to look at phenomena in which a group of Standard Model particles acts "coherently" in the absence of outside interactions. In ordinary daily life, we are bombarded by all sorts of particles, which leads to rapid decoherence except in rarified circumstances. But in a deep space vacuum, a coherent group of particles can be expected to travel vast distances with only slight non-gravitational interactions with the outside environment.

Theorists can use even very incomplete quantum gravity theories, in which lots of quantities can't be calculated, to calculate the extent to which a flux of gravitons would lead to decoherence of such a group of Standard Model particles sooner than it would occur in the absence of such interactions (see, e.g., here).

The rate at which decoherence emerges in objects in the deep vacuum is thus a physical observable that helps us tease out the mechanism by which gravity works.

Non-Local Gravity Theories

There are explicitly non-local formulations of gravity and papers on this topic address a lot of the issues that the OP question seems to be getting at. Rather than try to explain them myself, I'll defer to the articles from the literature below that discuss these theories.

Literature For Further Reading

Some recent relevant papers include:

* Ivan Kolář, Tomáš Málek, Anupam Mazumdar, "Exact solutions of non-local gravity in class of almost universal spacetimes" arXiv: 2103.08555

* Reza Pirmoradian, Mohammad Reza Tanhayi, "Non-local Probes of Entanglement in the Scale Invariant Gravity" arXiv: 2103.02998

* J. R. Nascimento, A. Yu. Petrov, P. J. Porfírio, "On the causality properties in non-local gravity theories" arXiv: 2102.01600

* Salvatore Capozziello, Maurizio Capriolo, Shin'ichi Nojiri, "Considerations on gravitational waves in higher-order local and non-local gravity" arXiv: 2009.12777

* Jens Boos, "Effects of Non-locality in Gravity and Quantum Theory" arXiv: 2009.10856

* Jens Boos, Jose Pinedo Soto, Valeri P. Frolov, "Ultrarelativistic spinning objects in non-local ghost-free gravity" arXiv: 2004.07420

The work of Erik Verlinde, for example here, also deserves special mention. He has hypothesized that gravity may not actually be a distinct fundamental force, and may instead be an emergent interaction that arises from the thermodynamic laws applicable to entropy and/or entanglement between particles arising from Standard Model interactions.

His theories approximate the observed laws of gravity, sometimes including reproduction of dark matter and/or dark energy-like effects, although some early simple attempts that he made to realize this concept have been found to be inconsistent with observational evidence.

Particular Theories
Does string theory or loop quantum gravity have hypotheses on what gravity is or how it arises?
In string theory, either a closed or an open string, in certain vibration patterns, gives rise to gravitons, which carry the gravitational force between particles in a manner rather analogous to photons, exploiting a "loophole" in key "no go theorems" related to quantum gravity that make a naive point particle analogy to photons not viable.

This is generally done in a 10-11 dimensional space, although the way that the deeper 10-11 dimensions are distilled to the three dimensions of space and one dimension of time that we observe varies quite a bit. In many versions, the Standard Model forces as manifested through string theory are confined to a four dimensional manifold or "brane" while gravitons and gravity can propagate in all of the dimensions.

The distinction between the dimensions in which the Standard Model forces can operate and those in which gravity can operate helps string theorists explain why gravity is so weak relative to the other forces, in contrast to the naive generic expectation that if all forces ultimately derive from universal string-like particles, they ought to be more similar in strength, especially at high energies.

There are multiple problems with string theory, but the biggest one is that it is really a class of a vast number of possible theories that do not uniquely give rise to a single low energy approximation that resembles the Standard Model. Nobody has figured out how to thin out the universe of possible low energy approximations of string theory to find even one that contains everything that the Standard Model contains, while lacking everything that we have no experimental evidence for at experimentally testable energies. So, basically, there are almost no observables that can be calculated from string theory.

String theory, for example, tends to favor (and arguably requires) that its low energy approximations be supergravity theories (a class of theories that integrates supersymmetry with gravity), with Majorana neutrinos that undergo neutrinoless double beta decay, proton decay, and models in which the initial state of the Universe at the Big Bang has baryon number zero and lepton number zero, with baryogenesis and leptogenesis occurring soon after the Big Bang through a high energy process showing CP violation, baryon number violation and lepton number violation that generates far more particles than the only known Standard Model processes that do so. The existence of a massless spin-2 graviton is pretty much the only prediction of string theory that has any support from observational evidence, and of course, that support is itself only indirect and in its infancy. But the mathematical intractability of quantum gravity by other means under various "no go theorems" has been one important motivation for string theory's popularity.

In loop quantum gravity, the universe is fundamentally made up of nodes that have a small, finite number of connections to other nodes, and gravity is quantized primarily by quantizing space-time, rather than primarily by quantizing particles against a background that is smooth, continuous and local.

In LQG, locality is ill defined at this most fundamental level and is only an emergent property of the collective interactions of all of the nodes, in a sense similar to that in which temperature and pressure in the thermodynamics of gases are emergent properties of individual gas atoms randomly moving around at particular speeds that can be described globally in a statistical fashion. Particles move from node to node according to simple rules.

For example, LQG imagines that in space-time, most nodes in what we perceive to be a local area of space-time will connect to other, basically adjacent, nodes in the same local area, but there is no fundamental prohibition on a node having some connections to nodes in what we perceive to be the same local area, and other connections to nodes in what we perceive to be a local area billions of light years away.

The number of space-time dimensions is likewise an emergent property in LQG, and the number of space-time dimensions that emerges in this fashion isn't necessarily an integer: a system of nodes can have a fractal dimension, defined in a manner similar or identical to the mathematical definition of fractal dimension.

Some edge examples of LQG theories think of matter and particles themselves as deformations of space-time itself that are emergent, rather than as something separate that is placed within a space called "space-time."

As in classical general relativity, gravity is fundamentally a function of the geometry of space-time, but in LQG that geometry is discrete and broken rather than smooth and continuous, and locality is (as discussed above) ill defined. In LQG, the "background independence" of the theory, realized by not having a space-time distinct from gravity, is a hallmark axiom of the field and of the line of reasoning involved. This has the nice feature of "automatically" and obviously giving LQG properties like covariance, which impose tight constraints on gravity theories formulated with more conventional equations, like the Einstein field equations, where the same property holds but is not obvious without extended and non-obvious mathematical proofs. But it has the downside of expressing how gravity works in equations that are not very conceptually natural to the uninitiated, which can make understanding what LQG really says in more familiar contexts challenging.

One of the biggest practical challenges for LQG when confronted with experimental evidence, is that many naive versions of it should give rise to slight Lorentz Invariance Violations (i.e. deviations from special relativity) at the Planck level due to the discrete rather than continuous nature of space-time, because Lorentz Invariance is formulated as a continuous space-time concept. Strong experimental constraints disfavor Lorentz Invariance Violations to levels that naively extend below Planck length scale distances. But, the problem of discrete formulations of space-time leading to minor deviations from Lorentz Invariance is not a universal property of all LQG theories and can be overcome with different conceptualizations of it that evade this problem.

Like string theory, LQG is very much a work in progress that is striving to find ways, within its general approach and family of theories, to reproduce either classical general relativity in the classical limit, or a plausible modification of classical general relativity that can't be distinguished from general relativity with current observational evidence. There are a host of intermediate baby steps and confirmations of its predictions that have to be surmounted before it can gain wide acceptance and produce a full spectrum of "big picture" results, in part because so much of this class of theories is emergent and has to be discovered, rather than being put in by hand as in higher level theories that are operational and useful in practical situations.

Footnote Regarding Loop Quantum Gravity Terminology

There are two senses in which the term "loop quantum gravity" (LQG) is used, and I'm not being very careful about distinguishing the two in this post. Some of what I say about LQG is really specific only to the narrow sense theory that I discuss below, while other things that I say about LQG applies to the entire family of LQG style quantum gravity theories.

In a strict and narrow sense, loop quantum gravity refers to a specific quantum gravity theory that involves quantizing space-time that is largely attributed to Lee Smolin and Carlo Rovelli, although assigning credit to any scientific theory that a community of interacting researchers help formulate is a perilous and always somewhat controversial thing to do.

But the term is also frequently used as a catchall term for quantum gravity theories that share, with the narrow sense type example of loop quantum gravity, the feature that space-time itself or closely analogous concepts are quantized. LQG theories are distinguishable from quantum gravity theories, like string theory, that simply insert graviton particles that carry the gravitational force into a distinct pre-determined space-time with properties sufficient to be Lorentz invariant and observe other requirements of gravity theories that approximate general relativity in the classical limit.

For example, "causal dynamical triangulation" (CDT) is a quantum gravity theory that is in the loop quantum gravity family of theories, but is not precisely the same theory as the type example of LQG after which this family of theories is named.

"Spin foam" theories are another example of LQG family quantum gravity theories.

Thursday, April 15, 2021

The Wool Road

Language Log discusses a paper looking at the spread of wool and wool related technologies in Bronze Age Asia. The bottom line chronology is as follows:
—After 3300 calBC: early exchanges of prestige goods across Near East and the North Caucasus, with wool-cotton textiles moving as part of the elite exchange networks; mixed wool-cotton textile dates around 2910–2600 calBC.

—The mid third millennium BC: spread of wool textile technologies and associated management out of the Near East / Anatolia and into the southern Caucasus; according to 14C data obtained for textiles and synchronous samples this happened between 2550–1925 calBC; an almost synchronous date was obtained from the dates of the northern steppe regions, suggesting that the spread of innovative technology from the South Caucasus to the steppe zone and further north up to the forest zone occurred as part of the same process between 2450–1900 calBC.

—Between 1925–1775 calBC there was rapid eastward transmission of the wool (and associated technologies) across the steppe and forest-steppe of the Volga and southern Urals, out across Kazakhstan and into western China between 1700–1225 calBC. This same process of transmission through the steppe ultimately brought woven wool textiles into societies around the western Altai and the Sayan Mountains of southern Siberia.
The Language Log post continues by observing that:
The findings for the timing and spread of wool technology comport well with those for bronze, chariots, horse riding, iron, weapon types, ornamentation and artwork, and other archeologically recovered cultural artifacts that we have examined in previous posts. Moreover, they are conveniently correlated with archeological cultures such as Andronovo[.]

Wednesday, April 14, 2021

Another Way Of Thinking About Neutrino Oscillation

This is one of the deepest and most thought provoking articles I've seen about neutrinos in a long time. This passage from the body text stands out:

[W]e have obtained the PMNS matrix without having to ever talk about mass diagonalization and mismatches between flavor and mass basis.

The abstract and the paper are as follows: 

We apply on-shell amplitude techniques to the study of neutrino oscillations in vacuum, focussing on processes involving W bosons. 
We start by determining the 3-point amplitude involving one neutrino, one charged lepton and one W boson, highlighting all the allowed kinematic structures. The result we obtain contains terms generated at all orders in an expansion in the cutoff scale of the theory, and we explicitly determine the lower dimensional operators behind the generation of the different structures. 
We then use this amplitude to construct the 4-point amplitude in which neutrinos are exchanged in the s-channel, giving rise to oscillations. 
We also study in detail how flavor enters in the amplitudes, and how the PMNS matrix emerges from the on-shell perspective.
Gustavo F. S. Alves, Enrico Bertuzzo, Gabriel M. Salla, "An on-shell perspective on neutrino oscillations and non-standard interactions" arXiv (March 30, 2021).
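
For context on what "mass diagonalization and mismatches between flavor and mass basis" refers to, the conventional textbook picture (not anything from the paper itself) writes flavor states as superpositions of mass eigenstates via the PMNS matrix, with the familiar two-flavor vacuum oscillation probability following from the mass-squared splittings:

```latex
% Conventional textbook picture that the paper sidesteps:
% flavor eigenstates are superpositions of mass eigenstates via the PMNS matrix U,
\[
  \nu_\alpha = \sum_{i=1}^{3} U^{*}_{\alpha i}\,\nu_i ,
  \qquad \alpha = e, \mu, \tau ,
\]
% which, in the two-flavor vacuum approximation, gives the oscillation probability
\[
  P(\nu_\alpha \to \nu_\beta) =
  \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right).
\]
```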

Tuesday, April 13, 2021

Lepton Universality Violation Considered Again

Moriond 2021 has reported new results on theoretically clean observables in rare B-meson decays from LHCb that are 3.1 sigma away from the SM prediction of lepton universality in the B+→K+μμ vs. B+→K+ee comparison. The deviation is again in the same direction: muons are less common than electrons. The result uses 9/fb, i.e. the whole LHCb dataset so far. The preprint is on arXiv.

Statistics Issues

Statistically, my main issue is cherry picking.

They've found several instances where you have LFV, and they combine those to get their significance in sigma. They ignore the many, many other instances where you have results that are consistent with LFU, even though the justification for excluding those results is non-obvious.

For example, lepton universality violations are not found in tau lepton decays or pion decays, and are not found in anti-B meson and D* meson decays or in Z boson decays. There is no evidence of LFV in Higgs boson decays either.

As one paper notes: "Many new physics models that explain the intriguing anomalies in the b-quark flavour sector are severely constrained by Bs-mixing, for which the Standard Model prediction and experiment agreed well until recently." Luca Di Luzio, Matthew Kirk and Alexander Lenz, "One constraint to kill them all?" (December 18, 2017).

Similarly, see Martin Jung, David M. Straub, "Constraining new physics in b→cℓν transitions" (January 3, 2018) ("We perform a comprehensive model-independent analysis of new physics in b→cℓν, considering vector, scalar, and tensor interactions, including for the first time differential distributions of B→D∗ℓν angular observables. We show that these are valuable in constraining non-standard interactions.")

An anomaly disappeared between Run-1 and Run-2 as documented in Mick Mulder, for the LHCb Collaboration, "The branching fraction and effective lifetime of B0(s)→μ+μ− at LHCb with Run 1 and Run 2 data" (9 May 2017) and was weak in the Belle Collaboration paper, "Lepton-Flavor-Dependent Angular Analysis of B→K∗ℓ+ℓ−" (December 15, 2016).

When you are looking at deviations from a prediction, you should include all experiments that implicate that prediction.

In a SM-centric view, all leptonic or semi-leptonic W boson decays arising when a quark decays to another kind of quark should be interchangeable parts (subject to mass-energy caps on final states determined from the initial state), since all leptonic or semi-leptonic W boson decays (either at tree level or removed one step at the one loop level) are deep down the same process. See, e.g., Simone Bifani, et al., "Review of Lepton Universality tests in B decays" (September 17, 2018). So, you should be lumping them all together to determine the significance of the evidence for LFV.

Their justification for not pooling the anomalous results with the non-anomalous ones is weak and largely not stated expressly. At a minimum, the decision to draw a line regarding what should be looked at in the LFV bunch of results to get the 3.1 sigma and what should be looked at in the LFU bunch of results that isn't used to moderate the 3.1 sigma in any way is highly BSM model dependent, and the importance of that observation is understated in the analysis (and basically just ignored).

The cherry picking also gives rise to look elsewhere effect issues. If you've made eight measurements in all, divided among three different processes, the look elsewhere effect is small. If the relevant universe is all leptonic and semi-leptonic W and Z boson decays, in contrast, there are hundreds of measurements out there and even after you prune the matter-energy conservation limited measurements, you still have huge look elsewhere effects that trim one or two sigma from the significance of your results.
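
As a rough illustration of how a trials factor erodes significance (a simple Bonferroni-style estimate with made-up trial counts, not the experiments' own look-elsewhere correction):

```python
from scipy.stats import norm

def global_significance(local_z, n_trials):
    """Convert a local significance into a global one using a simple
    Bonferroni-style trials factor: p_global = 1 - (1 - p_local)^N."""
    p_local = norm.sf(local_z)               # one-sided local p-value
    p_global = 1.0 - (1.0 - p_local) ** n_trials
    return norm.isf(p_global)                # back to a Z-score

for n in (1, 10, 100):
    print(f"3.1 sigma local, {n:3d} comparable measurements -> "
          f"{global_significance(3.1, n):.1f} sigma global")
```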

Possible Mundane Causes

It is much easier to come up with a Standard Model-like scenario in which there are too many electron-positron pairs produced than it is to come up with one in which there are too many muon or tau pairs produced.

The ratios seem to be coming up at more than 80% but less than 90% of the expected Standard Model number of muon pair decays relative to electron-positron decays.

The simplest answer would be that there are two processes. 

One produces equal numbers of electron-positron and muon pair decays together with a positively charged kaon in each case, as expected.  The pre-print linked above states this about this process:
The B+ hadron contains a beauty antiquark, b, and the K+ a strange antiquark, s, such that at the quark level the decay involves a b → s transition. Quantum field theory allows such a process to be mediated by virtual particles that can have a physical mass larger than the mass difference between the initial- and final-state particles. In the SM description of such processes, these virtual particles include the electroweak-force carriers, the γ, W± and Z bosons, and the top quark. Such decays are highly suppressed and the fraction of B+ hadrons that decay into this final state (the branching fraction, B) is of the order of 10^−6.
A second process, with a branching fraction of about 1/6th that of the primary process, produces a positively charged kaon together with an electromagnetically neutral particle that has more than about 1.02 MeV of mass (enough to decay to an electron-positron pair), but less than the roughly 211.3 MeV of mass necessary to produce a muon pair when it decays.

It turns out that there is exactly one such known particle, fundamental or composite: the neutral pion, with a mass of about 134.9768(5) MeV.

About 98.8% of the time, a πº decays to a pair of photons, and that decay would be ignored because the end product doesn't match the filtering criteria. But about 1.2% of the time, it decays to an electron-positron pair together with a photon, and all other possible decays are vanishingly rare by comparison.

So, we need a decay of a B+ meson to a K+ meson and a neutral pion with a branching fraction of about (10^-6)*(1/6)*(1/0.012)= 1.4 * 10^-5.

It turns out that B+ mesons do indeed decay to K+ mesons and neutral pions with a branching fraction of 1.29(5)*10^-5 which is exactly what it needs to be to produce the apparent violation of lepton universality.
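
As a quick sanity check on that arithmetic (just the back-of-the-envelope estimate above restated in code, using the branching fractions already quoted):

```python
# Back-of-the-envelope check of the neutral-pion hypothesis sketched above.
primary_bf = 1e-6          # order of the B+ -> K+ l+ l- branching fraction (from the quote above)
apparent_deficit = 1 / 6   # the "second process" is ~1/6th of the primary one
pi0_to_ee_gamma = 0.012    # ~1.2% of neutral pions decay to e+ e- gamma

required_bf = primary_bf * apparent_deficit / pi0_to_ee_gamma
measured_bf = 1.29e-5      # quoted B+ -> K+ pi0 branching fraction

print(f"required B+ -> K+ pi0 branching fraction ~ {required_bf:.2e}")
print(f"measured B+ -> K+ pi0 branching fraction ~ {measured_bf:.2e}")
```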

It also appears to me that the theoretical calculation of the K+µ+µ- to K+e+e- ratio isn't considering this decay, although it seems mind boggling to me that so many physicists studying such a carefully examined process would somehow overlook the B+ --> K+πº decay channel's impact on their expected outcome, which is the obvious way to reverse engineer the process.

Time will tell if this will amount to anything. I've posted this analysis at the thread at the Physics Forums linked above, to get some critical review of this hypothesis.

If by some crazy twist of fate, this analysis isn't flawed, then it resolves almost all of one of the biggest anomalies in high energy physics outstanding today.

A Universal Maximum Energy Density

I've explored this line of thought before, so seeing it in a paper caught my eye. The ratio of mass to event horizon volume of black holes also conforms to this limitation, reaching a maximum at the minimum mass of any observed black hole.

This matters, in part, because quantum gravity theories need infrared or ultraviolet fixed point boundaries to be mathematically consistent, and this might be a way to provide that boundary.

One generic consequence of such a hypothesis is that primordial black holes smaller than stellar black holes don't simply not exist; they theoretically can't exist.

Recent astronomical observations of high redshift quasars, dark matter-dominated galaxies, mergers of neutron stars, glitch phenomena in pulsars, cosmic microwave background and experimental data from hadronic colliders do not rule out, but they even support the hypothesis that the energy-density in our universe most likely is upper-limited by ρ(unimax), which is predicted to lie between 2 to 3 the nuclear density ρ0. 
Quantum fluids in the cores of massive neutron stars with ρ≈ρ(unimax) reach the maximum compressibility state, where they become insensitive to further compression by the embedding spacetime and undergo a phase transition into the purely incompressible gluon-quark superfluid state. 
A direct correspondence between the positive energy stored in the embedding spacetime and the degree of compressibility and superfluidity of the trapped matter is proposed.
In this paper relevant observation signatures that support the maximum density hypothesis are reviewed, a possible origin of ρ(unimax) is proposed and finally the consequences of this scenario on the spacetime's topology of the universe as well as on the mechanisms underlying the growth rate and power of the high redshift QSOs are discussed.
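
A rough way to see why small primordial black holes would run afoul of such a cap (my own back-of-the-envelope framing, not anything from the paper): the mean density inside the Schwarzschild radius scales as 1/M², so only black holes of at least a few solar masses stay anywhere near the 2 to 3 times nuclear density ceiling quoted above.

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8           # speed of light, m/s
M_SUN = 1.989e30      # solar mass, kg
RHO_NUCLEAR = 2.3e17  # rough nuclear saturation density, kg/m^3

def mean_density(mass_kg):
    """Mean density inside the Schwarzschild radius r_s = 2GM/c^2; scales as 1/M^2."""
    r_s = 2.0 * G * mass_kg / C**2
    return mass_kg / ((4.0 / 3.0) * math.pi * r_s**3)

for m_sun in (1, 3, 5, 10):
    rho = mean_density(m_sun * M_SUN)
    print(f"{m_sun:3d} solar masses: ~{rho / RHO_NUCLEAR:5.1f} x nuclear density")
```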

Friday, April 9, 2021

A Few Notable Points About Charlemagne

The Basics

Charlemagne was King of the Franks, in an empire centered more or less around modern France but extending further, from 768 CE to his death in 814 CE (co-ruling with his brother Carloman I until 771 CE).

He was a close ally of the Pope, and was crowned "Emperor of the Romans" by the Pope in 800 CE in what came to be called the Carolingian Empire.

This was right in the middle of the "Middle Ages" of Europe and towards the end of what are sometimes known as the "Dark Ages" of Europe. His rule preceded the Great Schism of 1054 in which the Roman Catholic Church split from the Eastern Orthodox churches.

Personal Life

He was born before his parents were married in the eyes of the church. They may, however, have had a Friedelehe (a.k.a. a "peace" marriage), a Germanic form of quasi-marriage not accepted by the Christian church, with the following characteristics:
  • The husband did not become the legal guardian of the woman, in contrast to the Muntehe, or dowered marriage (some historians dispute the existence of this distinction).
  • The marriage was based on a consensual agreement between husband and wife, that is, both had the desire to marry.
  • The woman had the same right as the man to ask for divorce.
  • Friedelehe was usually contracted between couples from different social status.
  • Friedelehe was not synonymous with polygyny, but enabled it.
  • The children of a Friedelehe were not under the control of the father, but only that of the mother.
  • Children of a Friedelehe initially enjoyed full inheritance rights; under the growing influence of the church their position was continuously weakened.
  • Friedelehe came into being solely by public conveyance of the bride to the groom's domicile and the wedding night consummation; the bride also received a Morgengabe (literally "morning gift", a gift of money given to a wife upon consummation of a marriage).
  • Friedelehe was able to be converted into a Muntehe (dowered or guardianship marriage), if the husband later conveyed bridewealth (property conveyed to the wife's family). A Muntehe can also be characterized as a secular legal sale of a woman by her family clan's patriarch to her husband (sometimes with the requirement that the consummation of the marriage be witnessed).

Other alternative relationship forms that existed in that era included a Kebsehe with an unfree "concubine" in the Middle Ages, the morganatic marriage (a marriage without inheritance rights, usually of a noble to a commoner lover after the death of a first legitimate wife), the angle marriage (a "secret marriage" entered into without clergy involvement comparable to modern "common law marriage" banned by the church in 1215 but continuing in practice into the 1400s), and a robbery or kidnapping marriage (a forced marriage by abduction of the bride, sometimes with her or her family's tacit connivance to avoid an arranged marriage or because the couple lacks the economic means to arrange a conventional marriage).

Charlemagne "had eighteen children with eight of his ten known wives or concubines. . . . Among his descendants are several royal dynasties, including the Habsburg, and Capetian dynasties. . . . most if not all established European noble families ever since can genealogically trace some of their background to Charlemagne." The accounts are not entirely consistent.

He was mostly a serial monogamist, although he had two successive concubines at the same time as the marriage that produced most of his children, for about two years. 

His first relationship was with Himiltrude. After the fact, a later Pope declared it a legal marriage (despite the fact that this would logically have resulted in the invalidity of Charlemagne's later marriages, as she lived until age 47). But she appears to have been a daughter of a noble family (as would be expected if she had an opportunity to have a relationship with a king's son), so she wasn't an unfree concubine in a Kebsehe either. An informal marriage-like relationship along the lines of a Friedelehe that was not recognized by the church probably best characterizes her status at the time. This relationship produced one son, who suffered from a spinal deformity, was called "the Hunchback," and spent his life confined to care in a monastery.

Charlemagne's relationship with Himiltrude was put aside two years later when he legally married Desiderata, the daughter of the King of the Lombards, but the relationship with Desiderata produced no children and was formally annulled about a year later. 

He then married Hildegard of the Vinzgau in 771 with whom he had nine children before Hildegard died twelve years later in 783. 

Two of the children were named kings (one of Aquitaine and one of Italy), and one was made Duke of Maine (a region in northwestern France that is home to the city of Le Mans). Three died as infants. One daughter, who never married, died at age 25 after having one son out of wedlock with an abbot. Another daughter died at age 47 after having had three children out of wedlock with a court official while remaining in good standing in Charlemagne's court. A third daughter probably died at age 27 having never married or had any children, although the time of her death is not well documented and she may have spent her final years in a convent.

During his marriage to Hildegard he had two concubines, apparently successively, with whom he had one child each: Gersuinda, starting in 773 and producing a child in 774, and Madelgard, starting in 774 and producing a daughter in 775 who was made an abbess.

He then married Fastrada in 784 and she died ten years later in 794, after having two daughters with him, one of whom became an abbess. 

He then married Luitgard in 794 who died childless six years later in 800. 

After Luitgard's death, Charlemagne had two subsequent successive concubines. The first was Regina, starting in 800, with whom he had two sons (in 801 and 802), one of whom was made a bishop and then an abbot, and the other of whom became the archchancellor of the empire. The second was Ethelind, starting in 804, with whom he had two sons, in 805 and 807, the first of whom became an abbot.

A Female Byzantine Rival

His main competitor for the title of Emperor was the Byzantine Empire's first female monarch, Irene of Athens.

Brutal Conversions Of European Pagans And Wars

Charlemagne was engaged in almost constant warfare throughout his reign and often personally led his armies in these campaigns, accompanied by elite royal guards called the scara.
  • He conquered the Lombard Kingdom of Northern Italy from 772-776. He briefly took Southern Italy in 787, but it soon declared independence and he didn't try to recapture it.
  • He spent most of his rule fighting pitched campaigns to rule mostly Basque Aquitaine and neighboring regions, and the northern Iberian portion of Moorish Spain.
  • In the Saxon Wars, spanning thirty years ending in 804 and eighteen battles, he conquered Saxonia in northwestern Germany and proceeded to convert these pagan peoples to Christianity; he also took Bavaria in 788 and solidified control over it by 794.
  • He went beyond them to fight the Avars and Slavs further to the east, taking Slavic Bohemia, Moravia, Austria and Croatia.
In his campaign against the Saxons to his east, he Christianized them upon penalty of death, leading to events such as the Massacre of Verden. There he had 4,500 Saxons, who had been involved in a rebellion against him in Saxon territory that he had previously conquered, executed by beheading in a single day.

According to historian Alessandro Barbero in "Charlemagne: Father of a Continent" (2004) at pgs. 46-47, "the most likely inspiration for the mass execution of Verden was the Bible" and Charlemagne desiring "to act like a true King of Israel", citing the biblical tale of the total extermination of the Amalekites and the conquest of the Moabites by Biblical King David.

A royal chronicler, commenting on Charlemagne's treatment of the Saxons a few years after the Massacre of Verden, records with regard to the Saxons that "either they were defeated or subjected to the Christian religion or completely swept away."

The Milky Way And the Universe Are Big

I don't think that the last one is actually true, because it is a destination that is moving away from us at the speed of light.


Wednesday, April 7, 2021

Muon g-2 Experiment Results Confirm Previous Measurement UPDATED


Abstract

Either Fermilab has confirmed that it is extremely likely that there is new physics out there, based on the pre-today gold standard calculation of the Standard Model prediction, or this new result, combined with two new calculations (also released today) of the parts of the Standard Model prediction that account for most of the theoretical error in that prediction, shows that the Standard Model is complete and there is no new physics out there. The no new physics conclusion is very likely to prevail when the dust settles, undermining further the motivation for building a new next generation particle collider.

Run-1 of a new Fermilab measurement of muon g-2 (which is an indirect global measurement of deviations from the Standard Model of Particle Physics) in the E989 experiment has confirmed the previous Brookhaven National Laboratory measurement done twenty years ago. 

The combined Brookhaven and Fermilab results show a 4.2 sigma deviation from the Standard Model of Particle Physics prediction. The statistical significance of this discrepancy will almost surely increase as the results from Runs 2, 3 and 4 are announced, eventually reducing the margin of error in the experimental measurement by a factor of four.

Also today, a Lattice QCD collaboration known as BMW released a new paper in Nature that concludes, contrary to the consensus calculation of the Standard Model predicted value of muon g-2 released last summer, that the leading order hadronic vacuum polarization contribution, which is the dominant source of theoretical error in the Standard Model prediction, should be calculated in a different manner. Their result turns out to be consistent to within 1.6 sigma of the combined muon g-2 measurement (and to within 1.1 sigma of the Fermilab measurement), suggesting that the Standard Model is complete and requires no new physics. Meanwhile, another preprint announced an improved calculation of the hadronic light by light contribution to the Standard Model prediction that also moves the prediction closer to the experimental value, although not enough by itself to alleviate the large discrepancy between the old Standard Model prediction and the experimental result.

The results (multiplied by 10^11 for easier reading, with one standard deviation magnitude in the final digits shown in parenthesis after each result, followed by the statistical significance of the deviation from the old Standard Model prediction) are as follows:

Fermilab (2021): 116,592,040(54) - 3.3 sigma
Brookhaven's E821 (2006): 116,592,089(63) - 3.7 sigma
Combined measurement: 116,592,061(41)  - 4.2 sigma
Old Standard Model prediction: 116,591,810(43)

BMW Standard Model prediction: 116,591,954(55)

The BMW prediction is 144 x 10^-11 higher than the old Standard Model prediction.

Compared to the old Standard Model prediction, the new measurement would tend to support the conclusion that there are undiscovered new physics beyond the Standard Model which give rise to the discrepancy of a specific magnitude (about 50% more than the electroweak contribution to muon g-2, or 3.3% of the QCD contribution to muon g-2), from an unknown source.

But it is also very plausible that the error estimate in the quantum chromodynamics (QCD) component of the Standard Model theoretical prediction, especially the hadronic vacuum polarization component, which is the dominant source of error in that prediction, is understated by approximately a factor of three (i.e. it is closer to the 2% error margin of purely theoretical lattice QCD calculations than to the estimated 0.6% error derived from using other experiments involving electron-positron collisions as a proxy for much of the QCD calculation). The BMW paper does that in a way that seems to resolve the discrepancy.

And a new calculation of the hadronic light by light contribution to the muon g-2 calculation was also released on arXiv today (and doesn't seem to be part of the BMW calculation). This increases the contribution from that component from 92(18) x 10^-11 in the old calculation to 106.8(14.7) x 10^-11.

This increase of 14.8 x 10^-11, in the same direction as the BMW calculation adjustment, would, if the two are independent, result in a combined increase in the Standard Model prediction of 158.8 x 10^-11, a substantial share of the gap between the measurements and the old prediction.

Fermilab's Muon g-2 Measurement Announcement


Fermilab announced its measurement of the muon anomalous magnetic moment (usually called muon g-2, but actually (g-2)/2) this morning in a Zoom web seminar. The result was unblinded on February 25, 2021, and 170 people have kept the secret since then. There were 5,000 Zoom participants at the announcement and more on YouTube (where it is still available).

Multiple papers will be released on arXiv this evening. The lead paper is here. Technical papers are here and here and here.

This quantity was last measured in data collection ending in 2001, with the final results announced in 2006, by the Brookhaven National Laboratory; some of the equipment used in that experiment was moved from Brookhaven to Fermilab near Chicago and then upgraded to conduct this experiment.

This is a preliminary first announced result from Run-1 and will be refined with more data collection and analysis. 

More results will be announced in the coming years from Fermilab as more data is collected and the systematic error is reduced with refinements to the adjustments made for various experimental factors, and another independent experiment will also produce a measurement in a few years.

The Result


Tension with Standard Model is 4.2 sigma.

The last measurement was made by Brookhaven National Labs (BNL) in its E821 experiment in data collection completed in 2001 with the final results released in 2006.

E821:  116,592,089(63) x 10^−11

This result: 116,592,040(54) x 10^−11

The two results (conducted with some of the same equipment, upgraded for this experiment) are consistent with each other to within less than one standard deviation.

Combined result: 116,592,061(41) x 10^−11

Standard Model prediction: 116,591,810(43) × 10^−11

Expressed as a two sigma confidence interval, the Standard Model prediction times 10^11 is: 116,591,724 to 116,591,896.

E821 v. SM prediction: 279 x 10^-11
Magnitude of 1 sigma: 76.3 x 10^−11
Difference in sigma: 3.657 sigma

Fermilab v. SM prediction: 230 x 10^-11
Magnitude of 1 sigma: 69.0 x 10^−11
Difference in sigma: 3.333 sigma

Combined v. SM prediction: 251 x 10^−11
Magnitude of 1 sigma: 59.4 x 10^−11
Difference in sigma: 4.225 sigma
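
Restating that arithmetic as a quick script (the inputs are just the numbers quoted above, with the experimental and theoretical errors combined in quadrature):

```python
import math

SM_OLD = (116_591_810, 43)  # old Standard Model prediction and its error, in units of 10^-11

measurements = {
    "E821 (Brookhaven)": (116_592_089, 63),
    "Fermilab Run-1":    (116_592_040, 54),
    "Combined":          (116_592_061, 41),
}

def tension(measured, prediction):
    """Difference between measurement and prediction, in units of the
    quadrature-combined experimental and theoretical errors."""
    (m, m_err), (p, p_err) = measured, prediction
    sigma = math.hypot(m_err, p_err)
    return (m - p), sigma, (m - p) / sigma

for name, value in measurements.items():
    diff, sigma, n_sigma = tension(value, SM_OLD)
    print(f"{name:18s}: diff = {diff:4d}, 1 sigma = {sigma:5.1f}, {n_sigma:.2f} sigma")
```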

The final experimental error is expected to be reduced to 16 x 10^−11. If the best fit value stays relatively stable, the result should be a 5 sigma discovery of new physics by Run-2 or Run-3.  Data collection is currently underway in Run-4, but data analysis of the previous runs is underway.

Significance

We care about this measurement so much because it is a global measurement of the consistency of the Standard Model with experimental reality that is impacted by every part of the Standard Model. Any deviation from the Standard Model shows up in muon g-2. 

If muon g-2 is actually different from the Standard Model prediction then there is something wrong with the Standard Model, although we don't know what is wrong, just how big the deviation is from the Standard Model.

In absolute terms, the measured value is still very close to the Standard Model prediction. The discrepancy between the new experimentally measured value announced today and the Standard Model prediction is roughly 2 parts per million.

Still, the experimental value is in strong tension with the Standard Model prediction (about 17% smaller in absolute terms than the Brookhaven measurement, but only 9% weaker than the Brookhaven measurement in terms of statistical significance). The combined result is becoming more statistically significant than the Brookhaven result alone.

How Could This Evidence Of New Physics Be Wrong?

If this is not "new physics", then realistically, the discrepancy has to be due to a significant error in the QCD contribution to the theoretical prediction, most plausibly an underestimate of the hadronic vacuum polarization piece, that is not reflected in the estimated theory error. 

For the experimental result and the theoretical prediction to be consistent at the two sigma level that is customarily considered untroubling, the QCD-dominated theory error would have to be about 124 x 10^-11 rather than the roughly 43 x 10^-11 claimed, i.e., about 1.8% of the hadronic contribution rather than the estimated 0.6%. 
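As a sanity check on that figure, here is a back-of-the-envelope calculation (a sketch, not anything from the papers) that solves for the theory error needed to bring each comparison down to exactly 2 sigma, again assuming independent errors, in units of 10^-11:

from math import sqrt

for name, diff, exp_err in [("E821", 279, 63), ("Fermilab", 230, 54), ("Combined", 251, 41)]:
    # Solve 2 * sqrt(exp_err**2 + theory_err**2) = diff for theory_err.
    theory_err = sqrt((diff / 2) ** 2 - exp_err ** 2)
    print(name, round(theory_err))  # roughly 124, 102 and 119

The ~124 x 10^-11 figure corresponds to the Brookhaven comparison; the other comparisons require a theory error in the same ballpark.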

This is certainly not outside the realm of possibility, and indeed is reasonably probable. The error in the pure lattice QCD calculation is about 2%, and the smaller quoted error comes from using experimental results from electron-positron collisions, in lieu of lattice QCD calculations, to determine part of the hadronic piece of the theoretical estimate. 

There is precedent for a similar kind of issue producing an apparent discrepancy between the Standard Model and experiment. The muonic hydrogen proton radius puzzle (which appeared to show that protons "orbited" by a muon were significantly smaller than protons "orbited" by an electron in ordinary hydrogen) was ultimately resolved when it turned out that the ordinary hydrogen proton radius determined in older experiments was less accurate than those papers estimated, while the new muonic hydrogen measurement of the proton radius was correct for both ordinary and muonic hydrogen, as new measurements of the proton radius in ordinary hydrogen confirmed. 

The fact that the hadronic vacuum polarization input used in the theoretical calculation of muon g-2 was derived from rather old electron-positron collision data, rather than from more recent or more direct measurements, could easily be the source of the discrepancy.

A new paper published in Nature today, which I wasn't aware of when I wrote the analysis above, makes precisely this argument, and performs a more precise HVP calculation that significantly increases the HVP contribution to muon g-2, bringing the prediction into line with the experimental results. The pertinent part of the abstract states:
The most precise, model-independent determinations [of the leading order HVP contribution to muon g-2] so far rely on dispersive techniques, combined with measurements of the cross-section of electron–positron annihilation into hadrons. To eliminate our reliance on these experiments, here we use ab initio quantum chromodynamics (QCD) and quantum electrodynamics simulations to compute the LO-HVP contribution. 
We reach sufficient precision to discriminate between the measurement of the anomalous magnetic moment of the muon and the predictions of dispersive methods. Our result favours the experimentally measured value over those obtained using the dispersion relation. Moreover, the methods used and developed in this work will enable further increased precision as more powerful computers become available.

Jester, a physicist who has access to the paper and who runs the Resonaances blog (in the sidebar), tweets this about the conclusion of the new paper (called the BMW paper):

The paper of the BMW lattice collaboration published today in Nature claims (g-2)_SM = 0.00116591954(55), if my additions are correct. This is only 1.6 sigma away from the experimental value announced today.

I calculate a 1.1 sigma discrepancy from the new Fermilab result, which is a difference of 86(77) x 10^-11, and a 1.6 sigma discrepancy from the combined result, which is a difference of 107(68.6) x 10^-11. Both are consistent with the BMW paper calculation of the Standard Model prediction. 

Quanta Magazine nicely discusses the BMW group's calculation and how it differs from the one from last summer that Fermilab used as a benchmark. The BMW pre-print was posted February 27, 2020, and last revised on August 18, 2020. (The 14-person BMW team is named after Budapest, Marseille and Wuppertal, the three European cities where most team members were originally based.)

They made four chief innovations. First they reduced random noise. They also devised a way of very precisely determining scale in their lattice. At the same time, they more than doubled their lattice’s size compared to earlier efforts, so that they could study hadrons’ behavior near the center of the lattice without worrying about edge effects. Finally, they included in the calculation a family of complicating details that are often neglected, like mass differences between types of quarks. “All four [changes] needed a lot of computing power,” said Fodor.

The researchers then commandeered supercomputers in Jülich, Munich, Stuttgart, Orsay, Rome, Wuppertal and Budapest and put them to work on a new and better calculation. After several hundred million core hours of crunching, the supercomputers spat out a value for the hadronic vacuum polarization term. Their total, when combined with all other quantum contributions to the muon’s g-factor, yielded 2.00233183908. This is “in fairly good agreement” with the Brookhaven experiment, Fodor said. “We cross-checked it a million times because we were very much surprised.” 

With the new calculation of the hadronic light-by-light contribution mentioned above (which reduces the relative margin of error in that piece from 20% to 14%) factored in as well, the differences fall to 73.7(77) x 10^-11 (less than 1 sigma) and 92.3(68.6) x 10^-11 (about 1.3 sigma), respectively, which would be excellent agreement between theory and experiment indeed. 

Also, even if the experimental error falls to 17 x 10^-11 as expected, if the BMW theoretical error remains at 55 x 10^-11, the magnitude of one sigma for comparing them will still be 57.6 x 10^-11, because the combined uncertainty falls much more slowly than the experimental precision improves. Basically, the Fermilab Run-2 to Run-4 measurements are almost sure to remain consistent with the BMW theoretical prediction, despite their much improved accuracy.
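The same quadrature arithmetic makes the point (again, just a sketch under the stated assumptions):

from math import hypot

print(hypot(17, 55))        # ~57.6: the one sigma yardstick barely shrinks
print(107 / hypot(17, 55))  # ~1.9 sigma vs. BMW even with the improved experiment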

If The Discrepancy Is Real, What Does This Tell Us About New Physics?

It is also plausible that there is new physics out there to be discovered and that the muon g-2 measurement is revealing it. The new BMW collaboration result announced in Nature today, however, strongly disfavors that conclusion.

The muon g-2 measurement doesn't tell us what we are missing, but it does tell us how big of an effect we are missing.

Conservatively, using the smaller absolute difference between the new Fermilab result and the Standard Model prediction as a reference point, the discrepancy is 50% bigger than the entire electroweak contribution to the calculation. It is 3.3% of the size of the entire QCD contribution.

Given other LHC results excluding most light beyond the Standard Model particles that could interact directly or indirectly with muons, any new particle contributing to muon g-2 would probably have to be quite heavy, and would probably contribute not at tree level through a direct interaction with muons, but indirectly, through a loop involving a virtual particle that couples to the muon.
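To make that concrete, the standard back-of-the-envelope scaling for a one-loop contribution to muon g-2 from a new particle of mass M with coupling g to muons (a generic estimate, not tied to any particular model) is:

\Delta a_\mu \;\sim\; \frac{g^2}{16\pi^2}\,\frac{m_\mu^2}{M^2}

Setting this equal to the observed gap of roughly 2.5 x 10^-9 gives M ~ g x 170 GeV, which is why a real anomaly of this size generically points to new particles near the electroweak scale (or heavier particles with large couplings), exactly the territory the LHC has already been combing through.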

But really, aside from knowing that something new is likely to be out there, the range of potential tweaks that could produce the observed discrepancy is great.

A host of arXiv pre-prints released today propose BSM explanations for the anomaly (assuming that it is real), including the following thirty-one preprints: here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, and here.

Every single one of them is rubbish, unconvincingly renews a flawed and tired theory that is already disfavored by other data, and isn't worth reading or even mentioning individually. 

The utter worthlessness of the proposed explanations for a large muon g-2 anomaly, if there is one, is one more reason to suspect that the BMW calculation of the Standard Model prediction is correct, and that its consistency with the experimental result largely rules out muon g-2 related new physics.

Real World Significance For Future Experiments

This experimental result is, by far, the strongest justification for building a new bigger particle collider to see if it can observe directly the new physics that the muon g-2 discrepancy seems to point towards.

It also provides excellent motivation to redo, at higher precision, the electron-positron collision measurements that feed the data-driven prediction, to determine whether this key experimental input to the Standard Model prediction is flawed and is the real source of the discrepancy.

Experimental Error Analysis

The total error in the Run-1 measurement is 462 parts per billion. Error analysis was a big part of the two-year wait between data collection and announcement.

The experiment is on track for roughly 100 ppb systematic and 100 ppb statistical errors, for a combined total error of about 140 ppb in the final result.

Theory Error Analysis

The actual muon magnetic moment g(µ) is approximately 2.00233184(1). 

This would be exactly 2 in Dirac's relativistic quantum mechanics, and exactly 1 if classical electromagnetism were correct. The Standard Model's "anomalous" contribution is due to virtual interactions of muons with other Standard Model particles. Mostly these interactions involve emitting and absorbing virtual photons (the QED contribution), but they can also involve other kinds of virtual particles, which give rise to the electroweak and QCD contributions to the muon anomalous magnetic moment. The heavier the virtual particle, the smaller the contribution it makes to muon g-2, all other things being equal.

The muon g-2 isolates the quantum corrections beyond the Dirac value of the muon magnetic moment, which is why it is called the muon anomalous magnetic moment. It is defined as (g-2)/2.

The state-of-the-art theoretical calculation of muon g-2 is found at T. Aoyama, et al., "The anomalous magnetic moment of the muon in the Standard Model" arXiv (June 8, 2020).



Muon g-2 = QED+EW+QCD

QED = 116,584,718.931(104) × 10^−11

EW = 153.6(1.0) × 10^−11

QCD = HVP+HLbL

* HVP = 6,845(40) × 10^−11 (0.6% relative error).

* HLbL (phenomenology + lattice QCD) = 92(18) × 10^−11 (20% relative error).

Theory error is dominated by the QCD contribution (the hadronic vacuum polarization and the hadronic light by light part of the calculation).
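As a cross-check, summing those four pieces, with errors added in quadrature (a simplification; the official combination treats the uncertainties more carefully), reproduces the headline prediction; units of 10^-11:

from math import sqrt

pieces = {               # value, error
    "QED":  (116_584_718.931, 0.104),
    "EW":   (153.6, 1.0),
    "HVP":  (6_845.0, 40.0),
    "HLbL": (92.0, 18.0),
}

total = sum(v for v, _ in pieces.values())
error = sqrt(sum(e ** 2 for _, e in pieces.values()))
print(round(total), round(error, 1))  # ~116591810 and ~43.9

This matches the 116,591,810(43) x 10^-11 figure quoted above (naive quadrature gives ±44; the quoted ±43 reflects the official error treatment).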

Other Coverage

Woit discusses the growing consensus that the experimental results are solid, but that the conflict between irreconcilable theoretical predictions of the Standard Model value will take time to resolve. As he explains:
The problem is that while the situation with the experimental value is pretty clear (and uncertainties should drop further in coming years as new data is analyzed), the theoretical calculation is a different story. It involves hard to calculate strong-interaction contributions, and the muon g-2 Theory Initiative number quoted above is not the full story. The issues involved are quite technical and I certainly lack the expertise to evaluate the competing claims. To find out more, I’d suggest watching the first talk from the FNAL seminar today, by Aida El-Khadra, who lays out the justification for the muon g-2 Theory Initiative number, but then looking at a new paper out today in Nature from the BMW collaboration. They have a competing calculation, which gives a number quite consistent with the experimental result. . . .

So, the situation today is that unfortunately we still don’t have a completely clear conflict between the SM and experiment. In future years the experimental result will get better, but the crucial question will be whether the theoretical situation can be clarified, resolving the current issue of two quite different competing theory values.
Lubos Motl has a post on the topic which refreshes our memories about the ultra-precise calculation and measurement of the electron anomalous magnetic moment, which has been measured at values confirming theory to about 13 significant digits (compared to 6 significant digits for muon g-2). The value of the electron magnetic moment g (not g-2) most precisely measured in 2006 by Gerry Gabrielse, et al., is:

2.00231930436182(52)

This translates to an electron g-2 of:

0.00115965218091(26) 

The experimentally measured muon g-2 is greater than the electron g-2 by approximately:

0.00000626843  

This value is consistent with the theoretical prediction (which, for the electron, has less contamination from hadronic and weak, i.e. non-QED, terms because the electron is much less massive, making it easier to calculate).
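A trivial check of the difference quoted above, using the combined muon measurement and the Gabrielse et al. electron value (a sketch of the arithmetic only):

a_mu = 0.00116592061      # combined experimental muon (g-2)/2
a_e  = 0.00115965218091   # electron (g-2)/2 from the 2006 measurement
print(a_mu - a_e)         # ~6.26843e-06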

Tommaso Dorigo covers the result at his blog as well (and reminds readers of his March 29, 2021 post betting that lepton universality will prevail over the anomalies to the contrary, notwithstanding some fairly high sigma tensions with that hypothesis discussed at Moriond 2021, one of the big fundamental physics conferences; I agree with that position, although not quite so confidently).

The New York Times story has nice color coverage and background but doesn't add much of substance in its reporting, and fails to emphasize the key issue (the conflict in the theoretical predictions) that Woit and Jester recognized.

A discussion at Physical Review Letters helpfully reminds us of other work on the topic in the pipeline:
Fermilab’s new results provide compelling evidence that the answer obtained at Brookhaven was not an artifact of some unexamined systematics but a first glimpse of beyond-SM physics. While the results announced today are based on the 2018 run, data taken in 2019 and 2020 are already under analysis. We can look forward to a series of higher-precision results involving both positive and negative muons, whose comparison will provide new insights on other fundamental questions, from CPT violation to Lorentz invariance [10]. This future muon g−2 campaign will lead to a fourfold improvement in the experimental accuracy, with the potential of achieving a 7-sigma significance of the SM deficit.

Other planned experiments will weigh in over the next decade, such as the E34 experiment at J-PARC, which employs a radically different technique for measuring g−2 [11]. E34 will also measure the muon electric dipole moment, offering a complementary window into SM deviations.
[10] R. Bluhm et al., “CPT and Lorentz Tests with Muons,” Phys. Rev. Lett. 84, 1098 (2000); “Testing CPT with Anomalous Magnetic Moments,” 79, 1432 (1997). 
[11] M. Abe et al., “A new approach for measuring the muon anomalous magnetic moment and electric dipole moment,” Prog. Theor. Exp. Phys. 2019 (2019).

Sabine Hossenfelder's tweet on the topic, while short, as the genre dictates, is apt:

Re the muon g-2, let me just say the obvious: 3.3 < 3.7, 4.2 < 5, and the suspected murderer has for a long time been hadronic contributions (ie, "old" physics). Of course the possibility exists that it's new physics. But I wouldn't bet on it.

See also my answer to a related question at Physics Stack Exchange.

UPDATE April 9, 2021:

The new muon g-2 paper count has surpassed 47.

Jester has some good follow-up analysis in this post and its comments. Particularly useful are his comments on what kind of missing physics could explain the 4.2 sigma anomaly, if there is one. 

But let us assume for a moment that the white paper value is correct. This would be huge, as it would mean that the Standard Model does not fully capture how muons interact with light. The correct interaction Lagrangian would have to be (pardon my Greek)
[The interaction Lagrangian appears as an image in the original post: the Standard Model's minimal photon-muon coupling plus an effective magnetic dipole term suppressed by an EeV-scale mass.]
The first term is the renormalizable minimal coupling present in the Standard Model, which gives the Coulomb force and all the usual electromagnetic phenomena. 
The second term is called the magnetic dipole. It leads to a small shift of the muon g-factor, so as to explain the Brookhaven and Fermilab measurements. This is a non-renormalizable interaction, and so it must be an effective description of virtual effects of some new particle from beyond the Standard Model. 
. . . 
For now, let us just crunch some numbers to highlight one general feature. Even though the scale suppressing the effective dipole operator is in the EeV range, there are indications that the culprit particle is much lighter than that. 
First, electroweak gauge invariance forces it to be less than ~100 TeV in a rather model-independent way.
Next, in many models contributions to muon g-2 come with the chiral suppression proportional to the muon mass. Moreover, they typically appear at one loop, so the operator will pick up a loop suppression factor unless the new particle is strongly coupled. The same dipole operator as above can be more suggestively recast as
[The recast dipole operator appears as an image in the original post, with the muon mass, a loop factor, and a scale of 300 GeV in the denominator.]
The scale 300 GeV appearing in the denominator indicates that the new particle should be around the corner! Indeed, the discrepancy between the theory and experiment is larger than the contribution of the W and Z bosons to the muon g-2, so it seems logical to put the new particle near the electroweak scale.
He continues in a comment stating:

the g-2 anomaly can be explained by a 300 GeV particle with order one coupling to muons, or a 3 TeV particle with order 10 coupling to muons, or a 30 GeV particle with order 0.1 coupling to muons 

He also quips that:

I think it has a very good chance to be real. But it's hard to pick a model - nothing strikes me as attractive

Jester doesn't take the next natural step in that analysis, which is to note that the LHC has already scoured the parameter space and come up empty for the vast majority of the possibilities at energy scales from 30 GeV to 300 GeV (or at the electroweak scale more generally). 

A measurement suggesting a major new particle "just around the corner", when exhaustive searches for such a particle by other means have come up empty, favors the BMW result. Many other commentators have similarly urged that experiment should be the judge of which of two competing and divergent theoretical predictions is most likely to be correct. In that debate, Jester also comments on the tweaks that have been made to the BMW paper as it has evolved:

the BMW value has changed by one sigma after one year. More precisely, in units of the 10^-11 they quote a_µ{L0-HVP} = 7124(45), 7087(53), 7075(55) in V1 (Feb'20), V2 (Aug'20), and Nature (Apr'21), respectively. Not that I think that there is anything wrong with that - it is completely healthy that preprint values evolve slightly after feedback and criticism from the community.

UPDATE (April 12, 2021):

Another worthwhile comment from Jester on the same thread:
The point is that the dipole operator displayed in the blog post is not gauge invariant under the full SU(3)xSU(2)xU(1) symmetry of the Standard Model. To make it gauge invariant you need to include the Higgs field H, and the operator becomes ~ 1/(100 TeV)^2 H (\bar L \sigma_\mu\nu \mu_R) B_\mu\nu where L is the lepton doublet containing the left-hand muon and the muon neutrino. If you replace the Higgs with its vacuum expectation value, <H> = (0,v), v \sim 200 GeV you obtain the muon dipole operator from the blog post. Because the scale appearing in the denominator of the gauge invariant operator is 100 TeV, the maximum mass of the particle that generates it is 100 TeV. Sorry if it's too technical but I have no good idea how to explain it better.
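Rendered in LaTeX for readability (my transcription of Jester's comment, not his typesetting), the gauge invariant operator is roughly:

\mathcal{L} \;\supset\; \frac{1}{(100\ \mathrm{TeV})^2}\, H \left(\bar{L}\, \sigma^{\mu\nu} \mu_R\right) B_{\mu\nu} + \mathrm{h.c.}

where L is the left-handed lepton doublet containing the muon and its neutrino, B_{\mu\nu} is (presumably) the hypercharge field strength, and substituting <H> = (0, v) with v ~ 200 GeV gives back the muon dipole operator from his post.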

And also:

This paper discusses the difficulties for the axion explanation of muon g-2: http://arxiv.org/abs/2104.03267

And, also:
In the Standard Model the role of the Higgs field is indeed to give masses, not only to W and Z, but also to fermions. In this latter case you can also think of the Higgs as the agent of chirality violation. Without the Higgs field chirality would be conserved, that is to say, left-handed polarized fermions would always stay left-handed, and idem for the right-handed ones. Why is this relevant? Because the dipole term I wrote violates chirality, flipping left-handed muons into right-handed ones, and vice-versa. This is a heuristic argument why the Higgs is involved in this story.

4Gravitons has a well done post on the subject:

the upshot is that there are two different calculations on the market that attempt to predict the magnetic moment of the muon. One of them, using older methods, disagrees with the experiment. The other, with a new approach, agrees. The question then becomes, which calculation was wrong? And why? . . .

the two dueling predictions for the muon’s magnetic moment both estimate some amount of statistical uncertainty. It’s possible that the two calculations just disagree due to chance, and that better measurements or a tighter simulation grid would make them agree. Given their estimates, though, that’s unlikely. That takes us from the realm of theoretical uncertainty, and into uncertainty about the theoretical. The two calculations use very different approaches. 
The new calculation tries to compute things from first principles, using the Standard Model directly. The risk is that such a calculation needs to make assumptions, ignoring some effects that are too difficult to calculate, and one of those assumptions may be wrong. 
The older calculation is based more on experimental results, using different experiments to estimate effects that are hard to calculate but that should be similar between different situations. The risk is that the situations may be less similar than expected, their assumptions breaking down in a way that the bottom-up calculation could catch.

None of these risks are easy to estimate. They’re “unknown unknowns”, or rather, “uncertain uncertainties”. And until some of them are resolved, it won’t be clear whether Fermilab’s new measurement is a sign of undiscovered particles, or just a (challenging!) confirmation of the Standard Model.

Matt Strassler's eventual post "Physics is Broken!" is less insightful, although he does pull back from his overstated headline.

Ethan Siegel ("Starts With A Bang" blogger and Forbes columnist) meanwhile, counsels caution:

Ideally, we’d want to calculate all the possible quantum field theory contributions — what we call “higher loop-order corrections” — that make a difference. This would include from the electromagnetic, weak-and-Higgs, and strong force contributions. We can calculate those first two, but because of the particular properties of the strong nuclear force and the odd behavior of its coupling strength, we don’t calculate these contributions directly. Instead, we estimate them from cross-section ratios in electron-positron collisions: something particle physicists have named “the R-ratio.” There is always the concern, in doing this, that we might suffer from what I think of as the “Google translate effect.” If you translate from one language to another and then back again to the original, you never quite get back the same thing you began with. . . . 

But another group — which calculated what’s known to be the dominant strong-force contribution to the muon’s magnetic moment — found a significant discrepancy. As the above graph shows, the R-ratio method and the Lattice QCD methods disagree, and they disagree at levels that are significantly greater than the uncertainties between them. The advantage of Lattice QCD is that it’s a purely theory-and-simulation-driven approach to the problem, rather than leveraging experimental inputs to derive a secondary theoretical prediction; the disadvantage is that the errors are still quite large.

What’s remarkable, compelling, and troubling, however, is that the latest Lattice QCD results favor the experimentally measured value and not the theoretical R-ratio value. As Zoltan Fodor, professor of physics at Penn State and leader of the team that did the latest Lattice QCD research, put it, “the prospect of new physics is always enticing, it’s also exciting to see theory and experiment align. It demonstrates the depth of our understanding and opens up new opportunities for exploration.”
While the Muon g-2 team is justifiably celebrating this momentous result, this discrepancy between two different methods of predicting the Standard Model’s expected value — one of which agrees with experiment and one of which does not — needs to be resolved before any conclusions about “new physics” can responsibly be drawn.