Tuesday, November 26, 2024

The Lightest Neutron Star Ever? Or Something Else?

A new preprint argues that a newly observed object that looks a lot like a neutron star, but is less massive than should have been theoretically possible, might be an exotic star.

But, since the observed mass of 0.77 +0.20/−0.17 solar masses is still within two sigma of the theoretical minimum mass of a neutron star, which is 1.17 solar masses, I don't take very seriously the conclusion that it could be an exotic object (made up of color-flavor-locked quark matter).
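The two-sigma arithmetic is easy to check. A minimal sketch in Python, crudely using only the upper error bar of the asymmetric +0.20/−0.17 uncertainty (a full treatment of asymmetric errors is more subtle):

```python
# Gap between the measured mass and the minimum theoretical neutron star
# mass, in units of the upper error bar -- a crude back-of-envelope check.
measured, err_up = 0.77, 0.20   # solar masses, +0.20 upper uncertainty
theoretical_min = 1.17          # solar masses

n_sigma = (theoretical_min - measured) / err_up
print(f"{n_sigma:.1f} sigma")   # prints 2.0 sigma
```

The gap lands almost exactly at two sigma, which is why the exotic interpretation is a stretch rather than a necessity.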

In other news, I have a dim opinion of any paper whose abstract begins:
The gauge singlet right-handed neutrinos are one of the essential fields in neutrino mass models that explain tiny masses of active neutrinos.

If you feel the need to create right-handed neutrinos (with masses different from any of the three Standard Model active neutrinos) to explain anything, your model is probably wrong because you are too lazy to find a solution that doesn't need them, and there is no positive experimental evidence that they exist. This possibility has fueled a steady stream of dead-end theoretical speculation for at least a decade or two. This paper is the work of dim bulbs in the physics community. Try harder until you come up with something better.

To be clear, I'm not saying that I'm a professional physicist coming up with something better myself. But you don't have to be a genius composer yourself to appreciate the difference between Mozart and a mediocre music theory student.

A Physics Blog Of Note (And Hiatus Note)

Manuel Urueña, a physicist focused on theoretical gravitation, has an interesting physics blog entitled "Thoughts in theoretical physics" that you may want to check out. 

He has recent posts on the modified inertia formulation of MOND (particularly in light of Mach's principle), gravitomagnetism, gravitational shielding, and other physics conjectures. The blog focuses a bit more on personal conjecture and a bit less on physics "current events" than this one does, but there's nothing wrong with that.

I'm a bit out of pocket for time at the moment, so I haven't carefully analyzed any of his posts yet, but I may do so in the future. If they look good and the blog gets updated with any regularity (which if you look at my blog roll, you know that I define leniently), I may add it to my blog roll when I have the presence of mind to do that.

Also, while there is some chance that I'll post tomorrow or on Thanksgiving Day, I'll be taking a brief hiatus to take a 30th wedding anniversary trip and will be off the grid for that. But, unless my plane crashes, or I'm murdered, or eaten by wild animals, or World War III starts, or the blogger host goes out of business, I'll probably be back afterwards in due course.

Quantum Mechanics Without Feynman Diagrams

Nima Arkani-Hamed, a famous physicist, is making progress in his efforts to replace the quantum physics calculations that are usually done with Feynman diagrams, which have a clear heuristic interpretation (assigning probabilities to all possible paths that a particle or particles can take from a starting position to an ending one), with a completely different kind of calculation, one not involving infinite series that have to be approximated, which can get the same results in a subset of real-world situations with less of a computational burden.

4gravitons sketches out his latest efforts in this quest.

MOND Was Right, ΛCDM Was Very Wrong, Re When Galaxies Formed

Stacy McGaugh takes a moment to emphasize that when it comes to the timing of galaxy formation, MOND was right and the ΛCDM model was profoundly wrong.
Our paper on massive galaxies at high redshift is out in the Astrophysical Journal today. This is a scientific analysis of the JWST data that has accumulated to date as it pertains to testing galaxy formation as hypothesized by LCDM and MOND. That massive galaxies are observed to form early (z > 10) corroborates the long standing prediction of MOND, going back to Sanders (1998):
Objects of galaxy mass are the first virialized objects to form (by z=10), and larger structure develops rapidly
The contemporaneous LCDM prediction from Mo, Mao, & White (1998) – a touchstone of galaxy formation theory with nearly 2,000 citations – was
present-day disc [galaxies] were assembled recently (at z<=1).
This is not what JWST sees, as morphologically mature spiral galaxies are present to at least z = 6 (Ferreira et al 2024). More generally, LCDM was predicted to take a long time to build up the stellar mass of large galaxies, with the median time to reach half the final stellar mass being about half a Hubble time (seven billion years, give or take). In contrast, JWST has now observed many galaxies that meet this benchmark in the first billion years. That was not expected to happen.

From here.

As an aside, I strongly favor naming the critical acceleration of MOND, usually denoted a0, "Milgrom's Constant," after Mordehai Milgrom, who devised MOND in 1983.

A Technical But Potentially Important Conflict With The ΛCDM Model

The cosmic microwave background radiation measured by the Planck collaboration is observed to be significantly colder around nearby spiral galaxies than the ΛCDM model (a.k.a. the Standard Model of Cosmology) predicts, and is much more correlated with the ultra-large scale cosmic filament structure of the universe than the ΛCDM model predicts as well. This means a couple of things:

* The ΛCDM model has added one more problem to its dozens of existing conflicts with observational evidence. The only reasons it is still used are that it is simple, and that there is no consensus alternative.

* The inferences made from the CMB may be subject to a pervasive source of highly significant systematic error that is not yet well understood. This could impact all sorts of cosmology "facts" based upon these systematically mismeasured parameters. These errors could also be a source of some key tensions in current cosmology measurements.

* The problem with trying to explain this with a physical mechanism related to dark matter is that dark matter effects are already deeply integrated into the ΛCDM model. 
We confirm at the 5.7σ level previous studies reporting Cosmic Microwave Background (CMB) temperatures being significantly lower around nearby spiral galaxies than expected in the ΛCDM model. The significance reported in our earlier work was disputed by Addison 2024, who reported lower significances when including pixels at distances far beyond the galactic halos while disregarding pixels close to the galaxies where the main signal is seen. Here we limit the study to pixels well within the galactic halos, focus on galaxies in dense cosmic filaments and improve on signal-to-noise compared to previous studies. 
The average CMB temperature in discs around these galaxies is always much lower in Planck data than in any of the 10,000 Planck-like CMB simulations. Even when correcting for the look-elsewhere-effect, the detection is still at the 3−4σ level. We further show that the largest scales (ℓ<16) of the Planck CMB fluctuations are more correlated with the distribution of nearby galaxies than 99.99% of simulated CMB maps. 
We argue that the existence of a new CMB foreground cannot be ignored and a physical interaction mechanism, possibly involving dark matter, as well as linked to intergalactic magnetic fields, should be sought.
Frode K. Hansen, et al., "A 5.7σ detection confirming the existence of a possibly dark matter related CMB foreground in nearby cosmic filaments" arXiv:2411.15307 (November 22, 2024).

Monday, November 25, 2024

More Nazca Lines Found

So says the New York Times, and don't bring aliens into it. It took a century to find the previous 430 of them. There could be as many as 500 more yet to be rediscovered.

Hundreds More Nazca Lines Emerge in Peru’s Desert

With drones and A.I., researchers managed to double the number of mysterious geoglyphs in a matter of months.

Some 303 previously uncharted geoglyphs made by the Nazca, a pre-Inca civilization in present-day Peru dating from 200 B.C. to 700 A.D., were identified with the help of machine learning. . . . 
The Nazca people carved the designs into the earth by scraping back the pebbled, rust-colored surface to expose the yellow-gray subsoil. Little is known about the shadowy culture, which left no written record. Aside from the etchings, pretty much all that exists of the civilization are pieces of pottery and an ingenious, still functioning irrigation network.

The ancient geoglyphs have attracted theories that range from the religious (they were homages to powerful mountain and fertility gods) to the environmental (they were astronomical guides to predict infrequent rains in the nearby Andes) to the fantastical (they were landing strips and parking lots for alien spacecraft).

Dr. Sakai said that geoglyphs were drawn near pilgrimage routes to temples, which implies that they functioned as sacred spaces for community rituals, and could be considered planned, public architecture. The newly discovered geoglyphs are mainly located along a network of trails that wound through the pampa. They were most likely made by individuals and small groups to share information about rites and animal husbandry.

How Are Cosmology Based Neutrino Mass Estimates Calculated?

How are cosmology based neutrino mass estimates calculated? What conditions must hold for them to be accurate? 

A new pre-print explains:

The cosmological upper bound on the total neutrino mass is the dominant limit on this fundamental parameter. Recent observations, soon to be improved, have strongly tightened it, approaching the lower limit set by oscillation data. Understanding its physical origin, robustness, and model-independence becomes pressing. 

Here, we explicitly separate for the first time the two distinct cosmological neutrino-mass effects: the impact on background evolution, related to the energy in neutrino masses; and the "kinematic" impact on perturbations, related to neutrino free-streaming. We scrutinize how they affect CMB anisotropies, introducing two effective masses enclosing background (mν^Backg.) and perturbations (mν^Pert.) effects. We analyze CMB data, finding that the neutrino-mass bound is mostly a background measurement, i.e., how the neutrino energy density evolves with time. The bound on the "kinematic" variable mν^Pert. is largely relaxed, mν^Pert. < 0.8 eV. 

This work thus adds clarity to the physical origin of the cosmological neutrino-mass bound, which is mostly a measurement of the neutrino equation of state, providing also hints to evade such a bound.

Toni Bertólez-Martínez, Ivan Esteban, Rasmi Hajjar, Olga Mena, Jordi Salvado, "Origin of cosmological neutrino mass bounds: background versus perturbations" arXiv:2411.14524 (November 21, 2024).

Thursday, November 21, 2024

Proton And Neutron Structure

"Nucleons" are protons and neutrons. 

A new paper determines that their mass is distributed over a radius consistent with about one femtometer (10^-15 meters), and is spread over a larger region than their electromagnetic charge, which is carried by their valence quarks. The proton charge radius is 0.842 femtometers. This is because the electromagnetically neutral gluons which bind these quarks, whose energy is the source of most of the mass of protons and neutrons, are spread out more than the electromagnetically charged quarks in a proton or neutron. 

Being closely connected to the origin of the nucleon mass, the gravitational form factors of the nucleon have attracted significant attention in recent years. We present the first model-independent precise determinations of the gravitational form factors of the pion and nucleon at the physical pion mass, using a data-driven dispersive approach. 
The so-called "last unknown global property" of the nucleon, the D-term, is determined to be −3.38 +0.26/−0.32. The root mean square radius of the mass distribution inside the nucleon is determined to be 0.97 +0.02/−0.03 fm. Notably, this value is larger than the proton charge radius, suggesting a modern structural view of the nucleon where gluons, responsible for most of the nucleon mass, are distributed over a larger spatial region than quarks, which dominate the charge distribution. We also predict the nucleon angular momentum and mechanical radii, providing further insights into the intricate internal structure of the nucleon.
Xiong-Hui Cao, Feng-Kun Guo, Qu-Zhi Li, De-Liang Yao, "Precise Determination of Nucleon Gravitational Form Factors" arXiv:2411.13398 (November 20, 2024).

Graviton Mass And The Cosmological Constant

Meta Note: This post is my 2,700th post on this blog (over a little less than thirteen and a half years). It is my 12,080th post on this blog and my Wash Park Prophet blog combined (over a little less than nineteen and a half years).

Gravitons gravitate in proportion to their mass-energy, which, for a massless graviton, is a function of its gravitational wave frequency, even though it has no rest mass. But this effect is usually ignored. 

Exploring massive graviton theories involving low-mass, relativistic gravitons can reveal some of these effects, which shouldn't actually depend on gravitons having rest mass. 

The biggest problem with massive graviton theories is that they change how gravity works qualitatively, and not just quantitatively, by giving rise to a different number of degrees of freedom than is found in general relativity.
Relations between the graviton mass and the cosmological constant Λ have led to some interesting implications. We show that in any approach which leads to a direct correlation between the graviton mass and Λ, either through direct substitution of gravitational coupling in dispersion relations or through the linearization of Einstein equations with massive spin-2 fields, the Compton wavelength of the graviton lies in the superhorizon scale. As a result any gravitational approaches where the graviton mass is related directly to the cosmological constant are of no observational significance.
Oem Trivedi, Abraham Loeb, "On the Cosmological Constant-Graviton Mass correspondence" arXiv:2411.12757 (November 15, 2024).

Wednesday, November 20, 2024

Inverted Neutrino Hierarchy Disfavored By DESI

The DESI collaboration has found that the sum of the three neutrino masses should be less than 0.071 eV at 95% confidence (assuming as a prior only that the sum of the neutrino masses is greater than zero). This disfavors an inverted neutrino hierarchy, which demands a minimum sum of neutrino masses of roughly 0.100 eV, while a normal neutrino hierarchy requires a minimum sum of only about 0.059 eV. The preference for a normal hierarchy is only about two sigma, however. This estimate is also heavily dependent upon the assumed dark energy model, and assumes a fixed cosmological constant. 
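The quoted minimum sums follow directly from the neutrino oscillation mass splittings. A minimal sketch, assuming representative splitting values (the exact global-fit numbers vary slightly between analyses):

```python
import math

# Representative oscillation mass splittings (eV^2); exact global-fit
# values vary slightly between analyses.
dm21_sq = 7.4e-5   # "solar" splitting
dm31_sq = 2.5e-3   # "atmospheric" splitting (magnitude)

# Normal hierarchy minimum: lightest state m1 = 0
nh_min = math.sqrt(dm21_sq) + math.sqrt(dm31_sq)

# Inverted hierarchy minimum: lightest state m3 = 0
ih_min = math.sqrt(dm31_sq) + math.sqrt(dm31_sq + dm21_sq)

print(f"NH minimum sum: {nh_min:.3f} eV")  # ~0.059 eV
print(f"IH minimum sum: {ih_min:.3f} eV")  # ~0.101 eV
```

The inverted ordering is forced to pay the large atmospheric splitting twice, which is why its floor sits right at the edge of the new DESI bound.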

The upper bound on the sum of the neutrino masses from direct measurements at KATRIN is about 1.35 eV, a cap that is likely to fall by about 0.75 eV, to 0.60 eV, when the KATRIN experiment is concluded. The upper limit based upon cosmology observations, as of 2020, was about 0.130 eV, and DESI significantly tightens this bound. Of course, direct measurement bounds on the absolute neutrino masses remain much weaker than those from cosmology, and will continue to be weaker for the foreseeable future.

The number of effective neutrino species N(eff) is estimated by DESI to be 3.18 ± 0.16 compared to the expected value of 3.044 if the only neutrinos are the three Standard Model active neutrinos, a possibility that is compatible at the one sigma level. This disfavors a model with four or more neutrinos impacting N(eff) at more than the five sigma level (as well as disfavoring the already ruled out possibility that there are two or fewer neutrino flavors at more than five sigma), consistent with past cosmological estimates of N(eff). This is a slightly higher value of N(eff) than a prior DESI analysis, due to this paper's additional consideration of "full shape" information, but the difference is immaterial given that the number of neutrino flavors is a quantity that changes in integer increments.
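The sigma counting here is straightforward. A quick sketch, assuming that a fourth fully thermalized neutrino species would raise N(eff) by roughly one unit:

```python
# DESI's N(eff) versus the Standard Model expectation, and versus an
# assumed fourth fully thermalized neutrino species.
measured, err = 3.18, 0.16
sm_value = 3.044                  # three Standard Model active neutrinos
four_species = sm_value * 4 / 3   # assumption: one extra thermalized species

sigma_sm = (measured - sm_value) / err    # ~0.85: compatible at one sigma
sigma_four = (four_species - measured) / err  # ~5.5: disfavored above 5 sigma
print(f"{sigma_sm:.2f} sigma above SM; {sigma_four:.2f} sigma below 4 species")
```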

The loophole in the N(eff) measurement, however, is that a very massive fourth neutrino species would not register as a neutrino contributing to the number of effective neutrino species.

For example, a 50 GeV mass fourth generation active neutrino would not change N(eff).

In addition to its conclusions about neutrinos, the DESI collaboration concludes that the late time Hubble constant value is 68.40 ± 0.27 in the usual (km/s)/Mpc units. This is closer to the CMB based determination of 67.66 ± 0.42, which is consistent with the new DESI estimate at the 1.5 sigma level, than many other efforts to determine the late time Hubble constant have suggested. The DESI results still prefer a non-constant amount of dark energy, however.

Friday, November 15, 2024

The Hubble Tension Considered

When Did We Learn That The Universe Is Expanding?

There is no reasonable doubt that the size of the observable universe has expanded over the last 13.8 billion years or so from a time, called the Big Bang, when it was dramatically smaller than it is today.

Why this happened and the details of the very first moments of it are still the subject of ongoing research, but there is near universal consensus about the broad outlines of this process from Big Bang nucleosynthesis (ending about 15 minutes after the Big Bang in a conventional cosmological chronology) to the present.

A 1924 paper by Carl Wirtz, which was one of the earliest to note the astronomy observations now explained by the expansion of the universe and to reach the conclusion that the universe was expanding, has been made more widely available in an English translation on the 100th anniversary of its publication. 

Better known cosmologist Edwin Hubble, who read Wirtz's work, reached the same conclusion from the data and improved upon it by proposing "Hubble's Law," which quantified and characterized this expansion with what has come to be known as Hubble's constant, in 1929.

Quantifying and characterizing any changes in the rate at which the universe has expanded has proven to be a more challenging problem which we are still wrestling with a century later.

General relativity with a cosmological constant is an idea that had been proposed only a few years earlier when Wirtz and Hubble made their early ground breaking conclusions that astronomy observations supported an expanding universe.

Once Hubble's Law was proposed, the race was on to measure Hubble's constant, sometimes producing conflicting results. Some of the early estimates of it (in (km/s)/Mpc units), one of which predated the formal publication of Hubble's law, were as follows (often with significant uncertainties or no estimated uncertainties):

1927: 625
1929: 500
1956: 180
1958: 75
early 1970s: 55
mid-1970s: 100
late 1970s to 1994: 50–90

The best fit values for estimates made since 1994 have ranged from 69.8 to 76.9, and the uncertainties in those estimates have more or less steadily fallen, to as little as 0.42 for CMB based indirect early time estimates and as little as 1.0 for some late time direct measurements.

Notably, it took less than 30 years from the publication of Hubble's law to get measurements of the value of Hubble's constant that were reasonably close to the modern measured value.

Measurements of the Hubble constant, which is functionally related to the cosmological constant of general relativity, have had discrepancies and tensions (much bigger in magnitude than the current "Hubble tension") before, but those were always resolved by reducing sources of measurement uncertainty in the differing values of the Hubble constant obtained from different kinds of observations.

The LambdaCDM "Standard Model of Cosmology" assumes that this expansion is explained by general relativity with a cosmological constant. The source of this phenomenon, attributed to the cosmological constant in the LambdaCDM model, is often called "dark energy."

The Hubble Tension

The simple explanation of this expansion with a constant cosmological constant in general relativity (which by the way, facially, at least, is a gravitational modification and not a new substance or separate force), which leads to a constant value of the Hubble constant, however, has broken down in the last few years. 

Increasingly powerful space telescopes have shown a tension between the high precision determination of the Hubble constant inferred from the Planck cosmic microwave background (CMB) observations early in the universe's history, and late universe measurements of the Hubble constant.
[M]easurements from the Planck mission published in 2018 indicate a lower value of 67.66 ± 0.42 (km/s)/Mpc, although, even more recently, in March 2019, a higher value of 74.03 ± 1.42 (km/s)/Mpc has been determined using an improved procedure involving the Hubble Space Telescope. The two measurements disagree at the 4.4σ level, beyond a plausible level of chance. The resolution to this disagreement is an ongoing area of active research.
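The quoted 4.4σ figure can be roughly reproduced by combining the two stated uncertainties in quadrature (the published analysis used slightly different inputs, so this simple sketch lands a bit lower):

```python
import math

# Early time (Planck 2018 CMB) versus late time (2019 HST) values of
# Hubble's constant, both in (km/s)/Mpc, with errors added in quadrature.
early, early_err = 67.66, 0.42
late, late_err = 74.03, 1.42

tension = (late - early) / math.hypot(early_err, late_err)
print(f"{tension:.1f} sigma")   # prints 4.3 sigma
```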

The chart below from the same link summarizes some of these recent measurements. 


New late time measurements in the last few years, from sources including the James Webb Space Telescope and DESI and made since the chart was compiled, have generally strengthened the evidence that the Hubble tension is real and not just a product of observational uncertainty. There have been one or two exceptions (such as a July 2023 estimate based upon astronomy observations of a kilonova that produced a late time value of Hubble's constant of 67.0 ± 3.6), but these can't cancel out the independent late time measurements to the contrary.

Even in the face of the Hubble tension, Hubble's Law is still a good first approximation description of the rate at which the universe is expanding. The difference in the measured values of Hubble's constant, in measurements of its value from times that are up to about 13 billion years apart, is less than 10%. 

This is still highly statistically significant (more than 5 sigma), because the relative uncertainty in the difference between the most precise measurements is less than 2%. But in plenty of astronomy contexts, a field not generally known for its high precision by physics standards, 10% precision is still excellent.

But from a fundamental laws of physics and cosmology perspective, if these results are confirmed, the consequences are profound. 

Any changes to Hubble's constant over time demand that the simple cosmological constant explanation of these observations be discarded, effectively rewriting a part of the equations of general relativity with deep cosmological implications, in favor of a new theory.

Possible Resolutions Of The Hubble Tension

Time will tell how the Hubble tension is resolved.

There are basically three possible resolutions to the Hubble tension (more than one of which could provide a partial explanation).

1. Indirect Early Universe Estimates Are Wrong. The CMB based determination of Hubble's constant in the early universe (about 380,000 years after the Big Bang according to the LambdaCDM model) is flawed somehow, in a way that underestimates the value of the Hubble constant in the early universe. 

McGaugh, for example, has suggested that this is a plausible full or partial explanation.

For example, maybe the Planck collaboration omitted one or more theoretically relevant components of the formula for converting CMB observations to a Hubble constant value that were reasonably believed to be negligible (indeed, it almost certainly did so). But it could be that one or more of the components omitted from the Planck collaboration's calculated value of Hubble's constant from the CMB data actually increase the calculated value by something on the order of 9% because some little known factor makes the component(s) omitted have a value much higher than one would naively expect.

Also, since the indirect determination of the value of Hubble's constant from CMB measurements is model dependent, any flaw in the model used could cause its determination of Hubble's constant to be inaccurate.

An indirect CMB based determination of Hubble's constant is implicitly making a LambdaCDM model dependent determination of how much the universe had expanded, since the Big Bang, at the time that the CMB arose. If the LambdaCDM model's indirect calculation of Hubble's constant predicts that the CMB arose later than it actually did, then its indirect determination of the value of Hubble's constant would also be too low, and a higher early time value of Hubble's constant would resolve the problem.

This is a plausible possibility because the James Webb Space Telescope has confirmed that the "impossible early galaxies problem" is real, implying that there is definitely some significant flaw (of not too far from the right magnitude and in the right direction) in the LambdaCDM model's description of the early universe, although exactly how much earlier than expected galaxies arose in the early universe (which is a mix of cutting edge astronomy, statistical analysis, and LambdaCDM modeling) hasn't been pinned down with much precision yet.

The impossible early galaxy problem is that galaxies form significantly earlier after the Big Bang than the LambdaCDM model predicts that they should. The galaxies seen by the JWST at about redshift z=6 (about 1.1 billion years after the Big Bang) are predicted in the LambdaCDM model to appear at about redshift z=4 (about 1.7 billion years after the Big Bang).
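These redshift-to-age conversions can be sketched with the flat ΛCDM Friedmann integral. A rough, model-dependent sketch, assuming H0 = 67.66 (km/s)/Mpc and Ωm = 0.31 (with these inputs the ages come out somewhat lower than the round figures above; other reasonable parameter choices shift them by roughly ten percent):

```python
import math

# Assumed flat LCDM parameters (roughly Planck 2018 values).
H0_KMS_MPC = 67.66
OMEGA_M, OMEGA_L = 0.31, 0.69
KM_PER_MPC, SEC_PER_GYR = 3.0857e19, 3.1557e16
H0 = H0_KMS_MPC / KM_PER_MPC * SEC_PER_GYR   # Hubble constant in 1/Gyr

def hubble(z):
    # Flat LCDM expansion rate H(z), in 1/Gyr
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def age_at_redshift(z, z_max=1e4, steps=100_000):
    # Age of the universe at redshift z: t(z) = integral from z to infinity
    # of dz' / ((1 + z') H(z')), truncated at z_max (the residual tail is
    # negligible) and evaluated with a simple midpoint rule.
    dz = (z_max - z) / steps
    return sum(dz / ((1 + z + (i + 0.5) * dz) * hubble(z + (i + 0.5) * dz))
               for i in range(steps))

print(f"age at z=6: {age_at_redshift(6):.2f} Gyr")  # ~0.93 Gyr
print(f"age at z=4: {age_at_redshift(4):.2f} Gyr")  # ~1.54 Gyr
```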

If the CMB arose more swiftly after the Big Bang than the LambdaCDM model predicts it did but the amount by which the universe had expanded at that point was about the same, in much the same way that galaxy formation actually occurred earlier than the LambdaCDM model predicted that it would, then that could fully or partially resolve the Hubble tension.

The relationship between Hubble's constant and the amount of expansion in the universe at any given point in time is non-linear (it's basically exponential). So, figuring out what a roughly 55% discrepancy in galaxy formation time at 1.1 billion years after the Big Bang translates into in Hubble constant terms, at about 380,000 years after the Big Bang, is more involved than I have time to work out today, even though it is really only an advanced pre-calculus problem once you have the equations set up correctly. But my mathematical intuition is solid enough to suspect that the effect isn't too far from the 9% target, to within the uncertainties in the relevant measurements.

2. Late Time Direct Measurements Share A Systematic Error. The multiple different, basically independent, methods of measuring Hubble's constant in the late universe are flawed in a way that causes them to overestimate Hubble's constant by roughly the same amount.

The problem is that several different methods have been used and reach similar, higher values for Hubble's constant in the late universe, so the issue can't be one that is particular to only a single method of determining Hubble's constant.

For example, one explanation that has been explored is that the little corner of the universe around the Milky Way, from the perspective of solar system observers, has some local dynamics, or local distortions that impact light at the relevant wavelengths reaching us in the solar system (e.g., due to localized gravitational lensing or local distributions of interstellar gas and dust), that have nothing to do with the expansion of the universe, but are indistinguishable, by the most precise existing methods used to measure Hubble's constant in the late time universe, from an increase in Hubble's constant of about 6.4 (km/s)/Mpc. 

I've bookmarked a number of papers exploring this hypothesis but haven't had the time to analyze them as a group or compile them in a blog post.

3. Hubble's Constant Isn't Constant. The third possibility is that Hubble's constant genuinely isn't constant, and that the attribution of the rate of the expansion of the universe to a cosmological constant in the equations of general relativity is mistaken. Thus, new physics is necessary to explain these observations.

This is, of course, the most exciting possible answer. But I'll save consideration of some of these alternative theories to a cosmological constant for another post (and I won't address them in the comments to this post either). 

Suffice it to say that there are many proposals for alternatives that could resolve the Hubble tension out there in the literature.

GR v. Asymptotically Safe Gravity

One alternative to general relativity, called asymptotically safe gravity, is one of the better established routes to solving the difficult problem of devising a theory of quantum gravity (which is necessary to integrate general relativity with the Standard Model of Particle Physics). 

This approach has a characteristic observable difference from general relativity: its black holes are smaller. But astronomy observations of actual black holes show that unmodified general relativity is a better fit than this alternative. So, this otherwise promising approach to quantum gravity may not be the right one.

According to the asymptotically safe gravity, black holes can have characteristics different from those described according to general relativity. Particularly, they are more compact, with a smaller event horizon, which in turn affects the other quantities dependent on it, like the photon ring and the size of the innermost stable circular orbit. 
We decided to test the latter by searching in the literature for observational measurements of the emission from accretion disk around stellar-mass black holes. All published values of the radius of the inner accretion disk were made homogeneous by taking into account the most recent and more reliable values of mass, spin, viewing angle, and distance from the Earth. We do not find any significant deviation from the expectations of general relativity. Some doubtful cases can be easily understood as due to specific states of the object during the observation or instrumental biases.
Luigi Foschini, Alberto Vecchiato, Alfio Bonanno, "Searching for quantum-gravity footprint around stellar-mass black holes" arXiv:2411.09528 (November 14, 2024).

Wednesday, November 13, 2024

X17 Not Found

The ATOMKI laboratory in Debrecen, Hungary claimed to have found evidence of a new fundamental particle with a mass of about 17 MeV in the details of several different radioactive decays.

The MEGII detector looked for the claimed X17 particle in a dedicated search experiment. 

It didn't find it and set strict limits on the properties it would have had to have had to evade detection in this experiment. The introduction to the paper also notes that explanations for the anomaly observed involving "Standard Model or nuclear physics effects unaccounted for previously have also been suggested."

Standard Cosmology In A Nutshell

A new paper has a handy chart assigning red shift time frames for key eras derived from standard cosmology assumptions to specific numbers of years after the Big Bang, and the ambient temperature at each point. See also a related Wikipedia page on the chronology of the universe. 

The chart starts at what might fairly be called the midpoint of the "Photon epoch" and after Big Bang nucleosynthesis (which takes place from roughly 10 seconds to 1000 seconds after the Big Bang).



The energy scales which have been probed by the Large Hadron Collider (LHC) reach the energy scales associated in the standard cosmology chronology with 10^-12 seconds (i.e. one trillionth of a second) after the Big Bang. In this account:

The sphere of space that will become the observable universe is approximately 300 light-seconds (~0.6 AU) in radius at this time.

So, the time frame in which beyond the Standard Model physics that manifests only at extremely high energies (assuming that the Standard Model is merely a "low energy" effective field theory) is physically relevant is very, very short.
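As a quick back-of-the-envelope check of the quoted radius (my own arithmetic, not from the paper), 300 light-seconds does indeed come out to roughly 0.6 astronomical units:

```python
# Check that a sphere 300 light-seconds in radius is roughly 0.6 AU,
# using standard values for the constants involved.
C = 299_792_458.0        # speed of light in m/s (exact, by definition)
AU = 1.495978707e11      # astronomical unit in meters (IAU 2012 definition)

radius_m = 300 * C       # 300 light-seconds expressed in meters
radius_au = radius_m / AU
print(round(radius_au, 2))  # ≈ 0.6
```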

The epochs that precede this point, the "Electroweak epoch", the "Inflationary epoch", the "Grand Unification epoch", and the "Planck epoch", are all highly speculative (especially the "Inflationary epoch", the "Grand Unification epoch", and the "Planck epoch"). In the highly speculative inflationary epoch account:

Cosmic inflation expands space by a factor of the order of 10^26 over a time of the order of 10^−36 to 10^−32 seconds.

The paradigm of Grand Unification has increasingly fallen into disfavor. 

The Milky Way's Formation History

The observed differences between the actual makeup of the Milky Way and what is predicted by simulations are more or less exactly what MOND would predict. 

The early phases of galaxy formation are in the Newtonian regime so they behave as a Newtonian model predicts that they would. 

But once the galaxy reaches a point where some parts of it are in the MOND regime, the pull of gravity binding satellites to it becomes stronger than Newtonian gravity predicts, so dwarf galaxies at the galactic fringe look like recently acquired outliers in a Newtonian model.
Galactic halos are known to grow hierarchically, inside out. This implies a correlation between the infall lookback time of satellites and their binding energy. In fact, cosmological simulations predict a linear relation between infall lookback time and log binding energy, with a small scatter. 
Gaia measurements of the bulk proper motions of globular clusters and dwarf satellites of the Milky Way are sufficiently accurate to establish the kinetic energies of these systems. Assuming the gravitational potential of the Milky Way, we can deduce the binding energies of the dwarf satellites, as well as of the galaxies previously accreted by the Milky Way, which can, for the first time, be compared to cosmological simulations. 
We find that the infall lookback time vs. binding energy relation found in a cosmological simulation matches that for the early accretion events, once the simulated MW total mass within 21 kpc is rescaled to 2*10^11 solar masses, in good agreement with previous estimates from globular cluster kinematics and from the rotation curve. However, the vast majority of the dwarf galaxies are clear outliers to this re-scaled relation, unless they are very recent infallers. In other words, the very low binding energies of most dwarf galaxies compared to Sgr and previous accreted galaxies suggests that most of them have been accreted much later than 8 or even 5 Gyr ago. We also find that some cosmological simulations show too dynamically hot sub-halo systems when compared to identified MW substructures, leading to overestimate the impact of satellites on the Galaxy rotation curve.
F. Hammer, et al., "The Milky Way accretion history compared to cosmological simulations -- from bulge to dwarf galaxy infall" arXiv:2411.07281 (November 11, 2024).

Another new paper likewise finds a strong correlation between galaxy mass as estimated by lensing and galaxy shape. This is something that Deur's work predicts, but which is not found in standard Newtonian gravity with dark matter type simulations.

Tuesday, November 12, 2024

Muon g-2 HLbL Developments

Background (mostly, but not entirely, from an October 27, 2022 post and a July 18, 2024 post)

The combined result of the experimental measurements of muon g-2 (all of the numbers that follow are in the conventional (g−2)/2 form, in units of 10^-11) is:

116,592,059 ± 22 

This compares to the leading Standard Model predictions of: 

116,592,019 ± 38 (which is a relative error of 370 parts per billion). This is from A. Boccaletti et al., "High precision calculation of the hadronic vacuum polarisation contribution to the muon anomaly." arXiv:2407.10913 (July 15, 2024).

The gap is only 40 ± 43.9 (combining the two uncertainties in quadrature), with the Standard Model prediction still a bit lower than the experimental value.

The QED + EW predicted value is:

116,584,872.53 ± 1.1

About 90% of the combined uncertainty in this QED + EW value is from the EW component (there may be an error in the standard QED prediction, but it is so small that it is immaterial for these purposes).

The difference, which is the experimentally implied hadronic component value (HVP plus HLbL), is:

7186.47 ± 22.02

This has a plus or minus two sigma range of:

7,142.32 to 7,230.51
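The subtraction above can be checked in a few lines of Python. This is my own sketch, assuming the experimental and QED + EW uncertainties are independent and combine in quadrature; it reproduces the quoted numbers up to rounding:

```python
import math

# Experimentally implied hadronic (HVP plus HLbL) contribution to muon g-2,
# in units of 10^-11: the experimental value minus the QED + EW prediction.
exp_val, exp_err = 116_592_059.0, 22.0        # world average measurement
qed_ew, qed_ew_err = 116_584_872.53, 1.1      # QED + EW prediction

hadronic = exp_val - qed_ew
# Assumes independent uncertainties, combined in quadrature.
hadronic_err = math.sqrt(exp_err**2 + qed_ew_err**2)

print(f"{hadronic:.2f} +/- {hadronic_err:.2f}")   # 7186.47 +/- 22.03
print(f"two sigma range: {hadronic - 2 * hadronic_err:.1f}"
      f" to {hadronic + 2 * hadronic_err:.1f}")
```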

The hadronic QCD component is the sum of two parts: the hadronic vacuum polarization (HVP) and the hadronic light by light (HLbL) components.

In the Theory Initiative analysis, the QCD amount is 6937(44), which is broken out as HVP = 6845(40) (a 0.6% relative error) and HLbL = 98(18) (a 20% relative error).

The latest LO-HVP calculation component is 7141 ± 33 (a relative error of just 0.46%).

As of November 1, 2024, it was clear that the Theory Initiative calculation of the Standard Model value of the HVP contribution to muon g-2 (which differs by 5.1 sigma from the experimental value) was the flawed one:
Fermilab/HPQCD/MILC lattice QCD results from 2019 strongly favour the CMD-3 cross-section data for e+e−→π+π− over a combination of earlier experimental results for this channel. Further, the resulting total LOHVP contribution obtained is consistent with the result obtained by BMW/DMZ, and supports the scenario in which there is no significant discrepancy between the experimental value for aμ and that expected in the Standard Model.

Similarly, the introduction to a new paper that is the motivation for this post notes that:

estimates based on τ data-driven approaches or lattice QCD calculations significantly reduce the tension between theoretical and experimental values to 2.0σ and 1.5σ, respectively (less than one σ in [44]). The latest CMD-3 measurement of σ(e+e− → π+π−) also points in this direction.

Reference [44] cited in the block quote above is the current state of the art calculation cited above.

The HLbL Contribution

The Theory Initiative HLbL calculation

None of the refinements of the muon g-2 HVP contribution discussed above tweak the Theory Initiative value of the Hadronic Light by Light (HLbL) contribution of 92 ± 18, even though it has the highest relative error (nearly 20%) of any component of the muon g-2 calculation, because HLbL is only about 1.3% of the total hadronic contribution and its absolute uncertainty is still only half that of the HVP contribution.

But progress has been made on the HLbL component as well, which is now getting more attention as the experimental result's increased precision and the progress on the HVP contribution make it relevant.

The Chao (April 2021) HLbL Calculation 

On the day that the first new muon g-2 experimental results from Fermilab were released, a new calculation of the hadronic light by light contribution to the muon g-2 calculation was also released on arXiv. This wasn't part of the BMW calculation and increased the HLbL contribution from 92 ± 18 to 106.8 ± 14.7. That paper stated:
We compute the hadronic light-by-light scattering contribution to the muon g−2 from the up, down, and strange-quark sector directly using lattice QCD. Our calculation features evaluations of all possible Wick-contractions of the relevant hadronic four-point function and incorporates several different pion masses, volumes, and lattice-spacings. We obtain a value of aHlblμ = 106.8(14.7) × 10^−11 (adding statistical and systematic errors in quadrature), which is consistent with current phenomenological estimates and a previous lattice determination. It now appears conclusive that the hadronic light-by-light contribution cannot explain the current tension between theory and experiment for the muon g−2.
En-Hung Chao, et al., "Hadronic light-by-light contribution to (g−2)μ from lattice QCD: a complete calculation" arXiv:2104.02632 (April 6, 2021) (the failure of this pre-print to be published, three and a half years later, however, is somewhat concerning, as there is no flaw in the paper obvious to the eyes of an educated layman).

This would increase the Standard Model prediction's value and lower the uncertainty to:

116,592,033.8 ± 36

This would reduce the gap between this combined theoretical prediction and the world average experimental value to 25.2 ± 42.2 (just 0.6 sigma).

The Zimmerman (October 2024) HLbL Calculation

The most recent total HLbL calculation, from October 2024, reached a value of 125.5 ± 11.6, which would reduce the HLbL relative uncertainty to 9% (cutting it by more than half from the Theory Initiative value). This would make the state of the art combined prediction of muon g-2:

116,592,052.5 ± 35

The gap between this combined state of the art calculations of the Standard Model value of muon g-2, and world average experimental value for muon g-2, would be 6.5 ± 41.3 (less than 0.2 sigma).
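The arithmetic behind both updated predictions (Chao and Zimmerman) can be sketched in Python. This is my own reconstruction, assuming the Theory Initiative HLbL value is simply swapped out of the baseline prediction, with the old uncertainty removed and the new one added in quadrature; it reproduces the totals quoted above:

```python
import math

# All values in units of 10^-11.
exp_val, exp_err = 116_592_059.0, 22.0     # world average measurement
base, base_err = 116_592_019.0, 38.0       # Boccaletti et al. prediction
ti_hlbl, ti_hlbl_err = 92.0, 18.0          # Theory Initiative HLbL value

def swap_hlbl(hlbl, hlbl_err):
    """Replace the Theory Initiative HLbL value in the baseline prediction."""
    total = base - ti_hlbl + hlbl
    # Remove the old HLbL uncertainty and add the new one, in quadrature.
    total_err = math.sqrt(base_err**2 - ti_hlbl_err**2 + hlbl_err**2)
    gap = exp_val - total
    sigma = gap / math.sqrt(total_err**2 + exp_err**2)
    return total, total_err, sigma

print(swap_hlbl(106.8, 14.7))  # Chao: ~116,592,033.8 +/- 36.6, ~0.6 sigma
print(swap_hlbl(125.5, 11.6))  # Zimmerman: ~116,592,052.5 +/- 35.4, <0.2 sigma
```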

Other Recent HLbL work

As the introduction to the new paper explains in a nice overview of the HLbL calculation:
The HVP data-driven computation is directly related to the experimental input from σ(e+e− → hadrons) data. HLbL in contrast, requires a decomposition in all possible intermediate states. Recently, a rigorous framework, based on the fundamental principles of unitarity, analyticity, crossing symmetry, and gauge invariance has been developed, providing a clear and precise methodology for defining and evaluating the various low energy contributions to HLbL scattering. The most significant among these are the pseudoscalar-pole (π(0), η and η′) contributions. Nevertheless, subleading pieces, such as the π± and K± box diagrams, along with quark loops, have also been reported, with the proton-box representing an intriguing follow-up calculation. Specifically, a preliminary result obtained from the Heavy Mass Expansion (HME) method —which does not consider the form factors contributions— for a mass of M ≡Mp=938 MeV, yields an approximate mean value of ap−box µ = 9.7 ×10−11. This result is comparable in magnitude to several of the previously discussed contributions, thereby motivating a more realistic and precise analysis that incorporates the main effects of the relevant form factors. In this work, we focus on the proton-box HLbL contribution. We apply the master formula and the perturbative quark loop scalar functions, . . . (which we verified independently), together with a complete analysis of different proton form factors descriptions, which are essential inputs for the numerical integration required in the calculations.

The new paper concludes that the proton box contribution to HLbL, which was preliminarily estimated at 9.7, is actually 0.182 ± 0.007, about 50 times smaller than the preliminary result and immaterial in the total. This leaves the neutral and charged pion, eta, eta prime, charged kaon, and quark loop contributions as the primary components of the HLbL contribution to muon g-2.

Another new paper calculates the neutral pion contribution, the single largest component of HLbL, accounting for more than half (almost two-thirds) of the total HLbL contribution:

We develop a method to compute the pion transition form factors directly at arbitrary photon momenta and use it to determine the π0-pole contribution to the hadronic light-by-light scattering in the anomalous magnetic moment of the muon. The calculation is performed using eight gauge ensembles generated with 2+1 flavor domain wall fermions, incorporating multiple pion masses, lattice spacings, and volumes. By introducing a pion structure function and performing a Gegenbauer expansion, we demonstrate that about 98% of the π0-pole contribution can be extracted in a model-independent manner, thereby ensuring that systematic effects are well controlled. After applying finite-volume corrections, as well as performing chiral and continuum extrapolations, we obtain the final result for the π0-pole contribution to the hadronic light-by-light scattering in the muon's anomalous magnetic moment, aμ(π0-pole) = 59.6(2.2) × 10^−11, and the π0 decay width, Γ(π0→γγ) = 7.20(35) eV.

Tian Lin, et al., "Lattice QCD calculation of the π(0)-pole contribution to the hadronic light-by-light scattering in the anomalous magnetic moment of the muon" arXiv:2411.06349 (November 10, 2024).

The relative uncertainty in the neutral pion contribution is 3.7%, which is a much larger relative uncertainty than in the EM, weak force, or HVP components, but much smaller than the relative uncertainty in the HLbL calculation as a whole.

This neutral pion contribution calculation is an incremental improvement in the precision of this estimate, compared to most other recent attempts, and produces a value in the same ballpark as previous attempts (i.e. it is statistically consistent with them).

This also implies that the charged pion, eta, eta prime, charged kaon, and quark loop contributions to HLbL, while small in magnitude (about 29-45 * 10^-11 from all of them combined), have combined uncertainties on the order of 13-17 * 10^-11. This is on the order of 35-45% relative uncertainty, which is far more than any other part of the muon g-2 calculation. 

Future Prospects

As the uncertainty in the HVP calculation falls (that calculation currently approaches the maximum relative precision possible in QCD), the HLbL uncertainty becomes more material to the overall accuracy of the muon g-2 calculation, and greater precision there will be important as the precision of the experimental measurement continues to improve. QCD calculations definitely can get more precise than the HLbL calculations are today, and especially more precise than the HLbL calculations other than the neutral pion contribution. 

But, it will be quite challenging, and may require a major breakthrough in QCD calculations generally, to get the uncertainty in the muon g-2 calculation to below 33-34 * 10^-11, which would be only about a 3-6% improvement from the best available combination of calculations so far. Therefore, the experimental result will probably be more precise than the QCD calculation for the foreseeable future.

Still, the bottom line, which has been clear since the BMW calculation was published at the time of the first Fermilab muon g-2 measurement, is that there is no muon g-2 anomaly since the predicted value and the measured value are consistent at the 0.2 sigma level. 

This global test of beyond the Standard Model physics at relatively low energies reveals that Standard Model physics is complete and accurate at the sub-parts-per-million level, at least at relatively low energies on the order of low GeVs or less.