Thursday, April 30, 2026

The Standard Model Still Works (Again)

The LHCb experiment at the Large Hadron Collider (LHC) has made a statistically significant observation (although not yet a definitive discovery) of a rare decay of a particular kind of positively charged bottom meson: a decay to a positively charged pion and an electron-positron pair, which is an example of what is called a semi-leptonic decay because the products are a mix of a hadron (the pion) and leptons (the electron and positron). The decay happens with a frequency of about one per 40 million decays of this kind of meson (a kind of meson which, itself, doesn't make up a large share of the mesons produced at LHCb). 

This just happens to be statistically consistent with the Standard Model prediction for this decay of B(B+→ π+ℓ+ℓ−) = (2.04 ± 0.21) × 10^−8, which is about one per 50 million decays. The same decay, but with muons, was first seen in 2012 at a branching fraction of one per 55 million decays that was also statistically consistent with the Standard Model expectation (which is the same for electrons and for muons due to lepton universality).
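As a quick sanity check on the "one per N decays" figures above, the conversion is just the reciprocal of the branching fraction (the small differences from the round numbers in the text are rounding):

```python
# Illustrative arithmetic: converting a branching fraction into a
# "one decay per N decays" figure, using the values quoted above.

def one_per_n(branching_fraction):
    """Return N such that the decay happens about once per N decays."""
    return 1.0 / branching_fraction

sm_prediction = 2.04e-8   # SM prediction for B(B+ -> pi+ e+ e-)
measured      = 2.4e-8    # LHCb central value

print(f"SM prediction: one per {one_per_n(sm_prediction) / 1e6:.0f} million decays")
print(f"Measured:      one per {one_per_n(measured) / 1e6:.0f} million decays")
```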

The first evidence for the decay B+→π+e+e− is reported using proton-proton collision data recorded by the LHCb experiment at centre-of-mass energies of 7, 8 and 13 TeV, corresponding to an integrated luminosity of 9 fb^−1. 
A signal excess with a significance of 3.2σ is observed and the branching fraction is measured to be B(B+→ π+e+e−) = (2.4 +0.9 −0.8 +0.4 −0.2) × 10^−8, where the first set of uncertainties is statistical and the second is systematic. The result is consistent with the Standard Model expectation.
LHCb collaboration, "First evidence of the decay B+→π+e+e−" arXiv:2604.26784 (April 29, 2026).

Combining the statistical and systematic uncertainties in quadrature, the result is about (2.4 +1.0 −0.8) × 10^-8. The asymmetric error bars mean that a larger branching fraction (i.e. more events) is actually slightly favored over a smaller one (i.e. fewer events), relative to the best fit value.

The deviation from the Standard Model expectation in the muon measurement was about 0.7 sigma (in the opposite direction, relative to the best fit value, of the deviation in the electron measurement), while the deviation from the Standard Model expectation in the electron measurement was about 0.4 sigma. This suggests that the systematic uncertainty estimates in the Standard Model prediction and in the experiments were probably somewhat conservatively high.
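The 0.4 sigma figure for the electron channel can be roughly reproduced with back-of-envelope quadrature arithmetic (a sketch only; the published significance comes from a full likelihood fit, not this shortcut):

```python
import math

# Rough quadrature combination of the asymmetric uncertainties quoted
# above, and the resulting pull relative to the SM prediction.

def quad(*errs):
    """Combine independent uncertainties in quadrature."""
    return math.sqrt(sum(e * e for e in errs))

measured, sm = 2.4e-8, 2.04e-8
up   = quad(0.9e-8, 0.4e-8)   # upper error: stat (+0.9) and syst (+0.4)
down = quad(0.8e-8, 0.2e-8)   # lower error: stat (-0.8) and syst (-0.2)

# measured > SM, so the relevant experimental error bar is the lower one;
# fold in the SM prediction's own 0.21e-8 uncertainty as well.
pull = (measured - sm) / quad(down, 0.21e-8)
print(f"combined: +{up:.2g} -{down:.2g}; pull ~ {pull:.1f} sigma")
```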

This particular hadron decay isn't extremely significant (hadrons are either mesons like the B+ or baryons like the proton). But comparing the decay rate to a positively charged pion plus a muon-antimuon pair with the decay rate to a positively charged pion plus an electron-positron pair is a good test of "lepton universality" (i.e. the Standard Model rule that electrons, muons, and tau leptons have properties that are identical except for their masses). For several years there were experimental anomalies that made it appear that lepton universality was violated, but those anomalies were recently resolved in favor of the Standard Model prediction that lepton universality is not violated.

There are about a hundred plain vanilla mesons and baryons in the Standard Model like the B+ meson studied here, and some of the heavier ones have perhaps hundreds of decay modes with a predicted branching fraction of less than one decay per billion decays. So, the universe of Standard Model predicted meson decays to look for is somewhere on the order of 10,000.

The B+ meson has two "valence quarks": an up quark and an anti-b quark. It has a rest mass of 5279.26 ± 0.17 MeV/c^2 (about 5.6 times the mass of a proton and a little less massive than a Lithium-6 atom). It has total angular momentum (a.k.a. "spin") of 0 and odd (i.e. negative) parity, which means that it is a "pseudo-scalar" meson. It is ephemeral, with a mean lifetime of (1.638 ± 0.004) × 10^−12 seconds (i.e. a little more than a trillionth of a second). It has more than two dozen measured decay modes that happen in more than one in a million decays, and the vast majority of the time B+ mesons decay to particles that include some kind of charm quark hadron. It has hundreds of decay modes more probable than this one.
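To give that lifetime some physical intuition, multiplying it by the speed of light gives the average flight distance of an unboosted B+ meson (a rough sketch; real B+ mesons at the LHC are relativistic and travel farther in the lab frame):

```python
# Back-of-envelope decay length for the B+ meson: c times the mean
# lifetime quoted above gives the average flight distance at low speed.

C = 299_792_458.0   # speed of light, m/s
TAU = 1.638e-12     # B+ mean lifetime, s

c_tau = C * TAU     # metres
print(f"c*tau = {c_tau * 1e6:.0f} micrometres")
```

This sub-millimetre flight distance is what lets detectors like LHCb distinguish B meson decay vertices from the primary collision point.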

The Standard Model was devised in the early 1970s; the b quark was discovered at Fermilab in 1977. The full set of fundamental particles (except the Higgs boson, which was discovered at the LHC at CERN in 2012, and the later discovery that neutrinos are massive) was in place by 1995, more than three decades ago. 

The Standard Model prediction for the frequency of this particular B+ meson decay was cited in connection with the first observation of the parallel muon decay in 2012 and in 2015, and derives from a 2008 paper (i.e. the prediction was made nearly two decades before this decay was observed just as predicted).

The Higgs boson and the neutrino masses don't (meaningfully) enter into the calculation of the branching fractions of the B+ meson, so the only thing that has changed in the Standard Model since 1995 that is relevant to this calculation is that the measurements of some of the fundamental physical constants involved in the calculation, especially the relevant CKM matrix elements (as noted at page 13 of the 2008 paper), have gotten more precise over that time. (The accuracy with which we know another non-fundamental physical constant, called the "form factor" of the B+ meson, which is too hard to calculate from first principles at this point, has also improved and is material to this calculation.)

The physical constants whose improved precision matters most in this context are the CKM matrix elements for the b quark to up quark transition probability in W boson interactions and the top quark to down quark transition probability in W boson interactions, which are low: about 0.14% and 0.007%, respectively. 

The respective 3% and 2% uncertainties in the world average measurements of these physical constants are probably among the leading sources of the roughly 10% uncertainty in the Standard Model prediction of this B+ meson decay branching fraction. It is hard to say exactly how much of the uncertainty in the predicted value comes from this source, however, because while the respective papers linked above provide an error budget chart for the uncertainties in their experimental measurements, none of the papers provide an exact error budget chart for their Standard Model predictions of this decay frequency, probably because this was considered too elementary to publish. 
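A naive sketch of why CKM uncertainties of this size plausibly dominate (my own illustration, not from the papers): if the amplitude is linear in a CKM element V, the branching fraction scales as |V|^2, so a relative uncertainty dV/V contributes roughly 2·dV/V to the prediction. Combining the two quoted uncertainties in quadrature, and ignoring correlations:

```python
import math

# Naive error propagation: BF ~ |V|^2 implies dBF/BF ~ 2 * dV/V.
# This ignores correlations and the form factor uncertainty entirely.

def bf_contribution(rel_err_on_V):
    """Relative BF uncertainty from one CKM element's relative error."""
    return 2.0 * rel_err_on_V

contrib = [bf_contribution(0.03), bf_contribution(0.02)]  # the 3% and 2%
total = math.sqrt(sum(c * c for c in contrib))
print(f"naive CKM contribution to the BF uncertainty: ~{total:.0%}")
```

That comes out to roughly 7%, comfortably most of the roughly 10% total uncertainty in the prediction, which is consistent with the guess above.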

Computer processing capacity has also improved greatly since then, which makes these calculations much less cumbersome to actually make.

In isolation, this experimental confirmation of the Standard Model prediction could be just a lucky fluke, although it would be a quite remarkable one even on its own. But together with thousands of other measured hadron decay branching fractions, the Standard Model is really unstoppable. 

Anomalous experimental results that deviate from the Standard Model prediction are few and far between, modest in statistical significance, and usually go away quickly upon closer inspection with more experiments and analysis. Experiments testing the Standard Model in contexts other than hadron decay branching fractions, which involve completely different kinds of calculations, are just as consistently correct. It is an extremely robustly tested theory.

Even if there are gaps in the Standard Model that call for beyond the Standard Model physics, it is very close to the truth. The open parameter space for deviations from it is very small.

Tuesday, April 28, 2026

Theoretical X17 Considerations And Related Conjectures

Could the X17 resonance, if it is even real, be an electromagnetically bound light quark-light antiquark meson?

This explanation is much more attractive than a new fundamental particle, as it wouldn't involve beyond the Standard Model physics, and would instead involve a low energy electromagnetically bound up-antiup or down-antidown pair of quarks.

It has to be electromagnetically bound, rather than strong force bound, because a neutral light quark-antiquark pair bound by the strong force, i.e. a neutral pion, has a mass of about 135 MeV, mostly due to the binding energy of the gluons confining them in a hadron. 

This said, this theory has a big problem. 

Why aren't the light quarks confined in a QCD bound hadronic state? 

The only times quarks have so far been observed outside of QCD bound hadronic states are shortly after top quarks form (because they almost always decay before they can hadronize, although we just learned in 2025 that in rare cases a top anti-top quark pair can form toponium in a QCD bound state that persists very briefly) and in quark-gluon plasma at temperatures corresponding to about 1-2 GeV (i.e. 11-23 trillion Kelvin).
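The GeV-to-Kelvin conversion above is just T = E / k_B, with the Boltzmann constant in eV per Kelvin:

```python
# Temperature conversion for the quark-gluon plasma figure above:
# an energy scale E corresponds to a temperature T = E / k_B.

K_B_EV_PER_K = 8.617e-5   # Boltzmann constant, eV/K

def energy_to_kelvin(energy_gev):
    """Convert an energy scale in GeV to a temperature in Kelvin."""
    return energy_gev * 1e9 / K_B_EV_PER_K

for e in (1.0, 2.0):
    print(f"{e} GeV ~ {energy_to_kelvin(e):.1e} K")
```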
The invariant mass spectrum of e+e− pairs produced in high-energy Pb-emulsion collisions at 160 A GeV at CERN SPS exhibits a complex structure of many resonances resting on top of a broad enhancement at invariant masses below 50 MeV, with the prominent resonance at 19 ±1 MeV providing independent support for the hypothetical X17 particle. 
We show that this complex structure may be coherently described as signatures for the neutral color-singlet qq¯ quark matter in both its deconfined and confined phases. That is, the broad enhancement may arise from thermal annihilation of QED(U(1))-deconfined quarks and antiquarks into e+e− pairs at the phase transition temperature Tc(QED), theoretically estimated to be 4.75 ± 1.2 MeV from the transitional equilibrium condition. The observed 3±1 and 7±1 MeV resonances may correspond to the QED(U(1))-deconfined dd¯ and uu¯ Coulomb bound states near their quark rest masses, respectively, whereas the observed 19 ± 1 MeV resonance may correspond to the QED(U(1))-confined isoscalar QED meson. 
The approximate agreement between the theoretical and the experimental spectrum suggests that both QED(U(1))-confined and QED(U(1))-deconfined neutral color-singlet qq¯ quark matter may have been produced in these high-energy Pb-emulsion collisions. We propose future experiments to confirm or refute these findings.
Cheuk-Yin Wong, "Possible Evidence for Neutral Color-Singlet qq¯ Quark Matter from High-Energy Pb-Emulsion Collisions" arXiv:2604.23473 (April 25, 2026) (21 pages).

Some conjectures

What would work without breaking the rules of the Standard Model, however, is if the 3 and 7 MeV resonances were light quark-antiquark pairs that were produced and immediately annihilated before they could hadronize, and if the 19 MeV resonance were an electromagnetically bound positron-electron state (i.e. positronium). But positronium has a ground state mass of 1.022 MeV (twice the 0.511 MeV mass of an electron or positron), with excited states varying in mass by single digit eV amounts per state, which wouldn't generate a single resonance at 17-19 MeV. 
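The positronium arithmetic behind that objection can be sketched from the hydrogen-like Coulomb bound state formula: positronium has reduced mass m_e/2, so its binding energies are half the hydrogen Rydberg, E_n = −6.8 eV / n², and the level spacings are a few eV, nowhere near 17-19 MeV:

```python
# Positronium level arithmetic: hydrogen-like system with reduced mass
# m_e/2, so binding energies are half the hydrogen Rydberg (13.6 eV).

M_E_EV = 0.511e6     # electron mass, eV
RYDBERG_EV = 13.6    # hydrogen ground-state binding energy, eV

def positronium_level(n):
    """Binding energy (eV, negative) of the n-th positronium level."""
    return -(RYDBERG_EV / 2.0) / (n * n)

ground_state_mass = 2 * M_E_EV + positronium_level(1)    # ~1.022 MeV
level_gap = positronium_level(2) - positronium_level(1)  # ~5.1 eV
print(f"mass ~ {ground_state_mass / 1e6:.3f} MeV, n=1->2 gap ~ {level_gap:.1f} eV")
```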

Another possibility is that the observed 3 ± 1 MeV resonances may correspond to the QED(U(1))-deconfined uu¯ Coulomb bound state near its quark rest mass, that the 7 ± 1 MeV resonances correspond to the QED(U(1))-deconfined dd¯ Coulomb bound state and also to the uu¯uu¯ Coulomb bound state near their respective quark rest masses, and that the observed 19 ± 1 MeV resonance may correspond to the QED(U(1))-deconfined dd¯dd¯ Coulomb bound state.

The light quark masses, according to the Particle Data Group (admittedly evaluated at the 1-2 GeV energy scale and not the low single digit to tens of MeVs energy scale at issue here), are about 2.2 MeV for the up quark and about 4.7 MeV for the down quark.

The rest mass of four d-quarks is about 18.8 MeV, which is right where the resonance is observed.

In this hypothesis, these resonances fail to hadronize because the e+e− pairs that produced one or two light quark-antiquark pairs didn't have enough mass-energy to form a 135 MeV neutral pion, so they instead formed one or two deconfined quark-antiquark pairs that quickly annihilate again because the system had enough energy to create the quarks, but not enough energy to create the bound system of quarks and gluons necessary to form a pion. This has the virtue, again, of not requiring any BSM fundamental particles or new forces.

A four quark solution requires angular momentum that wouldn't normally be present in a simple e+e− pair. But if there were two e+e− pairs in close proximity, both with only modest kinetic energy, a coincidence of two low energy e+e− pairs would be expected with some calculable frequency. This is plausible in the context of the complex overall environment of the high-energy Pb-emulsion collisions generating the data here, or in the interactions of the full-fledged multi-nucleon atoms present in other contexts where there are claimed sightings of the X17 resonance.

This explanation would still be groundbreaking, as it would represent a third circumstance, previously unknown and not predicted, in which quarks are (briefly) deconfined. But it would be far less radical than most of the alternative explanations.

Monday, April 27, 2026

Does a(0) Evolve Over Time?

The radial acceleration relation (RAR), which is implied by MOND but isn't necessarily caused by MOND, holds true for all low-z observations (i.e. nearby galaxies). But this study concludes that while the RAR still holds in intermediate age galaxies (i.e. those that are farther away), Milgrom's constant a(0) for these galaxies has a numerical value about a factor of two greater than it is for low-z galaxies.
The radial acceleration relation (RAR) is a tight empirical correlation between the observed radial acceleration (a_tot) and the baryonic radial acceleration (a_bar) measured across galaxy radii: these two accelerations start to deviate significantly from each other below a characteristic acceleration scale, a0. So far, observational studies of the RAR have predominantly focused on galaxies in the local Universe, leaving its evolution with cosmic time largely unexplored. 
Using high signal-to-noise data from the MUSE Hubble Ultra Deep Field survey, we investigate the RAR with a sample of 79 star-forming galaxies (complete above M* >10^8.8 Msun) at intermediate redshifts (0.33 < z <1.44). We estimate the observed intrinsic acceleration and the baryonic acceleration from a disk-halo decomposition that incorporates stellar, gas, and dark matter components, with corrections for pressure support, using 3D forward modelling. 
We find a RAR in our intermediate-z sample offset from the local relation, with a higher characteristic acceleration scale, a0(z~1) = 2.38+/-0.1* 10^-10 m/s^2, and a larger intrinsic scatter (~0.17 dex). Dividing the sample into redshift bins and refitting the RAR in each bin, we find a characteristic acceleration scale that systematically increases with z. Parametrizing the z-dependence as a0(z)= a0(0) + a1 * z, we obtain a1 = 1.59 +/- 0.1 * 10^-10 m/s^2, providing evidence for a z-evolution. 
We find similar results using various dark matter halo profiles as well as the Modified Newtonian Dynamics framework in our 3D forward modelling. Our results show that the RAR persists at intermediate redshift, with statistically significant redshift evolution of the characteristic acceleration, pointing to a possible evolution of the baryon-missing mass connection over cosmic time.
B. I. Ciocan, N. F. Bouché, J. Fensch, D. Krajnović, J. Freundlich, H. Desmond, B. Famaey, R. Techi, "MUSE-DARK III: The evolution of the radial acceleration relation at intermediate redshifts" arXiv:2604.22613 (April 24, 2026) (Accepted in A&A).

For reference, z=0.33 is about 3.7 to 3.8 billion years ago, z=1 is about 7.7 to 8 billion years ago, and z=1.44 is about 9 to 10 billion years ago. The universe is about 13.8 billion years old. A variation of 0.17 dex is about ± 48%. The intrinsic scatter in the recent time SPARC galaxy sample is about ± 8% (0.034 dex), which is about as small as possible given the precision of the astronomy instrumentation involved. Milgrom's constant is about a(0) ≈ 1.2 × 10^−10 m/s^2.
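The redshift-to-lookback-time and dex-to-percent figures above can be roughly reproduced as follows, assuming a flat ΛCDM cosmology with H0 = 70 km/s/Mpc and Ωm = 0.3 (my assumption; the paper's adopted cosmology may differ slightly):

```python
import math

# Lookback time via direct numerical integration of the Friedmann
# equation, and the dex -> percent scatter conversion used above.

H0_PER_GYR = 70.0 / 978.0   # H0 in 1/Gyr (1 km/s/Mpc ~ 1/978 per Gyr)
OMEGA_M, OMEGA_L = 0.3, 0.7

def lookback_gyr(z, steps=10_000):
    """Lookback time in Gyr, flat LCDM, midpoint-rule integration."""
    total, dz = 0.0, z / steps
    for i in range(steps):
        zi = (i + 0.5) * dz
        e = math.sqrt(OMEGA_M * (1 + zi) ** 3 + OMEGA_L)
        total += dz / ((1 + zi) * e)
    return total / H0_PER_GYR

def dex_to_percent(dex):
    """Convert a scatter in dex to an approximate +/- percentage."""
    return (10 ** dex - 1) * 100

for z in (0.33, 1.0, 1.44):
    print(f"z={z}: {lookback_gyr(z):.1f} Gyr ago")
print(f"0.17 dex ~ +/-{dex_to_percent(0.17):.0f}%")
print(f"0.034 dex ~ +/-{dex_to_percent(0.034):.0f}%")
```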

Ciocan (2026), above, and the cluster data, both point to something very like MOND, except that a(0) evolves under certain circumstances to higher values. 

Missing baryonic matter (i.e. matter made up of ordinary atoms) is, at least, a partial explanation, and one that could evolve over time. Indeed, it should evolve over time, because over time more baryonic matter ends up in stars, which are easy for astronomers to see, rather than in interstellar gas and dust, which are hard for astronomers to see (and hence often called "missing" when it isn't seen and couldn't be seen even if it were there with current instrumentation). Still, missing baryonic matter may not be the entire explanation, because the magnitude of the change in a(0) may not be big enough, and changes in the naively measured value of Milgrom's constant shouldn't be very uniform since some galaxies are forming stars more actively than others (although this may be reflected in the greater dispersion of Milgrom's constant measurements in older samples).

Deur (whose bibliography is linked in the sidebar) argues that the missing piece for cluster scale phenomena is the geometry of the mass distributions, by an appealing analogy to similar phenomena in QCD (which is attractive theoretically because in many respects gravity behaves like QCD squared). (QCD stands for quantum chromodynamics, which is the Standard Model theory of the strong force that holds hadrons together and, indirectly through hadron mediated forces, accounts for the nuclear binding force that binds atomic nuclei together.)

Stacy McGaugh at Triton Station has another post about MOND v. dark matter particles (DM) and why the evidence favors something like MOND but the sociology of astrophysics favors dark matter particles.

The search for a final explanation of dark matter phenomena continues, and while toy-model MOND isn't the final solution, it does a remarkably good job over a very wide range of masses. McGaugh is surely right that the final solution looks a lot more like MOND than it does like most DM models, because for DM to describe the universe we see, we need a theory that explains how DM particles consistently arrange themselves in a way entirely predicted by the baryonic mass distribution, and, despite protests otherwise, no such theory exists.

Even if a(0) changes over time, it provides a vastly smaller degree of freedom in how galaxy dynamics can vary than DM does, especially if the variation is systematic between galaxies and galaxy clusters, or between galaxies over billions of years of time, and not just random.

Wednesday, April 22, 2026

South American Genetic History

I remain skeptical that the Australasian ancestry is as ancient as claimed. It is much less than 2%, maybe a hundred times less, and the regional variation in its frequency is far too great for it to be ancient. I suspect an origin in Polynesian seafarers that may be obscured by natural selection against some signature Polynesian genes. 

I have not yet seen any really solid evidence that it has been present for 10,000+ years, or any explanation for the extremely varied frequency of these genes in the populations where they are found, indicating a very recent dispersal to these populations that hasn't had time for these frequencies to harmonize. 

South American ancient DNA samples are few and far between at that time depth, and this paper has some of them, but a lot of the references supporting this analysis are in supplementary materials and extended data, or in other papers, and I haven't yet been able to look closely at the nature of the ancient DNA sample. So far, I've only had time to cut, paste, and highlight, rather than to do a proper critical analysis of this claim. 

I'll add more analysis in an update in this post, if time permits, which may or may not happen (I'm busy preparing for an upcoming jury trial).
[A] study published today in Nature reveals these migrations were anything but simple. Examining ancient and modern genomes collected from across South America and beyond, the team found that genetically diverse groups populated the continent in at least three separate pulses. And some people or communities carried with them possibly advantageous genes acquired from long-ago Australasian ancestors. . . .

His team published a complementary study today in Current Biology, finding evidence of unexpected genetic diversity and otherwise invisible migrations in 52 ancient genomes from Argentina and Uruguay.

In the Nature study, Tábita Hünemeier, a geneticist at the University of São Paulo and the Institute of Evolutionary Biology, collaborated with researchers and Indigenous communities across Latin America to sequence 128 whole genomes from living people from north Mexico to southern Argentina. The team then analyzed them alongside existing databases and previously published ancient genomes.

Previous work had identified the first two waves of settlement in South America, the earliest of which included people related to the Anzick child, who was buried in Montana 12,700 years ago. A second dispersal followed about 9000 years ago and ultimately contributed more to the genomes of most ancient and modern South Americans, including those Posth studied.

Hünemeier and her team found evidence of a third dispersal, whose genetic signature first appears in their data about 1300 years ago and then spreads widely across the continent and even into the Caribbean. The newcomers show hints of being related to Mesoamericans from Mexico and Central America, but so far, researchers don’t know exactly where they came from or who were their closest relatives. “Without the source population and more direct evidence [of a third pulse] from ancient DNA, it’s hard to really wrap our heads around” how and when a third migration might have happened, Posth says.

The study also digs deeper into a mystery that has bedeviled the genetic history of the Americas for over a decade: How did traces of Australasian ancestry end up in some ancient and modern South American genomes? Genetic variants from this lineage make up only about 2% of ancestry in the people who carry it, but that proportion has stayed remarkably consistent over the past 10,000 years. “This signal is found again and again and again,” Posth says. “It must mean something.”

Hünemeier suspects people carrying this ancestry were among several distinct populations that lived for thousands of years in Beringia, the now-drowned landmass that connected eastern Siberia to Alaska, and that it eventually spread southward into the Americas from there. (This Australasian ancestry, sometimes known as Population Y or the Ypykuéra signal after the Tupi word for “ancestor,” is different from the genetic sequences some Polynesian populations share with South American ones. Scientists continue to debate how that more recent gene flow happened—for example, whether Polynesian voyagers may have reached western South America about 800 years ago—but the findings from the Nature paper have no bearing on that mystery.)
From Science.

Both of these papers are open access.

The Nature article and its abstract:

Indigenous peoples of America represent the last principal expansion of humans across the globe, yet their genetic history remains one of the least explored. Although these populations have inhabited the continent for thousands of years, their evolutionary history remains largely unresolved, owing to the limited availability of genomic data. 
Here we present data on 128 high-coverage Indigenous American genomes and show they harbour extensive and previously uncharacterized genetic diversity, reflecting at least three dispersals into South America, followed by regional differentiation and long-term continuity. 
We identified widespread natural selection signals in genes associated with immunity, metabolism, reproduction and development, which were shaped by adaptation to diverse environmental conditions. 
Notably, several genomic regions exhibit a remarkable allele sharing with Australasian populations, probably originating from an ancient admixture event and partly maintained by selection for more than 10,000 years. 
We also detected distinct contributions from archaic humans with adaptive introgression affecting key biological functions. The limited overlap between the regions of Australasian affinity and archaic ancestry indicates independent evolutionary origins of these signals. These findings challenge simplified models of continental settlements and show a more dynamic and complex evolutionary history for the Indigenous peoples in America.
Castro e Silva, M.A., Nunes, K., Ribeiro, M.R. et al., "The evolutionary history and unique genetic diversity of Indigenous Americans." Nature (April 22, 2026). https://doi.org/10.1038/s41586-026-10406-w

The section on archaic and Australasian ancestry in the body text of the Nature article states (with citations omitted):
Affinity with Australasians and archaics

Some Indigenous American populations show elevated genetic affinity to present-day Australasians relative to other groups, contradicting a single non-Arctic Indigenous American clade. This affinity is best explained by admixture between the ancestors of Indigenous Americans and an unsampled ancient Asian population, termed Ypykuéra (here referred to as Ypykuéra ancestry), partially related to a sister clade of present-day Australasians.

We assessed genetic affinity between ancient and modern Indigenous Americans and present-day Australasians, the closest living proxies for Ypykuéra ancestry. We applied F-statistics to modern Indigenous American pairwise comparisons and to comparisons including ancient individuals.

Several Indigenous groups, including the Awajún, Ayoreo, Guarani, Karitiana, Sirionó, Suruí and Tsimané, show significant excess genetic affinity to Australasians relative to other present-day populations (Z > 3). These groups span eastern and western South America and the Chaco, with the strongest enrichment in the southwestern Amazon, where five of these seven populations are located.

A second analysis detected at least one individual with significant affinity in all examined clusters, except Arctic and northern North American groups, which were excluded from this analysis because of partial or complete ancestry from independent Siberian dispersals. The earliest signal occurs in the 10,400-year-old Sumidouro individual. Signals persist from the Early Holocene to the present, increasing in frequency during the Late Holocene, especially in the Andes, Pacific Coast and western South America. The partially discontinuous spatiotemporal pattern probably reflects variation in prevalence within and among populations. Taken together, these findings indicate that this ancestry was present during the initial peopling of America and that it may have contributed more strongly to Late Holocene and present-day genetic diversity.

We tested whether Ypykuéra-related ancestry in Indigenous Americans reflects shared ancestry with Australasians by means of archaic hominins (Neanderthals and/or Denisovans). We compared D(Mbuti, Onge; Mixe, X) with D(Mbuti, Neanderthal or Denisova; Mixe, X), where X denotes Indigenous American groups. Mixe served as a Mesoamerican reference to match earlier studies reporting Australasian affinity. No correlation was detected between Australasian and Neanderthal (Spearman’s r = −0.006, P = 0.971) or Denisovan affinity (r = −0.1002, P = 0.5372). By contrast, Neanderthal and Denisovan affinities were strongly correlated (r = 0.6572, P = 7.2 × 10^−6), consistent with homogeneous archaic ancestry in the founding populations.

An alternative hypothesis proposes that Australasian affinity reflects retention of the Ypykuéra component in isolated groups with high internal genetic similarity. Such populations and genomic regions, characterized by elevated ROH, would be less affected by admixture that could dilute signals of ancient Population Y ancestry. This hypothesis is not supported by our data, which show no correlation between Australasian affinity and inbreeding (FROH) (Spearman’s r = 0.2503; P = 0.1192). Moreover, ROH hotspots, defined as regions with ROH density greater than three standard deviations above the mean, show little overlap with loci of Australasian affinity, with only about 6% of such positions coinciding.

We tested whether Indigenous American affinity to present-day Australasians also includes ancient Hòabìnhian individuals, proposed ancestors of mainland Southeast Asian hunter-gatherers, including the Onge. Using D(Mbuti, Y; Mixe, X), with Y as Onge or Hòabìnhian individuals (La368, La364) and X as Indigenous American populations, we evaluated correlations in affinity to Onge and Hòabìnhian individuals. La368 forms a sister branch to Onge, whereas La364 is modelled as Australasian-related plus Austronesian ancestry, sister to Ami. We observed significant correlations for La368 (Spearman’s r = 0.6444; P = 1.1856 × 10^−5) and La364 (Spearman’s r = 0.6208; P = 2.8848 × 10^−5). These results support a shared ancestry component between Indigenous Americans and Australasians that extends deep into the past.
The Current Biology article and its abstract:
• Expansion of ancestry into the Pampas, Uruguay, and Patagonia from the Middle Holocene
• Repeated mobility from southern Andean and southern Patagonian-related populations
• Genetic differentiation between the Upper and Lower Paraná River Delta ∼600 years ago
• Coastal dispersal from southern Brazil to eastern Uruguay via mound-builder societies
The Southern Cone represents the southernmost region of South America settled by humans. Although ancient genomes from southern Patagonia have been sequenced, genomes from the central Southern Cone (CSC) remain temporally and spatially sparse. Archaeology documents major cultural transformations during the Middle and Late Holocene, yet their relationship with demographic processes has been debated. 
We present genome-wide data from 52 individuals spanning 6,000 years, originating from four regions of the CSC in present-day Argentina and Uruguay: the central and southern Pampas, Northwest Patagonia, the Paraná River Delta and Lower Uruguay River, and the eastern lowlands of Uruguay. 
Genomic evidence from the Pampas reveals the presence of at least three distinct ancestries during the Middle Holocene. Although genetic contacts with southern Patagonian groups were sporadic, we identified the expansion of an ancestry of unknown geographic origin by 5,500 years ago (ya), which increased during the Late Holocene. This ancestry arrived in Northwest Patagonia by at least 600 ya and co-existed locally with a southern Andean genetic profile until colonial times. 
Genetic structure differentiates populations along the Paraná River Delta and Lower Uruguay River by 1,500 ya. 
Individuals from the eastern lowlands of Uruguay show genetic links with Sambaqui-associated populations from the southern coast of Brazil, suggesting the role of human dispersals in connecting tropical lowland cultural traditions. 
Our work documents the diffusion of genetically distinct groups across all studied regions and provides compelling evidence that large-scale human movements contributed to the remarkable cultural diversity of CSC populations during the Middle and Late Holocene.
Kim-Louise Krettek, et al., "The shared genomic history of Middle- to Late-Holocene populations from the Southern Cone of South America" Current Biology (April 22, 2026).

Wednesday, April 15, 2026

Astronomy Meets Climate

A 100,000 year climate cycle on Earth can be explained by Earth's orbit.
The 100,000-year problem concerns the dominant period of glacial-interglacial cycles over the past 800,000 years and their correlation with Earth's orbital eccentricity, despite eccentricity's weak influence on solar radiation. 
Two theories compete: the astronomical theory, in which orbital forcing drives the cycles with amplification from Earth system feedbacks, and the geochemical theory, in which internal dynamics dominate with orbital forcing synchronising oscillations. We investigate these theories using conceptual models. 
Augmentations to the Budyko energy balance model fail to reproduce the 100,000-year period, revealing formulation limitations. Linearised versions of existing non-linear ice volume models perform comparably to their full counterparts, indicating the data does not necessitate non-linear dynamics. We develop two simple linear models: a feedforward model aligned with the astronomical theory and a feedback model aligned with the geochemical theory. 
The feedforward model reproduces the ice volume record well and offers a novel explanation for the absence of eccentricity's 400,000-year period, arising from oceanic heat storage and tropospheric energy responding with differing phase lags. Conservative estimates show bulk ocean temperature variation can be explained by eccentricity alone, challenging the geochemical theory's core assumption. 
We also show that widespread use of Q65 may bias models towards geochemical explanations by underrepresenting eccentricity. The feedback model's improvement is concentrated around Marine Isotope Stage 11, suggesting this anomalous interglacial reflects Earth-based events rather than a general requirement for feedback mechanisms. We conclude that 800,000 years of glacial cycles can be largely reproduced by a linear astronomical model, emphasising the importance of parsimony when interpreting palaeoclimate data.
Liam Wheen, "A First Principles Approach to the 100,000-year Problem" arXiv:2604.12143 (April 14, 2026).

Tuesday, April 14, 2026

Quick MOND Hits

Enticing, but with issues: lots of self-citation, an arXiv review delay, a very short paper, and an author who is primarily a mathematician rather than an astronomer, although he does have an institutional affiliation with a legitimate cosmology research center.

Gas-rich ultra-diffuse galaxies (UDGs) are an unusually sharp test for gravity models tied to the baryonic Tully--Fisher relation because several systems appear to rotate too slowly for their baryonic masses. This study revisits the six isolated gas-rich UDGs analysed by Mancera Piña et al. with the current outer-radius prescription of hyperconical modified gravity (HMG), using the published baryonic masses and circular velocities at the outer radii. The scan over the neighbourhood-scale parameter drives the model towards the asymptotic branch of HMG. For that limit, the HMG velocities are still systematically high for four of the six galaxies. Relative to the observed values, the fixed asymptotic branch gives χ2≃18.1 for six objects, whereas Newtonian baryons alone give χ2≃9.7, but MOND interpolation is much worse (χ2≃615.7). Using combined uncertainties, the per-galaxy HMG tension ranges from 0.2σ to 2.1σ, very similar to the 0.1σ to 1.7σ found for Newtonian baryons, and much smaller than the 3.7σ to 5.9σ obtained for MOND. We conclude that the present outer-radius HMG implementation alleviates the difficulties of MOND, but is still not sufficient to account for the published central values of the UDG sample. Gas-rich UDGs therefore provide a useful discriminant between MOND and HMG.
Robert Monjo, "Gas-rich ultra-diffuse galaxies: alleviating the MOND tension with HMG" arXiv:2604.09652 (March 30, 2026) (4 pages, 1 figure).
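As a rough gauge of what those χ² values mean, the sketch below converts them to p-values assuming six degrees of freedom, one per galaxy. This is a simplification (it ignores any fitted parameters, so the absolute numbers are only indicative), but the relative ordering is robust:

```python
import math

def chi2_sf(x, dof=6):
    """Survival function of the chi-squared distribution for even dof,
    via the closed form exp(-x/2) * sum_{k < dof/2} (x/2)^k / k!."""
    t = x / 2.0
    return math.exp(-t) * sum(t**k / math.factorial(k) for k in range(dof // 2))

# chi-squared values quoted in the abstract, for six galaxies
p_newton = chi2_sf(9.7)    # Newtonian baryons alone
p_hmg = chi2_sf(18.1)      # fixed asymptotic-branch HMG
p_mond = chi2_sf(615.7)    # MOND interpolation

print(f"Newton: p ≈ {p_newton:.3f}, HMG: p ≈ {p_hmg:.4f}, MOND: p ≈ {p_mond:.1e}")
```

Under this naive reading, Newtonian baryons are comfortably compatible (p ≈ 0.14), the asymptotic HMG branch is in mild tension (p ≈ 0.006), and MOND is excluded at overwhelming significance.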

More credible. An established MOND astrophysicist. A big blow to MOND critics.
It is a common misconception that 1E 0657-56, the "Bullet Cluster", is somehow inconsistent with MOND expectations. The argument centres on the fact that the baryonic matter distribution of this system is dominated by the X-ray emitting gas, while the total projected surface density required under General Relativity to explain the observed lensing signal, centres on the observed galaxies. This is sometimes interpreted as being in conflict with MOND, as under such an interpretation, it is naively assumed that all dark matter being absent, the gravitational potential should necessarily be dominated by the largest mass distribution, that of the gas. 
However, just as under General Relativity, under MOND, the total gravitational potential of a system depends sensitively upon the volume density and not just on the total mass. It is shown in this letter that the surface density which QUMOND predicts will be inferred under a standard gravity framework from the total gravitational potential of the Bullet Cluster, closely matches what General Relativity inferences of lensing observations return. The close-to-point-like galaxies imply under QUMOND a relatively much larger surface density signal than what is expected from the Mpc scale gas distribution.
X. Hernandez, "A consistent MOND modelling of the Bullet Cluster" arXiv:2604.10811 (April 12, 2026).

Monday, April 13, 2026

The Hubble Tension Is Real

The Hubble constant is a measure of the expansion rate of the universe. The acceleration of that expansion is conventionally attributed to a cosmological constant in General Relativity (which is the source of more than two-thirds of the mass-energy of the universe in conventional cosmology). Except, it appears that the Hubble constant isn't quite constant. So the explanation must be more complicated than a simple cosmological constant.

The Hubble tension isn't huge in relative terms: about 10% between measurements probing epochs more than ten billion years apart. 

But it is highly statistically significant, at the five-sigma-plus level, and isn't a simple methodological artifact of late-time Hubble constant measurements (although it could be a methodological artifact of model-dependent cosmic microwave background radiation measurements).

Context. The direct empirical determination of the local value of the Hubble constant (H(0)) has markedly advanced thanks to improved instrumentation, measurement techniques, and distance estimators. However, combining determinations from different estimators is nontrivial due to their correlated calibrations and different analysis methodologies.

Aims. Using covariance weighting and leveraging community expertise, we have constructed a rigorous and transparent “Distance Network” to find a consensus value and uncertainty for the locally measured Hubble constant.

Methods. Experts across all relevant distance measurement domains were invited to critically review the available datasets spanning parallaxes, detached eclipsing binaries, masers, Cepheids, the tip of the red giant branch, Miras, carbon-rich asymptotic giant branch stars, Type Ia (SNe Ia) and Type II supernovae, surface brightness fluctuations, the fundamental plane, and Tully–Fisher relations. Before any calculations, the group voted for first-rank indicators to define a “baseline” Distance Network. Other indicators were included to assess the robustness and sensitivity of the results. We provide open-source software and data products to support full transparency and future extensions of this effort.

Results. Our key findings are as follows: (1) The local H(0) is robustly determined, with first-rank indicators internally consistent within their uncertainties. (2) A covariance-weighted combination yields a relative uncertainty of 1.1% (baseline) or 0.9% (all estimators). (3) The contribution from SNe Ia is consistent across compilations of optical or NIR magnitudes. (4) Removing either Cepheids or the tip of the red giant branch has a minimal effect on the central value of H0. (5) Replacing SNe Ia with galaxy-based indicators changes H(0) by less than 0.1 km s^−1 Mpc^−1 while doubling its uncertainty. (6) The baseline result is H(0) = 73.50 ± 0.81 km s^−1 Mpc^−1, 7.1σ from the early Universe plus ΛCDM result 67.24 ± 0.35 km s^−1 Mpc^−1 and 5.0σ from BBN+BAO within a flat ΛCDM DESI DR2 (68.51 ± 0.58 km s^−1 Mpc^−1).

Conclusions. A networked approach, such as the one presented here, is invaluable for enabling further progress in Hubble constant measurements, as it provides the much needed advances in accuracy and precision without overreliance on any single method, sample, or group.
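The quoted tensions are straightforward to reproduce by combining the uncertainties in quadrature, assuming the two measurements in each comparison are independent and Gaussian:

```python
import math

def tension_sigma(v1, s1, v2, s2):
    """Gaussian tension between two independent measurements, in sigma."""
    return abs(v1 - v2) / math.hypot(s1, s2)

local = (73.50, 0.81)    # baseline Distance Network result (km/s/Mpc)
cmb = (67.24, 0.35)      # early Universe + LambdaCDM
bbn_bao = (68.51, 0.58)  # BBN + BAO, flat LambdaCDM, DESI DR2

print(f"vs CMB:     {tension_sigma(*local, *cmb):.1f} sigma")    # ≈ 7.1
print(f"vs BBN+BAO: {tension_sigma(*local, *bbn_bao):.1f} sigma")  # ≈ 5.0
```

Both values match the significances quoted in the abstract.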

Worth noting that in Deur's approach there is no cosmological constant; the apparent cosmological constant varies over time and is expected to increase as galaxy and cluster structure grows. Also, in Deur's approach, galaxy formation comes earlier than in ΛCDM.

Thursday, April 9, 2026

arXiv Is Moving

A few weeks ago, arXiv.org announced that it will be leaving Cornell, the university that currently manages it, and establishing its own nonprofit.

Calculating Light Meson Masses From First Principles In QCD

How good are current Standard Model calculations at predicting the experimental values of the light meson masses?

A new paper makes that attempt for most light mesons under 1.5 GeV of mass (except scalar mesons). Physicists are finally starting to do a pretty good job of describing the meson mass spectrum, which has been an elusive target for decades, even for the axial vector mesons, which had long been challenging.

As explained in the introduction:

In the present work we employ the procedure described above to compute the masses of relatively light mesons, namely mesonic states no heavier than about 1.5 GeV. Specifically, for mesons composed of u and ¯d quarks, we compute the masses of π±, ρ(770), b1(1235), a1(1260), π±(1300), and ρ±(1450). For the strange sector, we calculate the masses of the states K±, K∗(890), K1A, K1B, and K±(1460). 
In general, the computed masses are in good agreement with the experimental values. In fact, our findings represent a definite improvement over the results obtained within the standard rainbow-ladder truncation [84], where the masses of axial-vector mesons and radially excited states tend to deviate considerably from the observed values.

Notably, this omits the f(0)(500) scalar meson, a.k.a. the sigma meson, and seven other light unflavored mesons with masses under 1.5 GeV: the f(0)(980), f(2)(1270), f(1)(1285), f(0)(1370), f(1)(1420), f(2)(1430) and f(0)(1500). This may be because their internal structures are less well understood.

The actual procedure used is too technical to discuss at this blog, which is aimed at an educated lay readership.

The money chart is as follows:

With the exception of spin-1 kaons (where the relationship is inverted for some reason), the experimental values (in red) tend to be at the very high end of the theoretically predicted values using their methods (in blue), and their predictions, in turn, tend to be more massive than those made using a previous "rainbow ladder" truncation method (in green).

The predictions (and measurements) of excited state light meson masses are much less precise than the predictions (and measurements) of ground state light meson masses.

Is The Newtonian Expectation For Galaxy Rotation Curves Modeled Incorrectly?

The conclusion of this paper is a very big deal if true, and I don't dismiss it out of hand.

But the models it claims are grossly wrong are well established and widely used, so this needs peer review, and time for commentary papers in response, before it can be taken seriously. I wouldn't be surprised if it contains some significant conceptual flaw.
The approximately flat outer parts of spiral galaxy rotation curves are commonly interpreted as evidence for a discrepancy between the observed baryonic mass and the dynamical mass inferred from the measured orbital velocities. In most standard analyses, this discrepancy is quantified using v2(R)=GM(<R)/R, which is exact only under spherical symmetry. However, spiral galaxies are flattened disk systems, for which mass exterior to the galactocentric radius under consideration can contribute non-negligibly to the gravitational field. 
We introduce the Lost and Found (LF) model, a geometrically consistent Newtonian framework based on direct full-disk gravitational integration and a parametrized representation of the disk surface density. In this approach, the gravitational field is computed without imposing spherical symmetry, and the disk mass distribution is represented by two exponential components with a smooth outer truncation. 
We apply the LF model to a heterogeneous sample of disk galaxies spanning a broad range of masses and radial extents. The model reproduces the main observed features of the rotation curves, including the inner rise and the approximately flat outer behavior, without explicitly invoking a dark matter halo or modifying Newtonian gravity. Across the sample, the LF-inferred mass scales nearly linearly with the conventional dynamical mass, with a characteristic reduction factor ηLF ~ 0.67. 
These results indicate that part of the inferred mass discrepancy may arise from the geometric treatment of gravitation in disk galaxies, and motivate a reassessment of mass inference in non-spherical systems.
Adolfo Santa Fe Dueñas, "Galactic Rotation Curves from Full-Disk Newtonian Gravity: The Lost and Found Model" arXiv:2604.06917 (April 8, 2026) (submitted to MNRAS).

Wednesday, April 8, 2026

Population Discontinuity At The End Of The Neolithic In Paris, France

Bernard's blog, Généalogie génétique, lives. (I had removed it from the blogroll when technical difficulties at the site made it look dead. I'll reinstate it when I have time.)

His latest post examines an ancient DNA paper that looks at a graveyard in Paris, part of which dates to around 3000 BCE in the megalithic Neolithic era, and part of which comes a century later, after a long gap in burials there, when the previous megalithic Neolithic civilization there had collapsed: forests had regrown, megalithic construction had ceased, and infectious diseases including plague had ravaged the population. 

The post-collapse social organization was different too, with the earlier burials reflecting a large extended family/clan social structure of related people, and the later burials reflecting smaller nuclear families, with multiple generations of related people but only a few people in each generation.

In this case, there was population discontinuity in which the prior Neolithic population was replaced by another Neolithic population from the South with a different social organization that moved in after the original megalithic Neolithic culture in Paris collapsed. Both the original group (in brown on the PCA plot below) and the subsequent one (in green) cluster together as European Neolithic populations, distinct from prior European hunter-gatherer peoples and the later Bronze Age steppe peoples, despite being distinct enough to indicate a population replacement.


This graveyard pins down the timing of this collapse fairly precisely to a single century.

This also makes clear that Neolithic collapse in Western Europe happened before the arrival of Bronze Age people with steppe ancestry. It also illustrates the civilizational vacuum that those Bronze Age people swept into a few centuries later, replacing much of the first farmer wave of people in this part of Europe, in a dynamic distinct from the conquest of a vibrant Neolithic civilization.

The abstract and citation appears below:
At the transition between the third and the fourth millennium BC, there is evidence for a population decline concurrent with the end of megalith building across continental northwestern Europe. In Scandinavia this ‘Neolithic decline’ is followed by a massive population turnover, as farming communities disappeared and were replaced by people with steppe ancestry. In western Europe, however, ancestry associated with Neolithic farmers persisted beyond the Neolithic decline, and it remains unclear whether a similar demographic replacement occurred.

To investigate the population dynamics around the Neolithic decline in present-day France, we sequenced 132 ancient genomes from the allée sépulcrale at Bury. Located in the Paris area, Bury spans two burial phases separated by a hiatus with no burial activity: one phase directly preceding the Neolithic decline in the late fourth millennium BC, ending around 3000 BC, and a later phase some time after the Neolithic decline in the early- to mid-third millennium BC.

Our analysis revealed that the two burial phases at Bury represented largely discontinuous genetic groups of a markedly different social organization as inferred from three large pedigrees. We show that the difference between the two burial phases can be linked to a northwards movement of Neolithic ancestry from the south, which only spread into the Paris Basin after the Neolithic decline, at around 2900 BC. 
Together with genetic evidence of various infectious diseases in the dataset, such as Yersinia pestis and Borrelia recurrentis, as well as evidence for forest regrowth between the two phases, these findings detail a population turnover at the end of the fourth millennium BC, offering a possible explanation for the cessation of megalith building.
Frederik Seersholm, et al., "Population discontinuity in the Paris Basin linked to evidence of the Neolithic decline" Nature Ecology and Evolution (April 3, 2026).

Steppe ancestry starts to appear in Southern France ca. 2650 BCE, with Bell Beaker artifacts found in the Lower Rhine ca. 2600 BCE. This is about 250-300 years after population replacement in the Paris basin.

Notably, however, the source region of the Southern France Neolithic migrants to the Paris basin ca. 2900 BCE is, geographically, one of the earliest homes of the Bell Beaker phenomenon in France and is the geographic source of the French Bell Beaker people. Indeed, southern France is the first place that the Bell Beaker phenomenon arose after Iberia (it arose originally in the Tagus River basin in Portugal). 

Also notably, the very first Bell Beaker people had Neolithic, rather than Steppe, ancestry; Steppe ancestry entered the Bell Beaker population only two or three centuries later.

It is thus conceivable that the Southern French replacement population in Paris ca. 2900 BCE came from the same population that was the source of the pre-Steppe Bell Beaker progenitors.

Density v. Mass In Compact Objects In Space

This comparison of compact object density and mass is purely descriptive, but it helps to build astrophysical intuition.


From here.

Friday, April 3, 2026

The Latest News In Top Quark Physics

The latest indirect measurement of the top quark pole mass is surprisingly precise (exceeding the precision of the world average in a single measurement), despite using a method that has historically had large error bars. The Particle Data Group world averages are as follows:


This will probably drag up the world average a little bit, to about 172.7 GeV.
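To see the mechanics of that drag, here is a minimal inverse-variance-weighting sketch. The prior world average of 172.6 ± 0.5 GeV used below is purely hypothetical, for illustration; the real PDG combination is more involved and handles correlated systematics:

```python
def weighted_mean(measurements):
    """Inverse-variance weighted mean and its uncertainty,
    assuming independent Gaussian measurements (value, sigma)."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    total = sum(weights)
    mean = sum(w * value for w, (value, _) in zip(weights, measurements)) / total
    return mean, total ** -0.5

# hypothetical prior average, plus the new indirect determination (172.80 ± 0.26 GeV)
mean, sigma = weighted_mean([(172.6, 0.5), (172.80, 0.26)])
print(f"{mean:.2f} ± {sigma:.2f} GeV")
```

With these assumed inputs the combination lands near 172.76 GeV, illustrating how a single precise measurement pulls the average toward itself while shrinking the combined uncertainty.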

We present an indirect determination of the top-quark pole mass mt within a global analysis of parton distribution functions (PDFs), based on the public NNPDF framework. 
We consider a wide range of measurements, including both single- and double-differential observables, computed at NNLO QCD accuracy with EW corrections, and analyse their individual as well as combined impact on the joint (α(s),m(t)) parameter space, while accounting for PDF evolution up to approximate N3LO QCD accuracy with QED corrections. We account for missing higher order QCD uncertainties by default. 
Unique to our analysis are the inclusion of, first, toponium contributions around the tt¯ threshold, second, state-of-the-art constraints on αs from the lattice, and finally, a detailed sensitivity study of the various ATLAS and CMS differential cross-section measurements at 8 and 13 TeV. We demonstrate explicitly how a combined determination requires the refitting of the PDFs in order to correctly correlate uncertainties. 
We find mt = 172.80 ± 0.26 GeV at approximate N3LO QCD including NLO QED, EW and toponium corrections.
Richard D. Ball, Jaco ter Hoeve, Roy Stegeman, "A Determination of the Top Mass from a Global PDF Analysis" arXiv:2603.28865 (March 30, 2026).

Another new paper on top quark physics (with an abstract that reveals little of interest about the paper's contents) confirms that: 

(1) the experimentally measured top quark-antitop quark pair production rates are consistent with the Standard Model expectation, 

(2) toponium has been discovered by both the ATLAS and CMS experiments at the Large Hadron Collider (LHC), and 

(3) the top quark's Yukawa coupling to the Higgs field is experimentally confirmed to be not more than 2.1 times the Standard Model expectation (in the Standard Model, the coupling should be proportionate to the top quark's pole mass).
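For context, the Standard Model top Yukawa coupling follows from the tree-level relation y_t = √2 m_t / v, where v ≈ 246.22 GeV is the Higgs vacuum expectation value. Using the 172.80 GeV pole mass from the global PDF analysis quoted above:

```python
import math

V_HIGGS = 246.22  # Higgs vacuum expectation value in GeV

def top_yukawa(m_top):
    """Tree-level Standard Model Yukawa coupling from the top pole mass (GeV)."""
    return math.sqrt(2) * m_top / V_HIGGS

y_t = top_yukawa(172.80)
print(f"y_t ≈ {y_t:.3f}")  # very close to 1
```

The near-unity result is why the top quark is often described as having a "natural" Yukawa coupling, unlike every other Standard Model fermion.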

A Decent Modified Gravity Candidate

This modified gravity proposal explains galactic rotation curves without dark matter, it's relativistic, and its key parameter beyond general relativity is determined on a very consistent basis from data from eight different galaxies. It bears some general similarities to other modified gravity proposals that do the same thing. 

The authors' conjecture that the reason we don't have a workable quantum gravity theory is that the standard equations of general relativity that we're trying to quantize aren't quite right also seems plausible.

This candidate isn't as mature as some of the competing modified gravity proposals, so it hasn't been tested against the cosmic microwave background, against galaxy formation rates, in non-spiral galaxies, or in galaxy clusters yet. But it's a promising proposal that deserves further attention.
A modification of the Einstein-Hilbert Lagrangian by introducing a coupling between the Weyl tensor and the stress-energy tensor was proposed to explain flat galactic rotation curves without the exotic (non-baryonic) dark matter (DM). The proposed coupling constant was previously determined by fitting the rotational velocities of the Milky Way and M31 modeled with constant density, yielding the same coupling constant for both. In this work, we have modified the formalism for a variable density by modeling the galactic systems with realistic, spherically symmetric and radially varying density profiles for the baryonic matter and this analysis is applied to seven edge-on spiral galaxies of the local cluster and the Milky Way.
Asghar Qadir, Ashmal Shahid, Noraiz Tahir, "The Galactic Halo Rotation by Weyl Incorporated Gravity" arXiv:2604.01643 (April 2, 2026) (Arabian Journal of Mathematics (2026)).

The introduction to the paper is also encouraging, although some of its summary of the criticisms of MOND is overstated. The explicit treatment of the effect of the gravitational field, similar to the approach of Deur, is particularly notable. It says:
One of the most striking observations in galactic dynamics is the discrepancy between the predicted and observed rotational velocities of galaxies. According to the standard theories of gravity, the rotational velocity of the galaxies should decrease sharply at large radii where visible matter becomes sparse. However, observations of their rotation curves remain nearly flat out to very large distances [11–13]. Other dynamic considerations had already led Zwicky [14] to propose the existence of DM, but this evidence was much stronger. Rubin’s investigation was extended to galactic clusters [15, 16] providing yet stronger evidence. The observations of the cosmic microwave background (CMB) had already provided minimum and maximum values for baryonic matter in the Universe according to the standard model of particle physics (SMpp). The observations required a value well beyond the limit of the baryonic matter [17]. This has led to various suggestions for exotic (non-baryonic) DM, but there is no direct evidence for any of the proposed candidates. Nevertheless, CMB observations also indicate that ≃ 5% of the Universe should be made up of baryons (the usual protons and neutrons), but observations of the luminous parts of the galaxies show only half of these baryons, this is the “missing baryon problem”. Since the baryons are dark, we call this the baryonic DM. It is proposed that a significant fraction of this baryonic DM is present in the galactic halos [18–23]. 
For the non-baryonic DM an alternative suggestion was that the standard law of gravity should be modified instead of looking for other forms of matter. The first such suggestion came from Milgrom [24], who proposed the modification of Newton’s law by inserting a Yukawa-like term to damp gravity at large distances, Modified Newtonian Dynamics (MOND). It was not able to explain the dynamics at different scales, especially of single galaxies and clusters, or provide for the formation of structure in the early Universe [25–31]. Most of all, the damping term was totally ad-hoc and was embedded in an obsolete Newtonian framework, which could not be converted to General Relativity (GR) [29]. Apart from galactic dynamics, arguably the most outstanding problem of fundamental physics is the incompatibility of GR and Quantum Theory. In particular, the Renormalization Group Equation of ’t Hooft and Veltman demonstrated that Dirac quantization of GR produced a non-renormalizable theory [32], leading to the well-known “Quantum Gravity (QG)” problem. To avoid these separate problems various ad-hoc modifications of GR involving arbitrarily many new parameters have been proposed [24, 33–36]. 
Qadir and Lee took the view that there must be a sound physical basis for any modification of the highly successful GR, and it must be minimal, i.e., it should remain geometric and involve only one free parameter to explain the discrepancies of galactic dynamics at all scales. Further, it should also provide a base for solving the QG problem. In 2019, they proposed an explicit interaction term between matter and the gravitational field λT.C.T, where λ is a new coupling constant, T is the stress-energy tensor, and C is the Weyl tensor, which represents a pure gravitational field [1]. This idea was inspired by the Feynman vertex representing a similar explicit interaction between the source (an electron) and the electrodynamic field in Quantum Electrodynamics (QED), Aµjµ. In QED the electron is given by a spin-half spinor, which comes twice over in the current jµ and the field, Aµ, which appears singly, while in QG the source would be represented by the rank-two tensor T, which comes twice, and the gravitational field by the rank-four tensor C, which comes once. As QED with this source term is renormalizable, it can be hoped that so would this modified QG. It was called Modified Relativistic Dynamics (MORD). 
Previously, MORD was tested by checking whether a single value of the new coupling constant λ could reproduce the rotational velocity at the outer rim of two galaxies, the Milky Way and M31, incorporating only the baryonic DM and not any postulated non-baryonic DM, by assuming a simple-minded model in which both the galaxies were represented as a constant density sphere with a peak density of the baryonic matter from the core to the edge of the galaxy [2, 3]. In that study, a single value of λ was indeed found to fit the rotational velocity values for both galaxies. This approach was inherently limited, i.e., it neglected the radial variation of the galactic halo density, and treated the galaxies as idealized, uniform objects. The aim of this paper is to completely modify the previous formalism of a constant density case to a variable density case, where ρ′ ≠ 0, and take the next step forward by generalizing the baryonic DM component to spherically symmetric, radially varying density profiles for the galactic halos of eight spiral galaxies [4–10, 37, 38]. 
This extension is conceptually significant because it will allow us later to test whether the universality of λ persists under physically motivated halo structures at different radii, rather than only at the rim. By moving from a toy model to a realistic halo description, we not only refine the numerical estimate of λ but also provide a more robust and physically meaningful assessment of MORD across multiple spiral galaxies. We stress that while more realistic baryonic distributions, such as double exponential stellar disks combined with bulge components, are commonly used to model luminous matter, these structures are intrinsically non-spherical, and would require extension of the formalism to two or more variables. 
The plan of the paper is as follows: in Section 2, we will briefly explain the Weyl modified Einstein field equations for varying spherically symmetric density profiles and demonstrate how the value of the coupling constant λ is obtained for the Milky Way galactic halo. In Section 3, we will use the analysis for seven other spiral galaxies. Finally, in Section 4, the obtained results will be discussed.

The key formulas are as follows:

The Weyl-modified Einstein-Hilbert Lagrangian is [1] 

L =√−g (R − 2Λ − kT + λC(αµβν)T^(αβ)T^(µν)), (1) 

where g is the determinant of the metric, k = 8πG/c^4 is the coupling constant for matter, G is Newton’s gravitational constant, and c is the speed of light. This leads to the Weyl incorporated Einstein field equation (WIFE) 

R(µν) − 1/2g(µν)R + g(µν)Λ =  kT(µν) + λI(µν), (2) 

[Ed. For comparison, the unmodified Einstein field equation in the same notation is:

R(µν) − 1/2 g(µν)R + g(µν)Λ = kT(µν)

So, the only modification is the addition of the λI(µν) term on the RHS.]

where I(µν) is the interaction term given by 

I(µν) = 1/4 (−g(αβ)g(ρµ)g(σν) − g(ρσ)g(αµ)g(βν) − g(ασ)g(ρµ)g(βν) − g(ρβ)g(αµ)g(σν)) □(T^(αβ)T^(ρσ))

       + 1/6 (g(αβ)g(ρσ) − g(ρβ)g(ασ)) (g(µν)□ − ∇(µ)∇(ν)) (T^(αβ)T^(ρσ)). (3) 

The paper sums up its findings in the conclusion:

The very first modified gravity approach to explain the flat rotational curves of galaxies without invoking DM is MOND [24]. However, MOND’s phenomenological success comes with challenges in covariant formulation and cluster-scale dynamics, motivating alternative modifications rooted in GR. Furthermore, a major challenge to MOND has emerged from the analysis of wide binary stars in Gaia DR3. A comprehensive study by Ref. [59] found that the relative velocities of widely separated binaries (2−30 kAU) are inconsistent with the MOND prediction, which expects a ≈ 20% enhancement over Newtonian gravity due to the external field effect. Their analysis, which rigorously modeled the Galactic external field and population uncertainties, excluded MOND at a statistical significance of 16σ in favour of Newtonian dynamics [59, 60]. These challenges motivate the exploration of alternative single-parameter modified gravity theories rooted in a covariant framework. 

Qadir and Lee’s MORD [1] proposed a modification to the standard Lagrangian by incorporating an interaction coupling constant λ with a term involving the Weyl tensor and the stress-energy tensor, expressed as λC(µνρπ)T^(µν)T^(ρπ) whose purpose was to see whether a single unique value of λ can account for the rotational velocity curves of galaxies by replacing the exotic DM by what we now feel should be called Weyl Incorporated Gravity (WIG), solving the outstanding DM problem with a single new parameter. This was tested using a simplistic constant density model which did have just the one value of the coupling for all galaxies considered [2, 3]. 

In the present work, we have modeled the galactic halos of eight spiral galaxies, modifying the previous framework for a variable density case to estimate the value of the coupling constant λ. For this purpose we adopt three widely used density profiles, the Navarro-Frenk-White (NFW), Moore, and Burkert models, normally used for all DM in the halos, but here used only for the baryonic DM to test the robustness of the proposal by verifying that the choice of model makes no difference to the results [40-50]. 

We find that a tiny range, λ = (6.9546 ± 0.00012) × 10^−18 km^2 s^4 kg^−2, consistently reproduces the observed halo rotational velocities at r = 100 kpc. 

For the Milky Way, the fitted rotational velocities span v(rot) ≃ 153–159 km s^−1, compared to the observed value 150 ± 10 km s^−1, with an enclosed halo mass M(h)(≤ 100 kpc) ≃ 1.0 × 10^12 M⊙. 

For M31, we obtain v(rot) ≃ 230–232 km s^−1 versus the observed 225 ± 10 km s^−1, corresponding to a halo mass M(h) ≃ 1.4 × 10^12 M⊙. 

In the case of M33, the modeled velocities v(rot) ≃ 121–122 km s^−1 agree with the observed 120 ± 5 km s^−1, yielding M(h) ≃ 3.2 × 10^11 M⊙. 

For M81 and M82, the fitted velocities lie in the ranges 256–257 km s^−1 and 251–253 km s^−1, respectively, consistent with the observed values 250 ± 15 km s^−1 and 250 ± 18 km s^−1, with inferred halo masses M(h) ≃ 1.3 × 10^12 M⊙ and 1.0 × 10^11 M⊙. 

Similarly, for NGC 5128, NGC 4594, and M90, the modeled rotational velocities at 100 kpc fall within the observed ranges reported in the literature, with corresponding halo masses M(h) ≃ 4.4 × 10^12 M⊙, 6.3 × 10^13 M⊙, and 3.5 × 10^13 M⊙, respectively. It is clearly seen that these mass and velocity estimates are consistent with the observed values (see Refs. [40, 44], and Table 1). 
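For scale, the enclosed masses quoted above can be set against a purely Newtonian benchmark, the circular velocity v = √(GM/r) at 100 kpc. The WIG-modified dynamics need not coincide with this Keplerian estimate, so this is only a sanity check on what the quoted masses would imply in ordinary gravity; the constants and the (mass, velocity) pairs are the standard values and the figures from the text above.

```python
import math

# Newtonian benchmark: circular velocity v = sqrt(G*M/r) at r = 100 kpc
# from the enclosed halo masses quoted in the text.  The WIG-modified
# dynamics need not reproduce these Keplerian values.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
KPC = 3.0857e19      # m
R = 100 * KPC        # 100 kpc in metres

def v_circ_kms(mass_msun):
    """Newtonian circular velocity at 100 kpc, in km/s."""
    return math.sqrt(G * mass_msun * M_SUN / R) / 1e3

halos = {"Milky Way": 1.0e12, "M31": 1.4e12, "M33": 3.2e11}
for name, mass in halos.items():
    print(f"{name}: v_circ(100 kpc) ~ {v_circ_kms(mass):.0f} km/s")
```

Note that the Newtonian value for the Milky Way mass (~207 km/s) sits well above the fitted 153–159 km/s, while M33 lands close to its fitted range; the gap between the two estimates is exactly where the Weyl coupling term does its work.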

Before closing the paper, note that we have assumed spherical symmetry in modeling the dynamics of the galactic halo to estimate the value of λ. However, a more realistic model of the galaxy should take into account the vertical component of the velocity, which is missing in the present geometry. The hope is that one can then fit the complete rotation velocity curve and obtain a more robust model. Indeed, the chosen metric would change, and we may need to consider the Kerr geometry, or a slow rotation approximation of it [61], to incorporate the vertical component, as it accounts for the angular momentum effects [62, 63]. This will be addressed separately later. 

As previously discussed, the problem of DM and QG may share a common origin [1]. Addressing observational issues related to DM could provide insights into resolving fundamental difficulties in QG. Instead of assuming an indirect interaction, we have introduced a direct nonlinear coupling between matter and gravity, analogous to the interaction between electromagnetic sources and the electromagnetic field. The modified Lagrangian, given in eq. (1), represents the minimal extension of the Einstein-Hilbert Lagrangian and is proposed as a potential solution to both problems with a single additional parameter. Naturally, the feasibility of this approach must first be tested against the DM problem before seeing if our WIG fits on the messier head of QG. 

As a further conjecture of my own, their coupling constant could be parsed suggestively as λ = X × Λ^2/G^2, where Λ is the cosmological constant, G is Newton's constant, and X is a dimensionless coupling constant with a value on the order of 10^48. 

One could also write λ = X × Λ/G^2, with a much smaller constant X (on the order of 10^-4) that has dimensions of m^2.
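For the curious, the two combinations of constants entering these conjectures can be evaluated numerically in SI units, using an assumed Planck-like value of the cosmological constant; the implied X depends on the unit convention adopted for λ, so only the ratios themselves are computed here.

```python
# SI values of the two constant combinations in the conjectures above.
# Lambda is an assumed Planck-like value, not taken from the paper.
LAMBDA_CC = 1.1e-52   # cosmological constant, m^-2 (assumed)
G = 6.674e-11         # Newton's constant, m^3 kg^-1 s^-2

ratio1 = LAMBDA_CC**2 / G**2   # dimensions m^-10 kg^2 s^4
ratio2 = LAMBDA_CC / G**2      # dimensions m^-8 kg^2 s^4

print(f"Lambda^2 / G^2 = {ratio1:.2e}")
print(f"Lambda   / G^2 = {ratio2:.2e}")
```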