Friday, December 5, 2025

DES Reduces S8 Tension

One of the persistent tensions in cosmology, which has attracted less attention from the general public than the Hubble tension, concerns the value of a parameter called S8 (which measures the "clustering amplitude" of matter at cosmological scales): its value estimated from cosmic microwave background (CMB) measurements differs from its value measured by other means. New data from the Dark Energy Survey (DES) weakens that tension.
Cosmology from weak gravitational lensing has been limited by astrophysical uncertainties in baryonic feedback and intrinsic alignments. 
By calibrating these effects using external data, we recover non-linear information, achieving a 2% constraint on the clustering amplitude, S8, resulting in a factor of two improvement on the ΛCDM constraints relative to the fiducial Dark Energy Survey Year 3 model. The posterior, S8 = 0.832+0.013−0.017, shifts by 1.5σ to higher values, in closer agreement with the cosmic microwave background result for the standard six-parameter ΛCDM cosmology. 
Our approach uses a star-forming 'blue' galaxy sample with intrinsic alignment model parameters calibrated by direct spectroscopic measurements, together with a baryonic feedback model informed by observations of X-ray gas fractions and kinematic Sunyaev-Zel'dovich effect profiles that span a wide range in halo mass and redshift. Our results provide a blueprint for next-generation surveys: leveraging galaxy properties to control intrinsic alignments and external gas probes to calibrate feedback, unlocking a substantial improvement in the precision of weak lensing surveys.
Leah Bigwood, et al., "Confronting cosmic shear astrophysical uncertainties: DES Year 3 revisited" arXiv:2512.04209 (December 3, 2025).
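For reference, S8 is conventionally defined from the amplitude of matter fluctuations σ8 (on scales of 8 h⁻¹ Mpc) and the matter density parameter Ωm:

```latex
S_8 \equiv \sigma_8 \sqrt{\Omega_m / 0.3}
```

Weak lensing surveys have tended to find lower values of S8 than the CMB fit for the six-parameter ΛCDM cosmology, which is the tension at issue.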

New astronomy observations also strongly constrain multi-field cosmological inflation models. And another study, combining data from multiple collaborations, strongly disfavors cosmological "inflation models preferred by Planck alone, such as Higgs, Starobinsky, and exponential α-attractors, in favor of other models, such as polynomial α-attractors," based upon its new measurement of the cosmological parameter n(s) (the spectral index of the primordial power spectrum).

Finally, there were several new preprints today exploring the WIMP dark matter hypothesis, which is irritating because the WIMP dark matter hypothesis has been almost completely ruled out by a variety of independent means.

Volcano-Driven Famine Brought Black-Plague-Bearing Fleas In Grain Shipments To Europe In 1347

Volcanic eruptions around 1345 CE led to crop failures from 1345 to 1347 in the Mediterranean. This led Italian merchants to import grain in 1347 from the Mongols in the Sea of Azov region (currently between Ukraine and Russia), where a plague infestation was already present, spreading the plague to Europe. The black plague then ran rampant across Europe from 1347 to 1353, killing an immense share of the population of Europe (up to 60% in some towns and villages). 

Several years of famine also probably weakened the immune systems of most Europeans, impairing their ability to fight the black plague bacteria and increasing its lethality.

This black plague pandemic actually started in "the arid foothills of the Tien Shan mountains west of Lake Issyk-Kul in modern-day Kyrgyzstan" in 1338, but it took nine more years for it to reach Europe. While the volcano induced famines in Europe sped its spread, arguably its eventual arrival in Europe, sooner or later, was almost inevitable.

Human history is pockmarked with periods of death and destruction on unimaginable scales. Of these calamitous epochs, one stands out: The Black Death. The mid 14th century scourge killed tens of millions of people in Europe, Asia, and Africa and changed the course of history—marking the tail end of the Middle Ages and ushering in the cultural reawakening of the Renaissance by disrupting society, the feudal system, and economies across the continent. 
Researchers have long known the Black Death’s central villain: the bacterium Yersinia pestis, which caused the bubonic plague that swept through towns and villages with a mortality rate of up to 60 percent. Experts also know this microbial agent was spread by fleas, borne on the backs of rodent pests and maybe domestic animals, and passed between humans through the air and bodily fluids. But historians have had a tougher time recreating the sequence of events that initially started the devastating pandemic. 
Now, a pair of scientists have found new clues hidden in tree rings. By looking at these rings in the Spanish Pyrenees—as well as details in historical accounts of the time—they suggest that heightened volcanic activity sometime around 1345 may have sparked a famine, kicking off the sequence of events that eventually led to the Black Death raging through Eurasia from 1347 to 1353. They published their findings today in Communications Earth & Environment. . . . 
Here is the model Bauch and his colleague Ulf Büntgen, a dendrochronologist at Cambridge University, propose. As yet unknown volcanic eruptions ejected huge amounts of ash and gases into the atmosphere around 1345, causing drops in annual temperatures that persisted for several years. The cross sections from living and relic trees that the researchers studied had “blue rings,” denoting abnormally cold and wet summer growth seasons, in 1345, 1346, and 1347. Additional accounts from the time considered by Bauch and Büntgen tell of abnormal cloudiness and dark lunar eclipses, further hints of volcanic activity. This sustained cooling could have caused widespread crop failure across the Mediterranean. 
The resulting food shortages drove merchants in the maritime republics of Venice, Genoa, and Pisa to increase imports of grain from the Mongols living around the sea of Azov in 1347. Along with shipments of grain coursing across established trade routes came plague-infested fleas. Once Y. pestis and the fleas that carried it landed in Europe, the pathogen jumped to rats, mice, and perhaps domesticated animals. Eventually the disease hopped to humans, and people began transmitting it in densely packed population centers. The rest is a dark part of history. 
“For more than a century, these powerful Italian city states had established long-distance trade routes across the Mediterranean and the Black Sea, allowing them to activate a highly efficient system to prevent starvation,” said Bauch. “But ultimately, these would inadvertently lead to a far bigger catastrophe.” 

From a Facebook post by Nautilus Magazine

The introduction to the published paper states:

Recent advances in paleogenetic research now demonstrate that the Black Death was caused by the bacterium Yersinia pestis, which is likely to persist in different forms in natural reservoirs, including wildlife rodent populations. Investigations of great gerbil (Rhombomys opimus) populations in Kazakhstan, for instance, have outlined how the bacterium can be transmitted from one mammalian host to another by hematophagous insect vectors, such as fleas. The zoonotic disease, however, only occasionally spills over to domestic mammals and humans, and so far three pandemics have been documented: The Justinianic plague from circa 541 to the second half of the 8th century CE; the second pandemic starting around 1338 CE in central Asia and later outbreaks in the Mediterranean region and Europe until the early 19th century CE; and the third plague pandemic that had its origin in the 1770s in China and is arguably still prevalent in endemic rodent populations in different parts of the world.

A combination of archaeological, historical and ancient genomic data proposes that the causal agent of the second plague pandemic most likely originated from the arid foothills of the Tien Shan mountains west of Lake Issyk-Kul in modern-day Kyrgyzstan. A genetically distinct strain of the bacterium was then transmitted along ancient trade routes and entered Europe via the northern Black Sea region in the early 1340s. While changes in long-distance maritime grain trade have been introduced as a possible explanation for the import of plague-infected fleas to Venice and other Mediterranean harbour towns in 1347 CE, this chain of arguments excludes alternative transmission pathways, such as human-to-human infection or the transport of rodents and goods. Intriguingly, the role climatic changes and associated environmental factors may have played in the onset and establishment of the Black Death remains controversial amongst scholars from the natural and social sciences and the humanities.

Despite an ever-growing understanding of the evolution, origin and transmission of Yersinia pestis during the second plague pandemic, it is still unclear if the bacterium was frequently re-introduced into Europe or if natural reservoirs of the bacterium ever existed there. Recent insights into plague ecology include aspects of prolonged flea survival without human and/or rodent hosts but feeding opportunities on grain dust during long-term food shipments. Empirical evidence from around 1900 CE may therefore be considered as a possible explanation of how Yersinia pestis could have arrived in medieval Italy. While there is so far no convincing argument to pre-date the beginning of the second plague pandemic into the 13th century CE, changes in socio-economic structures, political institutions and trade networks since the second half of the 13th century possibly impacted the course of the second plague pandemic.

Here, we show that interdisciplinary investigations into the entanglements between weather, climate, ecology and society well before the Black Death are essential to understand the exceptional level of spread and virulence that made the first wave of the second plague pandemic so deadly. Based on annually resolved and absolutely dated reconstructions of volcanically forced cooling, transregional famine, and changes in long-distance maritime grain trade from 1345–1347 CE, we argue that the onset of the Black Death most likely resulted from a complex interplay of natural and societal factors and processes. Although this unique spatiotemporal coincidence of many influences seems rare, our findings emphasise the increased likelihood of zoonotic infectious diseases to suddenly emerge and rapidly translate into pandemics in both, a globalised and warmer world with COVID-19 just being the latest warning sign.

Thursday, December 4, 2025

A Quick Recap Regarding The Indo-European Languages

[map of the Indo-European language expansion]

This post is pretty much entirely old news that I've blogged about previously. But every once in a while it is worth recapping the basics for folks who are new to the discussion (and quite frankly, I just haven't had the time lately to post in more depth about more up-to-date developments in this field).

The Indo-European Languages
More than 40% of humans alive today, some 3.4 billion people, speak an Indo-European language as their mother tongue (and well north of 50% if you count second-language speakers). The top ten are:
Spanish ~484 million
English ~390 million
Hindi ~345 million
Portuguese ~250 million
Bengali ~242 million
Russian ~145 million
Punjabi ~120 million
Marathi ~83 million
Urdu ~78 million
German ~76 million

It is also worth noting that the "Indo" part of "Indo-European" (languages derived from Sanskrit as it existed ca. 1500 BCE, formally known as the Indo-Aryan language family) is a huge part of the total, with about 868 million native speakers among the top ten (about 39% of the top ten total), compared to about 1,345 million for the European languages among the top ten (including native speakers of versions of those languages spoken mostly in their former New World colonies in North America and South America, and in Australia and New Zealand). There are more Sanskrit-derived language speakers in the top ten than there are Latin-derived (Romance) language speakers in the top ten.
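The split described above can be checked from the listed figures; a quick sketch in Python, using the speaker counts (in millions) from the list above:

```python
# Native speaker counts (millions) for the top ten Indo-European languages,
# split into the Indo-Aryan and European branches as discussed above.
indo_aryan = {"Hindi": 345, "Bengali": 242, "Punjabi": 120, "Marathi": 83, "Urdu": 78}
european = {"Spanish": 484, "English": 390, "Portuguese": 250, "Russian": 145, "German": 76}

indo_aryan_total = sum(indo_aryan.values())  # 868 million
european_total = sum(european.values())      # 1,345 million
share = indo_aryan_total / (indo_aryan_total + european_total)

print(indo_aryan_total, european_total, round(share * 100))  # 868 1345 39
```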

Where Did The Indo-European Languages Come From?

All Indo-European languages are derived from the Proto-Indo-European language, spoken around 3000 BCE by perhaps 10,000-20,000 people in what is now Ukraine, who probably made up several tribes with a mixed herder-farmer form of subsistence. 

Speculatively, Proto-Indo-European may have arisen from a fusion of the language of an early herder community in the region and that of an early farmer community in the region.

The Indo-European language expansion had a large demic component (i.e., it involved Indo-European people replacing or demographically swamping existing populations), although the extent of replacement varied considerably, from about 90% in the British Isles ca. 2500-2400 BCE to less than 15% in parts of Southern India (where Indo-European languages are currently not widely spoken in daily life as a first language). 

Indo-European Languages In South Asia

In India, the Indo-European demic component in places where Dravidian languages are now the predominant native languages is probably the product of a first wave of Indo-European conquest that covered almost all of the Indian subcontinent and led to the extinction of most of the then-existing Dravidian languages. This conquest was then followed by a Dravidian reconquest of most of the formerly Dravidian linguistic territory by speakers of a sole surviving Dravidian language from a small area that had escaped language shift at the hands of the Indo-Aryan conquerors. The reconquest, however, left the invaders' proto-Hindu religion mostly intact, with regional influences (it is maintained to this day in South India by members of a broad Brahmin caste, called a "varna", who have significant ancestry from those invaders). Subsequent waves of Indo-European people migrated to Northern India after this reconquest, but not to Southern India.

This explains why the last date of mass Indo-European admixture is older in Southern India than in Northern India (which is mostly Indo-European speaking); why the Dravidian language family looks so young, despite the fact that the most plausible time for it to emerge is the South Asian Neolithic Revolution ca. 2500 BCE (not all that different in age from the Proto-Indo-European language, even though the Indo-European language family is far more diverse and has far more time depth); why Indo-European ancestry is found across all of India, but in proportions that vary by location and caste; and why Indo-European-speaking Hindus are much more likely to be vegetarians than Dravidian-speaking Hindus (vegetarianism was one aspect of the invaders' religion that didn't survive the Dravidian reconquest).

Their expansion is summarized in broad brush, without some of the finer details, in the map above, which is generally accurate but subject to revision as new evidence from archaeology, ancient DNA, and historical linguistics refines it. 

Indo-European Anatolian Languages

Probably the most controversial part of the map pertains to how the Anatolian Indo-European languages (which are now extinct) relate to the other Indo-European languages. 

These languages are greatly diverged from the other Indo-European languages (and there is not much Steppe ancestry in ancient DNA from Neolithic and Bronze Age Anatolia). This has led some historical linguists to come up with contorted theories to explain what seems like a very old date of divergence of the Anatolian languages from the rest of the family, in the face of genetic evidence, ancient historical records from nearby areas, and archaeology that don't seem to fit this narrative.

For example, some scholars think that the Indo-European languages originated in Anatolia in the Neolithic era and then had a secondary expansion to almost everywhere else sometime after 3000 BCE. This well-intentioned effort to reconcile the linguistic distinctiveness of the Anatolian languages with the other evidence is wrong.

In my informed but not credentialed opinion, the Indo-European languages originated on the Steppe, and one wave of Indo-European migrants travelled to Anatolia around 2000 BCE (at a time when Indo-Europeans were rapidly expanding in all directions due to a climate-driven collapse of civilizations in Europe and India). The Anatolian languages are more distinct from the other Indo-European languages, but not because the time depth of their relationship is older. 

Instead, it is because, unlike most other places to which the Indo-European languages expanded, the local Copper Age/early Bronze Age civilization in Anatolia had not collapsed to nearly the same extent. As a result, the Anatolian languages spread more through elite dominance than demically, and they had a stronger substrate influence from the Hattic language spoken in the region before it was conquered by an Indo-European elite in the centuries following 2000 BCE, starting from a couple of modest Indo-European villages, in a historically attested process of conquest that ultimately gave rise to the Hittite Empire. Most historically attested Indo-European Anatolian languages are known only from after the Hittite language fractured when the Hittite Empire fell in the regional phenomenon known as Bronze Age Collapse, ca. 1200 BCE, in a process similar to the fragmentation of the Romance languages after the fall of the Roman Empire. Only two or three of the Anatolian languages (including Hittite) predate this fragmentation.

The Anatolian languages also seem more distinct because the substrate languages for the Indo-European languages in Europe were all part of the same Neolithic Paleo-European language family of the first farmers of Europe (who largely replaced the early European hunter-gatherers), derived from their common origins in Western Anatolia (before Anatolia experienced a language shift in the Copper Age or early Bronze Age, as invaders from the Caucasus and West Asian highlands conquered its Neolithic civilization).

Some of what looks like shared Indo-European roots in the European Indo-European languages is really the product of a shared Paleo-European linguistic substrate that is absent in the Anatolian and Tocharian languages (which are the most diverged from the other Indo-European languages, causing some linguists to assume that the greater divergence represented greater time depth).

The Tocharian Language Family

Another, less intense controversy in Indo-European historical linguistics is how the extinct Tocharian language family fits into the overall picture.

The Tocharian languages, attested in written form and spoken historically in the Tarim Basin of Central Asia, are probably the attested Indo-European language family that is most conservative with respect to Proto-Indo-European. This is because they experienced far less contact with other languages and had almost no substrate influence, as the Tocharians moved into basically unoccupied territory. In the same way and for the same reasons, Icelandic is the most conservative Germanic language, the Spanish of the American Southwest is the most conservative Spanish dialect, and the Appalachian English dialect is closest in pronunciation to Shakespearean English. 

In my own life, I've personally seen that the Korean dialect of Korean migrants to the U.S. is more conservative than that of Koreans who stayed in Korea. Languages evolve most slowly at the frontiers if they have limited language contact with other languages (something that obviously isn't true of second-generation and later Korean speakers in the U.S., of course).

Monday, December 1, 2025

The Higgs Boson Continues To Behave Like The Standard Model Higgs Boson

Searches at the Large Hadron Collider (LHC) for Higgs boson decays to charm-anticharm quark pairs are still too imprecise to determine whether the size of that branching fraction matches the Standard Model expectation. But Higgs boson decays to bottom-antibottom quark pairs, the dominant form of Standard Model Higgs boson decay, have a best-fit value that is extremely close to the Standard Model prediction, with an uncertainty of roughly ±50%.

This study breaks out the detected decays by Higgs boson production method, which, aside from serving as an internal consistency check on the robustness of the Standard Model's understanding of the Higgs boson, is of only technical interest.
A search for Standard Model (SM) Higgs bosons produced via vector-boson fusion at the Large Hadron Collider and decaying into a charm quark-antiquark pair (H→cc¯) is presented. The datasets used correspond to integrated luminosities of 37.5 fb^−1 and 51.5 fb^−1 and were collected by the ATLAS detector from proton-proton collisions at √s = 13 and 13.6 TeV, respectively. 
The observed (expected) upper limit on the H→cc¯ production cross-section times branching ratio is 41 (28) times the SM prediction at 95% confidence level. Combining this search with the previous H→cc¯ search in associated production with a W or Z boson yields an observed (expected) limit on the Higgs-charm Yukawa coupling modifier of |κc| < 4.7 (3.9). 
Higgs bosons decaying into a bottom quark-antiquark pair (H→bb¯) are measured simultaneously using the 51.5 fb^−1 dataset at √s = 13.6 TeV, with an observed signal strength of 0.97+0.57−0.50 relative to the SM expectation. When combined with previous H→bb¯ results at 13 TeV, the observed (expected) significance reaches 3.2 (3.6) standard deviations, providing evidence for H→bb¯ events from vector-boson fusion.
ATLAS Collaboration, "Search for H→cc¯ and measurement of H→bb¯ in vector-boson fusion production with the ATLAS Detector" arXiv:2511.21911 (November 26, 2025).

The latest measurement of the top quark mass by the ATLAS Collaboration at the LHC is on the low end, but not terribly precise.
The top-quark mass is measured to be m(top) = 172.17 ± 0.80(stat) ± 0.81(syst) ± 1.07(recoil) GeV, with a total uncertainty of 1.56 GeV. The third uncertainty arises from changing the dipole parton shower gluon-recoil scheme used in top-quark decays.
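As an arithmetic check (a sketch, not from the paper), the quoted total uncertainty is just the three independent components added in quadrature:

```python
import math

# Uncertainty components from the quoted top-quark mass result, in GeV
stat, syst, recoil = 0.80, 0.81, 1.07

# Independent uncertainties combine in quadrature
total = math.sqrt(stat**2 + syst**2 + recoil**2)

print(round(total, 2))  # 1.56, matching the quoted total of 1.56 GeV
```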

Sunday, November 30, 2025

Derived Properties In Particle Physics

It is customary to assume more properties of gauge theories than are necessary to produce all of their properties. Some of those assumed properties can instead be derived, including CP invariance.
We revisit the emergence of a Yang-Mills symmetry in theories with massless spin 1 particles from fundamental physical properties of scattering amplitudes. In the standard proofs, some symmetry and reality properties of the coupling constants in three-point amplitudes are assumed. These properties cannot be justified using only three-point amplitudes but we show that they arise as consequences of the consistent factorization of four-particle amplitudes, for particular choices of the particle basis. This applies to self-interactions of massless spin 1 particles and also to their interactions with spin 0 and 1/2 particles. CP invariance is a derived property, not an additional assumption. The situation for gravity interactions is analogous and it is dealt with in the same fashion.
Renato M. Fonseca, Clara Hernandez-Garcia, Javier M. Lizana, Manuel Perez-Victoria, "Gauge theories from scattering amplitudes with minimal assumptions" arXiv:2511.21664 (November 26, 2025).
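To sketch the kind of consistency condition involved (a standard amplitudes result, stated here for context rather than taken from this paper): consistent factorization of the four-point amplitude on its poles forces the three-point couplings f^abc of massless spin-1 particles to be totally antisymmetric and to satisfy the Jacobi identity,

```latex
f^{abd} f^{dce} + f^{bcd} f^{dae} + f^{cad} f^{dbe} = 0,
```

which is exactly the condition for the f^abc to be the structure constants of a Lie algebra, i.e., for a Yang-Mills gauge symmetry to emerge.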

Wednesday, November 26, 2025

Two Tully-Fisher Relations Linked


The Baryonic Tully-Fisher relation (BTFR) links the baryonic mass of galaxies to their characteristic rotational velocity and has been shown to hold with remarkable precision across a wide mass range. 
Recent studies, however, indicate that galaxy clusters occupy a parallel but offset relation, raising questions about the universality of the BTFR. 
Here, we demonstrate that the offset between galaxies and clusters arises naturally from cosmic time evolution. Using the evolving BTFR derived from the Nexus Paradigm of quantum gravity, we show that the normalization of the relation evolves as an exponential function of cosmic time, while the slope remains fixed at ∼4. This provides a simple and predictive framework in which both galaxies and clusters obey the same universal scaling law, with their apparent offset reflecting their different formation epochs. Our results unify mass-velocity scaling across five orders of magnitude in baryonic mass, offering new insights into cosmic structure formation.
Stuart Marongwe, Stuart Kauffman, "The Evolving Baryonic Tully Fisher Relation: A Universal Law from Galaxies to Galactic Clusters" arXiv:2511.20188 (November 25, 2025).

There is a tight link between the amount of ordinary matter in a galaxy and its rotation speed over many orders of magnitude. This empirical relationship arises naturally from the phenomenological gravity modification known as MOND, without dark matter.
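To make the scaling concrete (a standard MOND result, not specific to this paper): in the deep-MOND regime, where accelerations fall well below Milgrom's constant a₀, the predicted asymptotic flat rotation velocity satisfies

```latex
v_{\mathrm{flat}}^{4} = G \, M_{b} \, a_{0}, \qquad a_{0} \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}},
```

so baryonic mass scales as the fourth power of rotation speed, the fixed slope of ~4 referenced in the abstract.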

The same relationship holds true for galaxy clusters, but it is shifted from the relationship for galaxies.

The authors propose a theory that would unify both of these relationships. It works, but it isn't terribly convincing: there are a multitude of ways that galaxies and galaxy clusters differ that could give rise to the observed shift in the relationship.

Closely related issues are discussed at the latest post at Triton Station.

JUNO Hype And Reality

A new neutrino physics experiment, the Jiangmen Underground Neutrino Observatory (JUNO), published a preprint with new measurements of neutrino oscillation constants. The new equipment works to high precision and will help fine-tune the exact values of some of the least precisely known experimentally measured parameters in the Standard Model of Particle Physics. 

This is interesting to people who follow particle physics closely. It is also scientifically important. But honestly, it isn't that interesting to the average person with only a general interest in science.

But Rory Harris at Live Science, in a fit of yellow journalism in the science world, writes a story containing all sorts of nonsense about JUNO revealing beyond the Standard Model physics, as well as the usual misleading blather about CP violation experiments answering questions about the baryon asymmetry of the universe (which this experiment does not do).

Thursday, November 20, 2025

From Quarks To Chemistry

Protons, neutrons, and hundreds of other much less stable hadrons (i.e., systems of quarks and/or gluons bound by the strong force) are understood quite well within the Standard Model of Particle Physics, although there are challenges in understanding scalar mesons, axial vector mesons, and hadrons with four or more quarks, as well as in distinguishing true hadrons with four or more quarks from "hadron molecules", and in predicting the full spectrum of hadrons from first principles.

Protons and neutrons in atomic nuclei are not bound together primarily by the strong force itself. Instead, the nuclear binding force between protons and neutrons in an atomic nucleus is the sum of the forces arising from the exchange of several kinds of light mesons (primarily pions but also other light mesons including kaons).

We are not quite there yet in terms of using Standard Model physics to explain the physics and chemistry of atomic nuclei, although we are getting closer to achieving this vertical integration of subatomic and atomic scale phenomena, and we are making great progress on this front. Part of the holdup is the challenge of explaining "parton distribution functions" (PDFs), a property of hadrons that, in principle, can be worked out from first principles with Standard Model physics. But until the past few years, PDFs have been determined almost entirely by brute-force collection and organization of raw particle accelerator data.
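For context, a PDF f_i(x, Q²) describes the probability of finding a parton of species i carrying a fraction x of its parent hadron's momentum at resolution scale Q. Any first-principles determination must respect exact constraints such as the momentum sum rule,

```latex
\sum_{i} \int_{0}^{1} x \, f_{i}(x, Q^{2}) \, dx = 1,
```

which states that the momentum fractions of all the quarks, antiquarks, and gluons add up to the hadron's total momentum.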

We also mostly understand the way electrons interact with atomic nuclei, which is almost entirely an electromagnetic, quantum electrodynamics (QED) phenomenon.

The next layer above understanding atoms is chemistry, which pertains mostly to how atoms interact with each other, much of which ultimately flows from the physics of atomic nuclei.
We extend the QCD Parton Model analysis by employing a factorized nuclear structure model that explicitly accounts for both individual nucleons and correlated nucleon pairs. This novel framework establishes a paradigm that directly links the nuclear physics description of matter (in terms of protons and neutrons) to the particle physics schema (in terms of quarks and gluons). 
Our analysis of high-energy data from lepton Deep-Inelastic Scattering, Drell-Yan, and W/Z production simultaneously extracts the universal effective distribution of quarks and gluons inside correlated nucleon pairs, and their nucleus-specific fractions. 
The successful extraction of these universal distributions marks a significant advance in our understanding of nuclear structure, as it directly connects nucleon-level and parton-level quantities.
Fredrick Olness, "Bridging the Gap: Connecting Atomic Nuclei to Their Quantum Foundations" arXiv:2511.15659 (November 19, 2025) ("Talk presented at the 32nd International Workshop on Deep Inelastic Scattering and Related Subjects (DIS 2025), Capetown, South Africa, 24-28 March 2025. To appear in the proceedings").

MOND From Loop Quantum Gravity

Another "Fundamond" (i.e. fundamental theory to explain MOND phenomenology) proposal tweaked to incorporate quantum gravity considerations. Spin connection foams are a subset of the loop quantum gravity program that seeks to quantize space-time.
Building upon previous work that derived an alternative to (galactic) dark matter in the form of Modified Newtonian Dynamics (MOND), with a specific theoretical interpolating function, from the motion of a non-relativistic test particle in the gravitational field of a point mass immersed in the non-relativistic static limit of the spin connection foam -- which represents the quantum analogue of Minkowski spacetime within precanonical quantum gravity -- we now show the consequences of using higher moments (third and fourth) of the corresponding geodesic equation with a random spin connection term. 
These higher moments lead to more general quantum modifications of the Newtonian potential (qMOND potentials expressed in terms of Gauss and Appell hypergeometric functions), more general (steeper) MOND interpolating functions, and a new modification of MOND at low accelerations (mMOND) that features an almost-flat asymptotic rotation curve ∝ r^(−1/18), which is expected to operate at approximately the same galactic scales as MOND.
M.E. Pietrzyk, V.A. Kholodnyi, I.V. Kanattšikov, J. Kozicki, "Modifications of Newtonian dynamics from higher moments of quantum spin connection in precanonical quantum gravity" arXiv:2511.15025 (November 18, 2025) ("To appear in the Special Issue "My Favourite Dark Matter Model'' of MPLA").

Tuesday, November 18, 2025

Inflation Without Inflaton

I'm agnostic but skeptical about cosmological inflation. This preprint (which is part of a series of articles) stakes out a middle ground between conventional cosmological inflation theory and a true no-inflation theory.

Rather than a new substance/particle, it relies upon the quantum nature of space-time itself for its conclusions.
We present a complete computation of the scalar power spectrum in the inflation without inflaton (IWI) framework, where the inflationary expansion is driven solely by a de~Sitter (dS) background and scalar fluctuations arise as second-order effects sourced by tensor perturbations. By explicitly deriving and numerically integrating the full second-order kernel of the Einstein equations, we obtain a scale-invariant scalar spectrum without invoking a fundamental scalar field. 
In this framework, the amplitude of the scalar fluctuations is directly linked to the scale of inflation. More precisely, we show that matching the observed level of scalar fluctuations, Δ2ϕ(k∗) ≈ 10^−9 at Cosmic Microwave Background (CMB) scales, fixes the inflationary energy scale H(inf) as a function of the number of observed e-folds N(obs). 
For N(obs) ≃ 30 − 60, we find H(inf) ≃ 5 × 10^13 GeV − 2 × 10^10 GeV, corresponding to a tensor-to-scalar ratio r ≃ 0.01 − 5 × 10^−9. In particular, requiring consistency with instantaneous reheating, we predict a number of e-folds of order 50 and an inflationary scale H(inf) ≃ 10^11 GeV. We also incorporate in our framework the quantum break-time of the dS state and show that it imposes an upper bound on the number of particle species. Specifically, using laboratory constraints on the number of species limits the duration of inflation to N(obs) ≲ 126 e-folds. 
These results establish the IWI scenario as a predictive and falsifiable alternative to standard inflaton-driven models, linking the observed amplitude of primordial fluctuations directly to the quantum nature and finite lifetime of dS space.
Marisol Traforetti, et al., "Inflation without an Inflaton II: observational predictions" arXiv:2511.11808 (November 14, 2025).

Tuesday, November 11, 2025

C.N. Yang Dies At Age 103

Theoretical physicist C.N. Yang has died at the age of 103 years. 

He is the Yang in Yang-Mills theory, which he devised with Robert Mills in 1954. It is a generic class of quantum field theories used by scientists to compute the probability amplitudes that are foundational in all Standard Model processes and most quantum gravity theories.

He also shared the 1957 Nobel Prize in Physics with T.D. Lee for their work on parity violation.

The Case Against The External Field Effect And A Relativistic MOND Theory

A new paper provides a possible explanation for observational evidence of a MOND-like external field effect, without definitively ruling it out. Exactly five years ago today, I made a post about the paper that is now being re-examined.

We examine the claimed observations of a gravitational external field effect (EFE) reported in Chae et al. 
We show that observations suggestive of the EFE can be interpreted without violating Einstein's equivalence principle, namely from known correlations between morphology, environment and dynamics of galaxies. 
While Chae et al.'s analysis provides a valuable attempt at a clear test of Modified Newtonian Dynamics, an evidently important topic, a re-analysis of the observational data does not permit us to confidently assess the presence of an EFE or to distinguish this interpretation from that proposed in this article.
Corey Sargent, William Clark, Antonia Seifert, Alicia Mand, Emerson Rogers, Adam Lane, Alexandre Deur, Balša Terzić, "On the Evidence for Violation of the Equivalence Principle in Disk Galaxies" arXiv:2511.03839 (November 5, 2025) (published in 8 Particles 65 (2025)).

Another promising MOND-related preprint, which uses entropy and temperature (which is intimately associated with entropy) to devise a relativistic gravitational theory that reproduces MOND phenomenology, was also released today:
We derive a relativistic extension of Modified Newtonian Dynamics (MOND) within the framework of entropic gravity by introducing temperature-dependent corrections to the equipartition law on a holographic screen. 
Starting from a Debye-like modification of the surface degrees of freedom and employing the Unruh relation between acceleration and temperature, we obtain modified Einstein equations in which the geometric sector acquires explicit thermal corrections. Solving these equations for a static, spherically symmetric spacetime in the weak-field, low-temperature regime yields a corrected metric that smoothly approaches Minkowski space at large radii and naturally contains a characteristic acceleration scale.
In the very-low-acceleration regime, the model reproduces MOND-like deviations from Newtonian dynamics while providing a relativistic underpinning for that phenomenology. We confront the theory with rotation-curve data for NGC 3198 and perform a Bayesian parameter inference, comparing our relativistic MOND (RMOND) model with both a baryons-only Newtonian model and a dark-matter halo model. We find that RMOND and the dark-matter model both fit the data significantly better than the baryons-only Newtonian prediction, and that RMOND provides particularly improved agreement at r ≳ 20 kpc. These results suggest that temperature-corrected entropic gravity provides a viable relativistic framework for MOND phenomenology, motivating further observational tests, including gravitational lensing and extended galaxy samples.
A. Rostami, K. Rezazadeh, M. Rostampour, "Relativistic MOND Theory from Modified Entropic Gravity" arXiv:2511.05632 (November 7, 2025).

Thursday, November 6, 2025

Why Does Cosmology Give Us A Negative Neutrino Mass As A Best Fit Value?

The apparent preference for a best fit value of the neutrino masses from cosmology measurements is probably a matter of some fine methodological adjustments that weren't made for gravitational lensing.
Recent analyses combining cosmic microwave background (CMB) and baryon acoustic oscillation (BAO) challenge particle physics constraints on the total neutrino mass, pointing to values smaller than the lower limit from neutrino oscillation experiments. To examine the impact of different CMB likelihoods from Planck, lensing potential measurements from Planck and ACT, and BAO data from DESI, we introduce an effective neutrino mass parameter (∑m̃ν) which is allowed to take negative values. 
We investigate its correlation with two extra parameters capturing the impact of gravitational lensing on the CMB: one controlling the smoothing of the peaks of the temperature and polarization power spectra; one rescaling the lensing potential amplitude. In this configuration, we infer ∑m̃ν = −0.018+0.085−0.089 eV (68% C.L.), which is fully consistent with the minimal value required by neutrino oscillation experiments. 
We attribute the apparent preference for negative neutrino masses to an excess of gravitational lensing detected by late-time cosmological probes compared to that inferred from Planck CMB angular power spectra. We discuss implications in light of the DESI BAO measurements and the CMB lensing anomaly.
Andrea Cozzumbo, et al., "A short blanket for cosmology: the CMB lensing anomaly behind the preference for a negative neutrino mass" arXiv:2511.01967 (November 3, 2025).

A Dark Energy Alternative

There are multiple possible alternatives to a cosmological constant. This is one of the better attempts.
In our local-to-global cosmological framework, cosmic acceleration arises from local dynamics in an inhomogeneous Einstein-de Sitter (iEdS) universe without invoking dark energy. 
An iEdS universe follows a quasilinear coasting evolution from an Einstein-de Sitter to a Milne state, as an effective negative curvature emerges from growing inhomogeneities without breaking spatial flatness. Acceleration can arise from structure formation amplifying this effect. 
We test two realizations, iEdS(1) and iEdS(2), with H(0) = {70.24,74.00} km s^−1 Mpc^−1 and Ω(m,0) = {0.290,0.261}, against CMB, BAO, and SN Ia data. 
iEdS(1) fits better than ΛCDM and alleviates the H0 tension, whereas iEdS(2) fully resolves it while remaining broadly consistent with the data. Both models yield t0≃13.64 Gyr, consistent with globular-cluster estimates.
Peter Raffai, et al., "A Case for an Inhomogeneous Einstein-de Sitter Universe" arXiv:2511.03288 (November 5, 2025).

Monday, October 27, 2025

A New 200,000 Year Old Denisovan Genome

Bernard's blog does a good job of reviewing the recent publication of a 200,000 year old Denisovan genome. 

This Denisovan's life predates the emergence of modern humans from Africa, but overlaps with the existence of the earliest modern humans within Africa.

In a nutshell, the preprint is: Stéphane Peyrégne, et al., "A high-coverage genome from a 200,000-year-old Denisovan" bioRxiv (October 20, 2025). According to Bernard (via Google translate from French):
They sequenced the genome of molar Denisova 25. Initial results showed that the individual was male. Furthermore, the mitochondrial and Y chromosome haplogroups both belong to the Denisovan population.

Friday, October 24, 2025

The Latest Neutrino Oscillation Parameters

Background

Data from W and Z boson decays and from cosmology measurements strongly favor a model with exactly three active neutrino flavors, as does the mathematical consistency of the Standard Model of Particle Physics, which requires that each generation of Standard Model fermions (i.e. an up-like quark, a down-like quark, an electron-like charged lepton, and a neutrino) be complete with four members. The lower bound on a fourth active neutrino mass is on the order of 45,000,000,000,000 meV, while we know that none of the other three active neutrino masses are more than about 900 meV, and we have strong indications that the largest of the three active neutrino masses is no more than about 70 meV.

When the Standard Model was first formulated in the 1970s, neutrinos were assumed to be massless fermions. Experiments proved in 1998 that this couldn't be the case, and that neutrinos change flavors as they oscillate among their three mass eigenstates. Since then, scientists have worked to determine their masses and the additional properties arising from the fact that they have mass, work that has culminated in a basic three flavor, Dirac neutrino model of neutrino oscillation to which the data has been fit.

This neutrino oscillation behavior is characterized by two mass differences, Δm(21) and Δm(32); by whether the masses are in a "normal" or "inverted" hierarchy; and by four parameters of the PMNS matrix, three of which describe the probability of each of the possible transitions between the three neutrino flavors, and one of which, δCP, describes charge parity violation (i.e. how those transition probabilities differ between neutrinos and antineutrinos).  

The two mass differences have been measured to decent precision. A normal mass hierarchy is favored by the experimental data, but not to terribly great statistical significance (the preference is close to two sigma). The three main mixing angles of the PMNS matrix have been measured to reasonable but modest precision, although it isn't entirely established whether one of them is a bit less than 45º or a bit more than 45º (the data increasingly favors a value a bit more than 45º). Attempts to measure δCP are very imprecise and generally can't entirely rule out the possibility that there is no CP violation in neutrino oscillation, but the best fit values of measurements of δCP consistently favor near maximal CP violation in neutrino oscillations.

The world average measured values of those parameters are as follows (according to the Particle Data Group):


In addition, to fully characterize the properties of neutrinos in the basic three flavor, Dirac neutrino model to which experimental data is fitted, one needs to know the absolute rest mass of at least one of the neutrino mass eigenstates. The experimental upper bound on the mass of the lightest neutrino mass eigenstate is about 800 meV. The experimental lower bound on the sum of the three neutrino mass eigenstates is on the order of 58 meV for a "normal" hierarchy of neutrino masses, and 110 meV for an "inverted" hierarchy. Reasonably robust upper bounds from cosmology models and astronomy measurements put the sum of the three neutrino masses at around 130 meV or less, with some measurements, resting on more aggressive theoretical assumptions, pushing it below the 110 meV floor required for an inverted hierarchy of neutrino masses.
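The lower bounds quoted above follow directly from the measured mass splittings. A minimal sketch of that arithmetic, using approximate global-fit splitting values (assumed here for illustration, not a precise PDG quote):

```python
import math

# Approximate oscillation mass splittings (illustrative global-fit-scale
# values, not a precise PDG quote):
dm21_sq = 7.5e-5   # eV^2, solar splitting
dm32_sq = 2.45e-3  # eV^2, magnitude of the atmospheric splitting

# Normal hierarchy: the lightest state m1 is taken to be massless
m1 = 0.0
m2 = math.sqrt(dm21_sq)
m3 = math.sqrt(dm32_sq + dm21_sq)
sum_normal = (m1 + m2 + m3) * 1000  # meV

# Inverted hierarchy: the lightest state m3 is taken to be massless
m3i = 0.0
m1i = math.sqrt(dm32_sq)
m2i = math.sqrt(dm32_sq + dm21_sq)
sum_inverted = (m3i + m1i + m2i) * 1000  # meV

print(f"minimum sum, normal hierarchy:   {sum_normal:.0f} meV")
print(f"minimum sum, inverted hierarchy: {sum_inverted:.0f} meV")
```

With these inputs the normal-hierarchy minimum comes out near 59 meV and the inverted-hierarchy minimum near 100 meV; the exact figures quoted in the literature shift slightly with the splitting values and sign conventions used.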

The New Paper

A new paper combines the latest data from two major neutrino physics collaborations (NOvA and T2K) to tighten up the precision of measurements of the Δm(32) and δCP parameters of neutrino oscillations. This is hard to do with a single collaboration's data, because the observables in each experiment depend upon more than one parameter, making it hard to tell which parameter is driving those observables. But since the mix of parameters driving the observables differs between the two experiments (in part by design, to allow just this kind of combined analysis), combining the two collaborations' data minimizes these degeneracies.

The new paper below improves the precision of the measurement of Δm(32) a bit, and also makes for a still very imprecise, but improved, measurement of δCP. 

The new combined measurement of Δm(32) is at the very low end of the current world average's plus or minus two sigma range.

The new paper rules out the possibility of zero CP violation in neutrino mixing at the 3 sigma level for an inverted neutrino mass hierarchy assumption, and at a roughly 2.4 sigma level of significance for a normal neutrino mass hierarchy assumption.
The landmark discovery that neutrinos have mass and can change type (or "flavor") as they propagate -- a process called neutrino oscillation -- has opened up a rich array of theoretical and experimental questions being actively pursued today. 
Neutrino oscillation remains the most powerful experimental tool for addressing many of these questions, including whether neutrinos violate charge-parity (CP) symmetry, which has possible connections to the unexplained preponderance of matter over antimatter in the universe. Oscillation measurements also probe the mass-squared differences between the different neutrino mass states (Δm^2), whether there are two light states and a heavier one (normal ordering) or vice versa (inverted ordering), and the structure of neutrino mass and flavor mixing. 
Here, we carry out the first joint analysis of data sets from NOvA and T2K, the two currently operating long-baseline neutrino oscillation experiments (hundreds of kilometers of neutrino travel distance), taking advantage of our complementary experimental designs and setting new constraints on several neutrino sector parameters. 
This analysis provides new precision on the Δm(32)^2 mass difference, finding 2.43+0.04−0.03 (−2.48+0.03−0.04) × 10^−3 eV^2 in the normal (inverted) ordering, as well as a 3σ interval on δCP of [−1.38π, 0.30π] ([−0.92π, −0.04π]) in the normal (inverted) ordering. The data show no strong preference for either mass ordering, but notably if inverted ordering were assumed true within the three-flavor mixing paradigm, then our results would provide evidence of CP symmetry violation in the lepton sector.
NOvA, T2K Collaborations, "Joint neutrino oscillation analysis from the T2K and NOvA experiments" arXiv:2510.19888 (October 22, 2025).

Further Neutrino Property Issues 

Additional parameters are needed if neutrinos are actually Majorana particles (i.e. if they are their own antiparticles), or if they oscillate with a "sterile" right handed neutrino that interacts only via neutrino oscillation (and not via the electromagnetic, weak, or strong forces of the Standard Model), which is often proposed as a way for neutrinos to acquire mass in what is called a see-saw mechanism. 

Most physicists believe that one of these two possibilities must supply the mechanism for mass generation in neutrinos. The ordinary Higgs mechanism is not a good fit for neutrinos, since all observed neutrinos are left handed and all observed antineutrinos are right handed, unlike the other Standard Model fermions, which come in both left and right handed versions of both their particles and their antiparticles, making four states possible.

The most definitive phenomenological signature of Majorana neutrinos would be neutrinoless double beta decay, which has not been observed despite high-precision searches. But the rate of neutrinoless double beta decay involving Majorana neutrinos is a function of their absolute mass scale: it is rarer to the extent that neutrinos are less massive. Current bounds on neutrinoless double beta decay are not so strong that Majorana neutrinos can be ruled out for reasonable neutrino mass scales, although it's getting close to that point. Majorana neutrinos would also have a more complicated oscillation behavior, involving more mixing parameters than the Dirac neutrino model.

A Dirac neutrino model with a see-saw mechanism involving oscillation with a sterile neutrino would imply that the transition probabilities of the PMNS matrix parameters wouldn't be unitary. In other words, the probabilities of all the three flavor model transitions wouldn't add up to 100%, because some small percentage of neutrino oscillations would be to one or more sterile neutrino flavors. So far, the observed PMNS matrix parameters are consistent with unitarity. But since the measured parameters each have uncertainties, there is room within those uncertainties for transitions to an additional sterile neutrino flavor (or even to multiple sterile neutrino flavors). The upper bound on missing neutrino transition probabilities is quite low, however, and to make a see-saw mechanism work with such small experimentally allowed transition probabilities, the mass of a hypothetical sterile neutrino would have to be very high.
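The unitarity condition described above can be illustrated with the standard three-flavor PMNS parameterization, which is unitary by construction. The mixing angle and phase values below are approximate, illustrative inputs rather than a precise global-fit quote:

```python
import numpy as np

# Illustrative mixing angles (degrees) and CP phase, roughly at the
# global-fit scale; assumed for demonstration, not a precise quote.
t12, t23, t13 = np.radians([33.4, 49.0, 8.6])
dcp = np.radians(200.0)

s12, c12 = np.sin(t12), np.cos(t12)
s23, c23 = np.sin(t23), np.cos(t23)
s13, c13 = np.sin(t13), np.cos(t13)
e = np.exp(-1j * dcp)  # e^{-i delta_CP}; note 1/e = e^{+i delta_CP}

# Standard three-flavor PMNS parameterization
U = np.array([
    [c12 * c13,                        s12 * c13,                        s13 * e],
    [-s12 * c23 - c12 * s23 * s13 / e,  c12 * c23 - s12 * s23 * s13 / e,  s23 * c13],
    [s12 * s23 - c12 * c23 * s13 / e,  -c12 * s23 - s12 * c23 * s13 / e,  c23 * c13],
])

# Unitarity: each row of |U|^2 sums to 1, i.e. the transition
# probabilities from each flavor exhaust the three active states.
row_sums = np.sum(np.abs(U) ** 2, axis=1)
print(row_sums)  # each entry is 1 to machine precision
```

Any statistically significant shortfall of a measured row sum below 1 in real data would signal transitions to one or more sterile flavors.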

For the record, I don't like either solution and think that we need to find a "third way" mechanism for generating neutrino mass, although I don't know what it would be.

Thursday, October 23, 2025

Because Deur Is Awesome, Even At His Day Job

Alexandre Deur's side hustle, described in the sidebar link, is his work on a gravitational explanation for dark matter and dark energy phenomena, which would solve several of the greatest unsolved problems in physics.

His day job is as a QCD physicist at Jefferson Lab, a U.S. Department of Energy particle physics facility in Newport News, Virginia. There, his progress in determining the value of the least accurately known Standard Model coupling constant, and confirming that its running with energy scale is consistent with the Standard Model, is also a good thing. 

Unsurprisingly, the research done by him and his colleagues confirms that the experimentally determined running of the strong force coupling constant of quantum chromodynamics matches the Standard Model prediction over a huge range of energy scales (from hundreds of MeV to about 14,000,000 MeV).

The strong force coupling constant is usually quoted with values converted using the beta-function that describes its running with energy scale in the Standard Model to the Z-boson mass of 91.188 ± 0.002 GeV (according to the Particle Data Group, inverse error weighted world average measurement). Its world average value converted to that energy scale is 0.1180(9).
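As a rough illustration of how a measurement at one scale is converted to the Z-boson mass scale, here is a minimal one-loop running sketch (real determinations use higher-loop beta functions and flavor thresholds, which this deliberately omits):

```python
import math

def alpha_s_one_loop(q_gev, alpha_mz=0.1180, mz=91.188, nf=5):
    """One-loop QCD running of the strong coupling between the Z mass
    and a scale Q (in GeV). Ignores flavor thresholds and higher-loop
    corrections, so this is only a rough sketch of the conversion."""
    b0 = 11.0 - 2.0 * nf / 3.0  # one-loop beta coefficient
    inv = 1.0 / alpha_mz + (b0 / (4.0 * math.pi)) * math.log(q_gev**2 / mz**2)
    return 1.0 / inv

print(f"alpha_s(M_Z)    ~ {alpha_s_one_loop(91.188):.4f}")
print(f"alpha_s(10 GeV) ~ {alpha_s_one_loop(10.0):.3f}")
```

Even at one loop this shows the qualitative behavior: the coupling grows as the energy scale falls toward the infrared, which is why low energy determinations like Deur's are so sensitive to non-perturbative effects.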

The numerical values shown below are in a normalized scale, so the numerical value doesn't match the familiar number.

We discuss how the Bjorken sum rule allows access to the QCD running coupling αs at any scale, including in the deep infrared (IR) domain. The Bjorken sum data from Jefferson Lab, together with the world data on αs reported by the Particle Data Group, allow us to determine the running of α(s)(Q) over five orders of magnitude in four-momentum Q. We present two possible future measurements of the running of α(s)(Q) using the Bjorken sum rule: the first at the EIC, covering the range 1.5 < Q < 8.7 GeV, and the second at Jefferson Lab at 22 GeV, covering the range 1.0 < Q < 4.7 GeV.
A. Deur, "The strong coupling from the IR to the UV extremes: Determination of α(s) and prospects from EIC and JLab at 22 GeV" arXiv:2510.19556 (October 22, 2025) (Contribution to the proceedings of the "QCD at the Extremes" workshop, Sept. 1-11 2025).

The paper above discusses how proposed low energy experiments at the electron-ion collider at the Brookhaven Lab on Long Island, New York (EIC) and at JLab would greatly reduce uncertainties in the measurement of the strong force coupling constant at low energies (i.e. below 5,000 MeV), as shown by the chart below.

Tuesday, October 21, 2025

The Hunter-Gatherer To Bronze Age Transition In Kazakhstan

Bernard discusses a paper on ancient DNA from Kazakhstan. It revealed that the hunter-gatherer population that persisted there until the late Neolithic era was roughly 95% replaced by Bronze Age early Indo-European herders genetically similar to the Sintashta and Andronovo cultures. The paper is Haechan Gill, et al., "Ancient genomes from eastern Kazakhstan reveal dynamic genetic legacy of Inner Eurasian hunter-gatherers" (2025).

The paper also has many other secondary insights.


The samples from the current study are in yellow, with the Bronze Age samples on the left, in the MLBA clines, and the Neolithic samples in the Steppe HG cline on the right.

A Search For X17 Comes Up Empty And Assorted Astrophysics Papers

Today's preprint harvest was abundant and I have a little time to blog this morning.

An X17 paper

BESIII searched for an X17 boson and didn't find it. 

We report a direct search for a new gauge boson, X, with a mass of 17 MeV/c^2, which could explain the anomalous excess of e+e− pairs observed in the 8Be nuclear transitions. The search is conducted in the charmonium decay χcJ→XJ/ψ (J = 0,1,2) via the radiative transition ψ(3686)→γχcJ using (2712.4 ± 14.3) × 10^6 ψ(3686) events collected with the BESIII detector at the BEPCII collider. No significant signal is observed, and the new upper limit on the coupling strength of charm quark and the new gauge boson, ϵc, at 17 MeV/c^2 is set to be |ϵc| < 1.2 × 10^−2 at 90% confidence level. We also report new constraints on the mixing strength ϵ between the Standard Model photon and dark photon γ′ in the mass range from 5 MeV/c^2 to 300 MeV/c^2. The upper limits at 90% confidence level vary within (2.5−17.5) × 10^−3 depending on the γ′ mass.
BESIII Collaboration, "Search for a hypothetical gauge boson and dark photons in charmonium transitions" arXiv:2510.16531 (October 18, 2025).

Four astrophysics papers

There were several MOND or MOND-adjacent papers in today's preprints that I don't really have time to discuss at great length.

Stacy McGaugh, one of the leading members of the current generation of MOND researchers, and his colleagues look at patterns in the ordinary matter mass vs. size relationship for galaxies in a large data set:
The mass-size relations of galaxies are generally studied considering only stars or only gas separately. Here we study the baryonic mass-size relation of galaxies from the SPARC database, using the total baryonic mass (Mbar) and the baryonic half-mass radius (R50,bar). We find that SPARC galaxies define two distinct sequences in the Mbar−R50,bar plane: one formed by high-surface-density (HSD), star-dominated, Sa-to-Sc galaxies, and one by low-surface-density (LSD), gas-dominated, Sd-to-dI galaxies. The Mbar−R50,bar relation of LSD galaxies has a slope close to 2, pointing to a constant average surface density, whereas that of HSD galaxies has a slope close to 1, indicating that less massive spirals are progressively more compact. 
Our results point to the existence of two types of star-forming galaxies that follow different evolutionary paths: HSD disks are very efficient in converting gas into stars, perhaps thanks to the efficient formation of non-axisymmetric structures (bars and spiral arms), whereas LSD disks are not. 
The HSD-LSD dichotomy is absent in the baryonic Tully-Fisher relation (Mbar versus flat circular velocity Vf) but moderately seen in the angular-momentum relation (approximately Mbar versus Vf×R50,bar), so it is driven by variations in R50,bar at fixed Mbar. This fact suggests that the baryonic mass-size relation is the most effective empirical tool to distinguish different galaxy types and study their evolution.

Zichen Hua, Federico Lelli, Enrico Di Teodoro, Stacy McGaugh, James Schombert, "The baryonic mass-size relation of galaxies. I. A dichotomy in star-forming galaxy disks" arXiv:2510.17770  (October 20, 2025) (accepted by Astronomy & Astrophysics).
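The slope interpretation in the abstract can be checked with a toy calculation: if galaxies share a constant mean baryonic surface density, Σ = Mbar/(πR²), then log Mbar vs. log R50,bar necessarily has a slope of 2. A synthetic, purely illustrative demonstration (no SPARC values are used):

```python
import numpy as np

# Toy check of the slope interpretation: a population of galaxies with
# a shared constant surface density Sigma = M / (pi R^2) must fall on
# a log-log mass-size relation with slope exactly 2.
rng = np.random.default_rng(0)
sigma = 50.0  # Msun / pc^2, an arbitrary constant surface density
R = 10 ** rng.uniform(2.5, 4.0, 200)  # synthetic half-mass radii, pc
M = sigma * np.pi * R**2              # masses at constant density

slope, intercept = np.polyfit(np.log10(R), np.log10(M), 1)
print(f"fitted slope: {slope:.2f}")  # -> 2.00 for constant density
```

A slope of 1 (the HSD sequence) instead means M ∝ R, so surface density rises as M falls, which is the "less massive spirals are progressively more compact" statement in the abstract.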

The creator of MOND muses in a public lecture about what a fundamental theory explaining MOND (a FUNDAMOND) has to look like:

In default of a fundamental MOND theory -- a FUNDAMOND -- I advocate that, alongside searching for one, we should try to identify predictions that follow from wide classes of MOND theories, if not necessarily from all. In particular, predictions that follow from only the basic tenets of MOND -- "primary predictions" -- are shared by all MOND theories, and are especially valuable. Such predictions permit us to test the MOND paradigm itself, or at least large parts of it, without yet having a FUNDAMOND. 
Concentrating on the deep-MOND limit, I discuss examples of either type of predictions. 
For some examples of primary predictions, I demonstrate how they follow from the basic tenets (which I first formulate). I emphasize that even predictions that pertain to the deep-MOND limit - namely, those that concern gravitating systems that have low accelerations everywhere -- require the full set of MOND tenets, including the existence of a Newtonian limit close to the deep-MOND regime. This is because Newtonian dynamics is a unique theory that all MOND theories must tend to in the limit of high accelerations, and it strongly constrains aspects of the deep-MOND regime, if the transition between the limits is fast enough, which is one of the MOND tenets.

Mordehai Milgrom, "The deep-MOND limit -- a study in Primary vs secondary predictions" arXiv:2510.16520 (a talk presented at the MOND workshop, Leiden, September 2025) (October 18, 2025).

The paper by Scholz below is an attempt to devise a "FUNDAMOND":
Under carefully chosen assumptions a single general relativistic scalar field is able to induce MOND-like dynamics in the weak field approximation of the Einstein frame (gauge) and to modify the light cone structure accordingly. 
This is shown by a Lagrangian model formulated in the framework of integrable Weyl geometry. It contains a Bekenstein-type ("aquadratic") term and a second order term generating additional mass energy for the scalar field. Both are switched on only if the gradient of the scalar field is spacelike and below a MOND-typical threshold, like in the superfluid model of Berezhiani/Khoury. The mass term induces non-negligible energy and pressures of the scalar field and leads to gravitational light deflection compatible with MOND-ian free fall trajectories. In the weak field (Newton-Milgrom) approximation the Bekenstein term implies a deep MOND equation for the scalar field. In this model the external field effect of the MOND approach has to be reconsidered. This has important consequences for hierarchical systems like clusters, which may suffice for explaining their dynamics without additional dark matter.
Erhard Scholz, "Einstein gravity extended by a scale covariant scalar field with Bekenstein term and dynamical mass generation" arXiv:2510.17704 (October 20, 2025).

Finally, a notable dark matter search paper rules out a significant swath of dark matter particle parameter space that most people assumed was never open in the first place (heavy charged dark matter):
There is a claim in the literature that charged dark matter particles in the mass range 100(qX/e)^2 TeV≤mX≤10^8(qX/e) TeV are allowed, based on arguing that heavy charged particles cannot reach the Earth from outside the magnetized region of the Milky Way (Chuzhoy-Kolb, 2009). We point out that this claim fails for physical models for the Galactic magnetic field. We explicitly confirm our argument by simulating with the software CRPropa the trajectories of heavy charged dark matter in models of the Galactic magnetic field.
Daniele Perri, Glennys Farrar, "The window on heavy charged dark matter was never open" arXiv:2510.17026 (October 19, 2025).

Thursday, October 16, 2025

The Population Genetics Of Egypt Have Been Stable For A Long Time

An ancient DNA sample from ca. 2500 BCE in Egypt reveals a great deal of continuity between the population genetics of Egypt then and the population genetics of Egypt today. 

I didn't have a lot of time to look carefully at this study, but prior studies have shown a modest increase in sub-Saharan African admixture since then, due to the trans-Saharan slave trade in more recent time periods.

Ultralight Dark Matter

While ultra-light bosonic dark matter (ULDM) in a Bose-Einstein condensate (BEC) state could naturally account for the central core in some galaxies and resolve the core-cusp problem, the dark matter density distribution in the outer regions of galaxies remains less explored. We propose a trial wavefunction to model the ULDM distribution beyond the BEC core. We derive the corresponding rotation velocity curve, which shows excellent agreement with those of 12 dwarf spheroidal galaxies. The best-fit ULDM particle mass for each dwarf galaxy falls within a strikingly narrow range of m = (1.8−3.2) × 10^−23 eV.
Tian-yao Fang, Ming-Chung Chu, "Constraining Ultra-Light Dark Matter mass with Dwarf Galaxy Rotation Curves" arXiv:2510.12848 (October 14, 2025).

The best fit particle mass is in line with other studies and very close to the average mass-energy of a graviton, if they exist (and gravitons are, of course, bosons).

In general, ultralight bosonic dark matter proposals are a better fit to the data than any of the other dark matter particle models. 

Even warm dark matter, in the keV mass range, only barely improves upon failed cold dark matter and ultraheavy dark matter models. Self-interacting dark matter models have also not stood up well against the data from galaxy dynamics.

Tuesday, October 14, 2025

A Quantum Gravity Observation From Sabine

This basic idea has been floating around in quantum gravity circles for a while, but Hossenfelder's take is more cogent and careful than many of these attempts. Her model is basically a superdeterministic one.
I present a simple argument for why a fundamental theory that unifies matter and gravity gives rise to what seems to be a collapse of the wavefunction. The resulting model is local, parameter-free and makes testable predictions.
Sabine Hossenfelder, "How Gravity Can Explain the Collapse of the Wavefunction" arXiv:2510.11037 (October 13, 2025).

The conclusion states:
I have shown here how the assumption that matter and geometry have the same fundamental origin requires the time evolution of a quantum state to differ from the Schrödinger equation. This has the consequence that the ideal time evolutions which minimise the action are those with end states that are to good approximation classical. We can then identify these end states with the eigenstates of the measurement device. 
This new model therefore explains why quantum states seem to ‘collapse’ into eigenstates of the measurement observable, and how this can happen while preserving locality. Since the collapse process is governed by quantum gravitational contributions whose strength is known, the resulting model is parameter free. 
Collapse happens in this model whenever the accumulated phase difference between dislocated branches, τm|Φ12|, exceeds ∼ 1. The model's phenomenology (notably the collapse itself) can be tested in roughly the same parameter range as other tests of the weak field limit of quantum gravity.

Thursday, October 9, 2025

A Proposal To Explain The Neutrino Mixing Angles

Many papers try to explain fundamental constants in the Standard Model in terms of deeper relationships. This attempt to gain insight into the neutrino oscillation parameters is more thought provoking than most. 

We propose a geometric hypothesis for neutrino mixing: twice the sum of the three mixing angles equals 180∘, forming a Euclidean triangle. This condition leads to a predictive relation among the mixing angles and, through trigonometric constraints, enables reconstruction of the mass-squared splittings. 
The hypothesis offers a phenomenological resolution to the θ23 octant ambiguity, reproduces the known mass hierarchy patterns, and suggests a normalized geometric structure underlying the PMNS mixing. 
We show that while an order-of-magnitude scale mismatch remains (the absolute splittings are underestimated by ∼10×), the triangle reproduces mixing ratios with notable accuracy, hinting at deeper structural or symmetry-based origins. 
We emphasize that the triangle relation is advanced as an empirical, phenomenological organizing principle rather than a result derived from a specific underlying symmetry or dynamics. 
It is testable and falsifiable: current global-fit values already lie close to satisfying the condition, and improved precision will confirm or refute it. We also outline and implement a simple χ2 consistency check against global-fit inputs to quantify agreement within present uncertainties.
Mohammad Ful Hossain Seikh, "A geometrical approach to neutrino oscillation parameters" arXiv:2510.06526 (October 7, 2025).
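The triangle condition is easy to check against published global-fit numbers. The sketch below uses approximate NuFIT-style angles for normal ordering (the exact values are assumptions for illustration, not taken from the paper) and shows how the relation could break the θ23 octant degeneracy: the upper-octant solution lands much closer to 180°.

```python
# Sketch of the paper's triangle condition: 2*(theta12 + theta13 + theta23) = 180 deg.
# Angles below are illustrative global-fit values (degrees, normal ordering);
# the precise numbers are assumptions for this check, not from the paper itself.
theta12 = 33.4
theta13 = 8.6
theta23_upper = 49.1   # upper-octant solution for theta23
theta23_lower = 42.2   # lower-octant solution for theta23

def twice_angle_sum(t12, t13, t23):
    """Twice the sum of the three PMNS mixing angles, in degrees."""
    return 2.0 * (t12 + t13 + t23)

upper = twice_angle_sum(theta12, theta13, theta23_upper)  # ~182 deg
lower = twice_angle_sum(theta12, theta13, theta23_lower)  # ~168 deg

# The upper-octant value sits within a few degrees of 180, the lower octant
# misses by more than ten -- illustrating the claimed octant resolution.
print(upper, lower)
```

With these inputs the condition is satisfied to within a couple of degrees for the upper octant only, consistent with the abstract's claim that current global fits "already lie close to satisfying the condition."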

Does Non-Perturbative QCD Have A Cosmological Constant Analog?

A new paper explores a potential parallel between non-perturbative quantum chromodynamics (the physics of the strong force that binds quarks into hadronic structures) and gravity. This isn't entirely surprising, as both are non-abelian gauge theories. And, it suggests that features like the cosmological constant may have a natural source in a non-abelian quantum gravity theory.

Einstein's gravity with a cosmological constant Λ in four dimensions can be reformulated as a λϕ^4 theory characterized solely by the dimensionless coupling λ ∝ G_N Λ (G_N being Newton's constant). The quantum triviality of this theory drives λ → 0, and a deviation from this behavior could be generated by matter couplings. Here, we study the significance of this conformal symmetry and its breaking in modeling non-perturbative QCD. The hadron spectra and correlation functions are studied holographically in an AdS_5 geometry with induced cosmological constants on a four-dimensional hypersurface. 

Our analysis shows that the experimentally measured spectra of the ρ and a_1 mesons, including their excitations and decay constants, favour a non-vanishing induced cosmological constant in both hard-wall and soft-wall models. Although this behavior is not as sharp in the soft-wall model as in the hard-wall model, it remains consistent. Furthermore, we show that the correction to the Gell-Mann-Oakes-Renner relation has an inverse dependence on the induced cosmological constant, underscoring its significance in holographic descriptions of low-energy QCD.
Mathew Thomas Arun, Nabeel Thahirm, "On the role of cosmological constant in modeling hadrons" arXiv:2510.06380 (October 7, 2025).
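To see why the quantum triviality λ → 0 is empirically plausible, note that the dimensionless combination of Newton's constant and the observed cosmological constant is famously tiny. A minimal back-of-the-envelope check (in SI units, the dimensionless product is ℏG_N Λ/c³, i.e. Λ in units of the inverse Planck length squared; the constants are rounded CODATA-level values):

```python
# Order-of-magnitude sketch of the dimensionless coupling lambda ~ G_N * Lambda.
# In SI units the dimensionless combination is hbar * G_N * Lambda / c**3,
# i.e. Lambda measured in Planck-length-squared units. Values are rounded
# reference constants; Lambda is the observed value ~1.1e-52 m^-2.
hbar = 1.055e-34      # reduced Planck constant, J s
G_N  = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
c    = 2.998e8        # speed of light, m/s
Lam  = 1.1e-52        # observed cosmological constant, m^-2

lam = hbar * G_N * Lam / c**3   # ~3e-122, i.e. extraordinarily close to zero
print(lam)
```

The result, of order 10⁻¹²², is the familiar statement of the cosmological constant problem, here recast as the observed coupling of the paper's λϕ^4 reformulation sitting extremely close to its trivial fixed point.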

A New Paper Argues For Dark Matter Over MOND

This paper argues for dark matter particles rather than modified gravity, based upon observations of very low-mass dwarf galaxies, although its sample size is very small: just twelve galaxies.
A tight correlation between the baryonic and observed acceleration of galaxies has been reported over a wide range of mass (10^8 < Mbar/M⊙ < 10^11) - the Radial Acceleration Relation (RAR). This has been interpreted as evidence that dark matter is actually a manifestation of some modified weak-field gravity theory. 
In this paper, we study the radially resolved RAR of 12 nearby dwarf galaxies, with baryonic masses in the range 10^4 < Mbar/M⊙ < 10^7.5, using a combination of literature data and data from the MUSE-Faint survey. We use stellar line-of-sight velocities and the Jeans modelling code GravSphere to infer the mass distributions of these galaxies, allowing us to compute the RAR. We compare the results with the EDGE simulations of isolated dwarf galaxies with similar stellar masses in a ΛCDM cosmology. 
We find that most of the observed dwarf galaxies lie systematically above the low-mass extrapolation of the RAR. Each galaxy traces a locus in the RAR space that can have a multi-valued observed acceleration for a given baryonic acceleration, while there is significant scatter from galaxy to galaxy. 
Our results indicate that the RAR does not apply to low-mass dwarf galaxies and that the inferred baryonic acceleration of these dwarfs does not contain enough information, on its own, to derive the observed acceleration. 
The simulated EDGE dwarfs behave similarly to the real data, lying systematically above the extrapolated RAR. We show that, in the context of modified weak-field gravity theories, these results cannot be explained by differential tidal forces from the Milky Way, nor by the galaxies being far from dynamical equilibrium, since none of the galaxies in our sample seems to experience strong tides. As such, our results provide further evidence for the need for invisible dark matter in the smallest dwarf galaxies.
Mariana P. Júlio, et al., "The radial acceleration relation at the EDGE of galaxy formation: testing its universality in low-mass dwarf galaxies" arXiv:2510.06905 (October 8, 2025) (Accepted for publication in A&A).
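For context, the RAR extrapolation being tested is commonly summarized by the fitting function of McGaugh et al. (2016), g_obs = g_bar / (1 − exp(−√(g_bar/g†))) with acceleration scale g† ≈ 1.2 × 10⁻¹⁰ m s⁻². The sketch below (that fitting function is standard in the literature, not taken from this paper) evaluates the low-acceleration regime relevant to these dwarfs, where the relation predicts g_obs ≈ √(g_bar · g†):

```python
import math

# The RAR is commonly parametrized by the McGaugh et al. (2016) fitting
# function: g_obs = g_bar / (1 - exp(-sqrt(g_bar/g_dag))), with the
# acceleration scale g_dag ~ 1.2e-10 m s^-2. This evaluates the low-mass
# extrapolation that the observed dwarfs are reported to lie above.
G_DAG = 1.2e-10  # acceleration scale, m s^-2

def rar_gobs(g_bar):
    """Observed acceleration predicted by the RAR fitting function."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / G_DAG)))

# Deep low-acceleration regime: g_obs approaches sqrt(g_bar * g_dag).
# A dwarf with g_bar = 1e-13 m s^-2 is predicted at roughly 3.5e-12 m s^-2;
# the paper finds most of its twelve dwarfs sit above this extrapolation.
g_bar = 1e-13
print(rar_gobs(g_bar), math.sqrt(g_bar * G_DAG))
```

The single-valued mapping g_bar → g_obs in this function is exactly what the paper's multi-valued loci violate: if one baryonic acceleration corresponds to several observed accelerations in a single galaxy, no function of g_bar alone can reproduce the data.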