Friday, March 13, 2026

Predicting Heavy Hadron Masses

This paper makes mass predictions for a huge number of three and five valence quark hadrons (in both ground states and excited states), using both traditional methods from the literature and AI models, producing multiple estimates by different methods for each hadron considered. It is mostly a pattern recognition exercise, rather than a set of calculations from QCD first principles. It predicts several hundred composite particle masses.

This is easier for baryons (i.e. half-integer spin fermions) than for mesons (i.e. integer spin bosons), because baryons have far fewer quirky exceptions to general rules. Those exceptions flow, in part, from different mesons blending into each other, which is something that baryons don't do.

One observation is that these several hundred heavy baryons (in the broad sense of half integer spin hadrons, rather than the narrow sense of three valence quark hadrons) fill a pretty narrow range of masses, with the lightest having a mass of about 1.5 GeV, the heaviest having a mass of 11.4 GeV, and most of the predicted masses bunching up in the middle, at more than 4 GeV and less than 10 GeV. The lightest pentaquarks are a bit over 4 GeV.

Given that there are only a handful of possible quantum numbers for each hadron, the experimental task of distinguishing one heavy baryon from another would be challenging, with many possibilities near any given mass. 

While experimental mass measurements of heavy baryons typically have uncertainties of a few MeV, the uncertainties in the theoretical mass predictions are much greater, ranging from about 100 to 2000 MeV, with most in the range of about 450 to 1200 MeV. The differences between theoretical mass predictions made by different methods for the same hadron also frequently exceed the combined claimed uncertainties of the predictions, however, so the uncertainties are probably underestimated.
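To make that last check concrete, here is a minimal sketch (with made-up illustrative numbers, not values from the paper) of what it means for two predictions to exceed their combined claimed uncertainties:

```python
import math

def tension_sigma(m1, s1, m2, s2):
    """Number of combined standard deviations separating two
    independent mass predictions (masses and uncertainties in MeV)."""
    return abs(m1 - m2) / math.sqrt(s1**2 + s2**2)

# Illustrative only: two hypothetical methods predicting the same baryon.
t = tension_sigma(8100.0, 450.0, 9400.0, 600.0)
print(f"tension = {t:.1f} sigma")  # ~1.7 sigma; values well above ~2 for many
                                   # hadrons would suggest underestimated errors
```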

Since it is easy to make predictions that are consistent with the experimentally observed values if the predictions are vague enough, the significance of these models shouldn't be exaggerated. They are making very ballpark estimates based upon very general considerations.

But because it is so comprehensive, this is still somewhat useful in winnowing down the candidates for a particular observed resonance with a particular observed mass from several hundred possibilities to perhaps a few dozen likely candidates of similar mass, which can be narrowed down further with measurements of the resonance's spin, charge, and other quantum numbers to perhaps a dozen or fewer candidates.

In this article, we use two different methods for studying the mass spectra of fully-heavy baryons and pentaquarks. 
In the first section, we use state-of-the-art machine learning methods, such as deep neural networks and the Particle Transformer model architecture, to predict baryon masses directly from their quantum numbers, based on experimental information on hadrons from the Particle Data Group (PDG). We use this data-driven approach for the case of fully heavy baryons, and a large number of exotic pentaquark states, going much beyond the well-known P_c^+(4380) and P_c^+(4457) candidates. Subsequently, we extend the Gürsey-Radicati mass formula to incorporate the contributions of charm and bottom quarks, enabling analytical calculations for both ground and radially excited states of baryons and pentaquarks.
The results obtained from both approaches demonstrate strong agreement with experimental data where available and make predictions for a number of unobserved states, including higher radial excitations. By addressing the question through both data-driven prediction and analytical modeling in different frameworks, this study offers complementary insights into the mass spectrum of conventional and exotic hadrons, guiding future experimental searches.
S. Rostami, A. R. Olamaei, M. Malekhosseini, K. Azizi, "Comprehensive Mass Predictions: From Triply Heavy Baryons to Pentaquarks" arXiv:2603.11259 (March 11, 2026).
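
For reference, the classic Gürsey-Radicati mass formula that the paper extends (this is the textbook SU(6)-breaking form; the charm and bottom terms the authors add are not reproduced here) is:

M = M̄ + A·S(S+1) + D·Y + E·[I(I+1) − Y²/4]

where M̄ is a baseline multiplet mass, S is spin, Y is hypercharge, I is isospin, and A, D, and E are coefficients fit to measured hadron masses.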

Thursday, March 12, 2026

An Unreview

What makes this paper especially notable is not its content per se but the concept of an "unreview", which potentially has broad interdisciplinary applications.

Accreting white dwarfs (AWDs) are among the best natural laboratories for understanding disk accretion. Their proximity, brightness, and purely classical nature make them ideal systems in which to probe the fundamental physics that governs the transport of angular momentum, the generation of outflows, and the coupling between disks, magnetospheres, and accretors. Yet despite decades of study, many critical questions remain unresolved. 
In this "unreview", we therefore focus not on what is known, but on what is unknown.
What drives viscosity and sustains accretion in largely neutral disks? How are powerful winds launched, and how do they feed back on the disk and binary evolution? Why do so many systems show persistent retrograde precession, and what drives bursts in magnetic AWDs? 
By identifying these open problems -- and suggesting ways to resolve them -- we aim to motivate new observational, numerical, and theoretical efforts that will advance our understanding of accretion physics across all mass scales, from white dwarfs to black holes.
Simone Scaringi, Christian Knigge, Domitilla de Martino, "Accreting White Dwarfs: An Unreview" arXiv:2603.10150 (March 10, 2026) (Accepted in Space Science Reviews).

Also notable is a paper demonstrating that a twenty-times-faster method of computing big data in cosmology is indistinguishable in its results from a more conventional method, despite the fact that the faster method isn't obviously theoretically rigorous and sound (because it uses linear rather than non-linear mathematical methods).

There is also a new paper replicating the result of a 2026 paper finding MOND-like effects in wide binaries, using a modestly different analysis method.

Monday, March 9, 2026

Variations On Tully-Fisher

The baryonic Tully-Fisher relation (a tight correlation between ordinary matter and inferred total mass) holds much more tightly than a parallel correlation considering only the ordinary matter in stars.


We combine data for extragalactic systems to quantify a relation between the observed baryonic mass M_b and the enclosed dynamical mass M_200 inferred from kinematics or gravitational lensing. Our sample covers nine orders of magnitude in baryonic mass, including galaxies with kinematic or weak gravitational lensing data and groups and clusters of galaxies with new gravitational lensing data. 
For rich clusters with M_b > 10^14 M⊙, the observed baryon fraction is consistent with the cosmic value, f_b = 0.157. 
For lower masses, the baryon fraction decreases systematically with mass. The variation is well described by M_b/M_200 = f_b tanh(M_b/M_0)^(1/4) with M_0 ≈ 5 × 10^13 M⊙. 
This relation is qualitatively similar to stellar mass-halo mass relations derived from abundance matching, but exhibits less scatter.
Stacy McGaugh, Tobias Mistele, Francis Duey, Konstantin Haubner, Federico Lelli, Jim Schombert, Pengfei Li, "The Baryonic Mass-Halo Mass Relation of Extragalactic Systems" arXiv:2603.06479 (March 6, 2026) (Accepted for publication in the Astrophysical Journal).
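
As a minimal sketch of how the quoted fitting function behaves (reading it as f_b·[tanh(M_b/M_0)]^(1/4), which is one natural reading of the abstract's notation; the paper's exact grouping of the exponent may differ):

```python
import numpy as np

F_B = 0.157   # cosmic baryon fraction, from the abstract
M_0 = 5e13    # transition mass in solar masses, from the abstract

def baryon_fraction(m_b):
    """M_b/M_200 as a function of baryonic mass M_b (solar masses)."""
    return F_B * np.tanh(m_b / M_0) ** 0.25

for m_b in (1e9, 1e11, 1e13, 1e15):
    f = baryon_fraction(m_b)
    print(f"M_b = {m_b:.0e} Msun: M_b/M_200 = {f:.4f}, M_200 = {m_b / f:.2e} Msun")
# Rich clusters (M_b >> M_0) approach the cosmic fraction 0.157, while
# galaxy-scale systems fall an order of magnitude below it, which is the
# relation's key qualitative feature per the abstract.
```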

Thursday, March 5, 2026

A Muon g-2 Recap


Ref. [8] is R. Aliberti et al., The anomalous magnetic moment of the muon in the Standard Model: an update, Phys. Rept. 1143 (2025) 1 [arXiv:2505.21476].

A paper on the latest developments in calculating muon g-2 (the anomalous magnetic moment of the muon, which can be calculated in the Standard Model from first principles), presenting the latest BMW group calculation, not only updates their calculation to be more precise (and consistent with the high precision experimental results), but also provides excellent background for the entire enterprise of calculating muon g-2 and comparing it to the experimental results.

The overview of the paper is as follows:

Almost twenty years ago, physicists at Brookhaven National Laboratory measured the magnetic moment of the muon with a remarkable precision of 0.54 parts per million (ppm) [20]. Since that time, the reference Standard Model prediction for this quantity has exhibited a persistent discrepancy with experiment of more than three sigma [9]. This raises the tantalising possibility of undiscovered forces or elementary particles. The attention of the world was drawn to this discrepancy when Fermilab presented a brilliant confirmation of Brookhaven's measurement, which brought the discrepancy to 4.2 sigma [21]. In the meantime a very large-scale lattice QCD calculation of a key theoretical contribution was performed by the Budapest-Marseille-Wuppertal (BMW) collaboration [3], as seen in Fig. 1. This result significantly reduces the difference between theory and experiment, suggesting that new physics may not be needed to explain the experimental results. However, it simultaneously introduces a new discrepancy with the existing data-driven determination of this contribution. 

Since then, the experimental [2] results have been updated with significantly improved precision, and the lattice result has been independently confirmed by other lattice collaborations. At the same time, new developments in the data-driven inputs that the lattice calculations replace [17–19] lead to a significant spread in the results depending on what inputs are taken. This has culminated in an updated theory prediction based on the lattice results instead of the data-driven determinations, as seen in Fig. 1. 

In these proceedings, I present a new hybrid calculation that combines an update to the most precise lattice results with data-driven inputs in a low-energy region where the observed discrepancies are not present. This new result leads to a prediction that differs from the experimental measurement by only 0.5σ, providing a remarkable validation of the Standard Model to 0.31 ppm.

As Table 1 shows, the main problem is how to determine the Hadronic Vacuum Polarization (HVP) component more precisely.

According to the abstract: 

The latest results from the Budapest-Marseille-Wuppertal (BMW) and DMZ collaborations, . . . [make] a determination of the hadronic vacuum polarisation contribution to a precision of 0.45%. [i.e. from ± 6.1 to ± 3.2.]

This new calculation is about twice as precise as the previous HVP calculation. 

The conclusion of the paper states that:

Recent lattice QCD results have surpassed the precision of all other theoretical predictions of the hadronic vacuum polarisation contribution to the muon magnetic moment. When taken together with the latest theory consensus for the other contributions [8], these results show excellent agreement with the latest experimental measurements [2]. This is a remarkable success for quantum field theory, bringing together diverse computational tools to include all aspects of the Standard Model in a single calculation that validates the Standard Model to 0.31 ppm.

As a practical matter, this further tightens global constraints on low to medium energy deviations from the Standard Model. 

Mirror Universes And Dark Energy?

An interesting idea, coupled to one of the most plausible explanations for the baryon asymmetry and for what came before the Big Bang, even if it may not actually be provable.
We investigate a possible resolution of the dark energy problem within a pair-universe framework, in which the Universe emerges as an entangled pair of time-reversed sectors. 
In this setting, a global zero-energy condition allows vacuum energy contributions from the two sectors to cancel, alleviating the need for extreme fine-tuning. We propose that the observed dark energy does not originate from vacuum fluctuations but instead arises as an effective entanglement energy between the visible universe and its mirror counterpart. 
Treating the cosmological constant as an integration constant fixed by boundary conditions rather than a fundamental parameter, we show that the cosmological equations can be formulated without explicitly introducing vacuum energy. By imposing physically motivated boundary conditions at the cosmological event horizon, we obtain an integration constant consistent with the observed dark energy density. The parallel mirror world scenario thus provides a unified framework that may simultaneously explain the origins of dark energy and dark matter.
Merab Gogberashvili, Tinatin Tsiskaridze, "Dark Energy from Entanglements with Mirror Universe" arXiv:2603.03385 (March 3, 2026) (published at 8 Physics 29 (2026)).

MOND-like Behavior In Milky Way Subsystems

The radial acceleration relation and baryonic Tully-Fisher relation, while not perfect, work far too well to be consistent with almost any dark matter particle theory (though ultra-light bosonic dark matter might still be possible to make work).
We test whether parsec-scale stellar systems in the Milky Way follow the galactic radial acceleration relation (RAR) or the baryonic Tully-Fisher relation (BTFR). 
We analyse 5646 Gaia DR3 open clusters from the Hunt & Reffert catalogue. Observed accelerations are derived from velocity dispersions and characteristic radii, and baryonic accelerations from stellar masses and characteristic radii. The clusters are placed on the RAR and BTFR planes and compared with Newtonian and MOND expectations. Approximately 90 per cent of open clusters (those with N⋆ ≤ 250) lie close to the RAR, albeit with significant scatter. In a first-of-its-kind test, a smaller fiducial sample is consistent with a best-fitting acceleration scale g† ≈ 1.2×10⁻¹⁰ m s⁻² ± 0.5 dex, compatible with canonical MOND values. 
More massive clusters approach the Newtonian virial expectation. No correlations are found between RAR residuals and galactocentric radii, distance to the Galactic disk midplane, age, or morphology. Tidal effects and unresolved binaries are insufficient to reproduce the observations without fine-tuning. 
Interpreted within a MOND framework, the alignment of most open clusters with the RAR and BTFR suggests that low-acceleration dynamics operate on parsec scales within the Milky Way. This implies that the Galactic gravitational field is not smooth on these scales and may include regions where the total gravitational acceleration falls below a0, partially mitigating the external field effect, thereby motivating higher-resolution modelling of the Galactic potential and informing other small-scale gravity tests within the Galaxy.
Mark D. Huisjes, X. Hernandez, "Most open clusters follow the radial acceleration relation (RAR) and the baryonic Tully-Fisher relation (BTFR)" arXiv:2603.03522 (March 3, 2026).
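
For a sense of scale, here is a back-of-the-envelope sketch (with illustrative numbers for a generic open cluster, not values from the Hunt & Reffert catalogue) of the accelerations at stake:

```python
# Rough dimensional estimate for a typical open cluster (illustrative values).
G = 4.301e-3                          # gravitational constant, pc (km/s)^2 / Msun
KMS2_PER_PC_TO_SI = 1e6 / 3.086e16    # converts (km/s)^2/pc to m/s^2

M_star = 500.0        # total stellar mass, Msun
R = 2.0               # characteristic radius, pc
sigma = 0.5           # 1D velocity dispersion, km/s

g_bar = G * M_star / R**2 * KMS2_PER_PC_TO_SI   # baryonic (Newtonian) acceleration
g_obs = sigma**2 / R * KMS2_PER_PC_TO_SI        # dynamical acceleration estimate

a0 = 1.2e-10          # canonical MOND acceleration scale, m/s^2
print(f"g_bar ~ {g_bar:.1e} m/s^2, g_obs ~ {g_obs:.1e} m/s^2, a0 = {a0:.1e} m/s^2")
# Both come out at ~1e-11 m/s^2 or below, well under a0: parsec-scale clusters
# sit squarely in the low-acceleration regime where the RAR is being tested.
```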

Monday, March 2, 2026

The Wide Binary Wars Continue

Neither the astrophysicists who say that there is evidence of MOND in wide binaries, nor those who say there is not, are relenting, and I currently rate the debate as inconclusive.

If this paper is right, it is bad for MOND, but good for Deur, who reproduces MOND behavior in galaxies by another formula and mechanism.

Wide binaries (WBs) offer a unique opportunity to test gravity in the low-acceleration regime, where modifications such as Milgromian dynamics (MOND) predict measurable deviations from Newtonian gravity. 
We construct a rigorous framework for conducting the wide binary test (WBT), emphasizing high quality sample selection, filtering of poor astrometric solutions, contamination mitigation, and uncertainty propagation. We show that undetected close binaries, chance alignments, and improper treatment of projection effects can mimic MOND-like signals. We introduce a checklist of best practices to identify and avoid these pitfalls. Applying this framework to Gaia DR3 data, we compile a high-purity sample of WBs within 130 pc with projected separations of 1 - 30 kAU, spanning the transition between the Newtonian and MOND regimes. 
We find that the scaled relative velocity distribution of wide binaries does not exhibit the 20% enhancement expected from MOND and is consistent with Newtonian gravity across all separations. A meta-analysis of previous WBTs shows that apparent MOND signals diminish as methodological rigour improves. We conclude that when stringent quality controls are applied, there is no observational evidence for MOND-induced velocity boosts in wide binaries. 
Our results place strong empirical constraints on modified gravity theories operating between a0/10 and 200 a0, where a0 is the MOND acceleration scale. Across this range of internal accelerations, Newtonian gravity is up to 1500x more likely than MOND for our cleanest sample.
Stephen A. Cookson, Indranil Banik, Kareem El-Badry, Will Sutherland, Zephyr Penoyre, Charalambos Pittordis, Cathie J. Clarke, "A Quality Framework for Testing Gravity with Wide Binaries: No Evidence for MOND" arXiv:2602.24035 (February 27, 2026) (published in MNRAS).

Friday, February 27, 2026

Gender And Neanderthal-Modern Human Interbreeding

The New York Times and other general audience media outlets are reporting on a new genetics study in the journal Science, examining the gender dynamics of Neanderthal admixture. The editor's summary, abstract, and citation are:

Editor’s summary

Although a low level of Neanderthal ancestry is present in most humans, these regions are not uniformly distributed. A handful of regions in the autosome are entirely devoid of such ancestry in essentially all living humans, and the X chromosome is strongly depleted across its sequence. Platt et al. modeled the possible demographic processes and selection that could have produced this pattern. They found that these patterns are most consistent with Neanderthal contributions to human populations being heavily male biased. The concurrent additional depletion in functional regions on the X chromosome suggests that the effects of this skew may have been strengthened by negative selection on Neanderthal variants. —Corinne Simonti

Abstract

Sex biases in admixture and other demographic processes are recurrent features throughout human evolution. For admixture between Neanderthals and anatomically modern humans (AMHs), sex bias has been proposed as an explanation for the relative lack of Neanderthal ancestry in modern human X chromosomes compared with that in modern human autosomes. By observing a 62% relative excess of AMH ancestry in Neanderthal X chromosomes, we characterized the interbreeding between the two groups as predominantly male Neanderthals with female AMHs. Analytic and numerical modeling presents mate preference as a more parsimonious cause of the sex bias than purely demographic processes with differential patterns of male and female migration.
Alexander Platt, Daniel N. Harris, and Sarah A. Tishkoff, "Interbreeding between Neanderthals and modern humans was strongly sex biased", 391 (6788) Science 922-925 (February 26, 2026).

While I don't dispute the genetic data that the study discloses and bases its narrative upon, I don't think that the narrative that the general audience presentations and even the study itself have chosen is anything close to being the most plausible one.

I've previously discussed the narrative that I think is closer to the truth, using just the Y-DNA, mtDNA, and overall autosomal genetic data previously available, without the X chromosome specific data from both admixed modern humans and admixture in Neanderthal ancient DNA  that this study brings to the table. 

In a nutshell, my prior analysis was that admixed Neanderthal-modern human hybrid children ended up in the communities to which their mothers belonged, and that Haldane's rule (which provides, as relevant to this context, that cross-species hybrids are disproportionately female, sometimes sterile males, and only rarely fertile males) was also a key factor in why there are no modern humans with Neanderthal Y-DNA, why there is no Neanderthal ancient DNA with modern human Y-DNA, why there are no modern humans with Neanderthal mtDNA, and why there is no Neanderthal ancient DNA with modern human mtDNA.

I also suggested that hybrid children may have often been the product of rape or episodic hookups, rather than of long term marriage-like relationships embedded in a modern human or a Neanderthal tribe.

While the X chromosome data may require some fine tuning of that analysis, I don't think that it justifies a wholesale paradigm shift. I think that the new study gives insufficient consideration to these priors when evaluating what kind of narrative makes the most sense to interpret its X chromosome based data, focusing instead on a sexual selection and attraction based narrative.

The default assumption of the simple, uniparental DNA driven paradigm combined with Haldane's rule is that the X chromosome would have the same proportions of Neanderthal and modern human DNA as other chromosomes, since almost all of the fertile hybrid children would have one Neanderthal X chromosome and one modern human X chromosome.

Admixture in X chromosomes could be reduced in early modern human communities if they had an influx of "basal" modern humans (i.e. introgression from modern humans with no Neanderthal admixture), and if that introgression was female biased because the Neanderthal admixed population was usually (at least when pairing within its own species in marriage-like ways) patrilocal and recruited brides from outside its tribe in a way that had a significant basal modern human component. This kind of marriage pattern was common in the Neolithic Age, the Bronze Age, and the Iron Age, including in herder populations who were culturally more similar to ancestral modern human hunter-gatherer societies, so it is quite plausible.

The lack of modern human mtDNA in Neanderthal ancient DNA strongly constrains the extent to which the mothers of hybrid individuals in Neanderthal communities were modern humans, although the fact that Neanderthal effective population sizes were falling at the time of Neanderthal-modern human contact means that it would be easier to lose low frequency modern human mtDNA variants in Neanderthal communities than it would be to lose low frequency Neanderthal mtDNA variants in modern human communities.

But one still has to explain the new data point that there is an excess of modern human DNA in the X chromosomes of admixed Neanderthal ancient DNA, and that excess has to be reconciled with the lack of modern human mtDNA in any Neanderthal ancient DNA.

Since the Neanderthal ancient DNA sample, unlike the modern human DNA sample, is a small one, the statistical significance of this excess needs to be examined, as do possible selective genetic fitness based explanations for some of the modern human X chromosome excess relative to Neanderthal genes at those loci. But the number of genes per chromosome is great enough, and 62% is a large enough excess, that statistical flukes probably don't explain much of it.

The first possible explanation that comes to mind for the excess modern human genes in admixed Neanderthal ancient DNA is that the effective population size of Neanderthals in that era was much smaller than the effective population size of modern humans in that era. In other words, the Neanderthals were more inbred.

This would make children in Neanderthal communities with purely Neanderthal X chromosomes more vulnerable to harmful X chromosome based recessive diseases than hybrid Neanderthal children, who would enjoy hybrid vigor. Over time, across multiple generations and not just the first one, this would favor individuals in Neanderthal communities with hybrid ancestry over those with pure Neanderthal ancestry. And the effect would be stronger on the X chromosome than on the other chromosomes, because Neanderthal boys would not be at risk of suffering from harmful X chromosome based recessive diseases, while unadmixed Neanderthal girls would be at risk of suffering from these diseases.

This factor alone, depending on the prevalence of X chromosome based recessive diseases in the much more inbred Neanderthal gene pool, might very well have been enough to explain the excess of modern human X chromosomes in the ancient DNA of admixed Neanderthals, without requiring modern human women who had hybrid children to frequently live in and raise their children in Neanderthal communities (which would leave us with the problem of explaining why there is no modern human mtDNA in these admixed Neanderthal individuals in Neanderthal communities).

I'll update this post with more analysis as time permits, after I've had more time to read the paper and consider its analysis.

A Grammatical Gender And Ergativity Linguistics Refresher

Grammatical gender rules are not a feature shared by all Indo-European languages, or even by all languages in the Germanic language family. 

Ergativity is a grammatical feature with more uniformity, but it is not uniform within the Indo-European language family, or within the Berber language family within the Afro-Asiatic language family.

Grammatical gender

Some of the Germanic languages (Icelandic, Norwegian, German, and Yiddish), the Slavic languages, and Greek have three grammatical genders (masculine, feminine, and neuter).

The subset of Germanic languages made up of Swedish, Danish, Dutch, and Flemish has a "common" and a neuter grammatical gender (the masculine grammatical gender and the feminine grammatical gender are merged relative to the three gender system).

The Celtic languages of the British Isles, the Romance languages, the Baltic languages (Lithuanian and Latvian), the Northern Kurdish languages, and the non-Indo-European Afro-Asiatic languages of Europe, the Mediterranean, and the Middle East (Arabic including Maltese, Hebrew, Aramaic, the Berber languages, and Coptic) have two grammatical genders (masculine and feminine), but no neuter grammatical gender.

English (a Germanic language), the Central Kurdish languages, the non-Indo-European Uralic languages (Saami, Finnish, Estonian, and Hungarian), and the non-Indo-European Turkic languages do not have grammatical gender. Modern English, in common with Icelandic, Norwegian, and German, does, however, have masculine, feminine, and neuter third person singular pronouns (he, she, it), and Central Kurdish has masculine and feminine, but not neuter, third person pronouns.

The non-Indo-European Basque language has an animate noun class and an inanimate noun class that is sometimes called a grammatical gender, rather than an actual gender based grammatical gender system.

All of these languages are Indo-European languages, except Basque, Turkish, the Uralic languages, and the Afro-Asiatic languages (Arabic including Maltese, the Berber languages, Hebrew, Aramaic, and Coptic).

Ergativity

Ergativity is another grammatical feature that doesn't strictly follow language family lines (probably due to substrate influences). Basque is ergative, as is Kurdish (which is spoken in an area where extinct ergative languages were once spoken), as are some Berber languages.

What is ergativity?

I'll quote the Wikipedia link above to make sure that I get it right:
In linguistic typology, ergative–absolutive alignment is a type of morphosyntactic alignment in which the subject of an intransitive verb behaves like the object of a transitive verb, and differently from the subject of a transitive verb. All known ergative languages show ergativity in their morphology, and a small portion also show ergativity in their syntax.

The ergative-absolutive alignment is in contrast to nominative–accusative alignment, which is observed in English, where the single argument of an intransitive verb behaves grammatically like the agent (subject) of a transitive verb but different from the object of a transitive verb. In ergative–absolutive languages with grammatical case, the case for the single argument of an intransitive verb and the object of a transitive verb is called the absolutive, and the case used for the agent of a transitive verb is called the ergative.

By one measure, 17% of the world's languages use an ergative alignment in the marking of noun phrases. Examples of ergative-absolutive languages include Basque, Georgian, Mayan, Tibetan, Sumerian, and certain Indo-European languages such as Pashto, the Kurdish languages and many others.

Tuesday, February 24, 2026

The Higgs Boson Still Matches The Standard Model

The Standard Model Higgs boson hypothesis continues to be a good fit to the data, this time in an inclusive measurement of all Higgs bosons produced in the LHC data of the CMS experiment over a three year period.
Combined measurements of Higgs boson production and decay rates are reported, representing the most comprehensive study performed by the CMS Collaboration to date. The included analyses use proton-proton collision data recorded by the CMS experiment at √s = 13 TeV from 2016 to 2018, corresponding to an integrated luminosity of 138 fb⁻¹. The statistical combination is based on analyses that measure the following decay channels: H → γγ, H → ZZ, H → WW, H → ττ, H → bb, H → μμ, and H → Zγ → ℓℓγ (ℓ = e,μ). Information in the events from each decay channel is used to target multiple Higgs boson production processes. Searches for invisible Higgs boson decays are also considered, as well as an analysis that measures off-shell Higgs boson production in the H → ZZ → 4ℓ decay channel. 
The best fit inclusive signal yield is measured to be 1.014 +0.055 −0.053 times the standard model expectation, for a Higgs boson mass of 125.38 GeV. 
Measurements in kinematic regions defined by the simplified template cross section framework are also provided, as well as interpretations in the coupling modifier and standard model effective field theory frameworks. The coupling modifier interpretation is further used to place constraints on various two-Higgs-doublet models. The results show good compatibility with the standard model predictions for the majority of the measured parameters.
CMS Collaboration, "Combined measurements and interpretations of Higgs boson production and decay in proton-proton collisions at √s = 13 TeV" arXiv:2602.18611 (February 20, 2026) (Submitted to Reports on Progress in Physics).

The result is about 0.2 sigma above the Standard Model expectation, which is very consistent with that expectation and once again suggests that the uncertainties in the measurement, estimated conservatively, are overestimated (if the errors were accurately estimated and Gaussian, the average discrepancy from the expected result should be about 1 sigma). This is common in electroweak (as opposed to strong force) high energy physics experiments.
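The 0.2 sigma figure is just the quoted best fit converted into a pull; a trivial check using the abstract's numbers (and the downward uncertainty, since the central value sits above 1):

```python
mu = 1.014          # best fit inclusive signal strength, from the abstract
err_down = 0.053    # downward uncertainty (the relevant side, since mu > 1)

pull = (mu - 1.0) / err_down
print(f"pull = {pull:.2f} sigma")  # ~0.26 sigma above the Standard Model
```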

The breakdown of the sources of uncertainty is notable too:

The theoretical uncertainty is the biggest contributor to the total uncertainty. More specifically:
The largest component of the uncertainty originates from the theoretical uncertainty in the signal yield normalization (∆µincl/µincl = 3.6%). The contributions from the experimental uncertainties are shared amongst the different sources of uncertainty, with no single dominant contribution.
The statistical uncertainty (assuming that the uncertainty can correctly be modeled as Gaussian, i.e. a statistical normal distribution) is almost certainly spot-on correct, because establishing it is a mechanical process that involves few judgment calls. This means that any excess estimates of uncertainty in this experiment come from the theoretical and systematic experimental uncertainties.

A statement about the Higgs boson mass used in this analysis is found in the introduction. It doesn't represent any insight from the inclusive measurement, which doesn't meaningfully distinguish between the Higgs boson mass assumed in the analysis and newer, more precise measurements by the ATLAS and CMS experiments that are about 0.2% (i.e. 270-300 MeV) lower.
The SM predictions for the Higgs boson production and decay rates depend on the mass of the Higgs boson mH. For all measurements in this paper, the mass is fixed at mH = 125.38 GeV. This was the most precise measurement of m(H) (± 0.14 GeV) by the CMS Collaboration at the time that the analyses entering the combination were performed. Since then, a more precise measurement of m(H) = 125.08 ± 0.12 GeV has been performed by CMS in the H → ZZ → 4ℓ channel. The ATLAS Collaboration also performed a more precise measurement of m(H) = 125.11 ± 0.11 GeV, combining the H → ZZ → 4ℓ and H → γγ channels. The small difference in m(H) between these values has a negligible effect on the results in this paper.

So, any hope from the abstract that this experiment would also shed light on the Higgs boson mass has been dashed. 

The final point in the abstract about only a majority of the results being compatible with the Standard Model is explained as follows:

In contrast to the inclusive measurement, the per production process measurement shows a small tension with the SM, with a compatibility p-value of pSM = 0.02. This tension is mostly driven by µtH, for which an excess of 2.2 standard deviations above the SM expectation is seen. The µWH and µZH parameters are also measured to be larger than the SM expectations by approximately two standard deviations. The 68% CL intervals range from ±7.5% for µggH to ±39% for µtH, relative to their best fit values. 

The per decay channel measurement shows a better compatibility with the SM (pSM = 0.33). The largest deviations are observed in the µττ and µZγ parameters. However, these are still compatible with the SM expectations within the 95% CL intervals. The µγγ, µZZ, µWW, and µττ parameters are all measured with excellent precision, with 68% CL intervals of approximately ±10% relative to their best fit values. The µbb parameter is measured with a 68% CL interval of ±15%. This represents a significant improvement compared to the previous combined Higgs boson measurement by the CMS Collaboration (±21%), because of the newly added H → bb channels and updated H → bb input analyses. The parameters for the rarer decay channels, µµµ and µZγ, are measured with 68% CL intervals of ±37% and ±39%, respectively, relative to their best fit values.

The biggest deviations in particular channels are still only slight tensions and are expected due to the look elsewhere effect. 

The constraints on the Higgs boson self-coupling relative to the Standard Model expected value, a property of the Higgs boson that is quite hard to measure, are also very consistent with the Standard Model expectation, as shown in the chart below (with kappa(F) and kappa(V) reflecting scenarios where there are different couplings to fermions and vector bosons).

Sunday, February 22, 2026

Quick Hits

* The Sumerians had different number words and symbols to count numbers of different kinds of things. So, for example, the word for five pieces of fruit would be different than the word for five logs.

* Egyptian pyramids were built as trapezoids and then cut down to pyramids with the left over rock used to make new pyramids.

* Reputedly, Emperor Basil II of the Byzantine Empire was cruel.

* The Anglo-Saxons kept slaves in the middle ages.

* According to Gerald of Wales, writing ca. 1188 CE, at that time the Irish were predominantly herders.

* Harsh murder sentences for newborns killed or neglected in the throes of unattended childbirth are still common today, even though the death penalty is almost never sought now in these circumstances.


* The TYRP1 gene variant discovered in 2012 is the cause of blond hair in the Solomon Islands in Melanesia; it is a different gene than the one that causes blond hair in Europeans.

* Before 1480, India and Sri Lanka were nearly connected by a land bridge known as Adam’s Bridge.


* There were once oceans on Mars.
High-resolution orbital images of Mars' largest canyon reveal ancient river deltas, proving the Red Planet once held an ocean the size of Earth's Arctic.

New high-resolution imagery from the European Space Agency’s ExoMars Trace Gas Orbiter has provided the most definitive evidence to date that Mars was once a blue planet. Researchers at the University of Bern identified distinct fan-shaped sediment deposits in the southeast Coprates Chasma region, part of the massive Valles Marineris canyon system. These structures, remarkably similar to river deltas on Earth, all sit at a consistent elevation between 3,650 and 3,750 meters. This geological alignment points to one unmistakable conclusion: the presence of an ancient coastline where rivers once emptied into a vast, stable sea approximately 3.37 billion years ago.

While previous theories about Martian oceans relied on lower-resolution data, this study offers direct geomorphological proof of a shoreline. The findings suggest that a massive body of water, comparable in size to Earth’s Arctic Ocean, once covered the entirety of Mars’ northern hemisphere. Though today these ancient deltas are buried beneath wind-sculpted dust and dunes, their distinctive shapes remain preserved. This discovery drastically alters our view of Martian history; the existence of a planet-wide water cycle and a stable ocean suggests that the conditions necessary for life were not isolated occurrences but a global phenomenon.

Source: Argadestya, P., et al. "Geomorphological and sedimentological evidence of a coastline in Southeast Coprates Chasma." npj Space Exploration (2026).


* There are social octopi that build homes for themselves off the coast of Australia.

* This very little bugger, who is part of this clade of animals (and more specifically this one), is kind of cute in a Disney monsters way. They are the most heat-tolerant complex animals known to science after tardigrades (or water bears), which are able to survive temperatures over 150 °C. They were discovered in 1980 off the Galapagos Islands.


When they grow up, they look like this (the "fur" is a symbiotic species of bacteria):


* A photograph of the February 19, 2026 solar eclipse in Antarctica (not AI).


* Nature can be amazing (also not AI).

* In West Texas, ca. 4500 BCE, hunter-gatherers used non-returning boomerang sticks for small game and atlatls to throw their carefully crafted spears further for big game.

A cache of ancient weapons, more than 6,000 years old, has been uncovered in a remote rock shelter in West Texas, offering one of the clearest pictures yet of early life in North America.

The discovery was made at the San Esteban rock shelter in the Big Bend region, an area known for its dry climate and rugged desert landscape. That dryness turned out to be a gift to archaeologists. Items that would normally rot away (wood, leather bindings, plant fibers) remained intact for thousands of years. Inside the shelter, researchers found a carefully stored hunting kit dating to around 4,500 B.C., including wooden spear shafts wrapped in leather, stone projectile points, and parts of atlatls, the spear-throwing tools that dramatically increased a hunter's range and power.

An atlatl works like a lever, giving a thrown spear greater speed and force. With it, hunters could strike animals from distances that would otherwise be impossible with a simple hand throw. Tests and prior studies show these tools could send projectiles well over 100 feet with deadly accuracy. The craftsmanship seen in the newly uncovered pieces shows careful shaping, balance, and planning. These were not rough survival tools; they were refined hunting systems built by people who deeply understood their environment.

Researchers also identified curved wooden throwing weapons often described as straight or non-returning boomerangs. Unlike the returning boomerangs many people picture today, these were designed to fly straight and hit small game with strong impact. Their presence adds another layer to what appears to have been a well-organized toolkit, likely stored together for repeated use.

The San Esteban site has a long history of human occupation stretching back thousands of years. Findings from this latest excavation reinforce the idea that the Big Bend region was not a temporary stop for wandering groups but a place where people lived, adapted, and developed sophisticated survival strategies. The tools show planning, skill, and an ability to work with available materials in smart, efficient ways.

Archaeologists involved in the project say the discovery helps rewrite outdated ideas about early North American societies. These communities were not primitive in the way older textbooks sometimes suggested. They engineered effective hunting technology, understood animal behavior, and created tools built to last.

As research continues, scientists hope to learn more about how these weapons were used, how they were stored, and what they reveal about daily life 6,000 years ago. For now, the dry rock shelter in West Texas has delivered something rare: a direct, tangible connection to hunters who once stood in the same desert landscape, preparing their tools for the next expedition.

* Maize farmers in Peru’s Chincha Valley were fertilizing their crops with seabird poop as early as the year 1250 CE.

* According to this source:
Around 3,800 years ago, a magnitude-9.5 megaquake struck northern Chile's coast, creating the largest earthquake known in human history. The rupture extended roughly 620 miles along the fault line—longer than the devastating 1960 Valdivia earthquake—and generated tsunamis with waves reaching 66 feet that traveled 5,000 miles across the Pacific Ocean to New Zealand. Archaeologists discovered marine deposits, boulders, shells, and sea life displaced far inland in the Atacama Desert, along with toppled stone structures buried beneath tsunami sediment, all radiocarbon-dated to this single catastrophic event.

The disaster forced complete coastal abandonment. Communities that depended on the ocean for survival relocated inland, staying away from the coast for over 1,000 years—an extraordinary response that demonstrates the quake's devastating impact on human populations. Researchers now recognize this megathrust earthquake, caused when tectonic plates suddenly unlocked after building massive strain, as both the oldest discovered earthquake-tsunami disaster in the Southern Hemisphere and a critical warning for modern coastal populations across the Pacific.

* Every recorded earthquake worldwide, 2015 to 2025 (my source didn't cite a source).

 

Wednesday, February 18, 2026

Another Challenge To ΛCDM That Dispenses With Cosmological Inflation

I have no idea why it took almost two weeks from submission for this preprint to be released by arXiv.
Recent discoveries, e.g., by JWST and DESI, have elevated the level of tension with inflationary ΛCDM. For example, the empirical evidence now suggests that the standard model violates at least one of the energy conditions from general relativity, which were designed to ensure that systems have positive energy, attractive gravity and non-superluminal energy flows.  
In this Letter, we use a recently compiled Type Ia supernova sample to examine whether ΛCDM violates the energy conditions in the local Universe, and carry out model selection with its principal competitor, the Rh=ct universe. We derive model-independent constraints on the distance modulus based on the energy conditions and compare these with the Hubble diagram predicted by both ΛCDM and Rh=ct, using the Pantheon+ Type Ia supernova catalog. 
We find that ΛCDM violates the strong energy condition over the redshift range z ∈ (0, 2), whereas Rh=ct satisfies all four energy conditions. At the same time, Rh=ct is favored by these data over ΛCDM with a likelihood of ∼89.5% versus ∼10.5%. The Rh=ct model without inflation is strongly favored by the Type Ia supernova data over the current standard model, while simultaneously adhering to the general relativistic energy conditions at both high and low redshifts.
Namit Chandak, Fulvio Melia, Junjie Wei, "Model selection with the Pantheon+ Type Ia SN sample" arXiv:2602.15047 (February 5, 2026) (4 pages, accepted for publication in A&A Letters).
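
For readers who want to see what is actually being compared, here is a minimal sketch of the two Hubble diagrams (assuming the standard Rh=ct luminosity distance d_L = (c/H0)(1+z)ln(1+z) used in Melia's work, a flat ΛCDM with Ωm = 0.3, and H0 = 70 km/s/Mpc; illustrative only, not the paper's pipeline):

```python
import numpy as np

C = 299792.458   # speed of light, km/s
H0 = 70.0        # Hubble constant, km/s/Mpc (assumed for illustration)
OMEGA_M = 0.3    # assumed flat LCDM matter density

def dl_rh_ct(z):
    """Luminosity distance (Mpc) in the Rh=ct universe: (c/H0)(1+z)ln(1+z)."""
    return (C / H0) * (1 + z) * np.log(1 + z)

def dl_lcdm(z, n=10_000):
    """Luminosity distance (Mpc) in flat LCDM, by trapezoidal integration."""
    zs = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(OMEGA_M * (1 + zs) ** 3 + (1 - OMEGA_M))
    integral = np.sum((f[1:] + f[:-1]) * np.diff(zs)) / 2.0
    return (C / H0) * (1 + z) * integral

def mu(d_l):
    """Distance modulus for a luminosity distance in Mpc."""
    return 5 * np.log10(d_l) + 25

for z in (0.1, 0.5, 1.0, 2.0):
    print(f"z={z}: LCDM mu={mu(dl_lcdm(z)):.2f}, Rh=ct mu={mu(dl_rh_ct(z)):.2f}")
# The two Hubble diagrams differ by only ~0.05-0.25 mag over this range, which
# is why model selection needs a large, carefully calibrated supernova sample.
```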

Monday, February 16, 2026

Grab Bag Physics Articles

It may be possible to increase the precision with which the top quark mass is measured by a factor of ten, to an uncertainty of plus or minus 30 MeV, at a next generation electron-positron collider.

Big Bang Nucleosynthesis significantly constrains the possibility of heavy neutral leptons (i.e. basically heavy, sterile neutrinos), allowing the possible parameter space to be bounded from both above and below, potentially making it possible to rule out these hypothetical particles entirely, and in the meantime focusing the search for them.

The Big Picture In Astrophysics Research

It is more a statistical overview than an analytic description of the most important scientific advances, but it is still a notable effort. Certainly, there is no room to dispute that there has been a surge in astrophysics papers.
Over the past few years, Astrophysics has experienced an unprecedented increase in research output, as is evident from the year-over-year increase in the number of research papers put onto the arXiv. As a result, keeping up with progress happening outside our respective sub-fields can be exhausting. While it is impossible to be informed on every single aspect of every sub-field, this paper aims to be the next best thing. 
We present a summary of statistics for every paper uploaded onto the Astrophysics arXiv over the past year - 2025. We analyse a host of metadata ranging from simple metrics like the number of pages and the most used keywords, as well as deeper, more interesting statistics like the distribution of journals to which papers are submitted, the most used telescopes, the most studied astrophysical objects including GW, GRB, FRB events, exoplanets and much more. We also indexed the authors' affiliations to put into context the global distribution of research and collaboration. 
Combining this data with the citation information of each paper allows us to understand how influential different papers have been on the progress of the field this year. Overall, these statistics highlight the general current state of the field, the hot topics people are working on and the different research communities across the globe and how they function. 
We also delve into the costs involved in publications and what it means for the community. We hope that this is helpful for both students and professionals alike to adapt their current trajectories to better benefit the field.
Rommulus Francis Lewis, Hetansh Shah, Amruth Alfred, "Astrophysics Wrapped 2025: Year-in-Review of Every Astrophysics arXiv Paper from 2025" arXiv:2602.12303 (February 11, 2026).

Friday, February 13, 2026

X17 News

The viable parameter space for the hypothetical X17 particle (with a mass of about 17 MeV), proposed to explain some unexpected nuclear physics results, is very nearly null.
In recent years, the ATOMKI collaboration has performed a series of measurements of excited nuclei, observing a resonant excess of electron-positron pairs at large opening angles compared to the Standard Model prediction. 
The excess has been hypothesized to be due to the production of a new spin-1 or spin-0 particle, X17, with a mass of about 17 MeV. 
Recently, the PADME experiment has reported an excess in the e+e− cross section at center-of-mass energies near 17 MeV, perhaps further hinting at the existence of a new state. Studies of the spin-1 case have hitherto focused on either vector or axial-vector couplings to quarks and leptons, whereas UV theories more naturally produce both vector and axial-vector i.e. chiral couplings, analogous to the Standard Model weak interactions. 
We consider the ATOMKI anomalies in the context of an X with chiral couplings to quarks and explore the parameter space that can explain the ATOMKI anomalies, contrasting them with experimental constraints. 
We find that it is possible to accommodate the reported ATOMKI signals. However, the 99% CL region is in tension with null results from searches for atomic parity violation and direct searches for new low mass physics coupled to electrons. This tension is found to be driven by the magnitude of the reported excess in the transition of 12C(17.23), which drives the best-fit region towards excluded couplings.
Max H. Fieg, Toni Mäkelä, Tim M.P. Tait, Miša Toman, "The X17 with Chiral Couplings" arXiv:2602.11263 (February 11, 2026).
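
For intuition about why the ATOMKI signature is an excess at large opening angles, here is a standard two-body kinematics sketch (illustrative energies; the electron mass is neglected): the invariant mass of an e+e− pair grows with its opening angle, so a 17 MeV state shows up as a bump at anomalously wide angles.

```python
import math

def pair_invariant_mass(e_plus, e_minus, theta_deg):
    """Invariant mass (MeV) of an ultrarelativistic e+e- pair with total
    energies e_plus, e_minus (MeV) and opening angle theta (degrees),
    neglecting the electron mass: m^2 = 2 E+ E- (1 - cos theta)."""
    theta = math.radians(theta_deg)
    return math.sqrt(2.0 * e_plus * e_minus * (1.0 - math.cos(theta)))

# Illustrative: an ~18 MeV nuclear transition split into two ~9 MeV leptons.
for theta in (60, 100, 140, 160):
    print(theta, round(pair_invariant_mass(9.0, 9.0, theta), 1))
# A ~17 MeV invariant mass only appears at opening angles around ~140 degrees,
# which is the large-angle region where ATOMKI reports its excess.
```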

A Provocative Emergent Gravity Theory

This essay argues that gravity emerges from the running of physical constants with energy scale (called the Renormalization Group flow), and that this viewpoint can guide us to a viable theory of quantum gravity. It explains why this approach is not ruled out by "no go" theorems in the quantum gravity field, and why the existing paradigm for trying to develop a theory of quantum gravity may be futile.

It's only ten pages long and more readable than many papers on the topic, so give it a read.

In this essay and utilizing the holographic Renormalization Group (RG) flow, we demonstrate how the effective action of a non-gravitating quantum field theory in the ultraviolet (UV) develops an Einstein-Hilbert term in the infrared (IR). That is, gravity is induced by the RG flow. 
An inherent outcome of holography that plays a crucial role in our analysis is the RG flow of boundary conditions: the rigid Dirichlet conditions on the background metric in the UV become an admixture of Dirichlet and Neumann as we flow to the IR, thereby "unfreezing" the metric and transforming it from a non-dynamical background into a dynamical field. 
This mechanism, which is a conceptually new addition to the standard Wilsonian RG flow, also provides the mechanism to evade the Weinberg-Witten no-go theorem. 
Within the GR from RG picture outlined here, the search for a quantum theory of gravity by treating the metric as a fundamental field may be a hunt for a phantom -- akin to seeking the atomic structure of water by quantizing the equations of hydrodynamics.
M.M. Sheikh-Jabbari, V. Taghiloo, "GR from RG: Gravity Is Induced From Renormalization Group Flow In The Infrared" arXiv:2602.11806 (February 12, 2026) (Essay written for the Gravity Research Foundation 2026 Awards for Essays on Gravitation).

The S8 Tension

The parameter S(8) quantifies how homogeneous the entire Universe is in terms of matter density, with lower values corresponding to a more homogeneous universe. At higher values, matter is more concentrated in clumps and webs of high matter density, while the comparatively empty cosmic voids are bigger and deeper. At lower values, the amount of matter in a given volume of space doesn't vary as much across the universe.

S(8) appears to differ between the early universe and the late universe, even though in the paradigmatic ΛCDM model of cosmology, which has been battered by numerous contradictions with astronomy observations, this parameter should remain the same. This tension has also run parallel to the Hubble tension, causing many astrophysicists to suspect that they have a common cause.

The S8 tension between the early universe and the late universe, however, may be substantially a function of systematic measurement errors, rather than a real phenomenon, as a new review article observes.
The parameter S8 ≡ σ8(Ωm/0.3)^0.5 quantifies the amplitude of matter density fluctuations. A persistent discrepancy exists between early-universe CMB observations and late-universe probes. 
This review assesses the "S8 tension" against a new 2026 baseline: a unified "Combined CMB" framework incorporating Planck, ACT DR6, and SPT-3G. This combined analysis yields S8 = 0.836 +0.012/−0.013, providing a higher central value and reduced uncertainties compared to Planck alone. 
Compiling measurements from 2019-2026, we reveal a striking bifurcation: 
DES Year 6 results exhibit a statistically significant tension of 2.4σ-2.7σ (DES Y6), whereas KiDS Legacy results demonstrate statistical consistency at <1σ (Wright 2025). 
We examine systematic origins of this dichotomy, including photometric redshift calibration, intrinsic alignment modeling, and shear measurement pipelines. We further contextualize these findings with cluster counts (where eROSITA favors high values while SPT favors low), galaxy-galaxy lensing, and redshift-space distortions. The heterogeneous landscape suggests survey-specific systematic effects contribute substantially to observed discrepancies, though new physics beyond ΛCDM cannot be excluded.
Ioannis Pantos, Leandros Perivolaropoulos, "Status of the S8 Tension: A 2026 Review of Probe Discrepancies" arXiv:2602.12238 (February 12, 2026).
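
The abstract's definition is easy to evaluate; a minimal sketch, plugging in representative Planck-like values (σ8 ≈ 0.811, Ωm ≈ 0.315, assumed here for illustration):

```python
def s8(sigma8, omega_m):
    """S8 = sigma8 * (Omega_m / 0.3)^0.5, per the abstract's definition."""
    return sigma8 * (omega_m / 0.3) ** 0.5

print(round(s8(0.811, 0.315), 3))  # ~0.831, close to the combined-CMB 0.836
```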

Thursday, February 12, 2026

Wednesday, February 11, 2026

Experimental Bounds On Baryon And Lepton Number Non-Conservation

Baryon number (B) conservation means that the number of quarks minus the number of anti-quarks in any interaction remains constant. Lepton number (L) conservation means that the number of leptons (electrons, muons, tau leptons, and neutrinos) minus the number of anti-leptons in any interaction remains constant.
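
As a concrete bookkeeping sketch (standard particle assignments, not from the review): a process conserves B and L when the totals, with antiparticles counted with a minus sign, match on both sides.

```python
# Baryon number B and lepton number L assignments (antiparticles flip the sign).
B = {"p": 1.0, "n": 1.0, "e-": 0.0, "nu_e": 0.0}
L = {"p": 0.0, "n": 0.0, "e-": 1.0, "nu_e": 1.0}

def totals(particles):
    """Sum (B, L) over (name, sign) pairs, with sign = -1 for an antiparticle."""
    return (sum(s * B[p] for p, s in particles),
            sum(s * L[p] for p, s in particles))

# Neutron beta decay, n -> p + e- + anti-nu_e, conserves both B and L:
print(totals([("n", +1)]))                            # (1.0, 0.0)
print(totals([("p", +1), ("e-", +1), ("nu_e", -1)]))  # (1.0, 0.0)
# Proton decay to leptons would change both totals, which is why its
# non-observation is such a powerful constraint on new physics.
```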

The Standard Model separately conserves B and L in all interactions except sphaleron interactions, which have never been observed and are theoretically confined to extremely high energy scales and mass-energy densities, which the Large Hadron Collider (LHC), the most powerful particle collider of all time, cannot reach.

The conservation of baryon number and lepton number is established remarkably robustly in experiments.

Some of the main experimental searches that have not detected B and L non-conservation are the searches for neutrinoless double beta decay, the search for tree-level flavor changing neutral currents, and the search for proton decay. These non-detections have ruled out or tightly constrained many theories in physics including Majorana neutrino mass and most of the simpler grand unified theories (GUTs), such as SU(5).

Baryon number (B) conservation underlies the apparent stability of ordinary matter by forbidding the decay of nucleons, while lepton number (L) conservation plays a central role in the structure of lepton interactions and the possible origin of neutrino mass. 
In the Standard Model, B and L are accidental global symmetries rather than imposed fundamental principles. However, they are expected to be violated in many extensions of the theory, including frameworks of unification and processes in the early Universe. 
This review summarizes the status of experimental tests of B and L conservation and discusses them within a unified framework for interpreting current and future searches across different processes and experimental approaches, outlining historical and theoretical motivation, key physical processes, as well as their broader connections and complementarity to other searches.
Volodymyr Takhistov, "Experimental Tests of Baryon and Lepton Number Conservation" arXiv:2602.09097 (February 9, 2026).

A Catalog Of WISP Theories

A new preprint has a catalog (with references) of beyond the Standard Model theories that are "Weakly Interacting Slim Particle" (WISP) theories. Wikipedia explains the concept, which the abstract and introduction fail to do:

In particle physics, the acronym WISP refers to a largely hypothetical weakly interacting sub-eV particle, or weakly interacting slender particle, or weakly interacting slim particle – low-mass particles which rarely interact with conventional particles.

The term is used to generally categorize a type of dark matter candidate, and is essentially synonymous with axion-like particle (ALP). WISPs are generally hypothetical particles.

WISPs are the low-mass counterpart of weakly interacting massive particles (WIMPs).

The goal of the project is as follows: 

The search for physics beyond the Standard Model (SM) has led to the proposal of a vast landscape of theoretical frameworks. Among them, the family of Weakly Interacting Slim Particles (WISPs) has emerged as a particularly rich and versatile class of candidates, capable of addressing open questions in cosmology, astrophysics and particle physics. 

These particles, ranging from axions and axion-like particles to hidden photons, scalars, pseudoscalars, sterile neutrinos and spin-2 particles, illustrate the growing diversity of ideas within the field.  

The WISPedia is motivated by the need for a unified and systematic reference that organises this rapidly expanding model space. While numerous reviews exist on specific Weakly Interacting Slim Particle (WISP) candidates or experimental searches, the goal of this work is different: to provide a concise, model-oriented encyclopedia that outlines the essential ingredients of each framework– its particle content, interactions and phenomenological role, while pointing the reader toward the original literature and key complementary resources. Rather than serving as an exhaustive review, the WISPedia aims to serve as a quick, structured gateway into the theory landscape of light, weakly coupled particles. It also provides some information on bounds for each of them in a succinct way.

Its top-level categorization is by the spin (a.k.a. intrinsic angular momentum, a.k.a. "J") and parity of each kind of Beyond the Standard Model (BSM) particle. 
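
Based on the particle list in the quoted goal statement, the top-level structure looks roughly like this (a sketch of the organization, not the paper's actual table of contents):

```python
# Top-level WISPedia-style organization by spin, per the candidates named in
# the paper's goal statement (illustrative grouping only).
wisp_catalog = {
    "spin 0": ["axions", "axion-like particles (ALPs)", "scalars", "pseudoscalars"],
    "spin 1/2": ["sterile neutrinos"],
    "spin 1": ["hidden photons"],
    "spin 2": ["spin-2 particles"],
}
for spin, models in wisp_catalog.items():
    print(f"{spin}: {', '.join(models)}")
```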


It also has a very cute table of contents that summarizes some of the high points of each model, using emojis to annotate it. This cute legend is what inspired me to make this post, even though, like any catalog of BSM theories, the vast majority of theories discussed don't reflect reality and are "garbage theories" (not in the sense of being technically unsound, but in the sense of being ill-motivated and improbable).

The list of models currently in the catalog, which is envisioned as a Wikipedia-like or Particle Data Group-like encyclopedia of BSM particle theories that fit the (ill-defined) WISP paradigm, is as follows: