Thursday, August 28, 2025

Toponium Discovered

Toponium is a hadron which is the bound state of a valence top quark and a valence top anti-quark. Oversimplified presentations often state that top quarks don't form hadrons because they decay to bottom quarks extremely rapidly after they are created, leaving no time to form a hadron. And, the vast majority of the time, this is true. But the lifetime of a top quark is only an average lifetime; sometimes it decays faster and sometimes it decays slower. In the highly improbable case that a top quark and a top anti-quark are created at the same time and both last much longer than the average lifetime before decaying, they can form a hadron called toponium, and it is fairly elementary to determine how likely this is to happen at a given energy scale.
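The survival-probability point above can be sketched in a couple of lines: top quark decay is exponential, so the chance that both members of a top quark pair outlive some multiple of the mean lifetime falls off as exp(−2n). The lifetime multiples below are illustrative choices, not figures from the papers.

```python
import math

# Probability that both quarks of a t-tbar pair each live at least
# n mean lifetimes, assuming independent exponential decay.
def both_survive(n_lifetimes):
    return math.exp(-n_lifetimes) ** 2

print(both_survive(1))  # ~0.135: both outlive one mean lifetime
print(both_survive(3))  # ~0.0025: both outlive three mean lifetimes
```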

In the paper below, the CMS collaboration at the Large Hadron Collider (LHC) claims to have discovered a resonance which appears to be ground state toponium, which has a highly distinctive signature in a collider, because toponium (at more than 344 GeV) is profoundly more massive than any other meson. The background that has to be distinguished from the signal is therefore pretty modest.

Another paper, whose preprint was released today, in the course of considering the possibility of a baryon with three top quarks (a hadron that is profoundly difficult to form, since three top quarks or three antitop quarks would need to be created within about 3 x 10^-25 seconds in essentially the same place), asserts that the ATLAS collaboration at the LHC has also discovered a toponium resonance, although the citation in the preprint does not include any arXiv or journal reference. The citation is to: 

ATLAS Collaboration, “Observation of a cross-section enhancement near the t¯t production threshold in √s =13 TeV pp collisions with the ATLAS detector.” 

Presumably the authors have received advance word of this paper and plan to update the reference in their own paper when it is released.

This paper overstates what the papers actually claim (which is that the resonance is consistent with toponium, but not that it definitely is toponium), but only modestly so.

Discovering this vanishingly rare and incredibly short lived meson, which is the heaviest possible meson (with a mass about 57% greater than that of a uranium-235 atom, confined to a space on the order of 100 times smaller than a proton in radius), is a remarkable accomplishment in and of itself. With more detections, it could also make it possible to measure the top quark mass about ten times as precisely as current measurements (i.e. ± 0.3 GeV now v. ± 0.03 GeV with this improvement).
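A quick back-of-the-envelope check of the mass comparison, assuming the ~344 GeV threshold mass quoted above and the standard atomic mass of uranium-235:

```python
GEV_PER_U = 0.931494  # GeV per atomic mass unit

m_toponium = 344.0               # GeV/c^2, threshold value quoted above
m_u235 = 235.044 * GEV_PER_U     # ~219 GeV/c^2 for a uranium-235 atom

print(m_toponium / m_u235)  # ~1.57: toponium is roughly 57% heavier
```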

A search for resonances in top quark pair (tt¯) production in final states with two charged leptons and multiple jets is presented, based on proton-proton collision data collected by the CMS experiment at the CERN LHC at √s = 13 TeV, corresponding to 138 fb−1. The analysis explores the invariant mass of the tt¯ system and two angular observables that provide direct access to the correlation of top quark and antiquark spins. A significant excess of events is observed near the kinematic tt¯ threshold compared to the nonresonant production predicted by fixed-order perturbative quantum chromodynamics (pQCD). The observed enhancement is consistent with the production of a color-singlet pseudoscalar (1S[1]0) quasi-bound toponium state, as predicted by nonrelativistic quantum chromodynamics. Using a simplified model for 1S[1]0 toponium, the cross section of the excess above the pQCD prediction is measured to be 8.8 +1.2/−1.4 pb.
CMS Collaboration, "Observation of a pseudoscalar excess at the top quark pair production threshold" arXiv:2503.22382v2 (March 28, 2025, published version from Rep. Prog. Phys. 88 (2025) 087801 released on August 23, 2025).

The introduction explains that:
The discovery of the top quark in 1995 at the Fermilab Tevatron collider was a major milestone in particle physics. Uniquely among quarks, the top quark’s lifetime is shorter than the hadronization timescale. This causes the spin of the top quark to be transferred directly to its decay products, enabling precise measurements of spin properties via angular distributions. While the individual polarizations of the top quark and antiquark (t and t¯) are small when produced via the strong interaction, their spins are correlated in the standard model (SM), which was experimentally confirmed at both the Tevatron and the LHC. 
Although tt pairs do not form stable bound states given the short lifetime of the top quark, calculations in nonrelativistic quantum chromodynamics (NRQCD) predict bound state enhancements at the tt threshold. Since this effect is present only when the tt pairs are in the color singlet configuration, the dominant contribution at the LHC is from the gluon-gluon initial state, leading to the production of the 1S[1]0 “toponium” quasi-bound state ηt. 
Contributions from other spin states are much smaller at the LHC; for instance, the 3P[1]0 state χt is suppressed by additional powers of the top quark velocity, which is nearly zero at the threshold. The color octet configuration, on the other hand, is suppressed below the tt threshold because of a repulsive interaction between the top quarks, and has a steeply rising cross section as a function of the tt invariant mass mtt above the threshold. The presence of such an ηt state would therefore manifest itself as an enhancement in the number of events near the production threshold with distinctive patterns in tt spin correlation observables caused by its pseudoscalar nature. However, due to the possibility of initial- and final-state radiation, the color configurations of the tt pairs are not necessarily the same as the partons in the initial state, making theoretical predictions of toponium production challenging. 
This Letter reports the observation of a threshold enhancement in tt production consistent with pseudoscalar toponium. The analyzed proton-proton (pp) collision data at √s = 13 TeV were recorded by the CMS experiment at the CERN LHC in 2016–2018, corresponding to an integrated luminosity of 138 fb−1. The analysis, whose tabulated results are provided in the HEPData record, is conducted within the context of a search for neutral spin-0 bosons produced through gluon-gluon fusion and decaying to tt. Here, we focus on the threshold production of a composite CP-odd pseudoscalar ηt and a CP-even scalar χt as signal hypotheses, where CP refers to the charge-parity symmetry. These represent the simplest hypotheses that can explain the observation, since they arise naturally within NRQCD. However, the available experimental data does not exclude alternative explanations like additional pseudoscalar bosons, whose existence is predicted by several theoretical models beyond the SM. This possibility is explored in Ref. [21], the companion paper to this publication, where the same data is interpreted in terms of limits on additional scalar and pseudoscalar bosons over a large mass range. 
The analysis considers final states with two charged leptons (electrons and/or muons) and at least two jets, referred to as the ℓℓ channel. A similar analysis was previously performed by the CMS experiment using the data sample collected in 2016 and considering the ℓj channel (i.e., final states with one charged lepton and at least four jets) in addition to the ℓℓ channel. 
In that analysis, a moderate pseudoscalar-like deviation with a mass at the lowest investigated value of 400 GeV was found. Compared to that superseded analysis, we consider only the ℓℓ channel here, but use more than three times the data, consider resonances with masses below the tt production threshold, and add a second angular observable that provides direct access to tt spin correlation. 
Similar searches have also been conducted by the ATLAS Collaboration using data at √s = 8 [23] and 13 TeV [24]. The results presented in Ref. [24] use the data sample collected in 2015–2018 and combine the ℓℓ and ℓj channels, with the latter being predominant. The analysis in the ℓℓ channel differs from our approach in that it investigates the invariant mass mbbℓℓ of the bbℓ+ℓ− system rather than mtt, and it utilizes an angular variable whose sensitivity to tt spin correlation is significantly diluted by kinematic effects. 
We have verified that incorporating these differences into our analysis would not result in a significant enhancement at the threshold. Consequently, the conclusions of Ref. [24] are not directly comparable to the ones reported in this paper, nor do they refute or confirm the findings reported herein. 
Moreover, our findings are consistent with enhancements at the threshold in previous tt differential cross section measurements reported by ATLAS and CMS. Similarly, the mild tension between the observed and expected measurement of spin correlation in the tt threshold region, which has been reported by both ATLAS and CMS as part of their studies of quantum entanglement, has been reproduced by this analysis. 
. . . 

[22] CMS Collaboration, “Search for heavy Higgs bosons decaying to a top quark pair in proton-proton collisions at √s = 13 TeV”, JHEP 04 (2020) 171, doi:10.1007/JHEP04(2020)171, arXiv:1908.01115. 

[23] ATLAS Collaboration, “Search for heavy Higgs bosons A/H decaying to a top quark pair in pp collisions at √s = 8 TeV with the ATLAS detector”, Phys. Rev. Lett. 119 (2017) 191803, doi:10.1103/PhysRevLett.119.191803, arXiv:1707.06025. 

[24] ATLAS Collaboration, “Search for heavy neutral Higgs bosons decaying into a top quark pair in 140 fb−1 of proton-proton collision data at √s = 13 TeV with the ATLAS detector”, JHEP 08 (2024) 013, doi:10.1007/JHEP08(2024)013, arXiv:2404.18986. 

The discovery is basically a side effect of LHC searches for a neutral heavy Higgs boson. Preprints with more analysis can be found here and here and here and here and here and here and here and here and here and here and here.

This is a substance with far greater mass per volume than a neutron star or an atomic nucleus (it is roughly 344 million times as dense). If you define a black hole's density as its mass divided by the volume enclosed by its event horizon, toponium even has more mass per volume than any stellar or greater mass black hole, although some primordial black holes, if they exist, would be denser.
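As a rough sanity check of the density claim, here is a sketch under the stated size assumption: a radius about 100 times smaller than a proton's ~0.84 fm charge radius. The radius is an illustrative assumption, not a measurement.

```python
import math

GEV_TO_KG = 1.783e-27  # 1 GeV/c^2 in kilograms

m = 344 * GEV_TO_KG           # ~6.1e-25 kg
r = 0.84e-15 / 100            # m, assumed radius (1/100 of a proton's)
volume = (4 / 3) * math.pi * r ** 3
density = m / volume          # kg/m^3

print(density)  # ~2e26 kg/m^3, versus ~5e17 kg/m^3 for a neutron star core
```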

The Schwarzschild radius of a toponium-mass object is about 9 x 10^-52 meters, which is about 10^34 times smaller than the estimated radius of toponium, and about 10^36 times smaller than the radius of a proton or neutron (and far below the Planck length). So, there is no risk of the LHC or a future collider creating a primordial black hole when this hadron is formed.
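The Schwarzschild radius r_s = 2GM/c^2 for a toponium-mass object can be checked directly (CODATA constants, 344 GeV mass from above):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
GEV_TO_KG = 1.783e-27  # 1 GeV/c^2 in kilograms

m_toponium = 344 * GEV_TO_KG       # ~6.1e-25 kg
r_s = 2 * G * m_toponium / c**2    # Schwarzschild radius

print(r_s)  # ~9e-52 m, far below the Planck length (~1.6e-35 m)
```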

Some Linguistic Hypotheses

* I think that it is very likely that the Korean language family and the Japanese language family are related, even if it is challenging to find "smoking gun" evidence of it today. Japanese may also have some Manchurian linguistic influence. The broader Altaic hypothesis has less strong support, but there may be something to it.

* I think that it is very likely that the Dravidian language family was influenced by an African language family, with the vectors of that transmission probably being people from the Horn of Africa who also brought some key African Sahel domesticates to Southern India around the time of the South Indian Neolithic ca. 2500 BCE. 

* The Harappan language is almost surely not Indo-European, not Dravidian, and not Munda as a language family. It could conceivably have some connection to the language isolates in the general region known as the Indo-Pacific languages, or it might not. It is probably the main substrate influence on Sanskrit and, through Sanskrit, on the other Indo-European languages of India. The script associated with it was probably a proto-script, like a set of emojis or trademarks, and not a full written language. The same is true of the early Vinca script used in the Neolithic Balkans.

* I think that it is very likely that Indo-Aryans (Sanskrit speaking derived people) conquered almost all of India sometime in pre-history, imposing their language and the Hindu religion (although some of its tenets, like vegetarianism, were not adopted as faithfully), except for one small last stronghold, more or less in the vicinity of the modern city of Visakhapatnam, which then reconquered territory from the Indo-Aryans, restoring its dialect of the Dravidian language but not effectively displacing the Hindu religion that the Indo-Aryan conquerors brought with them. This is why the Dravidian language family seems younger than it really is; its historical linguistic diversity was wiped out at this point, with most of its variants extinguished at this time. As I noted in a post at Wash Park Prophet:

[A]reas that are linguistically Indo-Aryan are more likely to be vegetarian than areas that are linguistically Dravidian, Munda or Tibeto-Burmese. Meat eating may reflect a thinner Indo-Aryan influence even in places that experienced a language shift to Indo-Aryan languages. Vegetarianism may alternatively reflect a stronger influence from the pre-Indo-Aryan Harappan society.

* Brahui, a Dravidian language pocket found far from the geographic range of the other Dravidian languages, probably was not within the historic range of the Dravidian languages. Instead, it is probably a result of language shift through elite dominance around 1000 CE or so, by some foreign Dravidian warlord or king.

* Sometime around the Copper Age (a.k.a. the Eneolithic) in Anatolia, people from the eastern highlands brought the Hattic language (which preceded the Hittite language) to Anatolia. It is related to Kassite, other Iranian highland languages, and more remotely to most of the Caucasian languages (which are related to each other even if the connections are hard to establish), to Sumerian, and probably to Elamite. It is also probably related to Minoan. One of the litmus tests of all of these languages is that they were ergative. 

Hattic probably replaced the Neolithic language(s) of Anatolia, including the Western Anatolian Neolithic language which spread across Europe in two main branches: the Linear Pottery culture (LBK) through the rivers of the north, and the Cardial Pottery culture more or less along the Mediterranean coast. Both were very different from Hattic. The Western Anatolian Neolithic languages were the substrate languages for the Indo-European languages in most of Europe, but not in Anatolia, where Hattic was the substrate. Hattic substrate influence is the reason that Anatolian Indo-European languages like Hittite seem so diverged from other Indo-European languages: Hattic society was much healthier when the Indo-Europeans arrived than the Neolithic societies in a state of collapse that the Indo-Europeans conquered elsewhere. The most basal branch of Indo-European was probably that spoken in the Tarim Basin, which was on a frontier with almost no substrate influence.

* It is very likely that the languages of the European hunter-gatherers are completely lost. The Uralic languages arrived much later. In the Americas, Japan, and Australia, we know that indigenous hunter-gatherer language substrates had very little impact on the languages of the food producing conquerors, even when indigenous peoples made a large genetic contribution to the people speaking the food producer languages.

* Basque, therefore, is very unlikely to be an indigenous European hunter-gatherer language. It could be the last survivor of the language family of the first farmers of Europe, rooted in Western Anatolia from around 6000 BCE to 4000 BCE, or it could reflect a very distant outpost of a Copper Age language, probably in the same language family as Hattic and Minoan. I probably lean towards the Neolithic hypothesis, since the corpora of Hattic (which remained a written liturgical language for a thousand years after the Hittites took over) and of Basque are both large enough that linguists would have established a connection by now if one existed, even though both are ergative languages; on the other hand, the rarity of ergative languages outside the West Asian highlands, ancient Mesopotamia, and places to the east of that favors a Copper Age origin. The Paleo-Hispanic languages may all have been a coherent group, and metal-rich Tartessos in Southwest Iberia is a strong candidate for the source of Plato's Atlantis story. The Tartessian culture was born around the 9th century BCE as a result of hybridization between Phoenician settlers and the local inhabitants; scholars refer to it as "a hybrid archaeological culture".

* We know that Etruscan, Raetic, and Lemnian (together called the Tyrsenian languages, an areal designation, since while the connection between Etruscan and Raetic is pretty solid, the family connection to Lemnian is not; Camunic may also belong, although it could instead be related to Celtic) are also not Indo-European languages and pre-date Indo-European:

  • Etruscan: 13,000 inscriptions, the overwhelming majority of which have been found in Italy; the oldest Etruscan inscription dates back to the 8th century BC, and the most recent one is dated to the 1st century AD.
  • Raetic: 300 inscriptions, the overwhelming majority of which have been found in the Central Alps; the oldest Raetic inscription dates back to the 6th century BC.
  • Lemnian: 2 inscriptions plus a small number of extremely fragmentary inscriptions; the oldest Lemnian inscription dates back to the late 6th century BC.
  • Camunic: may be related to Raetic; about 170 inscriptions found in the Central Alps; the oldest Camunic inscriptions date back to the 5th century BC.

The ergative substrate influence probably explains the presence of ergativity in the Indo-European Pashto and Kurdish languages and in the Indo-Aryan languages, a trait shared with Basque and absent from most Indo-European languages. It suggests that Harappan was probably ergative. The Tyrsenian languages' apparently non-ergative character suggests that they are not part of the same language family as Basque, and tends to favor a Copper Age origin for Basque rather than a Neolithic one.

But we haven't deciphered them very well, since the corpus of those writings has mostly been lost, and what we have left is mostly monolingual and short. We can't even say with complete confidence that they were all in the same language family, although ancient Rhaetic, spoken to the north of Etruscan (and not linguistically related to the similarly named modern Indo-European minority language of Switzerland), was probably in the same language family as Etruscan. Somewhat conflicting historical evidence suggests that Lemnians were migrants from the Alps and/or northern Italy, probably during the Greek dark ages after Bronze Age collapse had run its course.

We also don't know much about the substrate language that influenced Mycenaean Greek.

Wednesday, August 27, 2025

Another Mirror Cosmology Paper

One of the cleaner solutions in physics to the idea of the Big Bang in cosmology is one in which the universe extends infinitely forward and backward in time from the Big Bang. 

This could also explain the matter-antimatter asymmetry of the Universe: an antimatter dominated universe extends backward in time (in our coordinate system) from the Big Bang, with the arrow of time running in the opposite direction due to entropy, making the Big Bang the boundary between our matter dominated universe and the antimatter dominated universe on its other side.

This paper explores such a cosmology model from a mathematical physics perspective, in a way which embraces not only mirror cosmologies of the kind that I have suggested above, but also "bounce" cosmologies that do not posit an antimatter dominated and time reversed universe before the Big Bang. 

There could be quantum entanglement connections between the two sides of the Big Bang, but as the body text of the paper explains (at page 19), "quantum-entanglement correlations cannot be used to send a message from one world to the other: the arguments are essentially the same as those against faster-than-light communication from quantum entanglement."

In this model, the paper suggests that "the maximal energy density and curvature values are very large but finite (the typical energy scale may be the so-called Planck scale given by a combination of sqrt ((h-bar x c^5)/G) ≈ 1.22 × 10^19 GeV ≈ 1.42 × 10^32 Kelvin)."
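The quoted Planck-scale figure is straightforward to reproduce from the fundamental constants:

```python
import math

# Planck energy sqrt(hbar * c^5 / G), converted to GeV and Kelvin.
hbar = 1.0546e-34      # J s
c = 2.998e8            # m/s
G = 6.674e-11          # m^3 kg^-1 s^-2
k_B = 1.381e-23        # J/K
J_PER_GEV = 1.602e-10  # joules per GeV

E_planck = math.sqrt(hbar * c**5 / G)  # joules

print(E_planck / J_PER_GEV)  # ~1.22e19 GeV
print(E_planck / k_B)        # ~1.42e32 K
```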

We review the suggestion that it is possible to eliminate the Big Bang curvature singularity of the Friedmann cosmological solution by considering a particular type of degenerate spacetime metric. Specifically, we take the 4-dimensional spacetime metric to have a spacelike 3-dimensional defect with a vanishing determinant of the metric. 
This new solution suggests the existence of another "side" of the Big Bang (perhaps a more appropriate description than "pre-Big-Bang" phase used in our original paper). The corresponding new solution for defect wormholes is also briefly discussed.
F.R. Klinkhamer, "Big Bang as spacetime defect" arXiv:2412.03538 (Submitted December 4, 2024, last revised August 21, 2025, published version with expanded references) (31 pages).

Friday, August 22, 2025

An Electroweak Centric Model For Standard Model Mass Generation

The basic intuitive gist of the proposal of this paper is one that I've entertained myself, although I don't have the theoretical physics chops to spell it out at this level of formality and technical detail (and I'm really not qualified to evaluate the merits to this proposal at that level). I've seen one or two other papers (not recent ones) that take a similar approach.

The ratio of the electron mass to the lightest neutrino mass eigenstate is roughly the same as the ratio of the electromagnetic coupling constant to the weak force coupling constant, and both masses are similar to what would be expected from the self-interactions of electrons and neutrinos with themselves via the electromagnetic and weak forces. Electrons interact via both of these forces, while neutrinos interact only via the weak force.

The down quark mass is about twice the up quark mass, even though the magnitude of the up quark's electromagnetic charge (+2/3) is twice that of the down quark's (−1/3). All quarks have the same magnitude of strong force color charge, and all of the fundamental fermions of the Standard Model have the same magnitude of weak force charge. Quarks interact via the strong, electromagnetic, and weak forces, so their self-interactions might be expected to be larger than those of the electron, which doesn't interact via the strong force.

Figuring out how this can work in concert with the three fundamental fermion generations is particularly challenging. I'm inclined to associate it with a W boson mediated dynamic process that sets the relative values of the Higgs Yukawas. This paper doesn't attempt to look beyond the first generation of fundamental fermions in implementing its model.

I'm not thrilled with the "leptoquark" component of this theory, but the fact that it gives rise to neutrino mass without either Majorana mass or a see-saw mechanism is very encouraging.

In the Standard Model of elementary particles the fermions are assumed to be intrinsically massless. Here we propose a new theoretical idea of fermion mass generation (other than by the Higgs mechanism) through the coupling with the vector gauge fields of the unified SU(2) ⊗ SU(4) gauge symmetry, especially with the Z boson of the weak interaction that affects all elementary fermions. The resulting small masses are suggested to be proportional to the self-energy of the Z field as described by a Yukawa potential. Thereby the electrically neutral neutrino just gets a tiny mass through its Z-field coupling. In contrast, the electrically charged electron and quarks can become more massive by the inertia induced through the Coulomb energy of the electrostatic fields surrounding them in their rest frames.
Eckart Marsch, Yasuhito Narita, "On the Lagrangian and fermion mass of the unified SU(2) ⊗ SU(4) gauge field theory" arXiv:2508.15332 (August 21, 2025) (13 pages).

The introduction of the paper is as follows:
According to the common wisdom of the Standard Model (SM) of elementary particle physics, the fermions are intrinsically massless, but they gain their masses via phase transition from the vacuum of the Higgs field. However, this notion introduces many free parameters (the Yukawa coupling constants) that are to be determined through measurements. These have been made at the LHC only for some members of the second and third family of heavy leptons and quarks, yet not for the important first family of fermions, of which the stable and long-lived hadrons form according to the gluon forces of quantum chromodynamics (QCD). 
Here we just consider the first fermion family of the SM and propose a new idea of the fermion mass generation. The key assumption is that their masses may be equal to the relevant gauge-field energy in the rest frames of these charged fermions carrying electroweak or strong charges. Their masses are suggested to originate from jointly breaking the chiral SU(2) symmetry combined with the hadronic isospin SU(4) symmetry, as described in the recent model by Marsch and Narita, following early ideas of Pati and Salam and their own work. Unlike in the SM, in their model both symmetries are considered as being unified to yield the SU(2) ⊗ SU(4) symmetry, which then is broken by the same procedures that are applied successfully in the electroweak sector of the SM. 
The outline of the paper is as follows. We briefly discuss the extended Dirac equation and its Lagrangian including the Higgs, gauge-field and fermion sectors. Especially, the covariant derivative is discussed and the various gauge-field interactions are described. Also the different charge operators (weak and strong) are presented. Then the CPT theorem is derived for the extended Dirac equation including the gauge field terms. The remainder of the paper addresses the idea of mass generation from gauge field energy in the fermion rest frame. Finally we present the conclusions.

The paper's conclusion states:

In this letter, we have considered a new intuitive idea of how the elementary fermions might acquire their finite empirical masses. We obtained diagonal mass matrices as Kronecker products within the framework of the unified gauge-field model of Marsch and Narita. The mass matrices still commute with the five Gamma matrices of the extended free Dirac equation without gauge fields. However, when including them the chiral SU(2) and the hadronic SU(4) symmetries both are broken by the mass term. Thus, the breaking of the initial unified SU(2) ⊗ SU(4) symmetry by the Higgs-like mechanism gives the fermions their different charges as well as specific masses. 

In the SM the initial common mass m is assumed to be zero, and then the Dirac spinor splits into two independent two-component Weyl spinors. But when the gauge fields are switched on, their self-energy gives inertia and thus mass to the fermions in their rest frame. The breaking of gauge symmetry yields the electromagnetic massless photon field E(µ) and the weak boson field Z(µ), which becomes very massive via the Higgs mechanism. It also induces inertia for all eight fermions, yet the resulting masses are rather small owing to the very small Compton wavelength of the Z boson. The neutrino and electron can acquire masses in this way, which yet differ by six orders of magnitude. The hadronic charge of the leptons is zero, and thus they decouple entirely from QCD. It is responsible by confinement through the gluons for the mass of the various resulting composite fermions, in particular for the proton mass. 

The masses of the light fermions are thus argued to originate physically from the major self-energy of the electrostatic field as well as from the minor self-energy of the Z-boson field, which is proportional to the Higgs vacuum that determines the Z-boson mass. It is clear, however, that the masses of heavy composite hadrons, in particular of the proton and neutron, involve dominant contributions from the energy of the binding gluon fields, as the QCD lattice simulations have clearly shown. 

In conclusion, the extended Dirac equation contains a physically well motivated mass term. It remedies the shortcoming of the SM that assumes massless fermions at the outset, whereas the empirical reality indicates that they are all massive. Therefore, the neutrino cannot be a Majorana particle, as it has often been suggested in the literature. This notion is in obvious contradiction to the observed neutrino oscillation, implying clearly finite masses. Chiral symmetry is broken in our theory, yet the parity remains intact. 

Finally, we like to mention the masses of the heavy gauge bosons involved in the above covariant derivative and related matrix. In the reference of the particle data group we find in units of GeV/c^2 the values: M(Z) = 91.2 and M(W) = 80.4. For the “leptoquark" boson V we obtain M(V) = 35.4. For the sum of these masses we find the following surprising results: M(V) + M(Z) = 126.6, which equals within less than a one-percent margin the measured mass of the Higgs boson, M(H) = 125.3. Also, M(W) + M(Z) = 171.6, which again equals within less than a one-percent margin the measured mass of the top quark, M(T) = 172.7. Whether this is just a fortuitous coincidence or indicates a physical connection has to remain open. 
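The quoted mass coincidences are easy to verify from the numbers in the passage (all in GeV/c^2):

```python
# Boson and quark masses as quoted in the paper's conclusion, in GeV/c^2.
M_Z, M_W, M_V = 91.2, 80.4, 35.4
M_H, M_T = 125.3, 172.7

sum_vz = M_V + M_Z  # 126.6
sum_wz = M_W + M_Z  # 171.6

print(abs(sum_vz - M_H) / M_H * 100)  # ~1.0% off the Higgs boson mass
print(abs(sum_wz - M_T) / M_T * 100)  # ~0.6% off the top quark mass
```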

Tuesday, August 19, 2025

A Wide Binary Paper Supporting Non-Newtonian Gravity

Efforts to determine whether wide binary star systems show a change in gravitational acceleration at Newtonian accelerations below Milgrom's constant, as predicted by MOND (but not generically by all modified gravity explanations of dark matter phenomena), have had mixed and contradictory results. The latest paper on the subject, a small pilot study using new methods, supports the existence of a non-Newtonian gravitational enhancement for wide binaries whose gravitational pull on each other is below Milgrom's constant.

Wide binary tests exclude almost all dark matter particle theories, if they show non-Newtonian gravitational enhancements in weak fields, and also discriminate meaningfully between different gravity based approaches to explain dark matter phenomena if the data allows for sufficiently precise conclusions. 

But competing considerations of data quality (e.g., it is easy to mistake a system with more than two stars for a wide binary system if one of the stars is faint or the angle of observation is poor) and data quantity (to give the observations statistical power) make this astronomical test of weak field gravity challenging to extract convincing results from. 
When 3D relative displacement r and velocity v between the pair in a gravitationally-bound system are precisely measured, the six measured quantities at one phase can allow elliptical orbit solutions at a given gravitational parameter G. Due to degeneracies between orbital-geometric parameters and G, individual Bayesian inferences and their statistical consolidation are needed to infer G as recently suggested by a Bayesian 3D modeling algorithm. 
Here I present a fully general Bayesian algorithm suitable for wide binaries with two (almost) exact sky-projected relative positions (as in the Gaia data release 3) and the other four sufficiently precise quantities. Wide binaries meeting the requirements of the general algorithm to allow for its full potential are rare at present, largely because the measurement uncertainty of the line-of-sight (radial) separation is usually larger than the true separation. 
As a pilot study, the algorithm is applied to 32 Gaia binaries for which precise HARPS radial velocities are available. The value of Γ ≡ log10√(G/G(N)) (where G(N) is Newton's constant) is −0.002 +0.012/−0.018, supporting Newton, for a combination of 24 binaries with Newtonian acceleration g(N) > 10^−9 m s^−2, while it is Γ = 0.063 +0.058/−0.047 or 0.134 +0.050/−0.040 for 7 or 8 binaries with g(N) < 10^−9 m s^−2 (depending on one system), showing tension with Newton. The Newtonian "outlier" is at the boundary set by the Newtonian escape velocity, but can be consistent with modified gravity. 
The pilot study demonstrates the potential of the algorithm in measuring gravity at low acceleration with future samples of wide binaries.
Kyu-Hyun Chae, "Bayesian Inference of Gravity through Realistic 3D Modeling of Wide Binary Orbits: General Algorithm and a Pilot Study with HARPS Radial Velocities" arXiv:2508.11996 (August 16, 2025).
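To translate the abstract's Γ values into something more intuitive: since Γ ≡ log₁₀√(G/G(N)) = ½·log₁₀(G/G(N)), the effective gravitational constant ratio is G/G(N) = 10^(2Γ). A quick conversion of the quoted central values (my own arithmetic, not from the paper):

```python
def g_ratio(gamma):
    """Convert Chae's Gamma = log10(sqrt(G/G_N)) into the ratio G/G_N."""
    return 10 ** (2 * gamma)

# Central values quoted in the abstract:
print(g_ratio(-0.002))  # high-acceleration sample of 24 binaries: ~0.99, i.e. Newtonian
print(g_ratio(0.063))   # low-acceleration sample of 7 binaries: ~1.34
print(g_ratio(0.134))   # low-acceleration sample of 8 binaries: ~1.85
```

So the low-acceleration central values correspond to an effective gravitational strength roughly 34% to 85% above Newtonian, although the quoted uncertainties are large.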

Monday, August 18, 2025

A Potentially Good New World Population Genetics Study Bumbled

A paper was published last year about the population genetics and genetic history of the Blackfoot people. It compared a modest sample of Blackfoot-affiliated genomes with other New World and Old World genomes. A small sample size isn't a big problem for a paleo-genetic study, because each individual's DNA has so many data points, and generations of intermarriage in a fairly closed gene pool make each individual highly representative of the population as a whole. But while the study does lots of things right, it makes a critical error in its analysis that seriously detracts from the reliability of its conclusions. This error arises from a weak review of the literature and deficient peer review.

The big problem with the paper is that it makes flawed assumptions about the peopling of the Americas. It relies on a model in which all Native Americans fit into two groups: Native North Americans (ANC-B) and Central and Southern Americans (ANC-A), and tries to determine where the Blackfoot people fit into that model.

The trouble is that the established paradigm is more complicated. While ANC-A is a valid and fairly unified group that descends from Pacific coast route peoples in a primary founding population wave perhaps 14,000 years ago, Native Americans in North America have a more complex ancestry.

North American Native Americans have the lineages found in ANC-A (which results from a serial founder effect) and probably at least two other clades, formed close in time to the initial founding era, that spread into different parts of North America. 

Then, around 3500-2400 BCE, the ancestors of the Na-Dene people migrated to Alaska from Northeast Asia and admixed with pre-existing populations. Their languages have remote but traceable connections to the language of the Paleo-Siberian Ket people, whose Yeniseian language family is named after the Yenisei River in central Siberia. They are associated with the Saqqaq Paleo-Eskimo culture, which was also the source of the Dorset Paleo-Eskimo populations (see also here and here). About 10% of Na-Dene ancestry is distinct from the initial founding population of the Americas.[2] The Na-Dene, like the Inuit, have Y-DNA haplogroups that are specific to them and of more recent origin than the founding Y-DNA haplogroups of the Americas.[3]

And then, a final significant pre-Columbian wave with lasting demographic impact arrived from Northeast Asia, perhaps in the 500s and 600s CE: the ancestors of the Inuit (a.k.a. the modern Eskimo-Aleut peoples), who have their roots in an Arctic and sub-Arctic population also known as the Thule. The 6th to 7th century CE Beringian Birnirk culture (in turn derived from Siberian populations) was the source of the proto-Inuit Thule people, who were the last substantial and sustained pre-Columbian people to migrate to the Americas.

A paper in 2020 refined and confirmed this analysis, and the 2024 paper even adopts its NNA v. SNA classification while failing to recognize the distinct temporal waves involved in the pre-Columbian peopling of the Americas.

See generally:

[1] Maanasa Raghavan, et al., "The genetic prehistory of the New World Arctic", Science 29 August 2014: Vol. 345 no. 6200 DOI: 10.1126/science.1255832.
[2] David Reich, et al., "Reconstructing Native American population history", Nature 488, 370-374 (16 August 2012) doi: 10.1038/nature11258
[4] Erika Tamm, et al., "Beringian Standstill and Spread of Native American Founders", PLOS One DOI: 10.1371/journal.pone.0000829 (September 5, 2007).
[5] Alessandro Achilli, "Reconciling migration models to the Americas with the variation of North American native mitogenomes", 110 PNAS 35 (August 27, 2013) doi: 10.1073/pnas.1306290110
[7] Judith R. Kidd, et al., "SNPs and Haplotypes in Native American Populations", Am. J. Phys. Anthropol. 146(4) 495-502 (Dec. 2011) doi: 10.1002/ajpa.21560

The critical problem with the paper is that Athabascans are a poor representative of Northern Native American lineages from the founding era ca. 14,000 years ago, because they have significant Na-Dene wave admixture, shared, for example, with the Navajo, who in turn migrated from what is now central to western Canada to the American Southwest around 1000 CE (possibly, in part, due to the push factor of the incoming wave of proto-Inuits). 

In contrast, the vast majority of North American Native Americans have no Na-Dene or Inuit ancestry and are in population genetic continuity with one or more of the several founding populations of North America. Almost any other choice of a North American Native American comparison population would have been much, much better.

In contrast, the Karitiana are indeed representative of (and the standard choice to represent) the ANC-A population.

It is entirely plausible that the Blackfoot are indeed from a wave of the North American founding population that is undersampled and whose lineage is not represented in prior published works. 

Latin American indigenous peoples (and, to a lesser extent and more recently, Canadian First Peoples) have, in general, been more receptive to population genetic work by anthropologists, while Native American populations in the United States gave these researchers the cold shoulder until very recently, due to a historical legacy that has understandably fostered distrust of people associated with the establishment in the U.S., including anthropologists. So, Native Americans in the U.S. are greatly undersampled.

But, because the thrust of the paper heavily relies on comparisons between Blackfoot DNA and Athabascan DNA, with misguided assumptions about Athabascan population history entering into the calculations and analysis, it is hard to confidently extract reliable conclusions from that analysis. The Athabascans may be mostly ANC-B, but they are probably the most divergent sample one could use to represent that population, particularly since no attempt is made to distinguish the ancestry components within that population. This seriously confounds the effort to pin down the prehistoric timeline.

A good quality peer review should have caught this problem, but peer review in practice is less effective than it is given credit for being.

Realistically, the only way to really do it right would be to withdraw the 2024 paper and replace it with a new paper that reanalyzes the Blackfoot genetic data by comparing it to a more suitable representative of North American Native American ancestry.




Studies of human genomes have aided historical research in the Americas by providing rich information about demographic events and population histories of Indigenous peoples, including the initial peopling of the continents. The ability to study genomes of Ancestors in the Americas through paleo-genomics has greatly increased the power and resolution at which we can infer past events and processes. However, few genomic studies have been completed with populations in North America, which could be the most informative about the initial peopling process. Those that have been completed in North America have identified Indigenous Ancestors with previously undescribed genomic lineages that evolved in the Late Pleistocene, before the split of two lineages [called the “Northern Native American (NNA)” or “ANC-B” and “Central and Southern American (SNA)” or “ANC-A” lineages] from which all present-day Indigenous populations in the double continent that have been sampled derive much, if not all, their ancestry before European contact. Specifically, the lineage termed “Ancient Beringian” was ascribed to a genome in an Ancestor who lived 11,500 years ago at Xaasaa Na’ (Upward Sun River) and named Xach’itee’aanenh t’eede gaay (USR1) by the local Healy Lake Village Council in Alaska. An Ancestor who lived 9500 years ago at what is now called Trail Creek Caves on the Seward Peninsula, Alaska, also belongs to the Ancient Beringian lineage. In addition, another Ancestor, under the stewardship of Stswecem’c Xgat’tem First Nation, who lived in what is now called British Columbia, belongs to a distinct genomic lineage that predates the NNA-SNA split but postdates the split from Ancient Beringians on the Americas’ genomic timeline. This Ancestor was identified at Big Bar Lake near the Fraser River and lived 5600 years ago. Thus, these previous studies of North American Indigenous Ancestors have successfully helped to identify previously unknown genomic diversity. 
However, the ancient lineages identified in these studies have not been observed in samples of Indigenous peoples of the Americas living today. Research in Mesoamerica and South America suggests that certain sampled populations (e.g., Mixe) have at least partial ancestry in present-day Indigenous groups from unknown genomic lineages in the Americas, possibly dating as far back as 25,000 years ago. . . .

With multiple genomic analyses showing the ancient Blood/Blackfoot clustering together with present-day Blood/Blackfoot but on a separate lineage from other North and South American groups, we created a demographic model using momi2, which used the site frequency spectra of present-day Blood/Blackfoot, Athabascan (as a representative of Northern Native American lineage), Karitiana (as a representative of Southern Native American lineage), and Han, English, Finnish, and French representing lineages from Eurasia. The best-fitting model shows a split time of the present-day Blood/Blackfoot at 18,104 years ago, followed by a split of Athabascan and Karitiana at 13,031 years ago.

The paper and its abstract are as follows:

Mutually beneficial partnerships between genomics researchers and North American Indigenous Nations are rare yet becoming more common. Here, we present one such partnership that provides insight into the peopling of the Americas and furnishes another line of evidence that can be used to further treaty and aboriginal rights. We show that the genomics of sampled individuals from the Blackfoot Confederacy belong to a previously undescribed ancient lineage that diverged from other genomic lineages in the Americas in Late Pleistocene times. Using multiple complementary forms of knowledge, we provide a scenario for Blackfoot population history that fits with oral tradition and provides a plausible model for the evolutionary process of the peopling of the Americas.
Dorothy First Rider, et al., "Genomic analyses correspond with deep persistence of peoples of Blackfoot Confederacy from glacial times" 10(14) Science Advances (April 3, 2024).

Two Papers Related To Quark Masses

How much of a nucleon's mass is due to Higgs field sourced quark mass?

Background

To the nearest 0.1 MeV, the mass of the proton is 938.3 MeV (the experimentally measured value is 938.272 089 43 (29) MeV) and the mass of the neutron is 939.6 MeV (the experimentally measured value is 939.565 421 94 (48) MeV).

A proton has two valence up quarks and one valence down quark. A neutron has one valence up quark and two valence down quarks.

The Particle Data Group (relying on state of the art averages of Lattice QCD calculations that extract this from the measurable masses of particles made up of quarks bound by gluons, which are called hadrons) concludes that the average of the up quark mass and the down quark mass is 3.49 ± 0.4 MeV, the up quark mass is 2.16 ± 0.4 MeV, and the down quark mass is 4.70 ± 0.4 MeV. 

What would the masses of the proton and the neutron be, hypothetically, if the up quark and the down quark had zero mass?

A new paper calculates that the proton mass would be 882.4 ± 2.5 MeV (about 94% of its measured value), while the neutron mass would be 883.7 ± 2.5 MeV (about 94.1% of its measured value). Thus, zero light quark masses would reduce each nucleon's mass by 55.9 ± 2.5 MeV, ignoring the effect of the difference between the up quark and down quark masses (which has a roughly 3.5 MeV effect on the average massless-quark estimate, according to the body text of the paper). 

These masses can be conceptualized as the combined pure gluon and electroweak field source mass of a proton or neutron in a minimum energy ground state.

Naively, one would think that the reduction from the measured value would be smaller, because three times the average of the up and down quark masses is only about 10.5 MeV, and this figure is often cited in popular science discussions of the proton and neutron masses. 

But, massive quarks indirectly impact the strength of the gluon field between the three valence quarks of a nucleon, and this indirect effect has a magnitude of roughly 45.4 MeV.
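The bookkeeping in the last three paragraphs can be checked with simple arithmetic (all values as quoted above from the paper and the Particle Data Group; central values only, ignoring uncertainties):

```python
proton_measured = 938.3   # MeV, to the nearest 0.1 MeV
proton_massless = 882.4   # MeV, massless u/d quark limit per the new paper
avg_light_quark = 3.49    # MeV, PDG average of the up and down quark masses

# Total shift from turning the light quark masses back on:
total_shift = proton_measured - proton_massless   # ~55.9 MeV

# Direct contribution of the three valence quarks' own masses:
direct = 3 * avg_light_quark                      # ~10.5 MeV

# The remainder is the indirect effect of quark masses on the gluon field:
indirect = total_shift - direct                   # ~45.4 MeV

print(total_shift, direct, indirect)
```

The residual of roughly 45.4 MeV is the indirect effect described in the text, an order of magnitude larger than the naive sum of the valence quark masses.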

Why does this matter?

Prior to this paper, there was a large gap between the values produced by different kinds of calculations of this amount, which the new paper reconciles.

Also, this is not an entirely hypothetical question, because it is part of, for example, how one calculates the mass of protons and neutrons at higher energy scales, and how one can reverse engineer the quark masses from the proton and neutron masses.

At higher momentum transfer scales (a.k.a. energy scales), the Higgs field is weaker and the quark masses get smaller; extrapolated to high enough energy scales, the Higgs field contribution goes to zero and the quarks really are massless. 

The strong force coupling constant also runs with energy scale, and also gets weaker at higher energies, although not at the same rate as the quark masses. 

There is also a modest electroweak contribution to the proton and neutron masses, and the electromagnetic force (which predominates over the weak force component) gets stronger at higher energy scales, modestly mitigating the declining quark masses and strong force field strength.

So, in order to be able to make a Standard Model calculation of the expected mass of protons and neutrons at high energies, you need to be able to break these distinct sources of the proton and neutron masses into their respective components, because the different components run with energy scale in different ways.
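The different running rates can be illustrated with a toy one-loop renormalization group sketch (my own illustration, not from any of the papers; the reference coupling at the Z boson mass and the one-loop formulas are textbook approximations): both the strong coupling and the quark masses fall with energy, but at different rates.

```python
import math

def alpha_s(mu, alpha_ref=0.1179, mu_ref=91.19, nf=5):
    """One-loop running of the strong coupling, referenced to the Z mass (GeV)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_ref / (1 + b0 * alpha_ref * math.log(mu**2 / mu_ref**2))

def mass_factor(mu, mu_ref=91.19, nf=5):
    """One-loop quark mass suppression factor relative to the reference scale;
    the exponent 12/(33 - 2*nf) makes masses run more slowly than alpha_s."""
    return (alpha_s(mu) / alpha_s(mu_ref)) ** (12 / (33 - 2 * nf))

# The coupling and the mass factor both shrink at higher scales, but not in lockstep:
for mu in (200.0, 500.0, 1000.0):
    print(mu, alpha_s(mu), mass_factor(mu))
```

Because the two effects scale differently (and the electroweak pieces run differently again), the decomposition of the nucleon mass into components is what makes a high-scale prediction possible.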

Charge parity violation and the quark masses

Another new paper argues that because the Standard Model has CP violation, the masses of the up-type quarks must be related to the masses of the down-type quarks, giving rise to five independent degrees of freedom for quark masses rather than six.

A physically viable ansatz for quark mass matrices must satisfy certain constraints, like the constraint imposed by CP-violation. In this article we study a concrete example, by looking at some generic matrices with a nearly democratic texture, and the implications of the constraints imposed by CP-violation, specifically the Jarlskog invariant. This constraint reduces the number of parameters from six to five, implying that the six mass eigenvalues of the up-quarks and the down-quarks are interdependent, which in our approach is explicitly demonstrated.
A. Kleppe, "On CP-violation and quark masses: reducing the number of free parameters" arXiv:2508.11081 (August 14, 2025).

This relies on some assumptions, but the assumptions are quite general, and its basic conclusion is that up-type quark masses (i.e. the Higgs Yukawas of up-type quarks) can't arise from a mechanism independent of that for down-type quark masses (i.e. the Higgs Yukawas of down-type quarks), something that is the case, for example, in an extended Koide's rule approach.
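The Jarlskog invariant that drives the constraint in the paper can be computed directly from the standard (PDG) parametrization of the CKM matrix. A minimal sketch (the mixing angle and phase inputs below are approximate, illustrative values of my own choosing, not taken from the paper):

```python
import cmath
import math

def ckm(s12, s23, s13, delta):
    """CKM matrix in the standard (PDG) parametrization."""
    c12, c23, c13 = (math.sqrt(1 - s**2) for s in (s12, s23, s13))
    e = cmath.exp(-1j * delta)
    return [
        [c12 * c13, s12 * c13, s13 * e],
        [-s12 * c23 - c12 * s23 * s13 * e.conjugate(),
         c12 * c23 - s12 * s23 * s13 * e.conjugate(), s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e.conjugate(),
         -c12 * s23 - s12 * c23 * s13 * e.conjugate(), c23 * c13],
    ]

# Approximate mixing angles and CP phase (illustrative values):
V = ckm(0.2250, 0.0420, 0.0037, 1.20)

# Jarlskog invariant J = Im(V_us V_cb V*_ub V*_cs); all CP violation in the
# quark sector is proportional to this single rephasing-invariant number.
J = (V[0][1] * V[1][2] * V[0][2].conjugate() * V[1][1].conjugate()).imag
print(J)  # ~3.2e-5, the right order of magnitude for the measured value
```

Requiring this invariant to take its observed nonzero value is the kind of constraint the paper uses to tie the up-type and down-type mass eigenvalues together.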

Bonus content

Supersymmetry (a.k.a. SUSY) is still a failure when it comes to describing reality. It does not describe the world we live in.

Friday, August 15, 2025

A Technical But Interesting Paper On Fermion Mass Ratios

A new paper looks at fundamental fermion mass ratios from the perspective of something similar to an extended Koide's rule approach.
We revisit the "three generations" problem and the pattern of charged-fermion masses from the vantage of octonionic and Clifford algebra structures. Working with the exceptional Jordan algebra J3(OC) (right-handed flavor) and the symmetric cube of SU(3) (left-handed charge frame), we show that a single minimal ladder in the symmetric cube, together with the Dynkin Z2 swap (the A2 diagram flip), leads to closed-form expressions for the square-root mass ratios of all three charged families. The universal Jordan spectrum (q - delta, q, q + delta) with a theoretically derived delta squared = 3/8 fixes the endpoint contrasts; fixed Clebsch factors (2, 1, 1) ensure rung cancellation ("edge universality") so that adjacent ratios depend only on which edge is taken. The down ladder determines one step, its Dynkin reflection gives the lepton ladder, and choosing the other outward leg from the middle yields the up sector.

From the same inputs we obtain compact CKM "root-sum rules": with one 1-2 phase and a mild 2-3 cross-family normalization, the framework reproduces the Cabibbo angle and Vcb and provides leading predictions for Vub and Vtd/Vts. We perform apples-to-apples phenomenology (common scheme/scale) and find consistency with current determinations within quoted uncertainties. Conceptually, rank-1 idempotents (points of the octonionic projective plane), fixed symmetric-cube Clebsches, and the Dynkin swap together account for why electric charge is generation-blind while masses follow the observed hierarchies, and they furnish clear, falsifiable mass-ratio relations beyond the Standard Model.
Tejinder P. Singh, "Fermion mass ratios from the exceptional Jordan algebra" arXiv:2508.10131 (August 13, 2025) (90 pages).
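For context, the original Koide relation for the charged leptons, which approaches like this one generalize, can be verified in a few lines (PDG central values for the lepton masses; a sketch of my own, not from the paper):

```python
import math

# Charged lepton masses in MeV (PDG central values):
m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86

# Koide's relation: Q = (sum of masses) / (sum of square roots of masses)^2 = 2/3
Q = (m_e + m_mu + m_tau) / sum(math.sqrt(m) for m in (m_e, m_mu, m_tau)) ** 2
print(Q)  # ~0.66666, within about 1e-5 of 2/3
```

The remarkable empirical accuracy of this relation is what motivates attempts, like the paper above, to derive square-root mass ratios from deeper algebraic structure.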

Another interesting paper develops a relationship between the mixing angles of the unitarity triangle of the CKM matrix and the CP-violating phase of that matrix. The abstract below deviates from my usual editing conventions to preserve the details of superscripts and subscripts in the notation without a lot of extra editing work that is prone to human error:

In this letter, we obtain a rephasing invariant formula for the CP phase in the Kobayashi-Maskawa parameterization: δ(KM) = arg[V(ud) det V(CKM) / (V(us) V(ub) V(cd) V(td))]. A general perturbative expansion of the formula and the observed value δ(KM) ≃ π/2 reveal that the phase difference of the 1-2 mixings, e^(i(ρ(d12) − ρ(u12))), is close to maximal for sufficiently small 1-3 quark mixings s(13) in the up and down sectors. Moreover, by combining this result with another formula for the CP phase δ(PDG) in the PDG parameterization, we derive an exact sum rule, δ(PDG) + δ(KM) = π − α + γ, relating the phases to the angles α, β, γ of the unitarity triangle.

Masaki J. S. Yang, "Rephasing Invariant Formula for the CP Phase in the Kobayashi-Maskawa Parametrization and the Exact Sum Rule with the Unitarity Triangle δPDG + δKM = π −α +γ" arXiv:2508.10249 (August 14, 2025) (6 pages).
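As a sanity check, plugging rough measured values of the unitarity triangle angles and the PDG phase into the sum rule (my own rounded, illustrative inputs) recovers the near-maximal δ(KM) ≃ π/2 noted in the abstract:

```python
# Approximate unitarity triangle angles and PDG CP phase, in degrees
# (rounded illustrative values, not a precision fit):
alpha, gamma = 85.0, 65.5
delta_pdg = 65.5

# Rearranging the paper's sum rule delta_PDG + delta_KM = pi - alpha + gamma:
delta_km = 180.0 - alpha + gamma - delta_pdg
print(delta_km)  # 95.0 degrees, close to the maximal value of 90 degrees (pi/2)
```

With these inputs, γ and δ(PDG) nearly cancel, so δ(KM) ends up close to 180° − α, which is near 90° for the measured triangle.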