Friday, August 22, 2025

An Electroweak Centric Model For Standard Model Mass Generation

The basic intuitive gist of this paper's proposal is one that I've entertained myself, although I don't have the theoretical physics chops to spell it out at this level of formality and technical detail (and I'm really not qualified to evaluate the merits of the proposal at that level). I've seen one or two other papers (not recent ones) that take a similar approach.

The ratio of the electron mass to the lightest neutrino mass eigenstate is roughly the same as the ratio of the electromagnetic coupling constant to the weak force coupling constant, and both masses are similar to what would be expected from the self-interactions of electrons and neutrinos via the electromagnetic and weak forces, respectively. Electrons interact via both of these forces, while neutrinos interact only via the weak force.

The down quark mass is about twice as much as the up quark mass, although the magnitude of the up quark's electromagnetic charge (2/3) is twice the magnitude of the down quark's electromagnetic charge (1/3). All quarks have the same magnitude of strong force color charge. And all of the fundamental fermions of the Standard Model have the same magnitude of weak force charge. Quarks interact via the strong force, the electromagnetic force, and the weak force, so their self-interactions might be expected to be larger than those of the electron, which doesn't interact via the strong force.
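The arithmetic behind the factor of two is easy to check against the PDG central values quoted later in this post (a trivial sketch of my own, not anything from the paper):

```python
# Quick arithmetic behind the mass ratio observation, using PDG central
# values for the light quark masses (in MeV).
m_u, m_d = 2.16, 4.70
print(f"m_d / m_u = {m_d / m_u:.2f}")  # about 2.2, i.e., roughly a factor of two
```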

Figuring out how this can work in concert with the three fundamental fermion generations is particularly challenging. I'm inclined to associate it with a W boson mediated dynamic process that sets the relative values of the Higgs Yukawas. This paper doesn't attempt to look beyond the first generation of fundamental fermions in implementing its model.

I'm not thrilled with the "leptoquark" component of this theory, but the fact that it gives rise to neutrino mass without either Majorana mass or a see-saw mechanism is very encouraging.

In the Standard Model of elementary particles the fermions are assumed to be intrinsically massless. Here we propose a new theoretical idea of fermion mass generation (other than by the Higgs mechanism) through the coupling with the vector gauge fields of the unified SU(2) ⊗ SU(4) gauge symmetry, especially with the Z boson of the weak interaction that affects all elementary fermions. The resulting small masses are suggested to be proportional to the self-energy of the Z field as described by a Yukawa potential. Thereby the electrically neutral neutrino just gets a tiny mass through its Z-field coupling. In contrast, the electrically charged electron and quarks can become more massive by the inertia induced through the Coulomb energy of the electrostatic fields surrounding them in their rest frames.
Eckart Marsch, Yasuhito Narita, "On the Lagrangian and fermion mass of the unified SU(2) ⊗ SU(4) gauge field theory" arXiv:2508.15332 (August 21, 2025) (13 pages).
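For a sense of the scale involved (my own back-of-the-envelope number, not one from the paper): a Yukawa potential sourced by the Z boson, V(r) ∝ e^(−r/λ_Z)/r, is cut off beyond the Z boson's reduced Compton wavelength λ_Z, which is tiny.

```python
# The Z boson's reduced Compton wavelength, lambda_Z = hbar*c / (M_Z * c^2),
# which sets the range of the Yukawa potential V(r) ~ exp(-r/lambda_Z) / r.
HBARC = 197.327    # hbar*c in MeV*fm
M_Z = 91_187.6     # Z boson mass in MeV/c^2
lambda_Z = HBARC / M_Z
print(f"lambda_Z = {lambda_Z:.2e} fm")  # about 2.2e-3 fm, far smaller than a proton
```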

The introduction of the paper is as follows:
According to the common wisdom of the Standard Model (SM) of elementary particle physics, the fermions are intrinsically massless, but they gain their masses via phase transition from the vacuum of the Higgs field. However, this notion introduces many free parameters (the Yukawa coupling constants) that are to be determined through measurements. These have been made at the LHC only for some members of the second and third family of heavy leptons and quarks, yet not for the important first family of fermions, of which the stable and long-lived hadrons form according to the gluon forces of quantum chromodynamics (QCD). 
Here we just consider the first fermion family of the SM and propose a new idea of the fermion mass generation. The key assumption is that their masses may be equal to the relevant gauge-field energy in the rest frames of these charged fermions carrying electroweak or strong charges. Their masses are suggested to originate from jointly breaking the chiral SU(2) symmetry combined with the hadronic isospin SU(4) symmetry, as described in the recent model by Marsch and Narita, following early ideas of Pati and Salam and their own work. Unlike in the SM, in their model both symmetries are considered as being unified to yield the SU(2) ⊗ SU(4) symmetry, which then is broken by the same procedures that are applied successfully in the electroweak sector of the SM. 
The outline of the paper is as follows. We briefly discuss the extended Dirac equation and its Lagrangian including the Higgs, gauge-field and fermion sectors. Especially, the covariant derivative is discussed and the various gauge-field interactions are described. Also the different charge operators (weak and strong) are presented. Then the CPT theorem is derived for the extended Dirac equation including the gauge field terms. The remainder of the paper addresses the idea of mass generation from gauge field energy in the fermion rest frame. Finally we present the conclusions.

The paper's conclusion states:

In this letter, we have considered a new intuitive idea of how the elementary fermions might acquire their finite empirical masses. We obtained diagonal mass matrices as Kronecker products within the framework of the unified gauge-field model of Marsch and Narita. The mass matrices still commute with the five Gamma matrices of the extended free Dirac equation without gauge fields. However, when including them the chiral SU(2) and the hadronic SU(4) symmetries both are broken by the mass term. Thus, the breaking of the initial unified SU(2) ⊗ SU(4) symmetry by the Higgs-like mechanism gives the fermions their different charges as well as specific masses. 

In the SM the initial common mass m is assumed to be zero, and then the Dirac spinor splits into two independent two-component Weyl spinors. But when the gauge fields are switched on, their self-energy gives inertia and thus mass to the fermions in their rest frame. The breaking of gauge symmetry yields the electromagnetic massless photon field E(µ) and the weak boson field Z(µ), which becomes very massive via the Higgs mechanism. It also induces inertia for all eight fermions, yet the resulting masses are rather small owing to the very small Compton wavelength of the Z boson. The neutrino and electron can acquire masses in this way, which yet differ by six orders of magnitude. The hadronic charge of the leptons is zero, and thus they decouple entirely from QCD. It is responsible by confinement through the gluons for the mass of the various resulting composite fermions, in particular for the proton mass. 

The masses of the light fermions are thus argued to originate physically from the major self-energy of the electrostatic field as well as from the minor self-energy of the Z-boson field, which is proportional to the Higgs vacuum that determines the Z-boson mass. It is clear, however, that the masses of heavy composite hadrons, in particular of the proton and neutron, involve dominant contributions from the energy of the binding gluon fields, as the QCD lattice simulations have clearly shown. 

In conclusion, the extended Dirac equation contains a physically well motivated mass term. It remedies the shortcoming of the SM that assumes massless fermions at the outset, whereas the empirical reality indicates that they are all massive. Therefore, the neutrino cannot be a Majorana particle, as it has often been suggested in the literature. This notion is in obvious contradiction to the observed neutrino oscillation, implying clearly finite masses. Chiral symmetry is broken in our theory, yet the parity remains intact. 

Finally, we like to mention the masses of the heavy gauge bosons involved in the above covariant derivative and related matrix. In the reference of the particle data group we find in units of GeV/c^2 the values: M(Z) = 91.2 and M(W) = 80.4. For the “leptoquark” boson V we obtain M(V) = 35.4. For the sum of these masses we find the following surprising results: M(V) + M(Z) = 126.6, which equals within less than a one-percent margin the measured mass of the Higgs boson, M(H) = 125.3. Also, M(W) + M(Z) = 171.6, which again equals within less than a one-percent margin the measured mass of the top quark, M(T) = 172.7. Whether this is just a fortuitous coincidence or indicates a physical connection has to remain open. 
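A quick check of the quoted sums, using only the numbers that appear in the conclusion above:

```python
# Comparing the gauge boson mass sums quoted in the paper's conclusion
# (all values in GeV/c^2) with the Higgs boson and top quark masses it cites.
M_Z, M_W, M_V = 91.2, 80.4, 35.4  # Z, W, and the paper's "leptoquark" V
M_H, M_T = 125.3, 172.7           # Higgs boson and top quark, as quoted

for label, total, target in [("M(V) + M(Z) vs M(H)", M_V + M_Z, M_H),
                             ("M(W) + M(Z) vs M(T)", M_W + M_Z, M_T)]:
    off = 100 * abs(total - target) / target
    print(f"{label}: {total:.1f} vs {target:.1f} ({off:.2f}% apart)")
```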

Tuesday, August 19, 2025

A Wide Binary Paper Supporting Non-Newtonian Gravity

Efforts to determine whether wide binary star systems show a change in gravitational acceleration at Newtonian accelerations below Milgrom's constant, as predicted by MOND (but not generically by all modified gravity explanations of dark matter phenomena), have had mixed and contradictory results. The latest paper on the subject, a small pilot study using new methods, supports the existence of a non-Newtonian gravitational enhancement for wide binaries whose gravitational pull on each other is below Milgrom's constant.

Wide binary tests exclude almost all dark matter particle theories if they show non-Newtonian gravitational enhancements in weak fields, and they also discriminate meaningfully between different gravity-based approaches to explaining dark matter phenomena if the data allow for sufficiently precise conclusions. 

But competing considerations of data quality (e.g., it is easy to mistake a system with more than two stars for a wide binary system if one of the stars is faint or the angle of observation is poor) and data quantity (to give the observations statistical power) make it challenging to extract convincing results from this astronomy test of weak field gravity. 
When 3D relative displacement r and velocity v between the pair in a gravitationally-bound system are precisely measured, the six measured quantities at one phase can allow elliptical orbit solutions at a given gravitational parameter G. Due to degeneracies between orbital-geometric parameters and G, individual Bayesian inferences and their statistical consolidation are needed to infer G as recently suggested by a Bayesian 3D modeling algorithm. 
Here I present a fully general Bayesian algorithm suitable for wide binaries with two (almost) exact sky-projected relative positions (as in the Gaia data release 3) and the other four sufficiently precise quantities. Wide binaries meeting the requirements of the general algorithm to allow for its full potential are rare at present, largely because the measurement uncertainty of the line-of-sight (radial) separation is usually larger than the true separation. 
As a pilot study, the algorithm is applied to 32 Gaia binaries for which precise HARPS radial velocities are available. The value of Γ ≡ log10 √(G/G_N) (where G_N is Newton's constant) is −0.002 (+0.012/−0.018), supporting Newton, for a combination of 24 binaries with Newtonian acceleration g_N > 10^−9 m s^−2, while it is Γ = 0.063 (+0.058/−0.047) or 0.134 (+0.050/−0.040) for 7 or 8 binaries with g_N < 10^−9 m s^−2 (depending on one system), showing tension with Newton. The Newtonian "outlier" is at the boundary set by the Newtonian escape velocity, but can be consistent with modified gravity. 
The pilot study demonstrates the potential of the algorithm in measuring gravity at low acceleration with future samples of wide binaries.
Kyu-Hyun Chae, "Bayesian Inference of Gravity through Realistic 3D Modeling of Wide Binary Orbits: General Algorithm and a Pilot Study with HARPS Radial Velocities" arXiv:2508.11996 (August 16, 2025).
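To translate the abstract's Γ values into more intuitive terms, note that Γ = log10 √(G/G_N) implies an effective gravity boost of G/G_N = 10^(2Γ). A small conversion sketch (my own, not from the paper):

```python
# Convert the paper's gravity parameter Gamma = log10(sqrt(G / G_N))
# into an effective gravitational boost factor G / G_N.
def boost(gamma):
    return 10 ** (2 * gamma)

# Central values quoted in the abstract:
for label, gamma in [("24 high acceleration binaries", -0.002),
                     ("7 low acceleration binaries", 0.063),
                     ("8 low acceleration binaries", 0.134)]:
    print(f"{label}: G/G_N ~ {boost(gamma):.2f}")
```

So the low acceleration subsamples prefer an effective boost of very roughly 1.3 to 1.9 over Newtonian gravity, in the general vicinity of the roughly 1.4 enhancement that Chae's earlier wide binary work associates with MOND-type gravity in this regime, while the high acceleration subsample sits right at the Newtonian value of 1.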

Monday, August 18, 2025

A Potentially Good New World Population Genetics Study Bumbled

A paper was published last year about the population genetics and genetic history of the Blackfoot people. It compared a modest sample of Blackfoot affiliated genomes with other New World and Old World genomes. A small sample size isn't a big problem for a paleo-genetic study like this one, however, because each individual's DNA has so many data points, and generations of intermarriage in a fairly closed gene pool make each individual highly representative of the population as a whole. But while the study does lots of things right, it makes a critical error, rooted in a weak review of the literature and deficient peer review, that seriously detracts from the reliability of its analysis.

The big problem with the paper is that it makes flawed assumptions about the peopling of the Americas. It relies on a model in which all Native Americans fit into two groups: Native North Americans (ANC-B) and Central and Southern Americans (ANC-A), and tries to determine where the Blackfoot people fit into that model.

The trouble is that the established paradigm is more complicated. While ANC-A is a valid and pretty much unified group, descended from a primary founding population wave, basically of Pacific coast route peoples, perhaps 14,000 years ago, Native Americans in North America have a more complex ancestry.

North American Native Americans have the lineages found in ANC-A (which results from a serial founder effect) and probably at least two other clades, dating to close in time to the initial founding era, that spread into different parts of North America. 

Then, around 3500-2400 BCE, the ancestors of the Na-Dene people migrated to Alaska from Northeast Asia and admixed with pre-existing populations. (Their languages have remote but traceable connections to the language of the Paleo-Siberian Ket people, whose language family is named after the Yenisei River in central Siberia.) They are associated with the Saqqaq Paleo-Eskimo culture, which was also the source of the Dorset Paleo-Eskimo populations (see also here and here). About 10% of Na-Dene ancestry is distinct from the initial founding population of the Americas.[2] The Na-Dene, like the Inuits, have Y-DNA haplogroups that are specific to them and of more recent origin than the founding Y-DNA haplogroups of the Americas.[3]

And then a final significant pre-Columbian wave with lasting demographic impact arrived from Northeast Asia, perhaps around the 500s and 600s CE: the ancestors of the Inuits (a.k.a. the modern Eskimo-Aleut peoples), who have their roots in an Arctic and sub-Arctic population also known as the Thule. The 6th to 7th century CE Beringian Birnirk culture (in turn derived from Siberian populations) was the source of the proto-Inuit Thule people, who were the last substantial and sustained pre-Columbian population to migrate to the Americas.

A paper in 2020 refined and confirmed this analysis, and the 2024 paper even adopts its NNA v. SNA classification while failing to recognize the distinct temporal waves involved in the pre-Columbian peopling of the Americas.

See generally:

[1] Maanasa Raghavan, et al., "The genetic prehistory of the New World Arctic", Science 29 August 2014: Vol. 345 no. 6200 DOI: 10.1126/science.1255832.
[2] David Reich, et al., "Reconstructing Native American population history", Nature 488, 370-374 (16 August 2012) DOI: 10.1038/nature11258.
[4] Erika Tamm, et al., "Beringian Standstill and Spread of Native American Founders", PLOS One DOI: 10.1371/journal.pone.0000829 (September 5, 2007).
[5] Alessandro Achilli, et al., "Reconciling migration models to the Americas with the variation of North American native mitogenomes", 110 PNAS 35 (August 27, 2013) DOI: 10.1073/pnas.1306290110.
[7] Judith R. Kidd, et al., "SNPs and Haplotypes in Native American Populations", Am. J. Phys. Anthropol. 146(4) 495-502 (Dec. 2011) DOI: 10.1002/ajpa.21560.

The critical problem with the paper is that Athabascans are a poor representative of Northern Native American lineages from the founding era ca. 14,000 years ago, because they have significant Na-Dene wave admixture, shared, for example, with the Navajo, who in turn migrated from what is now central to western Canada to the American Southwest around 1,000 CE (possibly, in part, due to the push factor of the incoming wave of proto-Inuits). 

In contrast, the vast majority of North American Native Americans have no Na-Dene or Inuit ancestry and are in population genetic continuity with one or more of the several founding populations of North America. Almost any other choice of a North American Native American comparison population would have been much, much better.

In contrast, the Karitiana are indeed representative of (and the standard choice to represent) the ANC-A population.

It is entirely plausible that the Blackfoot are indeed from a wave of the North American founding population that is undersampled and that their lineage is not represented in prior published works. 

Latin American indigenous peoples (and, to a lesser extent and more recently, Canadian First Peoples) have, in general, been more receptive to population genetic work by anthropologists than Native American populations in the United States, who gave these researchers the cold shoulder until very recently, due to a historical legacy that has understandably fostered distrust of people associated with the establishment in the U.S., including anthropologists. So, Native Americans in the U.S. are greatly undersampled.

But, because the thrust of the paper relies heavily on comparisons between Blackfoot DNA and Athabascan DNA, with misguided assumptions about Athabascan population history entering into the calculations and analysis, it is hard to confidently extract reliable conclusions from that analysis. The Athabascans may be mostly ANC-B, but they are probably the most divergent sample one could use to represent that population, particularly since no attempt is made to distinguish the ancestry components in that population. This seriously confounds the effort to pin down the prehistoric timeline.

A good quality peer-review should have caught this problem, but peer-review in practice is less effective than it is given credit for being.

Realistically, the only way to really do it right would be to withdraw the 2024 paper and replace it with a new paper that reanalyzes the Blackfoot genetic data by comparing it to a more suitable representative of North American Native American ancestry.

The paper explains:
Studies of human genomes have aided historical research in the Americas by providing rich information about demographic events and population histories of Indigenous peoples, including the initial peopling of the continents. The ability to study genomes of Ancestors in the Americas through paleo-genomics has greatly increased the power and resolution at which we can infer past events and processes. However, few genomic studies have been completed with populations in North America, which could be the most informative about the initial peopling process. Those that have been completed in North America have identified Indigenous Ancestors with previously undescribed genomic lineages that evolved in the Late Pleistocene, before the split of two lineages [called the “Northern Native American (NNA)” or “ANC-B” and “Central and Southern American (SNA)” or “ANC-A” lineages] from which all present-day Indigenous populations in the double continent that have been sampled derive much, if not all, their ancestry before European contact. Specifically, the lineage termed “Ancient Beringian” was ascribed to a genome in an Ancestor who lived 11,500 years ago at Xaasaa Na’ (Upward Sun River) and named Xach’itee’aanenh t’eede gaay (USR1) by the local Healy Lake Village Council in Alaska. An Ancestor who lived 9500 years ago at what is now called Trail Creek Caves on the Seward Peninsula, Alaska, also belongs to the Ancient Beringian lineage. In addition, another Ancestor, under the stewardship of Stswecem’c Xgat’tem First Nation, who lived in what is now called British Columbia, belongs to a distinct genomic lineage that predates the NNA-SNA split but postdates the split from Ancient Beringians on the Americas’ genomic timeline. This Ancestor was identified at Big Bar Lake near the Fraser River and lived 5600 years ago. Thus, these previous studies of North American Indigenous Ancestors have successfully helped to identify previously unknown genomic diversity. However, the ancient lineages identified in these studies have not been observed in samples of Indigenous peoples of the Americas living today. Research in Mesoamerica and South America suggests that certain sampled populations (e.g., Mixe) have at least partial ancestry in present-day Indigenous groups from unknown genomic lineages in the Americas, possibly dating as far back as 25,000 years ago. . . .

With multiple genomic analyses showing the ancient Blood/Blackfoot clustering together with present-day Blood/Blackfoot but on a separate lineage from other North and South American groups, we created a demographic model using momi2, which used the site frequency spectra of present-day Blood/Blackfoot, Athabascan (as a representative of Northern Native American lineage), Karitiana (as a representative of Southern Native American lineage), and Han, English, Finnish, and French representing lineages from Eurasia. The best-fitting model shows a split time of the present-day Blood/Blackfoot at 18,104 years ago, followed by a split of Athabascan and Karitiana at 13,031 years ago.

The paper and its abstract are as follows:

Mutually beneficial partnerships between genomics researchers and North American Indigenous Nations are rare yet becoming more common. Here, we present one such partnership that provides insight into the peopling of the Americas and furnishes another line of evidence that can be used to further treaty and aboriginal rights. We show that the genomics of sampled individuals from the Blackfoot Confederacy belong to a previously undescribed ancient lineage that diverged from other genomic lineages in the Americas in Late Pleistocene times. Using multiple complementary forms of knowledge, we provide a scenario for Blackfoot population history that fits with oral tradition and provides a plausible model for the evolutionary process of the peopling of the Americas.
Dorothy First Rider, et al., "Genomic analyses correspond with deep persistence of peoples of Blackfoot Confederacy from glacial times" 10(14) Science Advances (April 3, 2024).

Two Papers Related To Quark Masses

How much of a nucleon's mass is due to Higgs field sourced quark mass?

Background

To the nearest 0.1 MeV, the mass of the proton is 938.3 MeV (the experimentally measured value is 938.272 089 43 (29) MeV) and the mass of the neutron is 939.6 MeV (the experimentally measured value is 939.565 421 94 (48) MeV).

A proton has two valence up quarks and one valence down quark. A neutron has one valence up quark and two valence down quarks.

The Particle Data Group (relying on state of the art averages of lattice QCD calculations that extract this from the measurable masses of the particles made up of quarks bound by gluons, which are called hadrons) concludes that the average of the up quark mass and the down quark mass is 3.49 ± 0.4 MeV, the up quark mass is 2.16 ± 0.4 MeV, and the down quark mass is 4.70 ± 0.4 MeV. 

What would the masses of the proton and the neutron be, hypothetically, if the up quark and down quark had zero mass?

A new paper calculates that the proton mass would be 882.4 ± 2.5 MeV (about 94% of its measured value), while the mass of the neutron would be 883.7 ± 2.5 MeV (about 94.1% of its measured value). Thus, the light quark masses add 55.9 ± 2.5 MeV to the total nucleon mass, ignoring the effect of the difference between the up quark and down quark masses (which has a roughly 3.5 MeV effect on the average massless quark estimate, according to the body text of the paper). 

These masses can be conceptualized as the combined pure gluon and electroweak field sourced mass of a proton or neutron in a minimum energy ground state.

Naively, one would think that the reduction from the measured value would be smaller, because three times the average of the up and down quark masses is only about 10.5 MeV, and this figure is often cited in popular science discussions of the proton and neutron mass. 

But massive quarks indirectly impact the strength of the gluon field between the three valence quarks of a nucleon, and this indirect effect has a magnitude of roughly 45.4 MeV.
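The bookkeeping, using the paper's central values and the PDG figures quoted above, is simple:

```python
# Decomposing the light quark mass contribution to the nucleon mass
# (central values, in MeV, from the paper and the PDG figures quoted above).
m_p, m_n = 938.3, 939.6                # measured nucleon masses
m_p_chiral, m_n_chiral = 882.4, 883.7  # calculated massless quark values

direct = 3 * 3.49  # three valence quarks times the average light quark mass
print(f"total effect: {m_p - m_p_chiral:.1f} (proton), {m_n - m_n_chiral:.1f} (neutron) MeV")
print(f"direct valence quark mass contribution: {direct:.1f} MeV")
print(f"indirect effect on the gluon field: {m_p - m_p_chiral - direct:.1f} MeV")  # about 45.4
```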

Why does this matter?

Prior to this paper, there was a large gap between the values produced by different kinds of calculations of this amount, which the new paper reconciles.

Also, this is not an entirely hypothetical question, because it is part of, for example, how one calculates the mass of protons and neutrons at higher energy scales, and how one can reverse engineer the quark masses from the proton and neutron masses.

At higher momentum transfer scales (a.k.a. energy scales) the Higgs field is weaker and the quark masses get smaller, and eventually, extrapolated to high enough energy scales, the Higgs field goes to zero and the quarks really are massless. 

The strong force coupling constant also runs with energy scale, however, and also gets weaker at higher energies, although not at the same rate as the quark masses. 

There is also a modest electroweak contribution to the proton and neutron masses, and the electromagnetic force (which predominates over the weak force component) gets stronger at higher energy scales, modestly mitigating the declining quark masses and strong force field strength.

So, in order to be able to make a Standard Model calculation of the expected mass of protons and neutrons at high energies, you need to be able to break these distinct sources of the proton and neutron masses into their respective components, because the different components run with energy scale in different ways.
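A minimal sketch of why this matters, using standard one-loop textbook running (with n_f = 5 active quark flavors; this is my own illustration, not the new paper's calculation): the strong coupling and the quark masses both fall with energy scale, but at different rates.

```python
import math

# One-loop QCD running of the strong coupling and of quark masses,
# with n_f = 5 active flavors (valid roughly between the b and t quark masses).
ALPHA_S_MZ = 0.118      # strong coupling at the Z mass
M_Z = 91.19             # GeV
BETA0 = 11 - 2 * 5 / 3  # one-loop beta function coefficient for n_f = 5

def alpha_s(mu):
    """One-loop running strong coupling at scale mu (GeV)."""
    return ALPHA_S_MZ / (1 + BETA0 * ALPHA_S_MZ / (2 * math.pi) * math.log(mu / M_Z))

def mass_ratio(mu):
    """m(mu) / m(M_Z) at one loop; the exponent 4 / BETA0 = 12/23 for n_f = 5."""
    return (alpha_s(mu) / ALPHA_S_MZ) ** (4 / BETA0)

for mu in (10.0, 91.19, 1000.0):
    print(f"mu = {mu:7.2f} GeV: alpha_s = {alpha_s(mu):.4f}, m/m(M_Z) = {mass_ratio(mu):.3f}")
```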

Charge parity violation and the quark masses

Another new paper argues that because the Standard Model has CP-violation, the masses of the up quarks must be related to the masses of the down quarks, giving rise to five independent degrees of freedom for quark masses rather than six.

A physically viable ansatz for quark mass matrices must satisfy certain constraints, like the constraint imposed by CP-violation. In this article we study a concrete example, by looking at some generic matrices with a nearly democratic texture, and the implications of the constraints imposed by CP-violation, specifically the Jarlskog invariant. This constraint reduces the number of parameters from six to five, implying that the six mass eigenvalues of the up-quarks and the down-quarks are interdependent, which in our approach is explicitly demonstrated.
A. Kleppe, "On CP-violation and quark masses: reducing the number of free parameters" arXiv:2508.11081 (August 14, 2025).

This relies on some assumptions, but the assumptions are quite general, and its basic conclusion is that up-type quark masses (i.e. the Higgs Yukawas of up-type quarks) can't arise from a mechanism independent of that for down-type quark masses (i.e. the Higgs Yukawas of down-type quarks), something that is the case, for example, in an extended Koide's rule approach.
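For a sense of scale, the Jarlskog invariant that supplies the constraint is numerically tiny. A leading-order estimate in the Wolfenstein parameterization, J ≈ A²λ⁶η̄, with approximate PDG-style parameter values (my own arithmetic, not the paper's):

```python
# Leading-order Wolfenstein estimate of the Jarlskog invariant J,
# the rephasing invariant measure of CP violation in the CKM matrix.
lam = 0.225     # lambda, the sine of the Cabibbo angle
A = 0.82        # Wolfenstein A
eta_bar = 0.35  # CP violating parameter eta-bar

J = A**2 * lam**6 * eta_bar
print(f"J ~ {J:.2e}")  # roughly 3e-5, the familiar PDG-scale value
```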

Bonus content

Supersymmetry (a.k.a. SUSY) is still a failure when it comes to describing reality. It does not describe the world we live in.

Friday, August 15, 2025

A Technical But Interesting Paper On Fermion Mass Ratios

A new paper looks at fundamental fermion mass ratios from the perspective of something similar to an extended Koide's rule approach.
We revisit the "three generations" problem and the pattern of charged-fermion masses from the vantage of octonionic and Clifford algebra structures. Working with the exceptional Jordan algebra J3(OC) (right-handed flavor) and the symmetric cube of SU(3) (left-handed charge frame), we show that a single minimal ladder in the symmetric cube, together with the Dynkin Z2 swap (the A2 diagram flip), leads to closed-form expressions for the square-root mass ratios of all three charged families. The universal Jordan spectrum (q - delta, q, q + delta) with a theoretically derived delta squared = 3/8 fixes the endpoint contrasts; fixed Clebsch factors (2, 1, 1) ensure rung cancellation ("edge universality") so that adjacent ratios depend only on which edge is taken. The down ladder determines one step, its Dynkin reflection gives the lepton ladder, and choosing the other outward leg from the middle yields the up sector.

From the same inputs we obtain compact CKM "root-sum rules": with one 1-2 phase and a mild 2-3 cross-family normalization, the framework reproduces the Cabibbo angle and Vcb and provides leading predictions for Vub and Vtd/Vts. We perform apples-to-apples phenomenology (common scheme/scale) and find consistency with current determinations within quoted uncertainties. Conceptually, rank-1 idempotents (points of the octonionic projective plane), fixed symmetric-cube Clebsches, and the Dynkin swap together account for why electric charge is generation-blind while masses follow the observed hierarchies, and they furnish clear, falsifiable mass-ratio relations beyond the Standard Model.
Tejinder P. Singh, "Fermion mass ratios from the exceptional Jordan algebra" arXiv:2508.10131 (August 13, 2025) (90 pages).
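For readers unfamiliar with the original Koide's rule that approaches like this one generalize, it is easy to verify numerically from the measured charged lepton masses:

```python
import math

# Koide's rule: Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2
# is empirically very close to exactly 2/3. Masses in MeV (PDG values).
m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86

Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(f"Q = {Q:.6f} vs 2/3 = {2/3:.6f}")
```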

Another interesting paper develops a relationship between the mixing angles of the unitarity triangle in the CKM matrix and the CP violating phase of that matrix. The abstract below deviates from my usual editing conventions in order to preserve the details of superscripts and subscripts in the notation without a lot of extra editing work that is prone to human error:

In this letter, we obtain a rephasing invariant formula for the CP phase in the Kobayashi-Maskawa parameterization δ_KM = arg[V_ud det V_CKM / (V_us V_ub V_cd V_td)]. A general perturbative expansion of the formula and the observed value δ_KM ≃ π/2 reveal that the phase difference of the 1-2 mixings e^{i(ρ^d_12 − ρ^u_12)} is close to maximal for sufficiently small 1-3 quark mixings s^{u,d}_13. Moreover, by combining this result with another formula for the CP phase δ_PDG in the PDG parameterization, we derived an exact sum rule δ_PDG + δ_KM = π − α + γ relating the phases to the angles α, β, γ of the unitarity triangle. 

Masaki J. S. Yang, "Rephasing Invariant Formula for the CP Phase in the Kobayashi-Maskawa Parametrization and the Exact Sum Rule with the Unitarity Triangle δ_PDG + δ_KM = π − α + γ" arXiv:2508.10249 (August 14, 2025) (6 pages).
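A crude numerical sanity check of the sum rule, using approximate unitarity triangle angles rather than the paper's fit (my own arithmetic): since δ_PDG is approximately equal to γ, the sum rule implies δ_KM ≈ π − α, which is close to π/2 for α near 90 degrees, as the abstract notes.

```python
import math

# Sanity check of the sum rule delta_PDG + delta_KM = pi - alpha + gamma,
# with approximate unitarity triangle angles (in degrees).
deg = math.pi / 180
alpha, gamma = 85 * deg, 66 * deg  # approximate global fit values
delta_pdg = 66 * deg               # delta_PDG is approximately equal to gamma

delta_km = math.pi - alpha + gamma - delta_pdg
print(f"implied delta_KM = {delta_km / deg:.0f} degrees (compare pi/2 = 90 degrees)")
```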