Tuesday, May 13, 2025

Physical Constants Have Remained Unchanged For 7.2 Gyr

A new paper determines from astronomy measurements that there have been no changes to the electromagnetic coupling constant (a.k.a. the fine structure constant), to the proton-electron mass ratio, or to another property of the proton called the proton g-factor, in the last 7.2 billion years. The constraints on any change in that time period are very tight.

This confirms all prior studies of this type.

Monday, May 12, 2025

Interstellar Travel In The Galaxy's Core

Interstellar travel at speeds slower than light isn't really viable in human lifetimes from Earth, but it would be in many cases in the center of the Milky Way galaxy where the distances between stars are ten to a hundred times smaller, on average. So, a trip to the next solar system could take as little as four years at 1% of the speed of light, which is fast, but not technologically unattainable.
If we lived in the center of the Milky Way, we would look up at a sky thick with stars, up to 1 million times denser than we’re used to seeing. The closest star to our Sun is about four light-years away. In the center of the galaxy, stars are only 0.4 to 0.04 light-years apart. In the inner 10,000 light-year region of the Milky Way, the galaxy’s spiral arm structure transitions into a dense, spherical “bulge” of stars. At its heart — and the dominant force in that area of the galaxy — is a supermassive black hole approximately 4 million times the mass of the Sun, called Sagittarius A* (pronounced Sagittarius A star).
[T]he average distance between any two stars in our galaxy. . . . [is] about 5 light years, which is very close to the 4 light year distance between our Sun and Alpha Centauri.

In the central part of some galaxies, interstellar distances average as little as 0.008 light years to 0.013 light years. So, one could travel from one star to the next there in 8-13 years at only 0.1% of the speed of light (about 300 km/s). And, even in very dense galaxies it is rare for stars to collide:
Even at these high star densities, collisions are rare. Globular clusters have stars in their centers called “blue stragglers,” which astronomers think are new, massive stars formed by the collision of two old, lower-mass stars. Fewer than one in every 10,000 globular-cluster stars are blue stragglers, which suggests how rare stellar collisions are even in these extreme environments.

These collisions would take place only over billions of years. 
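As a back-of-the-envelope check on the travel times quoted above (my own arithmetic, not from the linked sources), travel time is just distance divided by speed:

```python
# Travel time in years = distance in light-years / speed as a fraction of c.
# Distances and speeds are the illustrative figures quoted in the post.
cases = [
    ("Sun to Alpha Centauri", 4.0, 0.01),             # ~4 ly at 1% of c
    ("Galactic center, wide spacing", 0.4, 0.01),     # 0.4 ly at 1% of c
    ("Galactic center, close spacing", 0.04, 0.01),   # 0.04 ly at 1% of c
    ("Densest galactic cores", 0.008, 0.001),         # 0.008 ly at 0.1% of c (~300 km/s)
    ("Densest galactic cores", 0.013, 0.001),         # 0.013 ly at 0.1% of c
]
for label, distance_ly, speed_c in cases:
    years = distance_ly / speed_c
    print(f"{label}: {distance_ly} ly at {speed_c:.1%} of c -> {years:,.0f} years")
```

The 400 year figure for the Sun to Alpha Centauri trip at 1% of the speed of light is what makes the journey non-viable in a human lifetime from Earth, while the same speed covers the shortest galactic center distances in about four years.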

Will A Next Generation Collider See A Sphaleron?

Overview

A new pair of preprints, whose abstracts and citations are set forth below, uses newly updated experimental values of Standard Model physical constants and rigorous calculations to examine the sphaleron energy threshold and its other properties. This tells experimentalists very specifically where to look and what to look for when trying to observe a Standard Model sphaleron interaction.

The sphaleron interaction is the only time in the Standard Model of Particle Physics that baryon number and lepton number are not simultaneously conserved. 

But it requires extremely high collider energies to form in detectable numbers: because the energy of the interaction must be confined so compactly, a collider energy roughly an order of magnitude greater than the nominal sphaleron energy, something on the order of 91 TeV, is needed to confidently assert that sphalerons have been discovered.

Not Yet Observed Standard Model Phenomena

The two biggest predictions of the Standard Model of Particle Physics that haven't been definitively observed yet are the sphaleron and pure glueballs (strong force bound hadrons with no valence quarks), along with certain other hadrons predicted to exist in Standard Model quantum chromodynamics (QCD). 

Standard Model Hadron Predictions Not Yet Seen

Current experiments have more than enough energy to produce glueballs, which are predicted to have masses on the order of 0.5 GeV to 3 GeV in their ground states (while the LHC can create energies up to 14,000 GeV), and some experimentally observed mesons have been provisionally identified as likely glueballs. 

But identifying them definitively against other possible explanations of glueball-like resonances is challenging, since they are electromagnetically neutral, color charge neutral, don't interact via the weak force at the tree-level, and are bosons with integer spins shared by mesons in overlapping mass ranges. Glueballs have a natural tendency to blend into mesons with the same quantum numbers, resulting in mixed hadron resonances.

A similar issue applies to some of the heaviest hadrons predicted by the Standard Model but not yet definitively identified with resonances observed at sufficient statistical significance. But the most massive of these have ground states with masses of 20 GeV or less, and a few new ones are identified every year these days. Observing the last few is mostly just a matter of time.

Similarly, the project of identifying the underlying structure of hadron resonances other than simple pseudo-scalar valence quark-antiquark mesons and simple three valence quark baryons is progressing one hard-won resonance identification at a time, with no sweeping explanations yet for the large groups of resonances whose structures lack a consensus explanation. 

There are even hints of exceedingly improbable, short-lived, and rare ultrahigh energy toponium (i.e. a meson made up of a top quark and an anti-top quark, with a mass on the order of 340 to 350 GeV). Its production would also be enhanced at a next generation, higher energy particle collider, with a 73 TeV collider energy and a very large number of collisions being a key threshold to detect this highly improbable resonance. It is rare not just because it takes high energies, but also because of the high risk that the valence top quark and valence anti-top quark necessary to form one would decay or annihilate with each other before the quarks could hadronize into the most massive theoretically possible simple quark-antiquark meson. Toponium is almost guaranteed to be seen at a next generation collider along the lines of the LHC but more powerful.

Sphalerons

A sphaleron, in contrast, is basically the only major Standard Model prediction that is not yet confirmed, with an energy scale about 73 times the Higgs boson mass, that requires a bigger accelerator to confirm or rule out, even though the nominal sphaleron energy of 9.1 TeV is less than the 14 TeV peak energy of the LHC.

Analysis

To be clear, this study does not alter the long standing conclusion that sphaleron interactions cannot explain the baryon asymmetry of the universe (i.e. the extreme excess of matter over anti-matter in quarks, which is also true of charged leptons), although it does tweak estimates of what percentage of baryon asymmetry this interaction can explain with Standard Model physics.

I put even odds on whether sphalerons actually exist or not. A mathematically consistent modification of the Standard Model that would make baryon number and lepton number conserved symmetries of the Standard Model, which would make sphalerons impossible, would have almost no important phenomenological consequences for anything other than (i) baryogenesis and leptogenesis in the first few seconds after the Big Bang, which we don't really understand yet anyway and which could be explained without any baryon number and lepton number violations in other ways, such as a mirror universe model, and (ii) the presence or absence of sphaleron decay signatures in ultrahigh energy collider experiments. 

Non-detection of sphalerons would also disfavor a wide variety of grand unified theories (GUTs) and Theories of Everything (TOEs), which usually permit violations of baryon number and lepton number, making phenomena like proton decay (which is forbidden in the Standard Model of Particle Physics) rare but possible.

But, detecting sphalerons as predicted would also be a triumph for the Standard Model taken to the extremes of its domain of applicability.

The Standard Model prediction is that there would be a desert of new physics (within or beyond the Standard Model) at energies above those where sphalerons are observed.

The Papers

The electroweak sphaleron is a static, unstable solution of the Standard Model classical field equations, representing the energy barrier between topologically distinct vacua. 

In this work, we present a comprehensive updated analysis of the sphaleron using current Standard Model parameters with the physical Higgs boson mass of m(H)=125.1 GeV and m(W)=80.4 GeV, rather than the m(H)=m(W) approximation common in earlier studies. The study includes: (i) a complete derivation of the SU(2)×U(1) electroweak Lagrangian and field equations without gauge fixing constraints, (ii) high-precision numerical solutions for the static sphaleron configuration yielding a sphaleron energy E(sph)≃9.1 TeV, (iii) an analysis of the minimum energy path in field space connecting the sphaleron to the vacuum (a 1D potential barrier as a function of Chern-Simons number), and (iv) calculation of the sphaleron single unstable mode with negative eigenvalue ω^2=−2.7m(W)^2, providing analytical fits for its eigenfunction. 

We find that using the measured Higgs mass modifies the unstable mode frequency, with important implications for baryon number violation rates in both early universe cosmology and potential high-energy collider signatures. These results provide essential input for accurate lattice simulations of sphaleron transitions and precision calculations of baryon number violation processes.
Konstantin T. Matchev, Sarunas Verner, "The Electroweak Sphaleron Revisited: I. Static Solutions, Energy Barrier, and Unstable Modes" arXiv:2505.05607 (May 8, 2025).
We present a comprehensive analysis of electroweak sphaleron decay dynamics, employing both analytical techniques and high-resolution numerical simulations. 

Using a spherically symmetric ansatz, we reformulate the system as a (1+1)-dimensional problem and analyze its stability properties with current Standard Model parameters (m(H)=125.1 GeV, m(W)=80.4 GeV). We identify precisely one unstable mode with eigenvalue ω^2 ≃ −2.7m(W)^2 and numerically evolve the full non-linear field equations under various initial conditions. Through spectral decomposition, we quantify the particle production resulting from the sphaleron decay. 

Our results demonstrate that the decay process is dominated by transverse gauge bosons, which constitute approximately 80% of the total energy and multiplicity, while Higgs bosons account for only 7-8%. On average, the sphaleron decays into 49 W bosons and 4 Higgs bosons. The particle spectra consistently peak at momenta k ∼ 1−1.5m(W), reflecting the characteristic size of the sphaleron. 

Remarkably, these properties remain robust across different decay scenarios, suggesting that the fundamental structure of the sphaleron, rather than specific triggering mechanisms, determines the decay outcomes. These findings provide distinctive experimental signatures of non-perturbative topological transitions in the electroweak theory, with significant implications for baryon number violation in the early universe and potentially for high-energy collider physics.
Konstantin T. Matchev, Sarunas Verner, "The Electroweak Sphaleron Revisited: II. Study of Decay Dynamics" arXiv:2505.05608 (May 8, 2025).

Tuesday, May 6, 2025

A Notable Life In Quantum Physics

Chien-Shiung Wu was one of the pioneers of quantum mechanics and high energy physics, and was a female Chinese physicist in an era when women still made up only a tiny percentage of scientists in the field. 

If someone remembers you with a speech like this, one hundred and ten years after you were born, when you have long passed away, then you did something right in life.
In 1950, Chien-Shiung Wu and her student published a coincidence experiment on entangled photon pairs that were created in electron-positron annihilation. This experiment precisely verified the prediction of quantum electrodynamics. 
Additionally, it was also the first instance of a precisely controlled quantum entangled state of spatially separated particles, although Wu did not know about this at the time. 
In 1956, Wu initiated and led the so-called Wu experiment, which discovered parity nonconservation, becoming one of the greatest experiments of the 20th century. 
As Chen Ning Yang said, Wu's experiments were well known for their precision and accuracy. Experimental precision and accuracy manifested Wu's scientific spirit, which we investigate here in some detail. 
Yu Shi, "Scientific Spirit of Chien-Shiung Wu: From Quantum Entanglement to Parity Nonconservation" arXiv:2504.16978 (May 31, 2022) (This paper is the translated transcript of the speech the author made at the International Symposium Commemorating the 110th Anniversary of the Birth of Chien-Shiung Wu, on May 31, 2022 in Chinese. The above abstract is the translation of the original abstract of the speech.)

She earned her undergraduate degree in physics (which had a thesis requirement at the time) in China in 1934, prior to the Maoist Revolution, working under a professor only three years older than her who had studied under Madame Curie, and under the first female PhD in Physics in China, who earned that degree at the University of Michigan (where I also earned my graduate degree). Wu earned her own PhD at the University of California at Berkeley in 1940 (thirty years before my father earned his PhD at Stanford).
Wu was admitted by the University of Michigan to study at her own expense, and was financially supported by her uncle. On her way to Michigan, Wu visited Berkeley, where she was so impressed, especially by Ernest O. Lawrence’s cyclotron, that she wanted to stay in Berkeley. The cyclotron had been invented by Lawrence, so it was an ideal place for studying physics. Another important factor that influenced Wu’s decision was that she cared a lot about gender equality, and there was gender discrimination at the University of Michigan. In addition, there were a lot of Chinese students at the University of Michigan at the time, and Wu didn’t want her socializing be dominated by fellow Chinese students. So she stayed in Berkeley. Her decision reflected her devotion to physics as a woman.
She then taught at Smith (from which my sister-in-law graduated), and then Princeton, and then she worked at Columbia University as part of the Manhattan Project. 

She was highly productive (publishing more than fifty papers in the early 1950s when a huge share of U.S. women were homemakers in the Baby Boom), and her early post-war research agenda involved the verification of Fermi’s theory of β decay.

Chien-Shiung Wu served as the President of the American Physical Society from 1975 to 1976.
James W. Cronin, who won the 1980 Nobel Prize for his discovery of charge conjugation-parity (CP) nonconservation, once said, “The great discovery of Chien-Shiung Wu started the golden age of particle physics.” 

She continued to publish through at least 1980, and died in February of 1997. The author of the paper had met her.

Current LHC SUSY Exclusions

Supersymmetry, and its further elaboration, string theory, have turned out to be the biggest wrong turns in the history of high energy physics. They produced a few mathematical insights and better understandings of some deeper points about the Standard Model of Particle Physics along the half century journey, but ultimately these theories and these lines of inquiry were dead ends.  These theories continue to receive academic and experimental attention now, mostly because so many tenured physicists built their careers on these theories before their flaws became apparent.

(In due time, I expect that concepts like dark matter, dark energy, and cosmological inflation in astrophysics will similarly be found to have been plausible, but ultimately mistaken wrong turns, as well. Indeed, supersymmetry was originally one of several important reasons that dark matter particles received wider popularity as an explanation of dark matter phenomena and that the LambdaCDM model of cosmology gained wide initial acceptance, because it provided dark matter particle candidates that were well-motivated at the time, in roughly the right quantities in a thermal freeze out scenario in what was called the "WIMP miracle" until observational and experimental evidence ruled it out.)   

Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin (bosons) and particles with half-integer spin (fermions). It proposes that for every known particle, there exists a partner particle with different spin properties. There have been multiple experiments on supersymmetry that have failed to provide evidence that it exists in nature. If evidence is found, supersymmetry could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics. . . . 
There is no experimental evidence that either supersymmetry or misaligned supersymmetry holds in our universe, and many physicists have moved on from supersymmetry and string theory entirely due to the non-detection of supersymmetry at the LHC.
From Wikipedia (the emphasis in bold is mine). The chart below related to the theory sets forth the components of supersymmetry in one of its simpler forms (placing some of the terms used in the material quoted below in a clearer context):


A preprint of a commissioned review article for the Encyclopedia for Particle Physics, submitted on May 3, 2025 by Hyun Min Lee, entitled "Supersymmetry and LHC era" updates the experimental bounds on supersymmetry (SUSY) theories based upon the Large Hadron Collider (LHC) data. 

The author tries to paint these limitations in a positive light for supersymmetry, but the hard basic facts are that supersymmetry is now all but dead as a viable explanation of the real world, and that supersymmetric particles (a.k.a. "sparticles") don't exist.

Some of the article's more specific conclusions are as follows:
[T]he gluino masses up to about 2.4 TeV and the squark masses up to about 1.8 TeV have been ruled out by ATLAS. Similar limits from CMS can be also found. . . . stop masses up to about 1250 GeV have been ruled out from ATLAS data and there are similar bounds on the stop mass from CMS. . . . the mass ranges up to about 700 GeV for sleptons and about 1200 GeV for chargino or heavier neutralino have been excluded.

Some supersymmetric models are inconsistent with the current W boson mass (apart from the anomalous and deeply flawed CDFII measurement that is 7 sigma from the global average measurement of more recent and more precise experiments and from the other experiment at Tevatron). For example, this constrains the slepton mass.

More supersymmetric models are inconsistent with the agreement between the experimental value of muon g-2 and the Lattice QCD prediction for its value (rather than the flawed "white paper" value which is 5.2 sigma from the experimental value). The white paper estimate of the muon g-2 deviation from the Standard Model expectation is about 2.5 ± 0.5 * 10^-9, while the Lattice QCD prediction, which is confirmed by more recent experimental data, is closer to 0.5 * 10^-9. The paper examines several supersymmetric benchmark models with values from 1.56 to 2.24 * 10^-9, noting that:

In the left of Fig. 3, we present three benchmark models which are consistent with the muon g−2 anomaly within 1σ (Model II and III) or 2σ (Model I) in general gauge mediation. We listed the masses for electroweak superpartners in general gauge mediation and the predictions for ∆aµ and ∆MW. In particular, Model III is also consistent with the W boson mass measured by CDFII within 1σ. 

In the right plot of Fig. 3, we also show the general independent mass parameters for superpartners, namely, m̃_qL ≃ m̃_uR ≃ m̃_dR, M_1, M_2, M_3, m̃_lL and m̃_eR. 

The paper doesn't even try to devise a set of supersymmetry parameters consistent with the much smaller muon g-2 anomaly that the overwhelming evidence suggests is actually the case when the Standard Model expectation is calculated correctly. It does note, however, that:

There are two typos [sic] of supersymmetric interactions relevant for the muon g−2. One Yukawa type interaction of the muon is to a neutral scalar ϕ with mass mϕ and a charged fermion F with charge −1 and mass mF . . . . Another Yukawa type interaction of the muon is to a charged scalar χ with charge −1 and mass mχ and a neutral fermion λ with mass mλ[.]

So, given the actual muon g-2 consistency between experiment and the predicted Standard Model value, the roughly 400 GeV superpartner masses of the Model I scenario would have to be much higher. They would probably have to be at least in the 800s of GeV, if not more, as I estimate using linear interpolation from the other models; since the actual relationship is non-linear, the lower bound is probably even higher.
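To put a number on that guess with something better than eyeballing (my own rough scaling, not from the paper): the leading supersymmetric contributions to muon g-2 fall off roughly as 1/M^2 in the superpartner mass scale, so shrinking the required anomaly from the roughly 2 * 10^-9 targeted by the benchmark models to the roughly 0.5 * 10^-9 consistent with the Lattice QCD prediction pushes the superpartner mass scale up by about a factor of two.

```python
import math

# Assumed rough scaling (not from the paper): delta_a_mu ~ C / M^2, so
# M_new = M_old * sqrt(delta_a_old / delta_a_new).
M_old = 400.0      # GeV, approximate electroweak superpartner scale of Model I
da_old = 2.0e-9    # roughly the anomaly size the benchmark models target
da_new = 0.5e-9    # anomaly size consistent with the Lattice QCD prediction
M_new = M_old * math.sqrt(da_old / da_new)
print(f"superpartner mass scale needed: ~{M_new:.0f} GeV")   # ~800 GeV
```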

We comment on the dark matter candidates and collider signatures with electroweak superpartners in the benchmark models. 

In Model I, the neutralino is the LSP, so it can be a dark matter candidate. In this case, due to small splitting between the slepton masses and the neutralino mass, the decay products of the sleptons may be composed of soft jets or leptons (See the left plot in Fig. 4). If the gravitino is lighter than the neutralino, then it can be a dark matter candidate, but displaced decays of the neutralino can lead to distinct signatures at the LHC (See the right plot in Fig. 4 for the results from ATLAS). 

On the other hand, in Model II and III, smuon or sneutrino is the LSP, so there is no dark matter candidate within the MSSM. However, if the gravitino is lighter than the sleptons, the MSSM LSP is long-lived, decaying into a pair of the gravitino and charged leptons(neutrinos) at the displaced vertex from the production point (See the right plot in Fig. 5 for the results from ATLAS). In all the benchmark models, the heavier neutralinos or charginos have been searched for from their prompt decays, as shown in the left plot of Fig. 5.

The paper doesn't connect the dots, but because we know from direct dark matter detection experiments that there is no supersymmetric particle consistent with being a dark matter particle in anything approximating these mass ranges, the paper's Model I lightest supersymmetric particle (LSP) cannot be a dark matter candidate.

Proton decay also rules out much more of the SUSY parameter space if one doesn't resort to baroque convolutions in the theory to suppress it:

In the minimal SU(5) unification with TeV-scale masses for superpartners, the dimension-5 operators generated at the mass scale of the colored Higgisnos are problematic for the proton lifetime. Even in the scenarios with split masses between colored and non-colored superpartners in general gauge mediation, the problem remains because of a large decay rate of the proton coming from the stop, the light stau and the Higgsinos in loops. However, the problem of the proton lifetime can be solved when the model is embedded in the orbifold GUTs where the Higgsino wavefunctions get suppressed at the orbifold fixed point, so do the dimension-5 operators for the proton decays.

Finally, not to put too fine a point on it, but there is absolutely zero positive experimental evidence for supersymmetry or superpartners, despite the fact that looking for them was what the LHC was optimized to do. The exclusions are only as low as they are because of the limits of the instrumentation in exploring higher energies.

The bottom line is that supersymmetry is absent up to the TeV level, and the non-detection of proton decay, the lack of a muon g-2 anomaly, and the W boson mass all disfavor it at superpartner masses at least up to the tens of TeV level.

Supersymmetry doesn't address the hierarchy problem it was devised to address, doesn't provide a dark matter candidate, and doesn't explain any well established anomalies in the observational or experimental data. In general, supersymmetry is no longer well-motivated as a beyond the Standard Model theory.

Moreover, because supersymmetry is a low energy approximation of the lion's share of string theories given serious examination by theoretical physicists, these experimental bounds also strongly disfavor string theory.

Of course, supersymmetry advocates and string theory advocates can try, as they have many times  over the last fifty years or so since it was proposed in the late 1960s and early 1970s, to move the goalposts of their predictions for its parameters "just around the corner" from the experimental limits that rule out or strongly disfavor it to values that can't yet be ruled out, even though these parameter values would no longer address anything that motivated this theoretical approach in the first place. 

But, increasingly, "just around the corner" means energy scales that cannot be reached even at the next generation of particle collider, but which might be reached at the next generation particle collider after that one, maybe several decades in the future.

Footnote

Another preprint today looks at dark matter particle candidates whose properties are strongly fixed by existing measurements in an SU(6) Grand Unified Theory (GUT) model. 

The allowed parameters for these dark matter candidates, however, are ruled out by astrophysics observations (the candidates, with masses in the single digit TeV range and above, are too massive to be directly ruled out by direct dark matter detection experiments).

Finally, yet another preprint today also considers a two component dark matter particle model suggested by a different GUT, which is ruled out by direct dark matter detection experiments, unless the GUT is twisted in a manner specifically designed to overcome this problem that otherwise has no observational or theoretical motivation. This elaborated version of the GUT does create a theoretical basis for neutrino mass, but also establishes a dark matter candidate that is ruled out by astronomy evidence, even though it isn't ruled out by direct dark matter detection experiments or high energy physics experiments.

Dark Matter


From here.

Thursday, May 1, 2025

Kana v. Kanji In Japanese Writing

The use of kanji (Chinese characters) relative to kana (a phonetic script) has slowly and steadily declined in the Japanese language.

This graph shows the percentages of A words of Sinitic derivation written in kana, B words of Sinitic derivation written with kanji, C words of Japanese derivation written with kanji, and D words of Japanese derivation written in kana over the years 1879-1968. As you might expect, words in A increase very slightly. B shrinks linearly and a bit more rapidly. The ratio of C/D shrinks quite a bit[.]

There is every reason to believe that this trend has continued from 1968 to the present. 

MONG

MONG is another notable modified gravity explanation of dark matter phenomena, with this paper focusing on the intracluster medium, a relatively novel observable with which to examine modified gravity proposals. The focus on gravitational self-energy is also notable, as is the fact that this is not a "lone wolf" paper.
In view of the negative results from various dark matter detection experiments, we had earlier proposed an alternate theoretical framework through Modification of Newtonian Gravity (MONG). Here, the Poisson equation is modified by introducing an additional gravitational self-energy density term along with the usual dark energy density term. 
In this work we extend this model to account for the presence of low-density gas at high temperatures (10^8 K) in the intra cluster medium (ICM) by estimating the velocities to which particles will be subjected by the modified gravitational force. Considering that the ICM is under the influence of the cluster's gravity, particle velocities of the ions in the ICM must be balanced by the cluster's gravitational force. The particle velocities obtained for various clusters from their temperature profiles match the velocity produced by the MONG gravitational force. Thus, the increase in the gravitational potential at the outskirts of galaxies balances the thermal pressure of the ICM, maintaining hydrostatic equilibrium without invoking DM. 
The effect of MONG on the angular momentum of galaxies is also studied by obtaining a scaling relation between the angular momentum and the mass of a galaxy. MONG predicts a higher dependence on mass in comparison to the Lambda-CDM model. This increased dependence on mass compensates for the halo contribution to the angular momentum. The angular momentum from MONG for galaxies from the SPARC database is compared to the halo angular momentum by a Chi-square fit technique. The correlation coefficient is found to be unity, showing a replicable result.
Louise Rebecca, Dominic Sebastian, C Sivaram, Kenath Arun, "Modification of Newtonian Gravity: Implications for Hot Gas in Clusters and Galactic Angular Momentum" arXiv:2504.21021 (April 22, 2025).
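For a sense of the numbers involved (my own estimate, not taken from the paper): the thermal velocity of protons in 10^8 K intracluster gas, which is the sort of particle velocity that the modified gravitational force has to balance to maintain hydrostatic equilibrium.

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
m_p = 1.67262e-27    # kg, proton mass
T = 1e8              # K, the quoted ICM temperature

# Root-mean-square thermal speed of a proton: v = sqrt(3 * k_B * T / m_p)
v = math.sqrt(3 * k_B * T / m_p)
print(f"thermal proton speed at 10^8 K ~ {v / 1e3:.0f} km/s")   # ~1600 km/s
```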

Wednesday, April 30, 2025

Prehistoric Germans And Finns

Razib Khan has a short little recap of what ancient DNA and other sources tell us about the ancestors and linguistic origins of the Germanic and Finnish people. It is summed up in the following image:


He opens with this summary:

A 2023 preprint out of David Reich’s lab seems to have come close to pinpointing the origin of the Baltic’s Finnic peoples, while a 2025 preprint from his rival Eske Willerslev’s group may have uncovered the proto-Germanic tribes’ ancestral homeland in the most unexpected locale. Because whereas the Finnic tribes’ destination was the eastern Baltic, that same zone now appears to have been the proto-Germanics’ and their ancestors’ long mysterious origination point. In a general sense, the Finns’ and Estonians’, and their proto-Uralic ancestors’ more than 1,000-year journey from one end of Eurasia to the other is little surprise, just a refinement whose precise details linguists, archaeologists and now geneticists had long quested to pin down. But the very suggestion that what became Finland and Estonia were meanwhile the mysterious homeland of the earliest proto-Germanic-speaking people comes straight out of left field. Disciplines like archaeology have barely had time to come to grips with the ramifications of this possibility, with early scholarly response thus far amounting to little more than stunned silence.

He also provides a map of the range of the Uralic language family:


Both narratives make sense to me. 

The oldest historically attested homeland of the Germanic peoples, where proto-Germanic (the ancestor of Old Norse) was spoken, is in the vicinity of Denmark, Southern Sweden, and Southern Norway. But, there was good reason, based upon historic archaeological cultures, the proto-Indo-European homeland, and genetics, to assume that their ancestors were among the Corded Ware people further to the east.

Likewise, there has never been any real doubt that the Uralic homeland was not in Finland, and that it was someplace further east, either in the Ural Mountains themselves or in the Northeastern Asian region loosely described as Siberia.

To some extent, the arrival of the linguistically Uralic proto-Finns may have provided a motive for the exodus of the proto-Germans.

The genetics also tell us that the proto-Finns' arrival followed a pattern familiar in demographic history: a male-dominated group arrived, conquered, and took local wives. As Razib explains:
[While] Finnic mtDNA did not differ from that of their Scandinavian neighbors to the west, Finnic Y chromosomes were markedly distinct. About 60% of Finnish men carried haplogroup N, as compared to 7% of Swedish males, 3% of Norwegian men and 1% of male Danes (while N is basically wholly absent from Western and Southern Europe). 
Interestingly, haplogroup N, and Finland’s particular sublineage, is also found in populations to the east, from European Russia all the way out to Siberia’s Pacific coast. In Russia’s frigid far north, half of men carry this lineage, while among the Finnic-speaking ethnicities of the Russian Urals, the Udmurts and Mari, its frequency hovers around 30-50%. Among the Samoyed tribes, over 50% of men carry N. Finally, among the northeasternmost Turkic-speaking people in the world: the Yakuts of eastern Siberia, 80-95% of men are N. 
Though haplogroup N’s ambit is more extensive than the map of Uralic languages today, save for Hungarians, all Uralic-speaking populations harbor N in high numbers. If you are a man who carries N, you may not be Uralic, but if you are a (non-Hungarian) Uralic male, odds are good that you carry N.

We know why Hungary is different. Their Uralic language arrived in central Europe around 1000 CE, when it is historically attested that Magyar conquerors arrived and, without admixing much with the locals, converted the local central European peasants to their language through elite dominance. We even know that these Magyar conquerors ventured west because Turkic-speaking tribes pushed them out.

We also know about the first farmers of Europe, Linear Pottery Neolithic people derived from Western Anatolian farmers, and about the European hunter-gatherers who preceded them, both of whom came before the people who were the ancestors of the Germans and, later, of the Finns. 

The European hunter-gatherers who preceded the first farmers of Europe started from a clean slate in the Mesolithic era, because most of Northern Europe was either under glaciers or too frigid to be habitable around the time of the Last Glacial Maximum (LGM) around 20,000 years ago, and these glaciers had to retreat for several thousand years before the region could be repopulated. 

Neanderthals don't appear to have ever managed to populate regions that far north and were extinct many thousands of years prior to the LGM. There were some Cro-Magnon (i.e. modern human) European hunter-gatherers quite far north in Europe before the ice age that gave rise to the LGM; they started to arrive in Europe about 40,000 years ago. They were genetically rather similar to the European hunter-gatherers who repopulated Europe after the LGM. But the pre-LGM European hunter-gatherers either retreated to one of three main refugia in Southern Europe during that ice age, or died.

Slavic people replaced Finnic peoples in much of what is now Russia as the Slavs expanded between their ethnogenesis in the historic era, around the time of the fall of the Roman Empire, and sometime around the early modern period in Europe, which started around the time of the Renaissance and the Protestant Reformation (with later expansion into Northeast Asia).

John Hawks On Scientific Consensus

John Hawks, an anthropologist who specializes in archaic hominins, considers at his blog, in reaction to an editorial in the peer reviewed scientific journal Science, what the term "scientific consensus" means. He begins as follows:

In an editorial in this week's Science, the journal's editor Holden Thorp develops an argument that the notion of “scientific consensus” has confused public discussion of science. In Thorp's view, the public misunderstands “consensus” as something like the result of an opinion poll. He cites the communication researcher Kathleen Hall Jamieson, who observes that arguments invoking “consensus” are easy for opponents to discredit merely by finding some scientists who disagree.

Thorp notes that what scientists mean by “consensus” is much deeper than a popularity contest. He describes it as “a process in which evidence from independent lines of inquiry leads collectively toward the same conclusion.” Leaning into this idea, Thorp argues that policymakers should stop talking about “scientific consensus” and instead use a different term: “convergence of evidence”.

It would be a big move for a magazine representing the entire breadth of American science to reject the idea of scientific consensus.

For the last twenty years the idea of “scientific consensus” has been widely adopted by scientific organizations and policymakers, especially applied to politically contentious topics such as climate change, vaccine hesitancy, and COVID-19 response. Many organizations shifted their policy advocacy by issuing statements reflecting the consensus of their members. Science itself helped launch this era with the publication of Naomi Oreskes' 2004 article, “The Scientific Consensus on Climate Change”. This article recounted the number of organizations representing scientists that had issued statements or policy documents about the evidence for human-induced climate change.

The purpose of such statements was to counter public perceptions that there might be significant scientific disagreement about climate change.

The use of the term has grown steadily since the late 1960s (image from the linked blog entry):


Another chart in the blog post notes that use of the term "convergence of evidence" peaked in the 1950s and that "scientific consensus" overtook it in frequency of use around 1985.

Hawks argues that there is also a place for the concept of a consilience of evidence:
The idea was formulated by the nineteenth-century philosopher William Whewell, who also coined the word scientist. Whewell wanted to understand how observations give rise to theories. His idea was that the induction of a hypothesis or theory from observations requires another step, a step in which evidence developed by other means of observation must also show consistency with the same theory or hypothesis. He used the term “consilience” for this matching of evidence of different kinds.

Michael Ruse noted Charles Darwin's work as a hallmark of the consilience approach. Darwin brought together evidence from entirely different fields of inquiry: animal and plant breeding, geology, natural history, biogeography, sociology, and many others. He had a remarkable ability to answer questions in one field by examining data in another field entirely. The ability to bring together observations that seem disconnected from each other, explain all of them with one unifying explanation is a powerful mode of scientific thinking.

Consilience of evidence also helps to answer criticism that scientists are closing off debate by excluding ideas that do not fit within their disciplinary boundaries. Where “convergence of evidence” may seem inward-facing, confined to a single research tradition, consilience is explicitly outward-reaching. It requires translation and integration across disciplinary boundaries and sometimes even across different ways of knowing.

I'm skeptical of both the "convergence of evidence" phrase, which has many of the same problems, and the "consilience of evidence" phrase, which is simply beyond the vocabulary of most of the people at whom the rhetoric using the term would be directed.

But the deeper and ultimate question is how to draw the line between what is settled and accepted science, to which any legitimate future challenge has not yet manifested, and scientific ideas which remain the subject of controversy among scientists. 

On one hand, one doesn't really want to adopt a "heckler's veto" standard, under which a single crackpot, whose weak methodology or methods lead even scientists inclined to be sympathetic not to take the crackpot seriously, is enough to upset a scientific consensus. But on the other hand, one doesn't want to leave the misimpression that scientific truth is a popularity contest or a matter of democratic decision-making, or that the authority and prestige of individual scientists holding a scientific opinion is more important than the evidence and reasoning supporting their opinions. 

When there are two well-reasoned theories advanced by people who are behaving like genuine scientists, each supported by evidence that doesn't conclusively disprove the alternative, there is no scientific consensus on which one is correct (although there are still many crackpot theories that are definitively rejected by the scientific consensus even when there is no scientific consensus singling out one of several disputed possible scientific explanations of the world).

Making these distinctions at a vague, common sense level, in actual real world cases, usually isn't that hard. But putting into words a rule that puts some cases on one side of the line, and other cases on the other side of the line, can be challenging.

Saturday, April 26, 2025

The Punic People Were Mostly Greek, Not Levantine, In Ancestry

Ancient DNA from the Iron Age and classical Greco-Roman era reveals that the Punic people were much closer genetically to the Greeks and modern Sicilians than to the Phoenicians of the Levant who founded this maritime empire in the Western Mediterranean.

Punic people from this time period had been expected to be genetically similar to the Phoenicians, who had often been assumed to be the ancestors of the Punic people, since archaeological and historical information indicated that the Phoenicians founded Carthage and other Punic cities. Linguistic information had also supported this expectation:

The Punic language, also called Phoenicio-Punic or Carthaginian, is an extinct variety of the Phoenician language, a Canaanite language of the Northwest Semitic branch of the Semitic languages. An offshoot of the Phoenician language of coastal West Asia (modern Lebanon and north western Syria), it was principally spoken on the Mediterranean coast of Northwest Africa, the Iberian Peninsula and several Mediterranean islands, such as Malta, Sicily, and Sardinia by the Punic people, or western Phoenicians, throughout classical antiquity, from the 8th century BC to the 6th century AD.

To the extent that the Punic people were genetically different from the Greeks, this was predominantly due to Iberian and Northwest African admixture, rather than due to Levantine admixture. 

Levantine admixture was completely absent from the Punic sample, except in three individuals (about 5% of the Punic sample analysed with Admixture) who were predominantly Levantine, and another four individuals who were predominantly North African in ancestry with very minor Levantine admixture (but with no Greek, Iberian, or other kinds of ancestry). 

This suggests a narrative in which a 95% Greek-like Punic people may have mostly replaced (without meaningfully admixing with) a society in which some people were nearly purely Levantine Phoenicians, and others were assimilated indigenous Northwest Africans with minor Phoenician ancestry. 

Likewise, none of the contemporaneous ancient DNA from the Levant showed any Greek admixture at all, although three of eleven samples had small amounts of North African ancestry, and a fourth had small amounts of Iranian and Iberian ancestry (but no North African admixture).


The paper is Harald Ringbauer, et al., "Punic people were genetically diverse with almost no Levantine ancestors" Nature (April 2025).

As Bernard explains at his blog (via Google translate from French):
Phoenician culture emerged in Bronze Age city-states in the Levant. By the early first millennium BCE, the Phoenicians had established an extensive trade network along the Mediterranean coast as far south as the southwest shores of the Iberian Peninsula, spreading their culture, religion, and language. 
By the mid-sixth century BCE, Carthage, a Phoenician colony in present-day Tunisia, emerged as a major center of power in the central and western Mediterranean, as Levantine influence declined as their cities fell under the control of the Neo-Assyrian and Neo-Babylonian empires. Carthage subsequently came into conflict with Greek city-states in the fifth and fourth centuries BCE, and then with the Roman Empire in the third and second centuries BCE, before its final destruction in 146 BCE. 
In this article, the term Punic is given to all archaeological sites in the central and western Mediterranean associated with Phoenician culture, dated between the sixth and second centuries BCE, corresponding to the hegemony of Carthage in the region.

They analyzed the genomes of 210 ancient individuals from 14 Phoenician or Punic archaeological sites located in the Iberian Peninsula, Sardinia, Sicily, North Africa and the Levant dated between 600 and 150 BCE. There are no individuals older than 600 BCE, because before this date cremation was the most common burial method in these communities.

We don't know if the Phoenician founders of Carthage were later replaced by Greeks, if the original Phoenician colonists were recruited from Greece in the first place with a small endogamous caste of Levantine elites leading them, or if the Greeks were brought in by the Phoenicians later on as a subordinate caste of maritime people who ultimately rose to become the dominant caste in Punic society as the Phoenician maritime empire fell apart.

The ancient DNA samples come almost entirely from the time period at and after the Punic region lost close contact with the Phoenicians of the Levant.

It is possible that Levantine Phoenicians and Greek/North African/Iberian peoples co-existed in the Punic region but were basically genetically distinct endogamous castes, and that the Phoenician ancient DNA from this later period is mostly absent from the sample because Phoenicians continued to cremate their dead, rather than because they had been replaced, while the other caste that had substantially Greek ancestry buried their dead at this point. (The Bronze Age Greeks also mostly cremated their dead at the point in time when Indo-Europeans conquered them and converted them to an Indo-European language.)

This linguistic data can help us weigh which of the possible narratives to explain the ancient DNA is most plausible.

The fact that the Punic people spoke a Phoenician language, rather than Greek or Latin, however, despite their lack of significant Levantine Phoenician genetic ancestry, suggests that the ancestors of the Punic people with Greek ancestry underwent a language shift from Greek to Phoenician due to elite dominance by a Levantine Phoenician elite.  

If ancestors of the genetically Greek Punic people had replaced the Levantine Phoenician people by simply conquering them, we would have expected the Punic people to speak a language related to Greek rather than a North Semitic language (that is a close linguistic cousin of Hebrew and Arabic).

Yet, the lack of admixture between the caste whose members had any Levantine Phoenician ancestry and the caste that is mostly Greek in genetic ancestry tends to disfavor the presence of the non-Levantine caste in the earliest founding period of Carthage. This inference is particularly strong in light of the fact that the Phoenicians did have some admixture with the indigenous North Africans who preceded them in Carthage and the vicinity.

It is more plausible that the primarily Greek caste became part of Punic society in the roughly two-and-a-half-century-long time period from the mid-sixth century BCE, when Levantine influence declined as the Levantine cities fell under the control of the Neo-Assyrian and Neo-Babylonian empires, to the fifth and fourth centuries BCE, when Carthage came into conflict with Greek city-states. 

Before that, these Phoenician colonies were probably just Levantine Phoenicians and indigenous North Africans. It also seems likely that this demographic shift took place at the early end of this quarter millennium time period, allowing the dominant-subordinate status of the respective castes to emerge before the conflicts with the Greek city-states reached their high water mark.

Another possibility is that part of what kept a Levantine Phoenician caste distinct and endogamous from a caste with an ancestral Greek core is that the Levantine Phoenician caste spoke Punic, while the caste with an ancestral Greek core spoke some dialect of Greek as their primary language, but didn't interact with the outside world much because the Levantine Phoenicians were the ruling caste of the Punic world, even though they made up only a modest percentage of the total population. 

This would have some similarities to the situation in medieval Finland while it was under Swedish rule, where power was held by Swedish speakers for centuries, even though most of the people spoke Finnish as their primary language, but with less genetic admixture between the two linguistic groups.

Thursday, April 24, 2025

A Prime Planet 9 Candidate Has Been Identified

Astronomers have scoured existing solar system astronomy data from two different collaborations, twenty-three years apart, to identify a single candidate for Planet 9 using the properties that previous studies have determined it should have. 

Further observation will be needed to determine if this candidate is actually Planet 9 or not, as they have only two point-in-time observations taken twenty-three years apart to go on. But, they now know precisely where to look for it.

How Big Are Ultralight Dark Matter Cores?

Hypothetical "cold dark matter" particles have a natural tendency to form a cusp at the center of a galaxy. But this isn't how inferred dark matter distributions look in real life. Instead, galaxy dynamics and lensing observations imply a more homogeneous "core" of fairly constant density, a.k.a. (in some models) a soliton, in the inner region of a dark matter halo.

But hypothetical ultralight dark matter has wave-like behavior on the appropriate scale that, in theory, causes it to form cores in galaxies, rather than the cuspy central regions of dark matter halos that more massive cold dark matter particle candidates, such as the GeV-mass WIMPs (weakly interacting massive particles) suggested as dark matter candidates in supersymmetry theories, would produce.
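For readers who want the contrast in formulas, a minimal sketch of the standard profile shapes (textbook forms, not taken from the paper discussed below): the cuspy NFW profile from cold dark matter simulations diverges as 1/r toward the center, while a cored profile levels off to a roughly constant central density.

```python
# Cuspy (NFW) vs. cored (pseudo-isothermal) dark matter density profiles.
# The parameter values are illustrative only; units are arbitrary.
def rho_nfw(r, rho_s=1.0, r_s=10.0):
    """NFW profile: diverges as 1/r at small r (a central 'cusp')."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def rho_cored(r, rho_0=1.0, r_c=1.0):
    """Pseudo-isothermal profile: flattens to rho_0 at small r (a central 'core')."""
    return rho_0 / (1.0 + (r / r_c) ** 2)

for r in (0.01, 0.1, 1.0, 10.0):
    print(f"r = {r:5}: NFW ~ {rho_nfw(r):8.1f}, cored ~ {rho_cored(r):5.3f}")
```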

A new paper explains how this happens and how big the cores are predicted to be at a technical level in ultralight dark matter scenarios. By making predictions about the size of ultralight dark matter cores, this, in turn, provides a way to test the ultralight dark matter particle theory in new observations or new analyses of old data. 

The paper and its abstract are as follows:
In theories of ultralight dark matter, solitons form in the inner regions of galactic halos. The observational implications of these depend on the soliton mass. Various relations between the mass of the soliton and properties of the halo have been proposed. We analyze the implications of these relations, and test them with a suite of numerical simulations. 
The relation of Schive et al. 2014 is equivalent to (E/M)(sol)=(E/M)(halo) where E(sol)(halo) and M(sol)(halo) are the energy and mass of the soliton (halo). If the halo is approximately virialized, this relation is parametrically similar to the evaporation/growth threshold of Chan et al. 2022, and it thus gives a rough lower bound on the soliton mass. A different relation has been proposed by Mocz et al. 2017, which is equivalent to E(sol)=E(halo), so is an upper bound on the soliton mass provided the halo energy can be estimated reliably. 
Our simulations provide evidence for this picture, and are in broad consistency with the literature, in particular after accounting for ambiguities in the definition of E(halo) at finite volume.
Kfir Blum, Marco Gorghetto, Edward Hardy, Luca Teodori, "Bracketing the soliton-halo relation of ultralight dark matter" arXiv:2504.16202 (April 22, 2025).

The introduction to the paper frames the question in the context of ultralight dark matter theories and the astronomy observations relevant to them.
Ultra-Light Dark Matter (ULDM) is a well-motivated Dark Matter (DM) candidate, potentially arising in high energy completions of the Standard Model of Particle Physics. It is generically produced in the early Universe via the vacuum misalignment mechanism, and is stable on cosmological timescales. Compared to collision-less Cold Dark Matter, ULDM leads to novel behavior on distances comparable to or smaller than its de-Broglie wavelength λdB = 2π/(mv), where v is the characteristic velocity of a system. On such scales ULDM’s wave-like nature is manifest. This results in a suppression of power in cosmological perturbations, leaving observable imprints on the Cosmic Microwave Background (CMB) anisotropy power spectrum, galaxy clustering, and the Lyman-alpha forest. ULDM wave-like density fluctuations can also lead to astrophysical effects inside galaxies, such as dynamical heating and dynamical friction, leading to constraints using systems like dwarf and ultra-faint dwarf galaxies. Observational constraints on the magnitude of such effects bounds the particle mass of an ULDM candidate m that comprises all of Dark Matter to satisfy m ≳ 10^−21eV.

Another key feature of ULDM, on which we focus in this work, is the formation of cored density profiles at the centers of galaxy halos. These cores consist of ‘solitons’, which are a ground state of the system in the sense that the soliton solution to the ULDM equations of motion minimizes the energy for a fixed mass. Solitons have been seen in ULDM halos in many numerical simulations. Solitons can affect the observed rotation curves of low-surface-brightness galaxies and irregular dwarf galaxies, stellar kinematics and dynamics of dwarf galaxies, and even strong gravitational lensing time delays, and involve interesting physics such as stochastic motion and quasi-normal mode fluctuations. It has been suggested that soliton cores may play a role in resolving the core-cusp problem, namely, the mismatch between simulations of cold dark matter and observations.

A natural question is whether a soliton forms within the lifetime of a galaxy and, if so, what is its expected mass for a given host galaxy halo. Dynamical relaxation estimates of the timescale for soliton formation are consistent with the results of simulations that use “noise” initial conditions, which are designed to be in the kinetic regime. Meanwhile, simulations with cosmological initial conditions suggest that solitons in cosmological halos may form more rapidly than predicted by kinetic theory estimates. Regarding the expected soliton mass, the cosmological simulations . . . provided numerical evidence for a simple relation between the soliton mass and the host halo mass and energy. Those authors also suggested that the relation may represent an attractor of the equations of motion, supporting this point via simulations with different initial conditions. Many other investigations of the soliton-halo relation have subsequently appeared in the literature, reporting varying levels of agreement. In this work, we present a new perspective on the problem.
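For scale (my own back-of-the-envelope number, not from the paper): the de Broglie wavelength λdB = 2π/(mv) for a particle at the quoted lower bound of m ≈ 10^−21 eV moving at a typical galactic velocity of about 200 km/s works out to tens of parsecs, which is why the wave-like effects show up on galactic core scales.

```python
import math

hbar_c = 197.327e-9   # eV * m (value of hbar times c)
c = 2.998e8           # m/s, speed of light
pc = 3.0857e16        # meters per parsec

m = 1e-21             # eV, ULDM particle mass at the quoted lower bound
v = 200e3             # m/s, a typical galactic velocity

# lambda_dB = 2*pi*hbar/(m*v) = 2*pi*(hbar*c)/(m*(v/c)), with the mass m in eV
lam = 2 * math.pi * hbar_c / (m * (v / c))
print(f"de Broglie wavelength ~ {lam / pc:.0f} pc")   # ~60 pc; ~10x larger for m = 1e-22 eV
```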

Thursday, April 17, 2025

How Big Does The Sun Look From Solar System Planets?
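The post is built around a chart image; as a rough sketch of the arithmetic behind it (mine, not the chart author's), the Sun's apparent angular diameter seen from a planet at distance d is θ = 2·arctan(R_sun/d), about half a degree from Earth and a bit under two hundredths of a degree from Neptune.

```python
import math

R_SUN_KM = 696_000.0   # solar radius in km
AU_KM = 149.6e6        # one astronomical unit in km

# Approximate mean orbital distances in AU.
planets = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Jupiter": 5.20, "Saturn": 9.58, "Uranus": 19.2, "Neptune": 30.1,
}

for name, a in planets.items():
    theta_deg = math.degrees(2 * math.atan(R_SUN_KM / (a * AU_KM)))
    print(f"{name:8s}: Sun appears ~{theta_deg:.3f} degrees across")
```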

 


More Dark Matter Papers

In a wholly unconvincing argument, a stubborn astrophysicist denies the strong evidence that dark matter, if it exists, is not significantly made up of primordial black holes. Another paper reaches the opposite conclusion.

Japanese researchers are continuing the futile search for WIMP dark matter (which there is no evidence to support).

A new dark galaxy that shouldn't be possible in either LambdaCDM or self-interacting dark matter (SIDM) models has been observed:

Its remarkable compactness challenges the standard cold dark matter (CDM) paradigm. In this paper, we explore whether such a compact perturber could be explained as a core-collapsed halo described by the self-interacting dark matter (SIDM) model. . . . Our comparison with observations indicates that only a core-collapsed halo with a total mass of approximately 10^11 M⊙ could produce an inner density profile and mass enclosed within 1 kpc that is consistent with observational data. However, such a massive dark matter halo should host a galaxy detectable by prior Hubble imaging, which is not observed. Thus, the core-collapsed SIDM halo model struggles to fully account for the exotic nature of the "little dark dot" in the "Jackpot" lens.
Shubo Li, et al., "The "Little Dark Dot": Evidence for Self-Interacting Dark Matter in the Strong Lens SDSSJ0946+1006?" arXiv:2504.11800 (April 16, 2025).

Possible ways to experimentally confirm the existence of ultralight dark matter are considered in this paper.

A paper investigates a model with warm dark matter that interacts with dark energy in a manner that effectively creates a pressure component; the model is slightly favored over cold dark matter and partially relieves the S(8) tension in the LambdaCDM model. The S(8) tension presents issues for LambdaCDM similar to those associated with the Hubble tension, but receives less attention because S(8) is a more technical, less intuitive LambdaCDM parameter.

There were several cosmological inflation papers that I have not linked, because almost all inflation papers are speculative junk research.

There are lots of black hole papers today, as there are most days, adding to a continuing rich literature on the topic, but the subject simply doesn't interest me much, so I'm disinclined to blog about it. Subtle properties of black holes are explored both in conventional general relativity and in various quantum gravity and modified gravity variations on it, but plain vanilla general relativity already hits the core points, and the further analysis doesn't add all that much to the fundamental astrophysics of gravity or to the big picture.

Likewise, papers on white holes and traversable wormholes are generally junk: highly speculative, unsupported by observational evidence or conventional theoretical analysis, and often marred by flawed reasoning.

Papers exploring hypothetical sources of Lorentz invariance violations, which multiple lines of observational evidence already rule out to high precision, are likewise usually speculative junk papers.

I've also made several recent posts in a thread at the Physics Forums, primarily spelling out, with a selection of pertinent journal references, the fatal defects in a variety of proposed dark matter particle candidates, such as cold dark matter, warm dark matter, and self-interacting dark matter, while not yet definitively ruling out ultralight dark matter candidates. Those posts are cut and pasted below the fold.

Wednesday, April 16, 2025

F(R) Gravity To Explain Dark Matter Phenomena

One of the most well-studied variants of conventional general relativity is f(R) gravity. As the introduction to the paper below explains:
f(R) gravity is a straightforward extension of General Relativity (GR) where, instead of the Hilbert-Einstein action, linear in the Ricci scalar R, one considers a power-law f(R) = f_0 R^n in the gravity Lagrangian. In the weak field limit, the gravitational potential is of the form:

Φ(r) = -(GM/2r)[1 + (r/r_c)^β]
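Here r_c is a scale length and the exponent β is fixed by the power n, with β = 0 at n = 1 recovering ordinary Newtonian gravity. As a rough illustration of how this potential turns into a rotation curve, here is a minimal sketch using the β(n) expression from the Capozziello et al. line of power-law f(R) work that this paper follows; the mass, scale length, and value of n below are illustrative placeholders, not fit results from the paper:

```python
import math

G = 4.30091e-6  # Newton's constant in kpc * (km/s)^2 / M_sun

def beta(n):
    """Weak-field exponent for power-law f(R) = f_0 * R^n
    (Capozziello-Cardone-Troisi form); beta(1) = 0 recovers Newton/GR."""
    num = 12 * n**2 - 7 * n - 1 - math.sqrt(36 * n**4 + 12 * n**3 - 83 * n**2 + 50 * n + 1)
    den = 6 * n**2 - 4 * n + 2
    return num / den

def v_circ(r_kpc, m_sun, r_c_kpc, n):
    """Circular velocity in km/s from Phi(r) = -(G M / 2 r) [1 + (r/r_c)^beta],
    using v^2 = r dPhi/dr = (G M / 2 r) [1 + (1 - beta) (r/r_c)^beta]."""
    b = beta(n)
    return math.sqrt(G * m_sun / (2 * r_kpc) * (1 + (1 - b) * (r_kpc / r_c_kpc) ** b))

# Illustrative numbers only: a 5e10 M_sun point mass, r_c = 5 kpc, n = 3.5 (beta ~ 0.82).
for r in (2, 5, 10, 20, 40):
    print(f"r = {r:2d} kpc   v = {v_circ(r, 5e10, 5.0, 3.5):6.1f} km/s")
```

The extra (r/r_c)^β term makes the circular velocity fall off much more slowly than the Keplerian 1/√r at large radii, which is how this class of models mimics the rotation-curve contribution usually attributed to a dark matter halo.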
This paper argues that this modification of general relativity can recover the baryonic Tully-Fisher relation, which MOND also produces in the context of galaxy dynamics, but within the naturally relativistic and mathematically consistent framework of f(R) gravity (it is not the first paper to do so). The money chart is this one:

The conclusion of the paper explains that:
In this paper we use f(R) theories of gravity, particularly power-law R^n gravity, and demonstrate that the missing matter problem in galaxies can be addressed by power-law R^n gravity. Using this approach, it is possible to explain the Fundamental Plane of elliptical galaxies and the baryonic Tully-Fisher relation of spiral galaxies without the DM hypothesis. Also, we can claim that the effective radius is led by gravity and the whole galactic dynamics can be addressed by f(R) theories. Also, f(R) gravity can give a theoretical foundation for rotation curve of galaxies. We have to stress that obtained value for parameter β from galactic rotation curves or BTF differs from parameter β obtained using observational data at planetary or star orbit scales. The reason for this result is that gravity is not a scale-invariant interaction and then it differs at galactic scales with respect to local scales.
Also, we investigated some forms of TFR in the light of f(R) gravities. These investigations are leading to the following conclusions: 
- f(R) gravity can give a theoretical foundation for the empirical BTFR, 
- MOND is a particular case of f(R) gravity in the weak field limit, 
- ΛCDM is not in satisfactory agreement with observations, 
- FP [i.e. the Fundamental Plane of elliptical galaxies] can be recovered by R^n gravity.
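For context, the empirical baryonic Tully-Fisher relation that both MOND and these f(R) fits aim to reproduce ties a galaxy's baryonic mass M_b to the fourth power of its flat rotation velocity v_f; in the MOND form, with a_0 ≈ 1.2 × 10^-10 m/s^2,

\[ M_b \simeq \frac{v_f^{4}}{G\,a_0}. \]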

The paper and its abstract are as follows:

Here we use the samples of spiral and elliptical galaxies, in order to investigate theoretically some of their properties and to test the empirical relations, in the light of modified gravities. We show that the baryonic Tully-Fisher relation can be described in the light of f(R) gravity, without introducing the dark matter. Also, it is possible to explain the features of fundamental plane of elliptical galaxies without the dark matter hypothesis.
V. Borka Jovanović, D. Borka, P. Jovanović, "The baryonic Tully-Fisher relation and Fundamental Plane in the light of f(R) gravity" arXiv:2504.11135 (April 15, 2025) (accepted for publication in Contrib. Astron. Obs. Skalnate Pleso, https://doi.org/10.31577/caosp.2025.55.2.24).

Friday, April 11, 2025

Proteins In Hominin Fossil In Taiwan Are Denisovan

While the jawbone still isn't enough to develop much of an image of what Denisovans looked like, this is definitely a major, although not unexpected, development. Denisovan admixture in modern humans had already strongly suggested a broad range for them in Asia, although this is the first definitively identified Denisovan bone sample from the comparatively warm regions of southern Asia.

A fossilized jawbone found off the coast of Taiwan more than 20 years ago belonged to a group of ancient humans, called the Denisovans, first identified in a Siberian cave.

The finding, published today in Science, is the result of time-consuming work to extract ancient proteins from the fossil. It also expands the known geographical range of the group, from colder, high-altitude regions to warmer climates.

“I’m very excited to see this,” says Janet Kelso, a computational biologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.

The lower jawbone, with four teeth intact, is called Penghu 1 and was dredged up by fishing crews from the Penghu channel, 25 kilometres off the west coast of Taiwan. Penghu 1 was donated to Taiwan’s National Museum of Natural Science in Taichung after researchers recognized its significance as coming from an ancient human relative. But the identity of that unknown relative remained a mystery, until now.
Ancient proteins

Researchers spent more than two years carefully refining techniques for extracting ancient proteins from animal bones taken from the channel. They then used acid to isolate protein fragments from the surface of a Penghu 1 molar tooth and enzymes to extract them from the jawbone.

The team identified several degraded fragments, two of which bore specific amino-acid sequence variations matching those seen in the genetic sequences of a Denisovan finger bone found in the Denisova Cave in southern Siberia in 2008. The researchers could also tell that the jawbone came from a male Denisovan.

It’s the second location at which molecular evidence from ancient proteins has definitively linked fossil remains to the Denisovans. The first was in a cave in Xiahe, Tibet, where proteins from a jawbone and then a rib bone were determined to be from Denisovans.

Pinning down an exact age for the Penghu fossil is challenging because scientists do not have samples of the sediment it was buried in.

“One can only say it’s older than 50,000” years, says Rainer Grün, a geochronologist at the Australian National University in Canberra, who dated the fossil in 2015 and subsequently reanalysed the data.

The Xiahe 1 mandible is at least 160,000 years old, and material from the Denisova cave indicates that Denisovans lived in Siberia between 200,000 and 50,000 years ago. At that time, sea levels were lower and the Chinese mainland was connected to Taiwan.

From here

The paper and its abstract are as follows:

Editor’s summary

Denisovans are a Pleistocene hominin lineage first identified genomically and known from only a few fossils. Although genomic studies suggest that they were widespread throughout Asia, fossils of this group have thus far only been identified from regions with cold climates, Siberia and Tibet. 
Tsutaya et al. used ancient proteomic analysis on a previously unidentified hominin mandible from Taiwan and identified it as having belonged to a male Denisovan. This identification confirms previous genomic predictions of the group’s widespread occurrence, including in warmer climates. The robust nature of this mandible is similar to that seen in a Denisovan one from Tibet, suggesting that this is a consistent trait for the lineage. —Sacha Vignieri

Abstract

Denisovans are an extinct hominin group defined by ancient genomes of Middle to Late Pleistocene fossils from southern Siberia. Although genomic evidence suggests their widespread distribution throughout eastern Asia and possibly Oceania, so far only a few fossils from the Altai and Tibet are confidently identified molecularly as Denisovan. 
We identified a hominin mandible (Penghu 1) from Taiwan (10,000 to 70,000 years ago or 130,000 to 190,000 years ago) as belonging to a male Denisovan by applying ancient protein analysis. We retrieved 4241 amino acid residues and identified two Denisovan-specific variants. The increased fossil sample of Denisovans demonstrates their wider distribution, including warm and humid regions, as well as their shared distinct robust dentognathic traits that markedly contrast with their sister group, Neanderthals.
Takumi Tsutaya, et al., "A male Denisovan mandible from Pleistocene Taiwan" 388 (6743) Science 176-180 (April 11, 2025). Hat tip to Neo in the comments.