
Friday, April 29, 2022

Tau g-2 Crudely Measured

Muon g-2 has been measured to better than parts per million precision and compared to highly precise Standard Model predictions. The differences between the two main Standard Model predictions and the experimental measurements of this physical observable (the anomalous magnetic moment of the muon) are at the parts per ten million level, although the difference between the two main Standard Model predictions, and the difference between the experimentally measured value and one of those predictions, are statistically significant.

The tau lepton's anomalous magnetic moment has a fairly precise (though not quite as precise as muon g-2) Standard Model predicted value of 0.001 177 21 (5). This is quite close to the electron g-2 value, which has been measured and theoretically calculated at the parts per hundred million level (the experimentally measured electron value is in mild, roughly 2.5 sigma, tension with its prediction, by being too low), and to the muon g-2 value (discussed at length in previous posts at this blog), which is in tension with the Standard Model value in the other direction.
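As an aside, the reason all three charged leptons have anomalous magnetic moments near 0.00116 is that the leading-order QED contribution, the Schwinger term α/(2π), is the same for each of them; higher-order and mass-dependent contributions supply the small lepton-specific differences. A quick back-of-the-envelope check (my own sketch, not from any of the cited papers):

```python
from math import pi

# Leading-order QED ("Schwinger") contribution to the anomalous magnetic
# moment of any charged lepton: a = alpha / (2 * pi).
ALPHA = 1 / 137.035999  # fine-structure constant (low-energy value)

a_leading = ALPHA / (2 * pi)
print(f"alpha/(2*pi)              = {a_leading:.7f}")  # ~0.0011614
print("tau SM value quoted above = 0.0011772")         # higher-order terms supply the rest
```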

The ATLAS experiment's latest measurement of this quantity at the Large Hadron Collider (LHC), like previous efforts to measure it, is far less precise, mostly because the mean lifetime of the tau lepton is seven orders of magnitude shorter than that of its less massive, but otherwise identical, cousin the muon (the electron is stable). The ATLAS measurement of tau g-2 has a 95% confidence interval of (−0.058, −0.012) ∪ (−0.006, 0.025), and the second of these disjoint intervals includes the Standard Model predicted value. The two intervals have widths of 0.046 and 0.031, for a combined range of 0.077 (about 2.25 times the width of the ± 0.017 uncertainty band of the global average PDG value below), reflecting how imprecise the measurement is.

The Particle Data Group value, as explained in a 2020 PowerPoint presentation providing more background on the question, is -0.018(17), which is also consistent with the Standard Model predicted value, at about a 1.1 standard deviation level. Still, the uncertainty in this value is about fourteen times the magnitude of the theoretically predicted value.
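Translating those numbers into the figures quoted above is straightforward (a back-of-the-envelope sketch using only the values in this post):

```python
# Tension between the PDG tau g-2 value and the Standard Model prediction,
# and the size of the experimental uncertainty relative to that prediction.
a_tau_sm = 0.00117721                  # Standard Model prediction quoted above
a_tau_exp, sigma_exp = -0.018, 0.017   # PDG value and uncertainty quoted above

tension = abs(a_tau_exp - a_tau_sm) / sigma_exp
print(f"tension with SM prediction: {tension:.1f} sigma")          # ~1.1 sigma
print(f"uncertainty / prediction:   {sigma_exp / a_tau_sm:.0f}x")  # ~14x
```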

The same paper does reveal that the rate at which tau lepton pairs are produced by photo-production (i.e. the creation of a tau lepton and a tau anti-lepton by two colliding photons, in this case in connection with the high energy collision of two lead nuclei) is right in line with the Standard Model prediction. The measured value is 1.04 times the value predicted in the Standard Model, with an uncertainty of +0.06 and -0.05, so it is within about two-thirds of a standard deviation of experimental uncertainty of the predicted value.

Thursday, April 28, 2022

Seventh Higgs Boson Decay Channel Confirmed

Numerically, the decays of a 125 GeV Higgs boson in the Standard Model (which is a little below the experimentally measured value) are approximately as follows (updated per a PDG review paper with details of the calculations, for example, here and here):

b-quark pairs, 57.7% (observed)
W boson pairs, 21.5% (observed)
gluon pairs, 8.57%
tau-lepton pairs, 6.27% (observed)
c-quark pairs, 2.89%
Z boson pairs, 2.62% (observed)
photon pairs, 0.227% (observed)
Z boson and a photon, 0.153% (observed)
muon pairs, 0.021 8% (observed)
electron-positron pairs, 0.000 000 5% 

The total adds up to 99.9518005% rather than to 100% due to rounding errors and due to omitted low probability decays, including strange quark pairs (a bit less likely than muon pairs), down quark pairs (slightly more likely than electron-positron pairs), up quark pairs (slightly more likely than electron-positron pairs), and asymmetric boson pairs other than Z boson-photon decays (also more rare than muon pairs).
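As a sanity check on the total quoted above, here is the sum of the listed branching fractions (my own quick sketch):

```python
# Sum of the Higgs boson branching fractions listed above, in percent.
branching_pct = {
    "b quark pairs": 57.7,
    "W boson pairs": 21.5,
    "gluon pairs": 8.57,
    "tau lepton pairs": 6.27,
    "c quark pairs": 2.89,
    "Z boson pairs": 2.62,
    "photon pairs": 0.227,
    "Z boson + photon": 0.153,
    "muon pairs": 0.0218,
    "electron-positron pairs": 0.0000005,
}

total = sum(branching_pct.values())
print(f"listed total: {total:.7f}%")        # 99.9518005%
print(f"unaccounted:  {100 - total:.7f}%")  # rounding plus the omitted rare decays
```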

A new paper from the CMS experiment at the Large Hadron Collider (LHC) detects a seventh Higgs boson decay channel predicted in the Standard Model, a decay to a Z boson and a photon, although not yet at the five sigma "discovery" level of significance.

The signal strength μ, defined as the product of the cross section and the branching fraction [σ(pp→H)B(H→Zγ)] relative to the standard model prediction, is extracted from a simultaneous fit to the ℓ+ℓ−γ invariant mass distributions in all categories and is found to be μ=2.4±0.9 for a Higgs boson mass of 125.38 GeV. The statistical significance of the observed excess of events is 2.7 standard deviations. This measurement corresponds to σ(pp→H)B(H→Zγ) = 0.21±0.08 pb. 
The observed (expected) upper limit at 95% confidence level on μ is 4.1 (1.8). The ratio of branching fractions B(H→Zγ)/B(H→γγ) is measured to be 1.5+0.7−0.6, which agrees with the standard model prediction of 0.69 ± 0.04 at the 1.5 standard deviation level.
In absolute terms, the predicted branching fraction is a little less than 0.2%, and the measured branching fraction is about 0.3% but subject to large uncertainties.

As of March 2022, one of the most important decay channels of the Higgs boson not yet definitively observed is the decay of a Higgs boson into a charm quark-charm antiquark pair. ATLAS and CMS have so far only been able to place upper bounds on the branching fraction of these decays (in part because backgrounds from Z boson decays to charm quark/charm antiquark pairs make direct decays of a Higgs boson to charm quark/charm antiquark pairs difficult to distinguish).

Gluon pair decays are likewise difficult to distinguish from other background sources of the same hadrons.

How Far Is Earth From The Center Of The Milky Way Galaxy?

The planet Earth is 26,800 ± 391 light years (i.e. ± 1.5%) from the center of the Milky Way galaxy.

The distance to the Galactic center R(0) is a fundamental parameter for understanding the Milky Way, because all observations of our Galaxy are made from our heliocentric reference point. The uncertainty in R(0) limits our knowledge of many aspects of the Milky Way, including its total mass and the relative mass of its major components, and any orbital parameters of stars employed in chemo-dynamical analyses. 
While measurements of R(0) have been improving over a century, measurements in the past few years from a variety of methods still find a wide range of R(0) being somewhere within 8.0 to 8.5 kpc. The most precise measurements to date have to assume that Sgr A∗ is at rest at the Galactic center, which may not be the case. 
In this paper, we use maps of the kinematics of stars in the Galactic bar derived from APOGEE DR17 and Gaia EDR3 data augmented with spectro-photometric distances from the astroNN neural-network method. These maps clearly display the minimum in the rotational velocity vT and the quadrupolar signature in radial velocity vR expected for stars orbiting in a bar. From the minimum in vT, we measure R(0) = 8.23 ± 0.12 kpc. We validate our measurement using realistic N-body simulations of the Milky Way. 
We further measure the pattern speed of the bar to be Ω(bar) = 40.08 ± 1.78 km s^−1 kpc^−1. Because the bar forms out of the disk, its center is manifestly the barycenter of the bar+disc system and our measurement is therefore the most robust and accurate measurement of R(0) to date.
Henry W. Leung, et al. "A direct measurement of the distance to the Galactic center using the kinematics of bar stars" arXiv:2204.12551 (April 6, 2022).
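The headline distance in light years at the top of this post follows from the paper's R0 = 8.23 ± 0.12 kpc by a simple unit conversion (a quick sketch; the only constant needed is the length of a parsec in light years):

```python
# Convert the paper's Galactic center distance from kiloparsecs to light years.
LY_PER_PC = 3.26156  # light years per parsec

r0_kpc, err_kpc = 8.23, 0.12
r0_ly = r0_kpc * 1_000 * LY_PER_PC
err_ly = err_kpc * 1_000 * LY_PER_PC

print(f"R0 = {r0_ly:,.0f} +/- {err_ly:,.0f} light years "
      f"(+/- {100 * err_kpc / r0_kpc:.1f}%)")
# ~26,843 +/- 391 light years, i.e. about 1.5%, matching the figure above.
```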

Evidence Of Medieval Era Viking Trade Of North American Goods

There is a lot of basically irrefutable evidence that Vikings had settlements in Pre-Columbian North America starting around 1000 CE, and that they made return trips to Iceland and more generally, to Europe.

There are ruins of a settlement in Vinland, and there is another such settlement in Eastern Canada as well.

There are people in Iceland with Native American mtDNA that can be traced back genealogically, using Iceland's excellent genealogy records, to a single woman in the right time frame.

The trips are attested in legendary histories written down in Viking accounts from the right time period.

Now, there is a walrus ivory trinket that made its way to Kyiv (a.k.a. Kiev) in the Middle Ages which a recent paper has identified as coming from a Greenland walrus. The paper and its abstract are as follows:

Mediaeval walrus hunting in Iceland and Greenland—driven by Western European demand for ivory and walrus hide ropes—has been identified as an important pre-modern example of ecological globalization. By contrast, the main origin of walrus ivory destined for eastern European markets, and then onward trade to Asia, is assumed to have been Arctic Russia. 
Here, we investigate the geographical origin of nine twelfth-century CE walrus specimens discovered in Kyiv, Ukraine—combining archaeological typology (based on chaîne opératoire assessment), ancient DNA (aDNA) and stable isotope analysis. We show that five of seven specimens tested using aDNA can be genetically assigned to a western Greenland origin. Moreover, six of the Kyiv rostra had been sculpted in a way typical of Greenlandic imports to Western Europe, and seven are tentatively consistent with a Greenland origin based on stable isotope analysis. Our results suggest that demand for the products of Norse Greenland's walrus hunt stretched not only to Western Europe but included Ukraine and, by implication given linked trade routes, also Russia, Byzantium and Asia. These observations illuminate the surprising scale of mediaeval ecological globalization and help explain the pressure this process exerted on distant wildlife populations and those who harvested them.
James H. Barrett, et al., "Walruses on the Dnieper: new evidence for the intercontinental trade of Greenlandic ivory in the Middle Ages" 289 (1972) Proceedings of the Royal Society B: Biological Sciences (April 6, 2022). https://doi.org/10.1098/rspb.2021.2773

Tuesday, April 26, 2022

Twenty-Two Reasons That The Standard Model Is Basically Complete (Except For Gravity)

The Assertion

I strongly suspect that the Standard Model of Particle Physics is complete (except for gravity), by which I mean that it describes all fundamental particles (i.e. non-composite particles) other than possibly a massless spin-2 graviton, and all forces (other than gravity), and that the equations of the three Standard Model forces are basically sound. 

I likewise strongly suspect that the running of the experimentally measured constants of the Standard Model with energy scale, according to their Standard Model beta functions, is basically correct, although these beta functions may require slight modification (that could add up to something significant at very high energies) to account for gravity.

This said, I do not rule out the possibility that some deeper theory explains everything in the Standard Model more parsimoniously, or that there are additional laws of nature that while not contradicting the Standard Model, further constrain it. Also, I recognize that neutrino mass is not yet well understood (but strongly suspect that neither a see-saw mechanism, nor Majorana neutrinos, are the correct explanation).

There are some global tests of the Standard Model and other kinds of reasoning that suggest this result. Twenty-two such reasons are set forth below.

Reasoning Up To Future Collider Energy Scales

1. Experimentally measured W and Z boson decay branching fractions are a tight match to the Standard Model's theoretical predictions. This rules out the existence of particles that interact via the weak force with a mass of less than half of the Z boson mass (i.e. of less than 45.59 GeV).

2. The consistency of the CKM matrix entries measured experimentally with a three generation model likewise supports the completeness of the set of six quarks in the Standard Model. If there were four or more generations, the probabilities of quark type transitions from any given quark wouldn't add up to 100%.

3. The branching fractions of decaying Higgs bosons are a close enough match to the Standard Model theoretical predictions that new particles deriving their mass from the Higgs mechanism, with masses of more than about 1 GeV and less than about half of the Higgs boson mass (i.e. about 62.6 GeV), would dramatically alter these branching fractions. The close match between the observed and the Standard Model predicted Higgs boson also disfavors the composite Higgs bosons of technicolor theories.

4. The muon g-2 measurement is consistent with the BMW calculation of the Standard Model expectation for muon g-2, and I strongly suspect that the BMW calculation is more sound than the other leading calculation of that value. If the BMW calculation is sound, then nothing that makes a loop contribution to muon g-2 (which is again a global measurement of the Standard Model) that does not perfectly cancel out in the muon g-2 calculation has been omitted. The value of muon g-2 is most sensitive to new low mass particles, with the loop effects of high mass particles becoming so small that they effectively decouple from the muon g-2 calculation. But this does disfavor new low mass particles in a manner that makes the exclusion from W and Z boson decays more robust.

5.  The Standard Model mathematically requires each generation of fermions to contain an up-type quark (excluded below 1350 GeV), a down-type quark (excluded below 1530 GeV), a charged lepton (excluded below 100.8 GeV), and a neutrino. Searches have put lower bounds on the masses of new up-type quarks, down-type quarks, and charged leptons, but the most powerful constraint is that there can't be a new active neutrino with a mass of less than 45.59 GeV based upon Z boson decays. Yet, there is observational evidence constraining the sum of all three of the neutrino mass eigenstates to be less than about 0.1 eV and there is no plausible reason for there to be an immense gap in neutrino masses between the first three and a fourth generation neutrino mass. There are also direct experimental search exclusions for particles predicted by supersymmetry theories and other simple Standard Model extensions such as new heavy neutral Higgs bosons (with either even or odd parity) below 1496 GeV, exclusions for new charged Higgs bosons below 1103 GeV, W' bosons below 6000 GeV, Z' bosons below 5100 GeV, stable supersymmetric particles below 46 GeV, unstable supersymmetric neutralinos below 380 GeV, unstable supersymmetric charginos below 94 GeV, R-parity conserving sleptons below 82 GeV, supersymmetric squarks below 1190 GeV, and supersymmetric gluinos below 2000 GeV. There are also strong experimental exclusions of simple composite quark and composite lepton models (generally below 2.3 TeV for some cases and up to 24 TeV in others) and similar scale experimental exclusions of evidence for extra dimensions that are relevant to collider physics.

6. In the Standard Model, fermions decay into lighter fermions exclusively via W boson interactions, and the mean lifetime of a particle in the Standard Model is something that can be calculated. So, it ought to be impossible for any fermion to decay faster than the W boson does. But the top quark decays just 60% slower than the W boson, and any particle that can decay via W boson interactions and is more massive than the mass ranges not yet experimentally excluded for new Standard Model type fermions would have to have a mean lifetime shorter than the W boson's.

7. Overall, the available data on neutrino oscillations, taken as a whole, disfavors the existence of a "sterile neutrino" that oscillates with the three active Standard Model neutrinos.

8. It is possible to describe neutrino oscillation as a W boson mediated process without resort to a new boson to govern neutrino oscillation.

9. There are good explanations for the lack of CP violation in the strong force that do not require the introduction of an axion or another new particle. One of the heuristically more pleasing of these explanations is that gluons are massless, and hence do not experience the passage of time, and so should not be time asymmetric (which CP violation implies).

10. Collider experiments, moreover, have experimentally tested the Standard Model at energies up to those present a mere fraction of a second after the Big Bang (by conventional cosmology chronologies). Other than evidence for lepton universality violations, there are no collider observations that clearly require new particles or forces to explain.

11. I predict that the lepton universality violations observed in semi-leptonic B meson decays, but nowhere else, will turn out to be due to a systematic error, most likely, in my view, a failure of the energy scale cuts designed to exclude electrons produced in pion decays to work as expected. To a great extent, in addition to a general Bayesian prior that the Standard Model is correct, this is also because semi-leptonic B meson decays in the Standard Model are a W boson mediated process indistinguishable from many other W boson mediated processes in which lepton universality is preserved.

12. The consistency of the LP&C relationship (i.e. that the sum of the squares of the masses of the fundamental particles of the Standard Model is equal to the square of the Higgs vacuum expectation value) with experimental data to the limits of experimental accuracy (with most of the uncertainty arising from uncertainty in the measurements of the top quark mass and the Higgs boson mass) suggests that this is a previously unarticulated rule of electroweak physics and that there are no fairly massive (e.g. GeV scale) fundamental fermions or bosons omitted from the Standard Model. Uncertainties in the top quark mass and Higgs boson mass, however, are sufficiently large that this does not rule out very light new fundamental particles (which can be disfavored by other means noted above, however). The LP&C relationship, if true, also makes the "hierarchy problem" and the "naturalness" problem of the Standard Model, which supersymmetry was devised to address, non-existent.
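For what it's worth, the arithmetic behind this point is easy to reproduce from published central values; a rough sketch of my own (approximate PDG-style masses in GeV, with the light fermions contributing almost nothing):

```python
# Sum of the squared Standard Model fundamental particle masses compared to the
# squared Higgs vacuum expectation value, using approximate central values (GeV).
masses_gev = {
    "top": 172.69, "higgs": 125.25, "Z": 91.19, "W": 80.38,
    "bottom": 4.18, "tau": 1.777, "charm": 1.27, "muon": 0.1057,
    "strange": 0.093, "down": 0.0047, "up": 0.0022, "electron": 0.000511,
}
vev = 246.22  # Higgs vacuum expectation value, GeV

sum_sq = sum(m ** 2 for m in masses_gev.values())
print(f"sum of m^2: {sum_sq:,.0f} GeV^2")      # about 60,300 GeV^2 with these inputs
print(f"v^2       : {vev ** 2:,.0f} GeV^2")    # about 60,624 GeV^2
print(f"ratio     : {sum_sq / vev ** 2:.3f}")  # ~0.995 with these central values
```

Whether the remaining half-percent gap is real or just reflects the top quark and Higgs boson mass uncertainties is exactly the open question described above.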

13. I suspect that the current Standard Model explanation for the quark, charged lepton, W boson, Z boson and Higgs boson masses, which is based upon a coupling to the Higgs boson, understates the role played by the W boson in establishing fundamental particle masses. That role is underscored by the fact that all massive fundamental particles have weak force interactions while all massless fundamental particles do not, and by cryptic correlations between the CKM and PMNS matrices (which are properties of W boson mediated events) and the fundamental particle masses. A W boson centric view of fundamental particle mass is attractive, in part, because it overcomes theoretical issues associated with Standard Model neutrinos having Dirac mass in a Higgs boson centric view.

Astronomy and Cosmology Motivated Reasons

14. The success of Big Bang nucleosynthesis supports the idea that no new physics is required, and that many forms of new physics are ruled out, up to energy scales conventionally placed at ten seconds after the Big Bang and beyond.

15. The Standard Model is mathematically sound and at least metastable at energy scales up to the GUT scale without modification. So, new physics is not required at high energies. Consideration of gravity when determining the running of the Standard Model physical constants to high energies may even take the Standard Model from being merely metastable to being just barely stable at the GUT scale. The time that would elapse from the Big Bang at t=0 to the era of GUT or Planck energies, at which the Standard Model may be at least metastable, is brief indeed and approaches the unknowable. Apart from this exceedingly small fraction of a second, energies above this level don't exist in the universe, and new physics above this scale, operating essentially at the very moment of the Big Bang, could explain its existence.

16. Outside the collider context, dark matter phenomena, dark energy phenomena, cosmological inflation, and matter-antimatter asymmetry, are the only observations that might require new physics to explain.

17. A beyond the Standard Model particle to constitute dark matter is not necessary because dark matter can be explained, and is better explained, as a gravitational effect not considered in General Relativity as applied in the Newtonian limit in astronomy and cosmology settings. There are several gravitational approaches that can explain dark matter phenomena. 

18. There are deep, general, and generic problems involved in trying to reconcile observations to collisionless dark matter particle explanations, some of which are shared by more complex dark matter particle theories like self-interacting dark matter, and as one makes the dark matter particle approach more complex, Occam's Razor disfavors it relative to gravitational approaches. 

19. Direct dark matter detection experiments have ruled out dark matter particles even with cross-sections of interaction with ordinary matter much weaker than the cross-section of interaction of a neutrino, for masses from somewhat less than 1 GeV to 1000 GeV, and other evidence from galaxy dynamics strongly disfavors truly collisionless dark matter that interacts only via gravity. This limitation, together with collider exclusions in particular, basically rules out supersymmetric dark matter candidates, which are no longer theoretically well motivated.

20. Consideration of the self-interaction of gravitational fields in classical General Relativity itself (without a cosmological constant), can explain the phenomena attributed to dark matter and dark energy without resort to new particles, new fields, or new forces, and without the necessity of violating the conservation of mass-energy as a cosmological constant or dark energy do. The soundness of this analysis is supported by analogy to mathematically similar phenomena in QCD that are well tested. Explaining dark energy phenomena with the self-interaction of gravitational fields removing the need for a cosmological constant also makes developing a quantum theory of gravity fully consistent in the classical limit with General Relativity and consistent with the Standard Model much easier. There are also other gravity based approaches to explain dark energy (and indeed the LambdaCDM Standard Model of Cosmology uses the gravity based approach of General Relativity with a cosmological constant to explain dark energy phenomena).

21. The evidence for cosmological inflation is weak even in the current paradigm, and it is entirely possible and plausible that a gravitational solution to the phenomena attributed to dark matter and dark energy arising from the self-interaction of gravitational fields, when fully studied and applied to the phenomena considered to be evidence of cosmological inflation will show that cosmological inflation is not necessary to explain those phenomena.

22. There are plausible explanations of the matter-antimatter imbalance in baryons and charged leptons that is observed in the universe that do not require new post-Big Bang physics. In my view, the most plausible of these is the possibility that there exists an antimatter dominated mirror universe of ours "before" the Big Bang temporally in which time runs in the opposite direction, consistent with the notion of antimatter as matter moving backward in time and with time flowing from low entropy to high entropy. So, no process exhibiting extreme CP asymmetry (and baryon and lepton number violation) and limited to ultra-high energies present in the narrow fraction of a second after the Big Bang that has not been explored experimentally is necessary. 

Monday, April 25, 2022

Neolithic Ancient DNA In Normandy Fit Paradigm

A new ancient DNA analysis of a moderate sized collection of Neolithic era remains from a funerary complex in Normandy, France matches what you would expect from the paradigm of prior works in the field, and shows affinity to the Mediterranean branch of the Neolithic expansion, rather than the Central European Neolithic expansion.

Two individuals are outliers, but not really outside the paradigm either. One has an abnormally low level of European hunter-gatherer ancestry. The other has a modest level of Iranian farmer ancestry (the rest of that individual's farmer ancestry is typical of Anatolian farmers). Neither is terribly unusual in the European Neolithic, although both are uncommon.

The Y-DNA, mtDNA, and autosomal genetic profiles are overall very typical of coastal Western Europe in the Neolithic era. The ancient DNA also confirms a previously observed patriarchal, patrilocal pattern with significant female exogamy.

Looking For Cryptids in Flores

The best evidence for the existence of a currently extant or recently (i.e. in the last couple of hundred years) extinct archaic hominin species in the world, basically a cryptid, is in or near the island of Flores in Indonesia, in the form of the species Homo floresiensis, which solid evidence suggests co-existed with modern humans, at least for an extended period of time, on that island.

A new book by Gregory Forth, entitled Between Ape and Human, available in May 2022, explores that possibility. The author, touting his book in the linked article, states:
My aim in writing the book was to find the best explanation—that is, the most rational and empirically best supported—of Lio accounts of the creatures. These include reports of sightings by more than 30 eyewitnesses, all of whom I spoke with directly. And I conclude that the best way to explain what they told me is that a non-sapiens hominin has survived on Flores to the present or very recent times.

Between Ape and Human also considers general questions, including how natural scientists construct knowledge about living things. One issue is the relative value of various sources of information about creatures, including animals undocumented or yet to be documented in the scientific literature, and especially information provided by traditionally non-literate and technologically simple communities such as the Lio—a people who, 40 or 50 years ago, anthropologists would have called primitive. To be sure, the Lio don’t have anything akin to modern evolutionary theory, with speciation driven by mutation and natural selection. But if evolutionism is fundamentally concerned with how different species arose and how differences are maintained, then Lio people and other Flores islanders have for a long time been asking the same questions.

Monday, April 18, 2022

Everything You Ever Wanted To Know About Ditauonium

Ditauonium is a hydrogen atom-like bound state of a tau lepton and an anti-tau lepton, which can come in para- and ortho- varieties depending upon the relative spins of its constituents. Its properties can be calculated to high precision using quantum electrodynamics and, to a lesser extent, other Standard Model physics considerations. They are worked out in a new paper analyzing the question and doing the calculations.


Monday, April 11, 2022

Pluto's Orbit Is Metastable

A new paper examining some unusual features of Pluto's orbit concludes that the gas giant planets interact in a complex manner to produce its orbit. Two of them stabilize that orbit, while one of them destabilizes it. 

"Neptune account for the azimuthal constraint on Pluto's perihelion location," and "Jupiter has a largely stabilizing influence whereas Uranus has a largely destabilizing influence on Pluto's orbit. Overall, Pluto's orbit is rather surprisingly close to a zone of strong chaos."

Evaluating Dark Matter And Modified Gravity Models

A new paper presents a less model dependent approach to comparing dark matter and modified gravity theories to galaxy data, dubbed Normalized Additional Velocity (NAV), which is complementary to the Radial Acceleration Relation (RAR), and applies it to the SPARC database.

This method looks at the gap between the Newtonian gravitational expectation and the observed rotation curves of rotationally supported galaxies. 

I'll review the five theories tested in the paper.

The Burkert profile (sometimes described as "pseudo-isothermal" and first proposed in 1995), used to represent particle dark matter in a trial run of this method, is a basically phenomenological distribution of dark matter particles in a halo (i.e. it was not devised to flow naturally from the plausible properties of a particular dark matter candidate). This is unlike the Navarro-Frenk-White profile for dark matter particle halos, which flows directly from a collisionless dark matter particle theory from first principles but is a poor fit to the inferred dark matter halos that are observed. The Burkert profile has two free parameters that are set on a galaxy by galaxy basis: the core radius r0 and the central density ρ0. It systematically favors rotational velocities that are too high at small radii and too low at large radii, however.
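For concreteness, the standard Burkert functional form with its two per-galaxy parameters looks like this (a sketch; the parameter values are arbitrary illustrative numbers, not fits to any SPARC galaxy):

```python
import numpy as np

def burkert_density(r_kpc, rho0, r0):
    """Burkert (1995) dark matter halo profile:
    rho(r) = rho0 * r0**3 / ((r + r0) * (r**2 + r0**2)),
    with a finite-density core, unlike the centrally divergent NFW profile."""
    return rho0 * r0 ** 3 / ((r_kpc + r0) * (r_kpc ** 2 + r0 ** 2))

r = np.linspace(0.1, 30.0, 300)               # galactocentric radius in kpc
rho = burkert_density(r, rho0=1.5e7, r0=5.0)  # rho0 in Msun/kpc^3, r0 in kpc
print(f"central density plateau: {rho[0]:.2e} Msun/kpc^3")
```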

MOND, which is familiar to readers of this blog, was proposed in 1983. It is a very simple tweak of Newtonian gravity and a purely phenomenological toy model (one without a particularly obvious relativistic generalization), but it has been a good match to the data in galaxy sized and smaller systems. There are a variety of more theoretically deep gravity based explanations of dark matter phenomena out there, but this is the most widely known, is one of the oldest, and has a good track record in its domain of applicability. It has one universal constant beyond Newtonian gravity, a(0), with units of acceleration. MOND shows less dispersion in NAV than the data set does, although it isn't clear how much this is due to uncertainties in the data, as opposed to shortcomings in the theory.
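The single new constant, a(0) ≈ 1.2 × 10^-10 m/s², leads directly to MOND's best known prediction: in the low acceleration regime, the flat rotation velocity of a galaxy depends only on its baryonic mass. A minimal sketch of that relation (illustrative numbers, not a fit to any galaxy):

```python
# Deep-MOND (low acceleration) prediction for the asymptotic flat rotation
# speed: v^4 = G * M_baryonic * a0, i.e. the baryonic Tully-Fisher relation.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10       # MOND acceleration constant, m s^-2
M_SUN = 1.989e30   # solar mass, kg

def flat_velocity_kms(m_baryon_msun):
    """Asymptotic rotation speed (km/s) for a galaxy of given baryonic mass."""
    return (G * m_baryon_msun * M_SUN * A0) ** 0.25 / 1e3

print(f"{flat_velocity_kms(5e10):.0f} km/s")  # ~168 km/s for a 5e10 Msun galaxy
```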

Palatini f(R) gravity and Eddington-inspired-Born-Infeld (EiBI) gravity are theories that add to and/or modify Einstein's field equations with plausible hypotheses, in a theoretically consistent manner, and do lead to some explanation of dark matter phenomena, but not a very good one, as this paper notes. Palatini f(R) gravity replaces the Ricci tensor and scalar with a function that has higher order terms not present in Einstein's field equations. The Eddington-inspired-Born-Infeld theory starts with the "Lagrangian of GR, with an effective cosmological constant Λ = (λ − 1)/ε, and with additional added quadratic curvature corrections. Essentially, it is a particular f(R) case that fits the Palatini f(R) case that was presented in the previous section, together with a squared Ricci tensor term[.]"

General relativity with renormalization group effects (RGGR) is based on a GR correction due to the scale-dependent G and Λ couplings, and has one free parameter that varies from galaxy to galaxy, which MOND lacks. It outperforms the Burkert and MOND predictions at smaller galactic radii and in the middle of the probability distribution of NAV measurements, but somewhat overstates the modification of gravity at large radii and at the extremes of the probability distributions.

Also, while the Burkert profile, with two galaxy-by-galaxy fitting constants, outperforms RGGR, with one, and MOND, with none, once you penalize each theory for the extra degrees of freedom it has to fit the data, the grounds to prefer one model over another are significantly weaker. Some of the galaxy specific fitting may simply be mitigating galaxy specific measurement errors.

Here we propose a fast and complementary approach to study galaxy rotation curves directly from the sample data, instead of first performing individual rotation curve fits. The method is based on a dimensionless difference between the observational rotation curve and the expected one from the baryonic matter (δV^2). It is named as Normalized Additional Velocity (NAV). Using 153 galaxies from the SPARC galaxy sample, we find the observational distribution of δV^2. This result is used to compare with the model-inferred distributions of the same quantity. 
We consider the following five models to illustrate the method, which include a dark matter model and four modified gravity models: Burkert profile, MOND, Palatini f(R) gravity, Eddington-inspired-Born-Infeld (EiBI) and general relativity with renormalization group effects (RGGR). 
We find that the Burkert profile, MOND and RGGR have reasonable agreement with the observational data, the Burkert profile being the best model. The method also singles out specific difficulties of each one of these models. Such indications can be useful for future phenomenological improvements. 
The NAV method is sufficient to indicate that Palatini f(R) and EiBI gravities cannot be used to replace dark matter in galaxies, since their results are in strong tension with the observational data sample.
Alejandro Hernandez-Arboleda, Davi C. Rodrigues, Aneta Wojnar, "Normalized additional velocity distribution: a fast sample analysis for dark matter or modified gravity models" arXiv:2204.03762 (April 7, 2022).

Friday, April 8, 2022

A New W Boson Mass Measurement From CDF

The CDF collaboration, one of the two main experimental groups (together with D0) at Fermilab's Tevatron collider, which ceased operations in 2011, released a paper yesterday in the journal Science making a new measurement of the W boson mass. It has attracted attention because it is in strong tension with the current global average of experimental measurements of the W boson mass and with the global electroweak fit expectation for the W boson mass.

But the paper greatly exaggerates what it shows, inaccurately asserting that the result "is in significant tension with the standard model expectation."

There Is No Standard Model W Boson Mass Prediction

The 80,357 ± 6 MeV value to which the paper compares its new measurement is not a "prediction of the Standard Model" as the paper claims. 

Instead, it is a global electroweak fit of the Standard Model physical constants utilizing data points like the Higgs boson mass and the top quark mass, neither of which has a direct functional relationship to the W boson mass in the electroweak portion of the Standard Model of Particle Physics. See, e.g., this 2018 global electroweak fit paper.

The same global electroweak fit procedure suggested that the Higgs boson had a mass of 90,000 ± 20,000 MeV, with contributing estimates from data used in that fit that ranged from 35,000 MeV to 463,000 MeV, each with huge error bars, when the current inverse error weighted global average of the measured real value of the Higgs boson mass is 125,250 ± 170 MeV. A global electroweak fit is not analogous to a Standard Model physics calculation or prediction.

The W boson's mass is an experimentally determined free parameter of the Standard Model (in other words, it is an input to the model, not an output).

More precisely, the W boson mass, the Z boson mass, the electromagnetic coupling constant, the weak force coupling constant, and the Higgs vacuum expectation value are five experimentally determined Standard Model physical constants related to each other in the electroweak portion of the Standard Model that have three degrees of freedom. You can take your pick to some extent which of them you treat as input parameters that are measured, and which you treat as derived values.
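To make the "five constants, three degrees of freedom" point concrete, here is a rough sketch of the tree-level relations (my own illustration; loop corrections are ignored, so the derived electromagnetic coupling only lands in the right ballpark):

```python
from math import sqrt, pi

# Tree-level electroweak relations: pick m_W, m_Z and the Higgs vacuum
# expectation value as inputs, and the two gauge couplings (and hence the
# electromagnetic coupling) follow.
m_w = 80.377    # W boson mass, GeV (approximate world average)
m_z = 91.1876   # Z boson mass, GeV
vev = 246.22    # Higgs vacuum expectation value, GeV

g = 2 * m_w / vev                  # SU(2) weak coupling
gz = 2 * m_z / vev                 # sqrt(g^2 + g'^2)
g_prime = sqrt(gz ** 2 - g ** 2)   # U(1) hypercharge coupling
e = g * g_prime / gz               # electromagnetic coupling
alpha = e ** 2 / (4 * pi)

print(f"g = {g:.3f}, g' = {g_prime:.3f}, 1/alpha = {1 / alpha:.1f}")
# 1/alpha comes out near 132 at tree level, between alpha(0) ~ 1/137 and the
# running value alpha(m_Z) ~ 1/128; electroweak loop corrections account for
# the difference, which is why global fits are needed for precision work.
```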

The W boson mass is the least precisely determined of these five electroweak constants, but all five of these related Standard Model parameters are known quite precisely (note that the table below which I put together uses the Particle Data Group global averages).


The global electroweak fit process is not part of the Standard Model and is really not all that much more than a sophisticated informed guessing game.

Calling a global electroweak fit the "standard model expectation" is nothing more or less than misleading, and the fact that the results were spun this way suggests that the authors want to direct attention away from the real story, which is that their measurement is an outlier with respect to other experimental measurements, just as one of their original measurements in 2001 was. If I were a peer reviewer of the Science article that was published yesterday, I would have objected strenuously to that assertion.

Likewise, the paper's discussion early on of the mysteries of the Higgs mechanism, dark matter, and extensions of the Standard Model, while not quite as problematic, is gratuitous window dressing and doesn't belong in a paper that is merely reporting an updated measurement of a Standard Model constant from 11 year old data.

Additional Details From The New Paper

The body text of the newly announced CDF result clarifies that the bottom line number for their new W boson mass measurement is 80,433.5 ± 6.4 (statistical) ± 6.9 (systematic) MeV (a combined uncertainty of ± 9.4 MeV).

According to the paper, this implies a combined Tevatron value of 80,427.4 ± 8.9 MeV, and a combined Tevatron and LEP value of 80,424.2 ± 8.7 MeV. The new result is exactly the same as one of the 2001 measurements by CDF (which was also an outlier that was included in, but diluted in, the current global average), but with a claimed uncertainty of 9.4 MeV instead of 79 MeV.

According to Fermilab's press release related to the paper: 
This result uses the entire dataset collected from the Tevatron collider at Fermilab. It is based on the observation of 4.2 million W boson candidates, about four times the number used in the analysis the collaboration published in 2012.
But, to be honest, my intuition is that a claim to shift the combined average up by 50.4 MeV using four times as much data (all of it at least 11 years old and 25% of it exactly the same data) from the very same machine, while reducing the uncertainty by 44% (7 MeV), raises yellow flags.

It is harder to tell than it should be whether the newly calculated CDF number includes both the D0 experiment data and the CDF data from the Tevatron (as the press release seems to imply), or just the CDF data (as the way the data is discussed in the paper itself seems to imply), but as best as I can tell, except in the combined Tevatron number noted above, only the CDF data from the Tevatron is being used.

The paper also provides an updated Z boson mass measurement of:
91,192.0 ± 6.4 stat ± 4.0 syst MeV [ed. combined error 7.5 MeV] (stat, statistical uncertainty; syst, systematic uncertainty), which is consistent with the world average of 91,187.6 ± 2.1 MeV. 
This is also a source of doubt, rather than confirmation as claimed in the paper. My intuition is that the Z boson measurement uncertainty should be smaller than the W boson measurement uncertainty by a larger margin than it is; instead, it is only slightly smaller.

Rather than overturning the Standard Model, all this result should do, at most, is replace the old combined Tevatron value of 80,387 ± 16 MeV with a new combined Tevatron value of 80,427.4 ± 8.9 MeV which will pull the global average a little higher than it used to be and tweak the old global electroweak fit.

But, in addition to shifting up the global average, this result will probably actually increase rather than decrease the uncertainty in the overall global average because the contributing data points are now a lot less tightly clustered than they were before relative to their claimed uncertainties, which again undermines the credibility of the assertion that the claimed uncertainties of the new CDF value are correct.

Prior Experimental Data Compared

The disagreement with prior experiments is real. See the Particle Data Group's W Boson Mass entry. See also their narrative explanation.

If I were inclined to attribute bad motives, which to some extent I am in this case, I'd say that spinning this result as a deviation from the Standard Model is an attempt to distract attention away from how badly their result deviated from other experimental measurements, which is the real story here.

When your result, which claims to have only modestly less uncertainty than the prior measurements of the same quantity by multiple independent groups, is a huge outlier with respect to everyone else, it is more likely that you, or the scientists who are the source of your data, have done something wrong than it is that you are right and they are wrong. Perhaps, for example, CDF is underestimating the true uncertainty of its measurement, which is very easy to do even for the most sophisticated High Energy Physics (HEP) scientists, since estimating systematic error is as much an art as it is a science (even though estimating statistical error is almost perfect, except for issues related to the assumption that the true distribution of error is Gaussian when, based on studies of past HEP data gathering, it usually has fatter tails in reality).

The inverse error weighted global average of the nine best and most recent independent measurements of the W boson mass prior to this paper is 80,379 ± 12 MeV.

Where does that come from?

Two of those nine measurements are from CDF (80,433 ± 79 MeV in 2001 and 80,387 ± 19 MeV in 2012) and two more are from CDF's sister experiment at the Tevatron, D0 (80,483 ± 84 MeV from 2002 and 80,375 ± 23 MeV from 2014), with the older values (in each case) made at 1.8 TeV and the newer values (in each case) made at 1.96 TeV. The four data point inverse error weighted combined Tevatron average was 80,387 ± 16 MeV. Three more superseded W boson masses from CDF and D0 were ignored in the global average and ranged from 80,367 MeV to 80,413 MeV.
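For readers curious what an "inverse error weighted" combination means in practice, here is a back-of-the-envelope version using the four Tevatron values just listed (my own sketch; it ignores the correlated systematic uncertainties that a real combination must account for, which is why it does not exactly reproduce the official 80,387 ± 16 MeV figure):

```python
# Naive inverse-variance weighted average of the four Tevatron W boson mass
# measurements quoted above (values in MeV), ignoring correlations.
measurements = [
    (80433.0, 79.0),  # CDF 2001
    (80387.0, 19.0),  # CDF 2012
    (80483.0, 84.0),  # D0 2002
    (80375.0, 23.0),  # D0 2014
]

weights = [1.0 / sigma ** 2 for _, sigma in measurements]
mean = sum(w * value for (value, _), w in zip(measurements, weights)) / sum(weights)
uncertainty = (1.0 / sum(weights)) ** 0.5

print(f"naive combination: {mean:.0f} +/- {uncertainty:.0f} MeV")  # ~80385 +/- 14 MeV
```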

Another four measurements are from the now-defunct LEP (the Large Electron-Positron collider), from 2006 to 2008, at energies from 161-209 GeV, with an error weighted average of 80,376 ± 33 MeV. The range of the LEP measurements was 80,270 MeV to 80,440 MeV.

Many far less precise measurements from 1983 to 2018 were ignored in determining the inverse error weighted world average.

One of the measurements, from ATLAS at the Large Hadron Collider (LHC), is 80,370 ± 18 MeV at an energy of 7 TeV, and it shares 7 MeV of systematic uncertainty with the Tevatron average.

We should be seeing a Run-2 W boson mass determination from ATLAS, and both Run-1 and Run-2 W boson mass determinations from CMS before too long.

My predisposition is to expect that those results will be more credible than this lagging Tevatron value because the actual experimental apparatus is more state of the art at the LHC than it was at the Tevatron. Also, fairly or not, the best scientists with the most rigorous quality control get assigned to the shiny new data, rather than to analysis of eleven-year-old archived data from an experiment that is no longer operating.


Chart via this blog, which also has quality commentary, noting that:
The main problem . . . is that the new measurement is in disagreement with all other available measurements. I think this could have been presented better in their paper, mainly because the measurements of the LEP experiments have not been combined, secondly because they don't show the latest result from LHCb. Hence I created a new plot (below), which allows for a more fair judgement of the situation. I also made a back-of-envelope combination of all measurements except of CDF, yielding a value of 80371 ± 14 MeV. It should be pointed out that all these combined measurements rely partly on different methodologies as well as partly on different model uncertainties. The likelihood of the consistency of such a (simple) combination is 0.93. Depending (a bit) on the correlations you assume, this value has a discrepancy of about 4 sigma to the CDF value.

In fact, there are certainly some aspects of the measurement which need to be discussed in more detail (Sorry, now follow some technical aspects, which most likely only people from the field can fully understand): In the context of the LHC Electroweak Working Group, there are ongoing efforts to correctly combine all measurements of the W boson mass; in contrast to what I did above, this is in fact also a complicated business, if you want to do it really statistically sound. My colleague and friend Maarten Boonekamp pointed out in a recent presentation, that the Resbos generator (which was used by CDF) has potentially some problems when describing the spin-correlations in the W boson production in hadron collisions. In fact, there are remarkable changes in the predicted relevant spectra between the Resbos program and the new version of the program Resbos2 (and other generators) as seen in the plot below. On first sight, the differences might be small, but you should keep in mind, that these distributions are super sensitive to the W boson mass. I also attached a small PR plot from our last paper, which indicates the changes in those distributions when we change the W boson mass by 50 MeV, i.e. more than ten times than the uncertainty which is stated by CDF. I really don't want to say that this effect was not yet considered by CDF - most likely it was already fixed since my colleagues from CDF are very experienced physicists, who know what they do and it was just not detailed in the paper. I just want to make clear that there are many things to be discussed now within the community to investigate the cause of the tension between measurements.
Difference in the transverse mass spectrum between Resbos and Resbos2 (left); impact of different W boson mass values on the shapes of transverse mass.

And this brings me to another point, which I consider crucial: I must admit that I am quite disappointed that it was directly submitted to a journal, before uploading the results on a preprint server. We live in 2022 and I think it is by now good practice to do so, simply because the community could discuss these results beforehand - this allows a scientific scrutiny from many scientists which are directly working on similar topics.

More commentary from Matt Strassler is available at his blog (also here). The money quote is this one (emphasis in the original, paragraph breaks inserted editorially for ease of reading):

A natural and persistent question has been: 
“How likely do you think it is that this W boson mass result is wrong?” 
Obviously I can’t put a number on it, but I’d say the chance that it’s wrong is substantial. 
Why? 
This measurement, which took many years of work, is probably among the most difficult ever performed in particle physics. Only first-rate physicists with complete dedication to the task could attempt it, carry it out, convince their many colleagues on the CDF experiment that they’d done it right, and get it through external peer review into Science magazine. But even first-rate physicists can get a measurement like this one wrong. The tiniest of subtle mistakes will undo it.

Physicist Tommaso Dorigo chimes in at his blog and he is firmly in the camp of measurement error and identifies some particularly notable technical issues that could cause the CDF number to be too high.

I already answered the question of whether in my opinion the new CDF measurement of the W boson mass, standing at seven standard deviations away from the predictions of the Standard Model, is a nail in the SM coffin. Now I will elaborate a little on part of the reasons why I have that conviction. I cannot completely spill my guts on the topic here though, as it would take too long - the discussion involves a number of factors that are heterogeneous and distant from the measurement we are discussing. Instead, let us look at the CDF result.

One thing I noticed is that the result with muons is higher than the result with electrons. This may be a fluctuation, of course (the two results are compatible within quoted uncertainties), but if for one second we neglected the muon result, we would get a much better agreement with theory: the electron W mass is measured to 80424.6 ± 13.2 MeV, which is some 4.5 sigmaish away from theory prediction of 80357 ± 6 MeV. Still quite a significant departure, but not yet an effect of unheard-of size for accidentals.

Then, another thing I notice is that CDF relied on a custom simulation for much of the phenomena involving the interaction of electrons and muons with the detector. That by itself is great work, but one wonders why not using the good old GEANT4 that all of us know and love for that purpose. It's not like they needed a fast simulation - they had the time!

A third thing I notice is that the knowledge of backgrounds accounts for a significant systematic effect - it is estimated in the paper as accounting for a potential shift of two to four MeV (but is that sampled from a Gaussian distribution or can there be fatter tails?). In fact, there is one nasty background that arises in the data when you have a decay of a Z to a pair of muons, and one muon gets lost for some reconstruction issue or by failing some quality criteria. The event, in that case, appears to be a genuine W boson decay: you see a muon, and the lack of a second leg causes an imbalance in transverse momentum that can be interpreted as the neutrino from W decay. This "one-legged-Z" background is of course accounted for in the analysis, but if it had been underestimated by even only a little bit, this would drive the W mass estimate up, as the Z has a mass larger than the W (so its muons are more energetic).

Connected to that note is the fact that CDF does show how their result can potentially shift significantly in the muon channel if they change the range of fitted transverse masses - something which you would indeed observe if you had underestimated the one-legged Z's in your data. This is shown in the two graphs below, where you see that the fitted result moves down by quite a few MeV if you change the upper and lower boundaries:


A fourth thing I notice is that the precision of the momentum scale determination, driven by studies of low-energy resonances (J/psi and Upsilon decays to muon pairs) is outstanding, but the graph that CDF shows to demonstrate it is a bit suspicious to my eyes - it is supposed to demonstrate a flat response as a function of inverse momentum, but to my eyes it in fact shows the opposite. Here is what I am talking about:

 

I took the liberty to take those data points and fit them with a different assumption - not a constant, but a linear slope, and not the full spectrum, but only up to inverse momenta of 0.3; and here is what I get:


Of course, nobody knows what the true model of the fitting function should be; but a Fisher F test would certainly prefer my slope fit to a constant fit. Yes, I have neglected the points above 0.3, but who on earth can tell me that all these points should line up in the same slope? So, what I conclude from my childish exercise is that the CDF calibration data points are not incompatible (but IMHO better compatible) with a slope of (-0.45+-0.1)*10^-3 GeV.

What that may mean, given that they take the calibration to be -1.4 from a constant fit, is to get the scale wrong by about a part in ten thousand at the momentum values of relevance for the W mass measurement in the muon channel. This is an effect of about 8 MeV, which I do not see accounted for in the list of systematics that CDF produced. [One caveat is that I have no idea whether the data points have correlated uncertainties among the uncertainty bars shown, which would invalidate my quick-and-dirty fit result.]

I could go on with other things I notice, and you clearly see we would not gain much. My assessment is that while this is a tremendously precise result, it is also tremendously ambitious. Taming systematic uncertainties down to effects of a part in ten thousand or less, for a subnuclear physics measurement, is a bit too much for my taste. What I am trying to say is that while we understand a great deal about elementary particles and their interaction with our detection apparatus, there is still a whole lot we don't fully understand, and many things we are assuming when we extract our measurements. . . .
I can also say that the CDF measurement is slamming a glove of challenge on ATLAS and CMS faces. Why, they are sitting on over 20 times as much data as CDF was able to analyze, and have detectors built with a technology that is 20 years more advanced than that of CDF - and their W mass measurements are either over two times less precise (!!, the case of ATLAS), or still missing in action (CMS)? I can't tell for sure, but I bet there are heated discussions going on at the upper floors of those experiments as we speak, because this is too big a spoonful of humble pie to take on.

He also reminds us of the fact that real world uncertainties don't have a Gaussian (i.e. "normal") distribution and instead have "fat tails" with extreme deviations from expected values being more common than expected in a Gaussian distribution.

Likewise physicist Sabine Hossenfelder tweets:

Could this mean the standard model is wrong? Yes. But more likely it's a problem with their data analysis. Fwiw, I don't think this is a case for theorists at all. Theorists will explain whatever data you throw at them.

Footnote Regarding Definitional Issues

The CDF value and all of the other values (except the global electroweak fits) are probably also all about 20 MeV too high due to a definitional issue in how the W boson mass is extracted from the experimental data. See Scott Willenbrock, "Mass and width of an unstable particle" arXiv:2203.11056 (March 21, 2022).

Has The Lithium Problem Been Resolved?

For the most part, the abundance of chemical elements in the universe closely matches the predictions of Big Bang Nucleosynthesis (BBN), providing one of the strongest points of "solid ground" in the history of the universe, after which we can fairly claim a good understanding of much of the key physics driving everything that came afterwards.

BBN supposes that shortly after the Big Bang, in a universe composed predominantly of free protons and neutrons, these nucleons randomly collided, giving rise to nuclear fusion reactions that produced a specific mix of atomic elements and isotopes. (An element is identified by the number of protons in its atoms' nuclei; isotopes of that element differ in their number of neutrons, and an isotope is generally identified by a whole number equal to the number of protons plus the number of neutrons in its atomic nucleus.) In the conventional chronology of the Universe, this process takes about twenty minutes and begins about ten seconds after the Big Bang.

Then, BBN theory adjusts this initial prediction to reflect the known processes since initial BBN by which atomic nuclei split in nuclear fission, undergo nuclear decay, or merge in nuclear fusion reactions, to produce the current mix of chemical elements (and isotopes) in the universe.

BBN is very successful at describing the observed mix of chemical elements and isotopes in the universe with reasonable precision (with relative errors of similar magnitude to the uncertainties in the measurements, which for some of the key quantities are on the order of 1%), with one main exception: the proportion of the Lithium-7 isotope. Lithium-7 is present at a fraction of about 1.6 parts per 10,000,000,000 in metal poor stars (much smaller than any of the other detectable BBN products), which is smaller by a factor of about three than the proportion of about 5 parts per 10,000,000,000 expected at the time of initial BBN. This discrepancy has historically amounted to a strong (roughly 4 sigma) tension with the expected value.

But there is only a discrepancy if the proportion of Lithium-7 in a star doesn't decline due to the effect of nuclear processes in the star over many billions of years, as assumed in the models giving rise to the Lithium Problem. This assumption was well supported by early astronomy observations, but those observations have now been superseded with larger volumes of higher precision data.

A new paper, reflecting an improved understanding of post-BBN processes and of the current mix of chemical elements and isotopes in the universe, as a result of a greater availability of more and more precision astronomy observations, concludes that there might not be a Lithium Problem with BBN after all.

In a nutshell, there is strong circumstantial evidence that the proportion of Lithium-7 in metal poor stars falls over time due to nuclear reactions in situ within those stars, which would explain why the proportion of Lithium-7 observed in metal poor stars is lower than the unmodified BBN expected value, resolving the Lithium Problem.

Basically, almost no Lithium-6 should be created in BBN. Instead, it is created in metal poor stars by cosmic rays (mostly from supernovae) in a well understood process that should produce more Lithium-6 than is observed. This is strong evidence of nuclear processes in metal poor stars that deplete Lithium-6 levels in those stars. This observation, in turn, allows astronomers to make rough estimates of the amount by which Lithium-7 levels in those stars have been depleted, because Lithium-6 can be transformed into other isotopes more easily than Lithium-7 can. And this analysis, combined with other data, makes it possible to make a ballpark estimate of the amount of Lithium-7 depletion from the expected BBN value that is consistent with the observed amount of Lithium-7 in these stars (although it isn't precise enough to yield an exact prediction for current Lithium-7 levels yet).

If this hypothesis is correct, one way to confirm it is to show that the proportion of Lithium-7 in the interstellar medium (ISM) is higher than it is in the halo stars of typical galaxies, because the Lithium-7 depletion which the paper reasons occurs in stars should not occur in the ISM, where the processes that could cause it are absent.

BBN predictions are one of the most powerful and robust constraints on a variety of speculative high energy physics theories beyond the Standard Model. If its predictions are on target, then that is a good global indicator that we have a pretty much complete understanding of the process of nuclear isotope formation from ten seconds after the Big Bang to the present, leaving little or no room for a wide array of beyond the Standard Model physics proposals that would otherwise operate in that time frame.

Complete confirmation of BBN predictions also pretty much fixes in place a set of "initial conditions" of the universe as of ten seconds after the Big Bang, putting a lower bound on the energy scales at which significant beyond the Standard Model physics can exist (i.e. about 1,000,000,000 Kelvin, a.k.a. 100 keV), and also placing a boundary condition on what the effects of those beyond the Standard Model theories can be, since they have to reproduce conditions at ten seconds after the Big Bang that are consistent with the assumptions of BBN.

The Large Hadron Collider (LHC) has also already explored conditions up to about 1,000,000,000,000,000 Kelvin, a.k.a. roughly 150 GeV (a temperature stated in eV units does not correspond exactly to the more familiar collision energies at the LHC, which currently reach about 13,000-14,000 GeV), largely ruling out new physics at those energy scales as well. This takes us back to roughly the first 1/1,000,000,000,000th of a second after the Big Bang, over which span we know that the Standard Model of Particle Physics should apply, dramatically shrinking the portion of the history of the universe in which beyond the Standard Model physics isn't ruled out.
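
For readers who want to check those temperature conversions, here is a minimal sketch using the standard relation T = E / k_B; the energies plugged in are just the round figures quoted above.

    # Convert characteristic energies to temperatures via T = E / k_B.
    K_B_EV_PER_K = 8.617333e-5  # Boltzmann constant in eV per Kelvin (rounded CODATA value)

    def energy_ev_to_kelvin(energy_ev):
        # Temperature whose characteristic thermal energy k_B * T equals energy_ev.
        return energy_ev / K_B_EV_PER_K

    print(f"100 keV -> {energy_ev_to_kelvin(100e3):.2e} K")  # ~1.2e9 K, the BBN-era scale
    print(f"150 GeV -> {energy_ev_to_kelvin(150e9):.2e} K")  # ~1.7e15 K, the LHC-probed scale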

The paper and its abstract are as follows:

The primordial Lithium Problem is intimately connected to the assumption that 7Li observed in metal-poor halo stars retains its primordial abundance, which lies significantly below the predictions of standard big-bang nucleosynthesis.

Two key lines of evidence have argued that these stars have not significantly depleted their initial 7Li: i) the lack of dispersion in Li abundances measured at low metallicity; and ii) the detection of the more fragile 6Li isotope in at least two halo stars. The purported 6Li detections were in good agreement with predictions from cosmic-ray nucleosynthesis, which is responsible for the origin of 6Li. This concordance left little room for depletion of 6Li, and implied that the more robust 7Li largely evaded destruction. 
Recent (re)-observations of halo stars challenge the evidence against 7Li depletion: i) lithium abundances now show significant dispersion, and ii) sensitive 6Li searches now reveal only firm upper limits to the 6Li/7Li ratio. The tight new 6Li upper limits generally fall far below the predictions of cosmic-ray nucleosynthesis, implying that substantial 6Li depletion has occurred--by factors up to 50.

We show that in stars with 6Li limits and thus lower bounds on 6Li depletion, an equal amount of 7Li depletion is more than sufficient to resolve the primordial 7Li Problem. This picture is consistent with stellar models in which 7Li is less depleted than 6Li, and strengthens the case that the Lithium Problem has an astrophysical solution. We conclude by suggesting future observations that could test these ideas.
Brian D. Fields, Keith A. Olive, "Implications of the Non-Observation of 6Li in Halo Stars for the Primordial 7Li Problem" arXiv:2204.03167 (April 7, 2022) (UMN--TH--4118/22, FTPI--MINN--22/09).

Tuesday, April 5, 2022

A Seventh Quark?

I don't think this conjecture is correct, but it is an interesting idea. 

The existence of a fourth flavor of down-type quark with a mass of approximately 1.6 GeV is hypothesized. The right-handed component of this quark is assumed to decay to a right-handed charm quark and a virtual W boson. Many of the recently discovered exotic charged charmonium-like resonances are re-interpreted as mesons involving the new quark.
Scott Chapman, "Charmonium tetraquarks or a new light quark?" arXiv:2204.00913 (April 2, 2022). This paper is a sequel to a March 2022 preprint by the same author.  He also submitted an unrelated preprint in December of 2021.

He is affiliated with the Institute for Quantum Studies at Chapman University, Orange, CA 92866, which is a legitimate, decent-sized university in Orange County, California, ranked by U.S. News and World Report. 

It isn't clear if this Scott Chapman is related to the individual after whom the university is named, or if that is just a coincidence. There is a Scott Chapman who is the son of Trustee C. Stanley Chapman and great-grandson of university namesake C.C. Chapman, but it seems that this professor is probably not the same person, because as of 2019, the great-grandson of C.C. Chapman ran an IT consulting business. But on the other hand, the physics professor did join the Chapman University faculty in 2019.

It also appears that the Dr. Scott Chapman who taught astrophysics at the University of Victoria and the University of Cambridge, who now teaches at Dalhousie University, and who moonlights at Eureka Scientific, Inc., is a different Scott Chapman than the high energy physicist at Chapman University who is the author of these papers.

It also isn't entirely clear to me if the Scott Chapman who worked at the ALPHA experiment at CERN and published mostly in the areas of QCD and supersymmetry starting from 1994 is actually the same person as the author of the current paper, as the name is a common enough one. The QCD physicist Scott Chapman does not appear to be the Scott Chapman at Dalhousie University.

Monday, April 4, 2022

Taming Dragons

The DreamWorks animated movie "How to Train Your Dragon" (2010) is best understood as a well done allegory of the Neolithic revolution, in which the domestication of animals played a central part, or at least as an allegory of the late Neolithic-Eneolithic domestication of the horse.

Japanese, Korean, and Manchu Are Probably Sister Languages

In a September 21, 2021 post at this blog, I noted Niall P. Cooke, et al., "Ancient genomics reveals tripartite origins of Japanese populations" 7(38) Science Advances (September 17, 2021), which, based upon modern and ancient Japanese and East Asian DNA, found that:
The big point is that after the Yayoi conquered the Jomon and admixed with them ca. 1000 BCE, there was a second wave of migration to Japan during the Kofun period of Japanese history, ca. 300 CE to 538 CE, by a population similar to the modern Han Chinese people that is the source of more than 60% of the resulting population's autosomal DNA. Since that second migration event, there has been only some modest introgression of additional Han Chinese-like admixture into the Japanese gene pool. . . .
we find genetic evidence that the agricultural transition in prehistoric Japan involved the process of assimilation, rather than replacement, with almost equal genetic contributions from the indigenous Jomon and new immigrants at the Kyushu site. This implies that at least some parts of the archipelago supported a Jomon population of comparable size to the agricultural immigrants at the beginning of the Yayoi period, as it is reflected in the high degree of sedentism practiced by some Jomon communities. . . .

Excess affinity to the Yayoi is observable in the individuals who are genetically close to ancient Amur River populations or present-day Tungusic-speaking populations. Our findings imply that wet rice farming was introduced to the archipelago by people who lived somewhere around the Liaodong Peninsula but who derive a major component of their ancestry from populations further north, although the spread of rice agriculture originated south of the West Liao River basin.
Today, I came across J. Marshall Unger, "No Rush to Judgment: The Case against Japanese as an Isolate" 4(3) NINJAL Project Review 211-230 (February 2014), which anticipated this development. 

Unger reasons that while the reconstruction of a definite Japanese-Korean protolanguage is not sufficiently advanced to establish on linguistic evidence alone that they are sister languages, the evidence taken as a whole argues strongly for that conclusion over any other. I agree with pretty much all of his reasoning, which Cooke (2021), discussed above, closely matches. 

The narrative in Unger's paper is well articulated visually in its figures.

To just hit the major headings in the analysis (which has another layer of solid thought beneath it):
1) A major population replacement occurred in Japan starting during the 1st millennium BCE.

2) The population replacement was due to migrations from southern Korea that began after the Mumun cultural complex was well-established there.

3) The later transition from Yayoi to Kofun culture proceeded gradually and did not involve a single or sudden disruption of the Late Yayoi culture.

4) Place-names recorded in both logographic and phonographic forms show that a Japanese-like language was spoken on the Korean peninsula as late as ca. 700 CE.

5) The first variety of Japanese spoken in the islands, proto-Japanese (reconstructed through dialect comparisons), dates from the Yayoi period, and began to split into dialects at the time of the Yayoi expansion ca. 200 BCE.

6) Final Jomon languages influenced proto-Japanese only marginally.

7) If Korean and Japanese are genetically related languages, they must have separated before the rise of Megalithic culture on the peninsula.

8) East Asian languages typologically similar to Korean and Japanese of the 1st millennium CE were spoken only in the transfluvial region north of present-day Korea.

9) Conditions for METATYPY — one language adapting its gross syntactic structure to that of another — were not present on the Korean peninsula prior to the Yayoi migrations.

10) The proto-Korean-Japanese hypothesis is the best working hypothesis available.

11) The southward movement of para-Korean speakers spawned the Yayoi migrations.

12) The collapse of the Chinese commanderies doomed the survival of para-Japanese.

13) Speakers of late para-Japanese introduced Old Korean and Early Middle Chinese words to Japan during the Kofun period.

14) There was never a period of interaction between Japanese and Korean of sufficient duration to alter the Japanese lexicon radically.