Tuesday, November 30, 2021

Even Science Is Resistant To The Scientific Method

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it. . . . An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth.
— Max Planck, Scientific autobiography, at pgs. 33 and 97 (1950). Related:
Never trust an experimental result until it has been confirmed by theory.
— Astronomer Arthur Eddington, who died on November 22, 1944 (discussed here, noting that "In general, Eddington’s advice is good: when an experiment contradicts theory, theory tends to win in the end," while acknowledging exceptions and discussing Hume's take on it).

A comment makes the good point, however, that ossification of views is less of a problem in fields that are new and rapidly emerging than in those that have settled down a bit, with enough time for competing camps to form around unresolved issues.

But scientists are persuaded by new evidence sometimes, as documented in this paper.

A New Experimental Challenge To Lepton Universality Overstates Significance

The BESIII experiment claims the first strong experimental evidence for lepton flavor universality violation (i.e. electrons, muons and tau leptons having properties other than mass that differ from each other) outside of semi-leptonic B meson decays.

But its claim of a strong departure from lepton flavor universality rests on a rookie-class misinterpretation of the reported data, which the paper's own introductory text seems to acknowledge: the measured result is, in fact, consistent with lepton flavor universality at the two sigma level.


Instead of studying semi-leptonic B meson decays, this study looks at the fully leptonic decays of J/Psi mesons (J/ψ), a spin-1 (i.e. vector) charmonium meson (i.e. it has a charm quark and an anticharm quark as valence quarks), with a ground state mass of 3097 MeV and a mean lifetime in the ground state of 7.1(2) × 10⁻²¹ seconds.

In contrast, the B mesons showing apparent lepton universality violations have a b quark and a non-b antiquark (or a non-b quark and a b antiquark) as valence quarks, with ground state rest masses of 5279-5415 MeV. Pseudoscalar B mesons have mean lifetimes on the order of 1.5(1) × 10⁻¹² seconds.

The Results

The data sample consists of (448.1 ± 2.9) × 10⁶ ψ(3686) events collected with the BESIII detector.

As in the semi-leptonic B meson decay case, in the fully leptonic decays of J/ψ mesons, the decays with muons are significantly less likely than decays with electrons, although, unlike the clean B meson decay case, much of this discrepancy is present in the Standard Model prediction.

The introduction to the paper provides important context:

On the one hand, lepton flavor universality (LFU) is expected to be obeyed in the SM. In recent years, however, indications for violation of LFU have been reported in semileptonic decays of the kind b → s ℓ⁺ℓ⁻. In 2014, LHCb measured the ratio of branching fractions RK = B(B⁺ → K⁺µ⁺µ⁻)/B(B⁺ → K⁺e⁺e⁻), and found a deviation from the SM prediction by 2.6σ. The measurements have continuously been updated by LHCb and Belle. Very recently, LHCb reported their latest result with full Run I and Run II data, which deviates from the SM prediction by more than 3σ. . . . It is therefore urgent to investigate the validity of LFU in other experiments. 
J/ψ → ℓ⁺ℓ⁻, where ℓ may be either e or µ, are two such precisely measured channels, and their measured branching fractions are consistent with Quantum Electrodynamics (QED) calculations. 
Other purely leptonic decays, which have never been studied experimentally, are J/ψ → ℓ₁⁺ℓ₁⁻ℓ₂⁺ℓ₂⁻, where ℓ₁ = ℓ₂ = e, ℓ₁ = ℓ₂ = µ, or ℓ₁ = e and ℓ₂ = µ. For the first two cases, there is no special order for the four leptons. 
Recently, the branching fractions of J/ψ → ℓ₁⁺ℓ₁⁻ℓ₂⁺ℓ₂⁻ decays were calculated at the lowest order in nonrelativistic Quantum Chromodynamics (NRQCD) factorization in the SM. Given the collinear enhancement when the lepton mass tends to zero, the predicted branching fraction of J/ψ → e⁺e⁻e⁺e⁻ is (5.288 ± 0.028) × 10⁻⁵, significantly greater than that of J/ψ → e⁺e⁻µ⁺µ⁻ ((3.763 ± 0.020) × 10⁻⁵) and two orders of magnitude greater than that of J/ψ → µ⁺µ⁻µ⁺µ⁻ ((0.0974 ± 0.0005) × 10⁻⁵). 
Therefore, the ratio Beeee : Beeµµ : Bµµµµ provides a good opportunity to verify the validity of LFU.

(I omit a discussion of the muon g-2 anomaly as a motivation for new lepton coupling particles, which I personally think is due to a flawed theoretical prediction which is contradicted by a methodologically more sound theoretical prediction that matches the experimental result).

The paper and its abstract are as follows:

BESIII Collaboration, "Observation of J/ψ decays to e⁺e⁻e⁺e⁻ and e⁺e⁻μ⁺μ⁻" arXiv:2111.13881 (November 27, 2021).


The branching fraction of the four-e decay, combining uncertainties in quadrature, is (432 ± 32) × 10⁻⁷, and for the mixed e and µ decay is (245 ± 32) × 10⁻⁷. 

Given the (448.1 ± 2.9) × 10⁶ ψ(3686) events in the data sample, these branching fractions correspond to roughly 19,358 ± 1,434 four-e decays and 10,978 ± 1,434 mixed e and µ decays. 
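The multiplication behind these implied event counts is simple enough to check directly. A minimal sketch (detection efficiencies and backgrounds are ignored, so these are not the literal raw event counts seen by the detector):

```python
# Implied event counts: number of psi(3686) events times the measured
# branching fraction for each four-lepton channel, with the quoted
# branching fraction uncertainty propagated the same way.
N = 448.1e6  # psi(3686) events in the BESIII data sample

channels = {
    "J/psi -> e+e-e+e-":   (432e-7, 32e-7),   # (branching fraction, uncertainty)
    "J/psi -> e+e-mu+mu-": (245e-7, 32e-7),
}
for name, (bf, dbf) in channels.items():
    print(f"{name}: {N * bf:,.0f} +/- {N * dbf:,.0f} events")
```

This multiplies out to about 19,358 ± 1,434 and 10,978 ± 1,434 events for the two channels, respectively.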

The theoretically predicted ratio of these two decay branching fractions is 1.405, with the third possibility, which is not seen, predicted to be too small to detect. The ratio of the best fit experimental measurements of the branching fractions is 1.76.

What is the two sigma range for the ratio of these two values?

The formula for the correct calculation is here, with a good shortcut approximation here. Basically, for a ratio R = y/x whose denominator is always positive, you can use the approximation V(R) = V(y)/x² + V(x)y²/x⁴, where V(·) is the variance (the standard deviation squared) and x and y stand for the mean values, if you also assume the independence of the uncertainties (which probably isn't exactly true, but is close enough). Applying that very good approximation, the standard deviation of the experimental ratio is about 0.265, with a two sigma range of about plus or minus 0.53, which is a range from 1.23 to 2.29. So the experimental result is easily consistent at the two sigma level with the lowest order QCD predicted ratio of 1.405 that assumes lepton flavor universality. And that is even before considering the full theoretical uncertainty involved in using a lowest order QCD prediction for the ratio, which is less problematic when looking at ratios of branching fractions than when predicting absolute values, but still introduces meaningful uncertainty in the prediction that isn't cleanly quantified in the paper.
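The delta-method calculation above takes only a few lines to reproduce. A sketch, using the two measured branching fractions (in units of 10⁻⁷) and assuming independent uncertainties:

```python
from math import sqrt

# Ratio R = y/x of the two measured branching fractions, with the
# first-order (delta-method) variance approximation
#   V(R) = V(y)/x^2 + y^2 V(x)/x^4,
# valid for independent uncertainties and a positive denominator.
y, sy = 432.0, 32.0  # B(J/psi -> e+e-e+e-),   in units of 10^-7
x, sx = 245.0, 32.0  # B(J/psi -> e+e-mu+mu-), in units of 10^-7

R = y / x
sR = sqrt(sy**2 / x**2 + y**2 * sx**2 / x**4)

print(f"R = {R:.3f} +/- {sR:.3f}")                         # R = 1.763 +/- 0.265
print(f"2-sigma range: {R - 2*sR:.2f} to {R + 2*sR:.2f}")  # 1.23 to 2.29
```

The theoretically predicted ratio of 1.405 falls comfortably inside that two sigma range.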

Sure, the lowest order QCD prediction was low in both cases, but this is unsurprising, because the stated theoretical uncertainty doesn't include the error introduced by omitting higher order QCD terms. 

Lowest order QCD predictions are frequently far from the mark, especially in absolute rather than relative terms. Higher order QCD predictions frequently differ significantly from lowest order ones, and the tension between the QCD prediction and the experimental result in this case no doubt mostly represents shared theory error in the lowest order QCD prediction, rather than anything more profound.

As the introduction to the paper itself observes, it is the ratio of the two branching fractions that is relevant to testing the lepton flavor universality hypothesis. 

Bottom Line

Rather than being five sigma, discovery-class evidence for LFU violation in a new decay channel, this result is actually consistent with LFU at the two sigma level, and, if anything, it tends to disfavor the conclusion that LFU violation is present in any context outside of semi-leptonic B meson decays. But given the great uncertainty in the measured ratio, this new experiment honestly doesn't tell us much one way or the other.

To the extent that it is a null result, however, and contrary to its abstract, it in turn casts doubt on whether the LFU-violating, new-physics-supporting tensions in B meson decays are really correct, or instead simply reflect an error somewhere in the theoretical prediction or in the screening of events considered.

A Couple Of Notable Gravity Modification Theories

A couple of new papers suggest very subtle modifications of General Relativity as conventionally applied, the paradigmatic core theory of gravity, that are worth noting. Both are driven by the failure of the ΛCDM "Standard Model of Cosmology" to confront the observational evidence in an elegant and satisfying manner.

One is more limited. The other, by Alexander Sobolev, is striking and bold. 

Honorable mention goes to a third paper, "Limiting curvature models of gravity" by Valeri P. Frolov, arXiv:2111.14318 (November 29, 2021), exploring the previously proposed idea that the singularities of General Relativity could be "cured" by imposing a maximum space-time curvature, an approach called limiting curvature gravity (LCG).

General Relativistic Entropic Acceleration (GREA)

General Relativistic Entropic Acceleration hovers between being a modification of General Relativity and a novel way of operationalizing it that produces different predictions than the standard approach. Functionally, the main achievements of this approach are dispensing with the cosmological constant and resolving the Hubble anomaly, if it turns out that it is real and not merely a measurement error artifact. The introduction spells out what the authors are trying to do:
Our understanding of the expanding universe is anchored in the geometric description provided by Einstein’s theory of General Relativity (GR). On the one hand, its approximate symmetries, i.e., homogeneity and isotropy at large scales, determine its background space-time to be described by a Friedmann-Lemaître-Robertson-Walker (FLRW) metric. On the other hand, its matter content is responsible for the dynamics of the scale factor, which tracks the growth of length-scales in the geometric expansion, as described by the Friedmann equations. 

The currently accepted realization of FLRW cosmology is given by the Λ – Cold Dark Matter (ΛCDM) model. According to it, baryonic matter and radiation make up only a small portion of the present content of the universe. Instead, its expansion is dominated by two components which lack a fully satisfactory microscopic description. First, a cosmological constant, usually denoted by Λ, which is added to Einstein’s field equations to account for the observed late-time accelerated expansion of the universe. Second, cold (low temperature) dark (without electromagnetic interactions) matter, which was required originally to explain anomalies in the galactic rotation curves but is nowadays consistent with many other early- and late-time cosmological observables. 

Even though ΛCDM seems to be the best fit to observations, the existence of a cosmological constant has been challenged on theoretical grounds. Consequently, a plethora of alternatives have been explored, which fall systematically into two groups. First, modified gravity (MG) theories attempt to deliver new dynamics at large, cosmological, scales, while leaving invariant smaller scales at which GR has been thoroughly probed. Second, dark energy (DE) models propose the addition of exotic matter, such as quintessence. 

Furthermore, in recent years there have been observational challenges to ΛCDM. Early- and late-time measurements of the present value of the Hubble parameter (H0) seem to be inconsistent. This H0 tension signals a possible failure of the ΛCDM model to describe our universe. However, no available alternative MG or DE model seems to be able to resolve the tension between high and low redshift probes, while providing a fit to cosmological observations that is competitive with ΛCDM. Moreover, there have been recent model-independent analyses, using machine learning approaches, that suggest that there may be hints of deviations from ΛCDM at high redshifts. 

Recently, a first-principles explanation of cosmic acceleration has been proposed by two of us. This is the General Relativistic Entropic Acceleration (GREA) theory. It is not based on MG or DE. Rather, it is based on the covariant formulation of non-equilibrium thermodynamics. Entropy production during irreversible processes necessarily has an impact on the Einstein field equations. This suggests the idea that entropy production or, equivalently, information coarse graining, gravitates. As such, it affects the space-time geometry. 

In FLRW cosmology, irreversible processes inevitably contribute with an acceleration term to the Friedmann equations. In GREA, it is the sustained growth of the entropy associated with the cosmic horizon in open inflation scenarios that explains current cosmic acceleration. 

The goal of this paper is to test the full viability of the GREA theory at the background level and compare it with the ΛCDM, against available cosmological data.
The paper and its abstract are as follows:
Recently, a covariant formulation of non-equilibrium phenomena in the context of General Relativity was proposed in order to explain from first principles the observed accelerated expansion of the Universe, without the need for a cosmological constant, leading to the GREA theory. 
Here, we confront the GREA theory against the latest cosmological data, including type Ia supernovae, baryon acoustic oscillations, the cosmic microwave background (CMB) radiation, Hubble rate data from the cosmic chronometers and the recent H(0) measurements. 
We perform Markov Chain Monte Carlo analyses and a Bayesian model comparison, by estimating the evidence via thermodynamic integration, and find that when all the aforementioned data are included, but no prior on H(0), the difference in the log-evidence is ∼ −9 in favor of GREA, thus resulting in overwhelming support for the latter over the cosmological constant and cold dark matter model (ΛCDM). 
When we also include priors on H(0), either from Cepheids or the Tip of the Red Giant Branch measurements, then due to the tensions with CMB data the GREA theory is found to be statistically equivalent with ΛCDM.
Rubén Arjona, et al., "A GREAT model comparison against the cosmological constant" arXiv:2111.13083 (November 25, 2021). Report number: IFT-UAM/CSIC-2021-136.

Sobolev's Bold Theory

Sobolev's just released magnum opus is impressive and his modification of general relativity, discarding true general covariance, which even Einstein acknowledged was not observationally required, in favor of a subtle constraint on it, produces a stunning amount of weak field and cosmology phenomenology. His introduction sets the stage:

In the light of new experimental data, GR no longer seems as unshakeable as it once did. For an explanation of the results derived within the framework of this theory, it was necessary to introduce certain hypothetical entities (the ΛCDM model) the nature of which are still unclear. “Entia non sunt multiplicanda praeter necessitatem”; it is likely that the need to introduce first inflatons, and now dark energy and dark matter, into GR (with the development of new methods of astronomical observation) is a symptom of a defect in its fundamental basis. 
General relativity violates the unity of the material world. In GR, the gravitational field itself does not have the properties of a material medium; its energy–momentum density is zero. This is a direct consequence of the general covariance of the gravitational field equations. Attempts to introduce a non-general covariant energy–momentum density actually mean refuting the original axiom of general covariance. 
In my opinion, it is the general covariance of the equations that is the source of the troubles of GR. 
One possible way to construct a non-generally covariant theory of gravity without violating Hilbert’s axioms (as I see it) is the introduction of an a priori constraint that restricts the choice of coordinate system. Attempts of such a kind have been made previously, for example the unimodular theory of gravity, whose origins date back to Einstein. A consequence of the introduction of this constraint is the appearance of an edge in the space–time manifold. Therefore, restrictedly covariant geometric objects are defined only on manifolds with this edge. 
Under such an approach, the fundamental principle of the equivalence of all reference systems compatible with the pseudo-Riemannian metric, which underlies GR, is not violated. In addition, we do not put into doubt the principle of the invariance of matter action relative to arbitrary transformations of coordinates. At the same time, in contrast to GR, a covariance of the gravitational equations is restricted by the constraint. Thus, a priori, only the “medium-strong principle” of equivalence is met in this case. However, this cannot be grounds for rejecting the proposed approach as contradicting the experiments verifying the strong equivalence principle for bodies of cosmic scales. 
The fact is that already in GR, within the framework of the ΛCDM model, space itself is endowed with energy. The same thing occurs when an a priori constraint is introduced. Space becomes a self-gravitating object because of the nonlinearity of the gravitational equations. One can determine the inertial and gravitational masses of such an object. The solution of the gravitational equations has enough free parameters to not only ensure the requirement of the equality of the inertial mass of the gravitational field to its gravitational mass, but also to determine inertial mass in accordance with Mach’s principle (the latter problem has not been solved in GR). From this point of view, the results of experiments should be considered as an indication that only such (quasi) stationary self-gravitating objects exist for which inertial mass is equal to gravitational mass. 
Hilbert’s axioms are formulated in a coordinate language. The gravitational field was represented by the ten components gμν(xλ) of the metric tensor. In addition, it was assumed that derivatives of the metrics no higher than second order could enter into the gravitational equations.
There is no theorem prohibiting the existence of a constraint between the components of a metric in mathematical physics. However, the unimodular theory turned out to be unacceptable from a physical point of view, which prompted Einstein to abandon it in favor of the general covariant theory. Currently, such theories are considered as an approach to the construction of a quantum theory of gravity. Among the other possible approaches, a restriction of general covariance has the least effect on the concepts about the world around us that are dictated by common sense. Of course, there must be sufficiently substantial physical grounds to introduce the restrictions on the group of coordinate transformations.

There is a deep analogy between the mathematical description of gravitational interaction in GR and the description of gauge interactions in elementary particle physics. The only way to fix a gauge for the latter (due to the requirement for general covariance) is by imposing the condition that the 4-divergence of the gauge fields is equal to zero. A similar condition for the gravitational field would be the requirement that the 4-divergence of the connection consistent with the metric, contracted over a pair of indices (Γνρρ), be equal to zero. However, due to the fact that GR is not a gauge theory, to avoid contradictions with the initial provisions, such a condition should be considered not as a gauge, but as a constraint. 

Jumping ahead to the conclusion:

The theory of gravity with a constraint, as the canonical theory, is based on the Hilbert action. Within the framework of the model proposed by the author, the fundamental differences from the standard cosmological model in a description of the evolution process of the Universe are as follows:

* The constraint defines an edge with a zero-world physical anisotropic time at the restriction of the group of admissible coordinate transformations.

* The gravitational field is endowed with all the properties of a material medium: energy, pressure, entropy and temperature.

* It is possible to construct a space–time manifold, in which only its boundary is singular.

* From the classical point of view, the process of the evolution of the Universe begins from a state with a minimum nonzero value of the scale factor and equal to zero energy.

* By virtue of the definition of an energy–momentum density tensor adopted in the paper, the pressure of the gravitational field at the initial moment turns out to be negative, as a result of which the growth of the scale factor begins. At the same time, the energy density of the gravitational field also grows in proportion to the growth rate squared of the volume factor. This process has an avalanche-like character (“big bang”) and will continue until the energy density reaches its maximum value and begins to decrease due to the energy consumption for the adiabatic expansion of the Universe.

* Despite the presence of a singularity at the boundary, the described classical model of the evolution of the Universe allows the construction of a canonical (or using path integrals) quantum theory of the gravitational field on its basis. The wave function of the very early Universe has been constructed.

* The available experimental data on the temperature of the CMB radiation allow us to conclude that the maximum global energy density in the Universe has never exceeded 1 × 10⁵⁰ J m⁻³ ~ (1.5 TeV)⁴, the maximum temperature of the matter fields has never exceeded 1.230 × 10¹¹ K, and the relative energy density of neutrinos is currently less than 1.061 × 10⁻⁴. 
[Ed. The maximum energy density of the theory naïvely suggests an initial Big Bang minimum size of a sphere with a radius of about 1088 km, about the size of the dwarf planet Pluto, although this back-of-napkin calculation, derived by dividing the total mass-energy of the universe determined in a ΛCDM cosmology by the maximum energy density in this new theory, is probably model dependent. The maximum temperature is that associated with the "quark epoch" in the "textbook" chronology of a ΛCDM universe, at roughly one millisecond after the Big Bang.]
* The global energy density of the Universe is currently 94.5% composed of the energy density of the gravitational field, and all known types of matter only contribute 5.5%.

* The accuracy of the available astronomical observations is still insufficient to choose between the predictions of GR and the proposed theory of gravity. However, over the past twenty years, the physical natures of dark energy, dark matter, and inflatons have not been established, and no new particles with suitable properties have been detected at the LHC. This is an essential argument for doubting their existence.

* From the point of view of the theory presented here, all observable effects associated with dark energy and dark matter are only manifestations of the material essence of the gravitational field. On the one hand, in the present era of the second acceleration, the gravitational field has a negative pressure; that is, it behaves like hypothetical dark energy. On the other hand, the energy density of the gravitational field exceeds the average energy density of matter on the large-scale structure of the Universe. This energy, which is not taken into account within the framework of GR and has properties attributed to dark matter, can contribute to an increase in the speed of the observed gravitationally bound objects. In addition, the pressure of the gravitational field was negative in the very early Universe also and, as mentioned above, already within the framework of the classical approach at zero initial energy density, this leads to the Big Bang, therefore there is no need for a hypothesis about the existence of any inflatons. 
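The editor's back-of-napkin radius estimate in the bracketed note above amounts to packing the universe's mass-energy into a sphere at the theory's maximum energy density. A sketch of that arithmetic, where the total mass-energy figure is an assumed, model-dependent input (published estimates vary by an order of magnitude or more), not a number from the paper:

```python
from math import pi

# Radius of a sphere whose uniform energy density is the theory's maximum
# (~1e50 J/m^3) and whose total energy equals the universe's mass-energy.
u_max = 1e50      # J/m^3, maximum global energy density from the paper
E_total = 5.4e68  # J, assumed total mass-energy (hypothetical input)

V = E_total / u_max                # volume in m^3
r = (3 * V / (4 * pi)) ** (1 / 3)  # radius of the equivalent sphere, in m
print(f"radius ~ {r / 1e3:.0f} km")  # on the order of 10^3 km, Pluto-sized
```

The result is sensitive to the assumed total mass-energy, which is why the note flags the estimate as model dependent.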

The evolution of the observed Hubble constant in this theory is quite different. Another conclusion of this theory which doesn't make the conclusion section also deserves mention:

Thus, instead of the standard cosmological model (SCM), in this case, we have a continuum of cosmological models parameterized by the value of the maximum energy density ρgrmax. Comparison of the data in Tables 1 and 2 shows that the results of the calculation are in good agreement, at least up to the redshift of the last-scattering surface, despite a difference in the value of the maximum energy density of more than 60 orders of magnitude. This circumstance excludes doubts about the possibility of an unambiguous description of the evolution of space in this range of redshift variation. It should be noted that the “last scattering” occurred less than 100 years after the beginning of the evolution process, as opposed to 373,000 years in the ΛCDM model.

[Ed. This is the end of the photon epoch and the beginning of "recombination" in the "textbook" chronology of the universe.]  

The article and its citation are as follows: 

The gravitational equations were derived in general relativity (GR) using the assumption of their covariance relative to arbitrary transformations of coordinates. The opinion has repeatedly been expressed over the past century that such equality of all coordinate systems may not correspond to reality. Nevertheless, no actual verification of the necessity of this assumption has been made to date. The paper proposes a theory of gravity with a constraint, the degenerate variants of which are general relativity (GR) and the unimodular theory of gravity. This constraint is interpreted from a physical point of view as a sufficient condition for the adiabaticity of the process of the evolution of the space-time metric. The original equations of the theory of gravity with the constraint are formulated. 

On this basis, a unified model of the evolution of the modern, early, and very early Universe is constructed that is consistent with the observational astronomical data but does not require the hypotheses of the existence of dark energy, dark matter or inflatons. It is claimed that: physical time is anisotropic, the gravitational field is the main source of energy of the Universe, the maximum global energy density in the Universe was 64 orders of magnitude smaller than the Planckian one, and the entropy density is 18 orders of magnitude higher than the value predicted by GR. The value of the relative density of neutrinos at the present time and the maximum temperature of matter in the early Universe are calculated. The wave equation of the gravitational field is formulated, its solution is found, and the nonstationary wave function of the very early Universe is constructed. It is shown that the birth of the Universe was random.
Alexander P. Sobolev, "Foundations of a Theory of Gravity with a Constraint and its Canonical Quantization" 52 Foundations of Physics Article number: 3 arXiv:2111.14612 (open access, pre-print November 25, 2021, publication date anticipated 2022) DOI: 10.1007/s10701-021-00521-1

Sunday, November 28, 2021

Dire Wolves Were Genetically Distinctive

Almost a year old, but this ancient DNA study is new to me. See also Twilight Beasts for more scholarly analysis, noting that: "To place this in context, jackals, coyotes, and gray wolves are more closely related to each other than to dire wolves."

[D]ire wolves lived in North America from about 250,000 to 13,000 years ago. They were about 20% bigger than today's gray wolves—the size of their skeletons often gives them away—and, like other wolves, they probably traveled in packs, hunting down bison, ancient horses, and perhaps even small mammoths and mastodons.
. . . 
Dire wolves would become Aenocyon dirus, a designation proposed in 1918, but that scientists largely disregarded. 
"The Aenocyon genus was left in the historical dust bin, but it can be resurrected," says Xiaoming Wang, a vertebrate paleontologist and expert on ancient canids at the Natural History Museum of Los Angeles County. "Based on the genetic data this team presents, I would support that reclassification."

Artists—and Game of Thrones creators—have often depicted the predators as large timber wolves: bulky, gray, and ferocious. But . . . living in the warmer latitudes of North America may have given them traits more common to canids and other animals in these climates, such as red fur, a bushy tail, and more rounded ears. As such . . . dire wolves may have resembled "a giant, reddish coyote."

Genetic analysis further revealed the predators probably evolved in the Americas, where they were the only wolflike species for hundreds of thousands—or perhaps millions—of years. When gray wolves and coyotes arrived from Eurasia, likely about 20,000 years ago, dire wolves were apparently unable to breed with them, as the researchers found no traces of genetic mixing.
Dire wolves are considered to be one of the most common and widespread large carnivores in Pleistocene America, yet relatively little is known about their evolution or extinction. Here, to reconstruct the evolutionary history of dire wolves, we sequenced five genomes from sub-fossil remains dating from 13,000 to more than 50,000 years ago. Our results indicate that although they were similar morphologically to the extant grey wolf, dire wolves were a highly divergent lineage that split from living canids around 5.7 million years ago. In contrast to numerous examples of hybridization across Canidae, there is no evidence for gene flow between dire wolves and either North American grey wolves or coyotes. This suggests that dire wolves evolved in isolation from the Pleistocene ancestors of these species. Our results also support an early New World origin of dire wolves, while the ancestors of grey wolves, coyotes and dholes evolved in Eurasia and colonized North America only relatively recently.
Angela R. Perri, et al.,"Dire wolves were the last of an ancient New World canid lineage" 591 Nature 87–91 (January 13, 2021).

Friday, November 26, 2021

Flooding Killed A Southern Chinese Neolithic Culture


This blurb isn't the most grammatically eloquent, but the Liangzhu culture really is a very significant prehistoric Chinese archeological culture, and knowing what ended it provides important insight. Wikipedia introduces the culture as follows:

The Liangzhu culture (/ˈljɑːŋˈdʒuː/; 3300–2300 BC) was the last Neolithic jade culture in the Yangtze River Delta of China. The culture was highly stratified, as jade, silk, ivory and lacquer artifacts were found exclusively in elite burials, while pottery was more commonly found in the burial plots of poorer individuals. This division of class indicates that the Liangzhu period was an early state, symbolized by the clear distinction drawn between social classes in funeral structures. A pan-regional urban center had emerged at the Liangzhu city-site and elite groups from this site presided over the local centers. The Liangzhu culture was extremely influential and its sphere of influence reached as far north as Shanxi and as far south as Guangdong. The primary Liangzhu site was perhaps among the oldest Neolithic sites in East Asia that would be considered a state society. The type site at Liangzhu was discovered in Yuhang County, Zhejiang and initially excavated by Shi Xingeng in 1936. A 2007 analysis of the DNA recovered from human remains shows high frequencies of Haplogroup O1 in Liangzhu culture linking this culture to modern Austronesian and Tai-Kadai populations. It is believed that the Liangzhu culture or other associated subtraditions are the ancestral homeland of Austronesian speakers. . . . 
The Liangzhu Culture entered its prime about 4000–5000 years ago, but suddenly disappeared from the Taihu Lake area about 4200 years ago when it reached the peak. Almost no traces of the culture were found from the following years in this area. 
Recent research has shown that rising waters interrupted the development of human settlements several times in this area. This led researchers to conclude the demise of the Liangzhu culture was brought about by extreme environmental changes such as floods, as the cultural layers are usually interrupted by muddy or marshy and sandy–gravelly layers with buried paleo trees.

Some evidence suggests that Lake Tai was formed as an impact crater only 4500 years ago, which could help explain the disappearance of the Liangzhu culture. However, other work does not find an impact crater structure or shocked minerals at Lake Tai.

The latest research is as follows:

Referred to as "China's Venice of the Stone Age", the Liangzhu excavation site in eastern China is considered one of the most significant testimonies of early Chinese advanced civilization. More than 5000 years ago, the city already had an elaborate water management system. Until now, what led to the sudden collapse has been controversial. Massive flooding triggered by anomalously intense monsoon rains caused the collapse. . . .
Data from the stalagmites show that between 4345 and 4324 years ago there was a period of extremely high precipitation. . . . "The massive monsoon rains probably led to such severe flooding of the Yangtze and its branches that even the sophisticated dams and canals could no longer withstand these masses of water, destroying Liangzhu City and forcing people to flee." The very humid climatic conditions continued intermittently for another 300 years[.]
From here citing Haiwei Zhang, et al. "Collapse of the Liangzhu and other Neolithic cultures in the lower Yangtze region in response to climate change." Sci. Adv. (2021). DOI: 10.1126/sciadv.abi9275

Denisovan Remains Associated With Their Stone Tools

More details will follow if I can sleuth out more.

Since the initial identification of the Denisovans a decade ago, only a handful of their physical remains have been discovered. Here we analysed ~3,800 non-diagnostic bone fragments using collagen peptide mass fingerprinting to locate new hominin remains from Denisova Cave (Siberia, Russia). 
We identified five new hominin bones, four of which contained sufficient DNA for mitochondrial analysis. Three carry mitochondrial DNA of the Denisovan type and one was found to carry mtDNA of the Neanderthal type. The former come from the same archaeological layer near the base of the cave’s sequence and are the oldest securely dated evidence of Denisovans at 200 ka (thousand years ago) (205–192 ka at 68.2% or 217–187 ka at 95% probability). 
The stratigraphic context in which they were located contains a wealth of archaeological material in the form of lithics and faunal remains, allowing us to determine the material culture associated with these early hominins and explore their behavioural and environmental adaptations. The combination of bone collagen fingerprinting and genetic analyses has so far more-than-doubled the number of hominin bones at Denisova Cave and has expanded our understanding of Denisovan and Neanderthal interactions, as well as their archaeological signatures.
Samantha Brown, et al., "The earliest Denisovans and their cultural adaptation" Nature Ecology & Evolution (November 25, 2021). DOI: https://doi.org/10.1038/s41559-021-01581-2

Saturday, November 20, 2021

Updated Physical Constant Measurements

There are two main scientific collaborations that summarize the state of current experimental data on the physical constants of the Standard Model and hadron physics: the Particle Data Group (PDG) and the Flavour Lattice Averaging Group (FLAG). 

The bottom lines of the two collaborations are largely consistent, with FLAG claiming smaller uncertainties, although there are some moderate tensions between the averages in CKM matrix element values.

FLAG has just published its 2021 update. Don't read the whole thing (it's 418 pages long).

The values of most of the Standard Model physical constants that aren't evaluated by FLAG (the three charged lepton masses, the W and Z boson masses, the Higgs boson mass, Fermi's constant, the electromagnetic force coupling constant, and Planck's constant) are already known to similar or higher precision than the most precisely determined physical constants that it evaluates. The top quark mass isn't very amenable to lattice QCD methods and is better measured directly. 

FLAG also doesn't evaluate the PMNS matrix elements (applicable to neutrino oscillation) and neutrino mass physical constants.


When more than one number is quoted, I use the MS-bar scheme values for the most complete model. I also quote the comparable PDG values, which are consistent with the FLAG values, except as noted in bold.

Quark Masses

* Up quark mass at 2 GeV = 2.14(8) MeV 

(PDG is 2.16 +0.49 −0.26 MeV)

3.7% relative uncertainty

* Down quark mass at 2 GeV = 4.70(5) MeV

1% relative uncertainty

(PDG is 4.67 +0.48 −0.17 MeV)

* Strange quark mass at 2 GeV = 93.44(68) MeV 

0.7% relative uncertainty 

(PDG is 93 +11 −5 MeV)

* Charm quark mass at c quark energy = 1278(13) MeV

1.0% relative uncertainty

(PDG is 1,270 ± 20 MeV) 

* Bottom quark mass at b quark energy = 4203(11) MeV

0.26% relative uncertainty

(PDG is 4,180 +30 −20 MeV)

* Average of up and down quark mass at 2 GeV = 3.410(43) MeV

1.26% relative uncertainty

(PDG is 3.45 +0.55 −0.15 MeV)


* Ratio of up quark to down quark mass = 0.465(24)

5.16% relative uncertainty

(PDG is 0.47 +0.06 −0.07)

* Ratio of strange quark mass to average of up and down quark mass = 27.23(10)

0.37% relative uncertainty

(PDG is 27.3 +0.7 −1.3)

* Ratio of c quark mass to s quark mass = 11.768(34)

0.29% relative uncertainty

(PDG is 11.72 ± 0.25)
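As an aside on notation, the concise a(b) form used above means a ± b in the trailing digits, and the relative uncertainties I quote are just the one sigma error divided by the central value. A quick sketch (values copied from the FLAG figures above):

```python
# The concise a(b) notation means a ± b in the trailing digits, e.g. the FLAG
# strange quark mass 93.44(68) MeV means 93.44 ± 0.68 MeV.
def rel_uncertainty(value: float, error: float) -> float:
    """One-sigma relative uncertainty, as a percentage."""
    return 100.0 * error / value

# FLAG 2021 central values and one-sigma errors quoted above (MeV where dimensionful).
flag_values = {
    "m_s at 2 GeV": (93.44, 0.68),
    "m_c at m_c": (1278.0, 13.0),
    "m_b at m_b": (4203.0, 11.0),
    "m_s / m_ud": (27.23, 0.10),
}

for name, (value, error) in flag_values.items():
    print(f"{name}: {rel_uncertainty(value, error):.2f}%")
```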

A technical detail regarding the quark masses

FLAG also quotes masses for the Renormalization Group Independent (RGI) scheme, which is arguably a better benchmark. But this scheme is less familiar and less useful for comparisons with other sources, so I omit them here. Generally speaking, those results are approximately proportionately more massive.

CKM Matrix Elements

FLAG also updated its CKM matrix element values determined individually (as opposed to via global SM fits):

* V(us) = 0.2248(7)

The consistent stand alone PDG value is 0.2245(8). The PDG global fit value is 0.22650(48) which is in 2.4σ tension with the FLAG result.

Treating the discrepancy as true uncertainty, however, the relative uncertainty is still about 0.7%.

* V(ud) = 0.97440(17). 

This compares to a stand-alone PDG value of 0.97370(14), which differs by ≈ 3.2σ from the FLAG value. The PDG global fit value is 0.97401(11), which is consistent with the FLAG value.

Treating the discrepancy as true uncertainty, however, the relative uncertainty is still about 0.07%.
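The sigma tensions quoted here treat the FLAG and PDG determinations as independent and divide the difference in central values by the uncertainties added in quadrature. A minimal sketch for the V(ud) case:

```python
import math

def tension_sigma(x1: float, s1: float, x2: float, s2: float) -> float:
    """Tension between two independent measurements, in combined standard deviations."""
    return abs(x1 - x2) / math.hypot(s1, s2)

# FLAG vs. stand-alone PDG values of V(ud), as quoted above.
print(f"{tension_sigma(0.97440, 0.00017, 0.97370, 0.00014):.1f} sigma")  # ≈ 3.2 sigma
```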

Strong Force Coupling Constant

FLAG updates its estimate of the strong force coupling constant value, at the Z boson mass, in five flavor QCD:

* α(s) = 0.1184(8). 

0.68% relative uncertainty

The PDG value is 0.1179(10).

My Summary From June 4, 2021

Thursday, November 18, 2021

Bell Beaker People And Steppe Ancestry In Spain

Bernard's blog discusses a new paper from Vanessa Villalba-Mouco et al., "Genomic transformation and social organization during the Copper Age – Bronze Age transition in southern Iberia" which documents the arrival of steppe ancestry to this region at that time.

Bernard observes as context (translated from French by Google with my further refinements) that:
[P]revious paleogenetic results have shown a strong continuity of the population between the Neolithic and the Chalcolithic in the south of the Iberian Peninsula. However, the end of the Chalcolithic saw an important difference appearing between the north and the south of the Iberian Peninsula with the appearance of individuals, often linked to the Bell Beaker people, carrying a steppe ancestry in the north from 2400 BCE and their absence in the south.

The start of the Bronze Age in the Iberian Peninsula around 2200 BCE marks an important population change throughout the Iberian Peninsula with the omnipresence of individuals of steppe ancestry and the omnipresence of the Y-chromosome haplogroup R1b-P312, absent in the region before 2400 BCE.

The transition between the Chalcolithic and the Bronze Age in southern Spain saw the destruction of fortified settlements like those of Los Millares or sites surrounded by ditches like those of Valencina or Perdigões, as well as the appearance in the south of the El Argar culture characterized by perched habitats, a funeral rite, ceramics and specific metal objects. The origin of this culture is still obscure although certain elements are close to the Bell Beaker culture such as V-perforated buttons, Palmela points or archer's armbands. However, Bell Beaker pottery is absent from the El Argar culture.

The first archaeological antecedents of the Bell Beaker culture appear around 2900 BCE in Southern Iberia, but the earliest documented instance of steppe ancestry in Southern Iberia dates to 2200 BCE and is associated with the Bell Beaker culture (a.k.a. the "campaniforme" culture). 

Thus, the evidence is overwhelming in support of the model that the Bell Beaker culture arose as a Southern Iberian cultural movement that diffused culturally to Europeans with steppe ancestry to their north in Western Europe.

Steppe ancestry, where it is found at the Bronze Age-Copper Age transition, is present at lower proportions in Southern Iberia than in Northern Iberia, but it is ubiquitous at least at some level in the El Argar culture of Southern Iberia by 2050 BCE.

Bronze Age Southern Iberians can generally be modeled well as admixtures of German Bell Beakers and Copper Age Iberians, with an Iranian ancestry component that is detected in Bronze Age Southern Iberians already present in Copper Age Southern Iberians. 

But one late El Argar culture outlier individual in a recent study also had one great-grandparent who was Moroccan in origin.

We care about all of this because the Bell Beaker culture was probably the first linguistically Indo-European culture in Western Europe, which is now overwhelmingly Indo-European linguistically, and it was essentially the final step in bringing Western Europe to something closely approximating its current population genetic mix. Western Europe as we know it, genetically and linguistically, came to be to a great extent when the Bell Beaker people arrived there in the early Bronze Age, around the time of the 4.2 kiloyear climate event.

Tuesday, November 16, 2021

Neutron Stars Aren't That Weird

Much of the terminology in the title and abstract of this new paper, which considers methods of doing subatomic physics calculations with established physical theories from the Standard Model of Particle Physics to determine the properties of neutron stars, would be challenging for my audience, but the bottom line conclusion isn't:

[N]eutron stars do not have quark matter cores in the light of all current astrophysical data.

From here

Thus, neutron stars probably really are just tightly packed neutrons, as hypothesized many decades ago, and not more exotic hypothetical kinds of condensed matter like "quark stars" or "boson stars".

Error In Astronomy Distances Is Systematically Understated

Singh's analysis isn't rocket science, just diligent hard work. But it is also robust, not model dependent, and frankly the kind of reality check that should be applied to the stated error bars of experimental and observational results more often.  

Simply put, the Hubble tension is an illusion created by thinking that our extragalactic distance measurements are more precise than they actually are. This also suggests that the values of the cosmological constant and the dark energy proportion are far more uncertain than they are generally claimed to be. We are not yet in an era of true precision cosmology, as much as we'd like to think that we are.

1. We find that any two distance moduli measurements for the same galaxy differ from each other by 2.07 times the reported one sigma uncertainty on average. 
2. This average difference between distance moduli measurements of the same galaxy as a multiple of reported uncertainty is growing with time of publication, rising to 3.00 times the reported one sigma uncertainty for all distances reported from 2014 to 2018. 
3. This average difference between distance moduli measurements of the same galaxy as a multiple of reported one sigma uncertainty is highest for the standard candles (3.01) including Cepheids (4.26), Type Ia Supernovae (2.85), and Tip of the Red Giant Branch (2.82). 
4. This data points to a possible systematic underestimation of uncertainties in extragalactic distances. 
5. The results also give a possible way out of the Hubble-Lemaitre tension by advocating for increasing the error bars on Hubble-Lemaitre constant measured via distance ladders of standard candles and rulers.
Ritesh Singh, "Evidence for possible systematic underestimation of uncertainties in extragalactic distances and its cosmological implications" arXiv:2111.07872 (November 15, 2021) (published in 366 Astrophys Space Sci 99 (2021) DOI: 10.1007/s10509-021-04006-5).
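For context on Singh's headline number: if two honest measurements of the same galaxy each carried correctly estimated Gaussian errors, the expected average difference between them would be about 1.13 times a single reported sigma, not 2.07. A quick Monte Carlo sketch (my own back-of-the-envelope, assuming differences are expressed in units of a single measurement's reported sigma):

```python
import math
import random

random.seed(0)

# Two honest measurements of the same distance, each with Gaussian errors of
# size sigma, differ by a Gaussian with standard deviation sqrt(2)*sigma, so
# the expected |difference| is sqrt(2)*sigma*sqrt(2/pi) = 2*sigma/sqrt(pi).
sigma = 1.0
n = 200_000
mean_abs_diff = sum(
    abs(random.gauss(0.0, sigma) - random.gauss(0.0, sigma)) for _ in range(n)
) / n

print(round(mean_abs_diff, 2))             # Monte Carlo estimate, ≈ 1.13
print(round(2.0 / math.sqrt(math.pi), 2))  # analytic expectation, ≈ 1.13
```

On this reading, the observed average of 2.07 would imply that reported error bars are understated by a factor of very roughly 2.07/1.13 ≈ 1.8.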

The Precursors In America Failed To Thrive

There is pretty strong evidence (especially well dated footprints from 22,000 to 30,000 years ago) of modern humans in North America, at least prior to and during the Last Glacial Maximum ca. 20,000 years ago, long before the Founding population of the Americas is thought to have arrived around 15,000 years ago.

West Hunter considers conditions that would fit their "failure to thrive" in the New World, as evidenced by a complete lack of human remains, few and marginal possibly human-made tools, and a lack of mass extinctions, in "virgin territory" without competition from other hominins. He notes:

The problem with the idea of an early, pre-Amerindian settlement of the Americas is that (by hypothesis, and some evidence) it succeeded, but (from known evidence) it just barely succeeded, at best. Think like an epidemiologist (they’re not all stupid) – once humans managed to get past the ice, they must have had a growth factor greater than 1.0 per generation – but it seems that it can’t have been a lot larger than that . . . .

A saturated hunter-gatherer population inhabiting millions of square miles leaves a fair number of artifacts and skeletons per millennium – but we haven’t found much. We have, so far, found no skeletons that old. I don’t think we have a lot of totally convincing artifacts, although I’m no expert at distinguishing artifacts from geofacts. (But these were modern humans – how crude do we expect their artifacts to be?)

For-sure footprints we’ve got, and intriguing genetic data.

A priori, I would expect hunter-gatherers entering uninhabited America to have done pretty well, and have high population growth rates, especially after they become more familiar with the local ecology. There is good reason to think that early Amerindians did: Bayesian skyline analysis of their mtDNA indicates fast population growth. They were expert hunters before they ever arrived, and once they got rolling, they seem to have wiped out the megafauna quite rapidly.

But the Precursors do not seem to have become numerous, and did not cause a wave of extinctions (as far as I know. check giant turtles.).
I've underlined some thoughts that I don't agree with fully. It looks like the Younger Dryas was a much bigger factor in mass extinctions relative to the overkill hypothesis than we previously expected. 
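Cochran's point that the growth factor must have been barely above 1.0 per generation can be made concrete with some compounding arithmetic (the founding size, generation length, and population cap below are my own illustrative assumptions, not figures from his post):

```python
# Back-of-the-envelope: a founding band of ~100, ~25-year generations, and
# ~7,000 years between arrival (~22,000 years ago) and the Amerindian
# founding era (~15,000 years ago). All three numbers are illustrative.
founders = 100
generations = (22_000 - 15_000) // 25  # 280 generations

def growth_factor(final_pop: float) -> float:
    """Constant per-generation growth factor needed to reach final_pop."""
    return (final_pop / founders) ** (1.0 / generations)

# Even capping the Precursors at 10,000 people after 7,000 years implies
# growth under ~1.7% per generation, far below a healthy hunter-gatherer
# expansion into empty territory.
print(round(growth_factor(10_000), 3))  # ≈ 1.017
```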

And then there is the genetic evidence. Unlike Cochran, I am quite convinced that it is basically mathematically impossible for this pre-Founding population to have been the source of "Paleo-Asian" autosomal DNA in South America, which is much more likely to have arrived ca. 1200 CE via Oceanians (although other possibilities in that time frame could work as well). The frequency of the Paleo-Asian component is just too variable. And any population that stayed genetically isolated for more than 14,000 years before finally bursting out into South America would be incredibly genetically distinctive (much like the Kalash people, who are incredibly genetically distinctive after their small population was isolated for a much shorter time period than that).

Statistical artifacts of the methods used, and/or selective pressure against some distinctive Oceanian genes that makes the remaining component look Onge-like rather than Papuan-like, could explain the closer f-statistic matches to these populations. A relict non-Oceanian Paleo-Asian population from Northeast Asia arriving in that time frame could also work, but seems less plausible. Loss of distinctive Oceanian autosomal DNA during sustained periods of stable population with low effective population size, or during periods of shrinking populations in times of adverse conditions, could also purge some statistically important genetic identifiers in what is already a small share of the total genome in the people who have it.

Further, some of the more convincing earlier pre-14,000 year ago finds of arguable stone tool making and fire use in the Americas were in North America (including Mesoamerica) or northeastern South America, not in the greater Amazon jungle basin where Paleo-asian ancestry has been found. 

But the points he makes about a "Precursor" (I like Cochran's choice of words here) population's lack of success are still valid. There shouldn't have been Malthusian limits in the New World that capped their population, yet their population growth would have had to be exceedingly slight in the long run to be consistent with the available evidence. He considers some possibilities, and I underline the ones that I think are too implausible to take seriously:

What might have limited their biological success?

Maybe they didn’t have atlatls. The Amerindians certainly did.

Maybe they arrived as fishermen and didn’t have many hunting skills. Those could have been developed, but not instantaneously. An analogy: early Amerindians visited some West Coast islands and must have had boats. But after they crossed the continent and reached the Gulf of Mexico, they had lost that technology and took several thousand years to re-develop it and settle the Caribbean. Along this line, coastal fishing settlements back near the Glacial Maximum would all be under water today.

Maybe they fought among themselves to an unusual degree. I don’t really believe in this, am just throwing out notions.

Maybe their technology and skills set only worked in a limited set of situations, so that they could only successfully colonize certain niches. Neanderthals, for example, don’t seem to have flourished in plains, but instead in hilly country. On the other hand, we don’t tend to think of modern human having such limitations.

One can imagine some kind of infectious disease that made large areas uninhabitable. With the low human population density, most likely a zoonosis, perhaps carried by some component of the megafauna – which would also explain why it disappeared.

What do I think?

In a nutshell, the Precursor population was probably just slightly below the tipping point that they needed to thrive, in terms of population size, knowledge, and the resources they brought with them, but just large enough to establish a community that was marginally sustainable in the long run with inbreeding depression and degraded technology.

I think that the Precursors were probably derived from one or more small expeditions from a population close to the ultimate Founding population of the Americas rooted in Northeast Asia, expeditions that survived, as something of a fluke, where many others that left no trace at all died. 

A Kon-Tiki style transpacific route, or a trans-Atlantic Solutrean hypothesis route, would be technologically anachronistic at 22,000 years ago or more, and both are disfavored by other evidence, including genetic evidence. Claims of hominins in the Americas 130,000 years ago are likewise not credible.

The Precursors probably had no dogs or other domesticated animals on their boat(s). But dogs were, in my opinion, probably a major fitness enhancing technology for the post-Papuan/Australian aborigine wave of modern humans in mainland Asia (the Papuan/Australian wave didn't have dogs), and for the Founding population of the Americas.

They probably had a small founding population that suffered technological degradation similar to what Tasmania experienced when it was separated from Australia for 8,000 years, including the loss of the maritime travel technology needed to reunite with kin left behind in Asia or Beringia. See Joseph Henrich, "Demography and Cultural Evolution", 69(2) American Antiquity 197-214 (April 2004); but see Krist Vaesen, "Population size does not explain past changes in technological complexity" PNAS (April 4, 2016) (disputing this conclusion, unconvincingly IMHO).

For example, as Cochran notes, they may have known how to fish and hunt from boats, but not how to make boats, or how to hunt terrestrially, at first. 

The effective population size of the Founding population of the Americas was ca. 200-300 people; Henrich's hypothesis sees major degradations in technology as the population size falls significantly below 100, which would have been a typical size for a one-off exploratory expedition that was stranded and unable to return.  Once technology is lost, it can be recovered or rebuilt over time, but it takes much longer to innovate than to preserve culture transmitted from previous generations or to imitate neighboring civilizations. 

The loss of technology may not simply have been a matter of raw numbers, either. Vaesen's counter-examples are very small, stable, complete hunter-gatherer communities. But an exploratory expedition may have consisted of bold young people not fully trained in reproducing their culture's technologies, rather than the tribe or band's skilled craftspeople, even if it had enough raw numbers of people to do so.

They probably suffered inbreeding depression greater than the main founding population of the Americas due to a smaller founding population size. A scientific report in Nature (March 5, 2019) notes that:
Franklin has proposed the famous 50/500 rule for minimum effective population size, which has become the threshold to prevent inbreeding depression. This rule specifies that the genetic effective population size (N(e)) should not be less than 50 in a short term and 500 in a long term.
The possibility that the Precursors derived from one or more expeditions with an effective population size of less than 50 in the short term, regardless of its long term size, seems plausible. Even if there were several dozen men on the expedition, it is very plausible that there were fewer than the twenty-five reproductive age women at the outset needed to avoid short run inbreeding depression, and many of them were probably cousins and/or siblings. 
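The connection between the 50/500 rule and a stranded expedition can be sketched with the standard recursion for the expected inbreeding coefficient, F(t) = 1 − (1 − 1/(2Ne))^t (the Ne values below are illustrative assumptions, not data from the report):

```python
def inbreeding(ne: float, t: int) -> float:
    """Expected inbreeding coefficient after t generations at constant effective size Ne."""
    return 1.0 - (1.0 - 1.0 / (2.0 * ne)) ** t

# Illustrative Ne values only: a stranded expedition near Ne = 25 accumulates
# inbreeding roughly twenty times faster than a population at the long-term
# Ne = 500 threshold.
print(round(inbreeding(25, 10), 2))   # ≈ 0.18 after ten generations
print(round(inbreeding(500, 10), 3))  # ≈ 0.01 after ten generations
```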

These inbreeding depression effects could have lasted many, many generations without input from an outside population source, leaving a much less smart and much less fit group within a few generations than in their first generation.

They probably faced challenges to thriving at hunting and gathering due to the ice age that caused the Last Glacial Maximum, which their degraded technology didn't help them overcome. They may have landed in North America fairly near the glaciated area that was particularly impaired.

They probably did go extinct in all, or almost all, of their range after not too many generations. They may not have reached South America at all until after the Last Glacial Maximum, and if they did, they may not have penetrated very far into it, sticking to the Gulf Coast.

To the extent that they didn't go extinct, they probably lost many of their uniparental genetic markers during sustained periods of stable low populations or population busts, as opposed to the preservation of the markers usually found in expanding populations. 

They also probably weren't all that genetically distinct from the Founding population of the Americas from which they were only separated for a few thousand years. A small effective population takes many generations to generate distinct mutations and both the Founding population of the Americas and the Precursors would have had small effective populations for most of the Last Glacial Maximum ice age.

The Founding population, due to its larger founding size, better technology retention, less inbreeding, dogs, and better climate conditions, expanded rapidly. When the much more advanced Founding population arrived, any remnants of the Precursors would have been diluted almost invisibly into the very genetically similar Founding population, and may have died out from competition, or at least shed any distinctive uniparental markers, as well.


"Failure to thrive" is a phrase most commonly used in medicine to describe phenomena without a specific and well-determined cause of a child's lack of development at the pace of normal children in health environments. Usually, it is attributed to poor nutrition either in quality or quantity.

Tuesday, November 9, 2021

Disciplinary Prejudices

Your priors on the likelihood of an uncertain event depend upon where you are coming from.


Premed: "Does this count for a physics credit? Can we shorten the string so I can get it done faster? And can we do one where it hits me in the face? I gotta do a thing for first aid training right after?"

From here

Possible Planet 9 Detection

A new paper conducted a "needle in a haystack" type search for a hypothetical Planet 9, a hypothesis which Wikipedia explains as follows:
Planet Nine is a hypothetical planet in the outer region of the Solar System. Its gravitational effects could explain the unlikely clustering of orbits for a group of extreme trans-Neptunian objects (ETNOs), bodies beyond Neptune that orbit the Sun at distances averaging more than 250 times that of the Earth. These ETNOs tend to make their closest approaches to the Sun in one sector, and their orbits are similarly tilted. These alignments suggest that an undiscovered planet may be shepherding the orbits of the most distant known Solar System objects. Nonetheless, some astronomers question the idea that the hypothetical planet exists and instead assert that the clustering of the ETNOs orbits is due to observing biases, resulting from the difficulty of discovering and tracking these objects during much of the year.

Based on earlier considerations, this hypothetical super-Earth-sized planet would have had a predicted mass of five to ten times that of the Earth, and an elongated orbit 400 to 800 times as far from the Sun as the Earth. The orbit estimation was refined in 2021, resulting in a somewhat smaller semi-major axis of 380 (+140/−80) AU. This was more recently updated to 460 (+160/−100) AU.  
Konstantin Batygin and Michael E. Brown suggested that Planet Nine could be the core of a giant planet that was ejected from its original orbit by Jupiter during the genesis of the Solar System. Others proposed that the planet was captured from another star, was once a rogue planet, or that it formed on a distant orbit and was pulled into an eccentric orbit by a passing star.

While sky surveys such as Wide-field Infrared Survey Explorer (WISE) and Pan-STARRS did not detect Planet Nine, they have not ruled out the existence of a Neptune-diameter object in the outer Solar System. The ability of these past sky surveys to detect Planet Nine was dependent on its location and characteristics. Further surveys of the remaining regions are ongoing using NEOWISE and the 8-meter Subaru Telescope. Unless Planet Nine is observed, its existence is purely conjectural. Several alternative hypotheses have been proposed to explain the observed clustering of trans-Neptunian objects (TNOs).
The paper narrowed the search down to one candidate for further examination, which is at the near end of the hypothesized distance range and smaller than some hypotheses suggest. The paper and its abstract are as follows:
I have carried out a search for Planet 9 in the IRAS data. At the distance range proposed for Planet 9, the signature would be a 60 micron unidentified IRAS point source with an associated nearby source from the IRAS Reject File of sources which received only a single hours-confirmed (HCON) detection. The confirmed source should be detected on the first two HCON passes, but not on the third, while the single HCON should be detected only on the third HCON. I have examined the unidentified sources in three IRAS 60micron catalogues: some can be identified with 2MASS galaxies, Galactic sources or as cirrus. The remaining unidentified sources have been examined with the IRSA Scanpi tool to check for the signature missing HCONs, and for association with IRAS Reject File single HCONs. No matches of interest survive.

For a lower mass planet (< 5 earth masses) in the distance range 200-400 AU, we expect a pair or triplet of single HCONs with separations 2-35 arcmin. Several hundred candidate associations are found and have been examined with Scanpi. A single candidate for Planet 9 survives which satisfies the requirements for detected and non-detected HCON passes. A fitted orbit suggests a distance of 225 ± 15 AU and a mass of 3-5 earth masses. Dynamical simulations are needed to explore whether the candidate is consistent with existing planet ephemerides. If so, a search in an annulus of radius 2.5-4 deg centred on the 1983 position at visible and near infrared wavelengths would be worthwhile.
Michael Rowan-Robinson "A search for Planet 9 in the IRAS data" arXiv:2111.03831 (November 6, 2021) (Accepted for publication in MNRAS).

It is expected to be 600 times fainter to an Earth-based observer than Pluto. The fact that it is a struggle to determine whether there is a decent sized planet in the Solar System, while far more distant galaxies and stars are routinely observed, also illustrates the extent to which even state of the art astronomy techniques are far more limited in their ability to see objects in space that are not stars than in their ability to see stars, diffuse radiation, or gravitational waves.
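The extreme faintness follows from how reflected sunlight scales: illumination falls off as 1/d² and the return trip to Earth adds roughly another 1/d², so observed flux goes roughly as R²/d⁴. A rough sketch of the distance factor alone (using Pluto's ~39 AU and the candidate's ~225 AU; this is my own illustration, not the paper's calculation):

```python
def relative_flux(radius: float, distance_au: float) -> float:
    """Reflected-light flux from a distant Solar System body, up to a constant factor."""
    return radius**2 / distance_au**4

# Distance factor alone: moving a body from Pluto's ~39 AU out to the
# candidate's ~225 AU dims it by (225/39)**4, about a factor of 1,100,
# before any credit for the candidate's larger assumed radius.
print(round(relative_flux(1.0, 39.0) / relative_flux(1.0, 225.0)))
```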