Friday, October 22, 2021

Filipino Ancestry And Four Historic Native American Subpopulations In Mexico

Native American genetic ancestry in Mexico is derived from three known ancestral genetic populations (west-central, Nahua, and Mayan) and a fourth "ghost population" originating in Sonora.

Researchers were also able to characterize a Filipino ancestry component derived from the historically attested 17th century Manila Galleon trade between the colonial Spanish Philippines and the Pacific port of Acapulco in Spanish Mexico, a contribution that had not previously been identified in Mexican ancestry. The body text notes that:
In order to pinpoint the origin of the Asian component in Mexico, a MAAS-MDS was performed with a reference panel of East Asian, Southeast Asian and Oceanian populations. Cosmopolitan Mexicans having more than 5% combined East Asian and Melanesian ancestry were included, resulting in one individual from Sonora, one from Oaxaca, one from Yucatan and twelve from Guerrero. 
Sonora and Yucatan grouped near Chinese reference populations; Oaxaca clustered broadly with maritime Southeast Asia; while Guerrero showed a heterogeneous profile. No cosmopolitan Mexican sample showed Melanesian variation; therefore, the MAAS-MDS plot was zoomed into, excluding populations with Melanesian contributions, for visibility. Most individuals from Guerrero clustered with maritime Southeast Asia, except for one individual positioned near southern China. Individuals from Guerrero resemble western Indonesian and non-Negrito Filipino populations, specifically those from Sumatra, Mindanao, Visayas and Luzon. Admixture dating of these Asian haplotypes in Guerrero using Tracts fit a single pulse admixture model at 13 generations ago, or in 1620 CE using 30 years per generation. . . .
This coincides with the Manila Galleon slave trade during the colony, which had a period of activity from 1565 to 1679 CE. This slave trade route originated after the need for additional labor arose due to the demographic collapse of the native populations, and ended when these Asian slaves, mostly residing in Spanish colonial Asia, were actively declared indigenous vassals of the crown and thus free. At that time the Atlantic slave trade from Africa became predominant over the Pacific route. This Southeast Asian component from the Manila Galleon trade could have extended to neighboring coastal Pacific areas of southern Mexico, as could be the case of the individual from Oaxaca. 
Moreover, although historical records report the residence of “Chinos” predominantly in Guerrero, smaller numbers are also recorded in places such as Colima, Guadalajara, Zacatecas, San Luis Potosi, Veracruz, Puebla, Toluca and, in particular, Mexico City. Thus, we do not rule out the presence of this component in the other populations from the study due to insufficient sampling or statistical power, as well as locations not considered in this study.

On the other hand, East Asian ancestry in Sonora and Yucatan, both distant locations from Guerrero, could possibly represent post-colonial migration events, such as Chinese immigration, mainly from the Guangdong Province, into northern Mexico and the immigration of Korean henequen workers into the Yucatan Peninsula, both occurring during and after the Porfiriato Period (between 1880 and 1910 CE). However, more extensive sampling across the country is needed to shed light on these genetic signals in order to associate them with these post-colonial historical events.
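The generation-to-calendar-year arithmetic in the quoted admixture dating is simple to sketch. The paper reports 13 generations at 30 years per generation; the sampling year below is my assumption, chosen only to reproduce the paper's 1620 CE figure:

```python
# Rough sketch of the generation-to-date conversion used in admixture
# dating (e.g., with the Tracts software quoted above). The sampling
# year is an assumption; the paper reports 13 generations and 30
# years per generation.

def admixture_date(generations, years_per_generation=30, sampling_year=2010):
    """Convert an admixture time in generations to an approximate calendar year."""
    return sampling_year - generations * years_per_generation

print(admixture_date(13))  # -> 1620, matching the paper's 1620 CE estimate
```

Note how sensitive the date is to the assumed generation length: at 25 years per generation the same 13 generations would land in the late 1600s instead.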

The discussion section of the paper is unusually multidisciplinary and thoughtful. Read the whole thing. 

The paper and its abstract are as follows:
Mexico has considerable population substructure due to pre-Columbian diversity and subsequent variation in admixture levels from trans-oceanic migrations, primarily from Europe and Africa, but also, to a lesser extent, from Asia. Detailed analyses exploring sub-continental structure remain limited and post-Columbian demographic dynamics within Mexico have not been inferred with genomic data. We analyze the distribution of ancestry tracts to infer the timing and number of pulses of admixture in ten regions across Mexico, observing older admixture timings in the first colonial cities and more recent timings moving outward into southern and southeastern Mexico. 
We characterize the specific origin of the heterogeneous Native American ancestry in Mexico: a widespread western-central Native Mesoamerican component in northern Aridoamerican states and a central-eastern Nahua contribution in Guerrero (southern Mexico) and Veracruz to its north. Yucatan shows lowland Mayan ancestry, while Sonora exhibits a unique northwestern native Mexican ancestry matching no sampled reference, each consistent with localized indigenous cultures. 
Finally, in Acapulco, Guerrero a notable proportion of East Asian ancestry was observed, an understudied heritage in Mexico. We identified the source of this ancestry within Southeast Asia—specifically western Indonesian and non-Negrito Filipino—and dated its arrival to approximately thirteen generations ago (1620 CE). This points to a genetic legacy from the 17th century Manila Galleon trade between the colonial Spanish Philippines and the Pacific port of Acapulco in Spanish Mexico. Although this piece of the colonial Spanish trade route from China to Europe appears in historical records, it has been largely ignored as a source of genetic ancestry in Mexico, neglected due to slavery, assimilation as “Indios” and incomplete historical records.
Juan Esteban Rodríguez-Rodríguez, "Admixture dynamics in colonial Mexico and the genetic legacy of the Manila Galleon" bioRxiv (October 16, 2021) (open access). 

Monday, October 18, 2021

Another Search For Lepton Universality Violation Comes Up Empty

While there are tensions with the Standard Model prediction of lepton universality (i.e. that charged leptons have the same properties except mass) from a couple of kinds of B meson decay, only one of which exceeds three sigma, a test of the isospin partners of those same decays (albeit with less precision) finds no statistically significant deviation from the Standard Model prediction, making the anomaly less likely to be real.
Tests of lepton universality in B0→K0Sℓ+ℓ− and B+→K∗+ℓ+ℓ− decays where ℓ is either an electron or a muon are presented. The differential branching fractions of B0→K0Sℓ+ℓ− and B+→K∗+ℓ+ℓ− decays are measured in intervals of the dilepton invariant mass squared. The measurements are performed using proton-proton collision data recorded by the LHCb experiment, corresponding to an integrated luminosity of 9 fb−1. The results are consistent with the Standard Model and previous tests of lepton universality in related decay modes. The first observation of B0→K0Sℓ+ℓ− and B+→K∗+ℓ+ℓ− decays is reported.
LHCb collaboration, "Tests of lepton universality using B0→K0Sℓ+ℓ− and B+→K∗+ℓ+ℓ− decays" arXiv:2110.09501 (October 18, 2021).

This tends to support an interpretation of the prior lepton universality violation indications as a fluke or some sort of unrecognized systematic error, because these are isospin partners of the cases where apparent violations were seen. The body text of the Letter explains this, while noting that the reduced precision of this measurement is also a factor:
Forces in the SM couple to the charged leptons with equal strength, which is referred to as lepton universality. Therefore, these ratios are predicted to be very close to unity, with small corrections due to the muon-electron mass difference. Furthermore, these ratios benefit from precise cancellation of the hadronic uncertainties that affect predictions of the branching fractions and angular observables. Significant deviation from unity in such ratios would therefore constitute unambiguous evidence of BSM physics.

The ratio RK∗0, measured by the LHCb collaboration using the data collected in the q2 regions 0.045 < q2 < 1.1 GeV2/c4 and 1.1 < q2 < 6.0 GeV2/c4, is in tension with the SM predictions at 2.2–2.4 and 2.4–2.5 standard deviations (σ), respectively. A measurement of RK+ performed in the region 1.1 < q2 < 6.0 GeV2/c4 deviates from the SM by 3.1 standard deviations. The analogous ratio measured using Λ0b → pK−ℓ+ℓ− decays, RpK, is consistent with the SM within one standard deviation. All four measurements show a deficit of b → sµ+µ− decays with respect to b → se+e− decays.

In addition, angular observables and branching fractions of b → sµ+µ− decays have been measured, with several in tension with the SM. However, the extent to which they may be affected by residual quantum chromodynamics contributions remains uncertain. Intriguingly, it is possible to account for all these anomalies simultaneously through the modification of the b → s coupling in a model-independent way. Such a modification can be generated by the presence of a heavy neutral boson or a leptoquark, as well as in models with supersymmetry, extra dimensions, and extended Higgs sectors.

The B0→K0Sℓ+ℓ− and B+→K∗+ℓ+ℓ− decays are the isospin partners of B+→K+ℓ+ℓ− and B0→K∗0ℓ+ℓ− decays and are expected to be affected by the same NP contributions. Testing lepton universality by measuring the ratios RK0S and RK∗+ can therefore provide important additional evidence for or against NP. However, while these decays have similar branching fractions to their isospin partners, O(10−6) to O(10−7), they suffer from a reduced experimental efficiency at LHCb due to the presence of a long-lived K0S or π0 meson in the final state.

Proponents of the anomaly argue that the central values weren't that much different from previous ones despite a lack of statistical significance. 

It is also worth considering that a b → s transition does not happen at tree level in the Standard Model. It requires a W− boson interaction taking a b quark to an up-like quark, followed by a W+ interaction taking that up-like quark to an s quark, at the one-loop level, or a further iteration of up-quark and down-quark ping-pong at the three and/or five loop level. 

And an sµ+µ− final state can become an se+e− final state plus neutrinos via an additional round of weak force decays by the muons (along with additional decay products, some of which would cancel out virtually or be lost to invisible neutrinos), although these decays would usually be rather slow compared to the others (particularly since the muons would be traveling at relativistic speeds at these energies).

The complexity of the possible paths makes it harder to model correctly. 
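For readers unfamiliar with how these tensions are quantified, a naive back-of-the-envelope version of a lepton universality test looks like the sketch below. The numbers are round illustrative values, not LHCb's published measurement, and the real analyses use a full likelihood treatment rather than this simple ratio-over-sigma estimate:

```python
# Toy illustration of comparing a lepton-universality ratio (such as RK)
# with the Standard Model, which predicts a value very close to 1.
# The measured value and uncertainty below are hypothetical round
# numbers, not LHCb's actual results.

def tension_sigma(r_measured, r_uncertainty, r_sm=1.0):
    """Naive significance of a deviation, in standard deviations."""
    return abs(r_measured - r_sm) / r_uncertainty

print(round(tension_sigma(0.85, 0.05), 1))  # -> 3.0
```

The point of such ratios is that hadronic uncertainties largely cancel between the muon and electron modes, so even a modest deviation from unity would be meaningful if the uncertainty is small enough.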

Denisovan Ancient DNA Found In Tibet

I apparently missed this one when it came out last year. The existence of this ancient DNA was paradigm-confirming, as modern Tibetan DNA shows that high altitude adaptations in the region were introgressions from Denisovans. 
Two archaic lineages overlapped with modern humans outside of Africa: the well-studied Neanderthals and their more mysterious cousins, the Denisovans. Denisovan remains are rare, being limited to Denisova Cave in Siberia and a putative, undated jaw from Tibet. However, there is evidence for multiple introgressions from Denisovans into modern-day humans, especially in Australasian populations. By examining the sediment of Baishiya Karst Cave located on a high plateau in Tibet, Zhang et al. identified ancient mitochondrial DNA from Denisovans indicating their presence at about 100 thousand, 60 thousand, and possibly 45 thousand years ago. This finding provides insight into the timing and distribution of Denisovans in Asia and extends the time of occupation of the Tibetan plateau by hominins.
A late Middle Pleistocene mandible from Baishiya Karst Cave (BKC) on the Tibetan Plateau has been inferred to be from a Denisovan, an Asian hominin related to Neanderthals, on the basis of an amino acid substitution in its collagen. Here we describe the stratigraphy, chronology, and mitochondrial DNA extracted from the sediments in BKC. We recover Denisovan mitochondrial DNA from sediments deposited ~100 thousand and ~60 thousand years ago (ka) and possibly as recently as ~45 ka. The long-term occupation of BKC by Denisovans suggests that they may have adapted to life at high altitudes and may have contributed such adaptations to modern humans on the Tibetan Plateau.
Dongju Zhang, et al., "Denisovan DNA in Late Pleistocene sediments from Baishiya Karst Cave on the Tibetan Plateau" 370 (6516) Science 584-587 (October 30, 2020).

Tuesday, October 12, 2021

New World Hepatitis B Virus Strain Diverged 17Kya to 20Kya

A new study has examined ancient hepatitis B virus DNA. As Bernard's blog explains (in a translation from the original French by Google), because this virus is transmitted solely from humans to other humans (with some slight overlap with great apes), its evolution over time can be used as a proxy for ancestrally informative genetic information about people.

According to the WHO, the hepatitis B virus affected 257 million people around the world in 2015, of whom about a million died. It is transmitted through contact with body fluids, mainly during sexual relations or in perinatal contexts. There is no environmental or animal reservoir. Therefore, this virus spreads with the dispersal of human beings. The latest paleogenetic studies have made it possible to find the hepatitis B virus in skeletons belonging to different periods, in particular in the Neolithic of Europe.
The Phylogeny Of Hepatitis B Virus Strains:

The figure below shows in A and B the position of the various old samples and in C the current distribution of the various genotypes of the virus:

The New World Strains And Their Implications

Particularly notable is the estimated divergence time, based on genetic mutation rates, of the Hep B virus strains found in the Americas (related to modern strains F and H) from the strains found in Eurasia (related to modern human strains A, B, C, D, E, G, and I, and to the modern great ape strain J): a split of 17,000 to 20,000 years ago, which is around the time of the Last Glacial Maximum.

Notably, the predominantly American strains of Hep B with this deep ancestral split are now largely confined to western South America and Mesoamerica. Presumably, American strains of the Hep B virus were replaced by Old World strains, first by the Paleo-Eskimo and Thule waves of migration in Arctic and Sub-Arctic North America (which were probably one source of strain B), and then by strains brought by post-Columbian migrants to the Americas, who probably initially brought strains A and D, with later immigration to North America bringing strains B and C from Asia.

The estimated timing of the genetic divergence of private American Hep B strains in modern and ancient DNA is more recent than newly announced, well-dated archaeological evidence (from footprints in New Mexico) of a modern human presence in the Americas at least 23,000 years ago. Thus, this adds to other evidence suggesting that the first wave of modern humans in the Americas had little ecological impact and made little, if any, genetic contribution to subsequent waves of modern humans on the two continents. 

The high level of intra-population variation in "Paleo-Asian" ancestry in South American populations also strongly suggests that this Paleo-Asian ancestry cannot have its origins in these earliest modern humans in the Americas. Contact between indigenous South Americans and Polynesian mariners ca. 1200 CE, not far from where the first traces of Paleo-Asian ancestry were observed in indigenous American populations, is a much more plausible explanation for the Paleo-Asian ancestry seen at low frequencies in South American tribes. 

The date of the Old World v. New World Hep B strain split also indicates that the ancient strains of Hep B circulating in Asia in the Upper Paleolithic era, before entering the Americas, were entirely replaced in Asia in the Neolithic era or later. And the timing of the split suggests that the private American strains of Hep B are more likely to have a source in the Northeast Asian/Siberian component of the founding population of the Americas (which has more recent genetic connections with West Eurasia) than in the more East Asian component (which may have been free of Hep B in the Upper Paleolithic era). There is no way to confirm, however, that earlier East Asian strains of Hep B weren't merely replaced by later, more virulent ones, and given the small founding population from each component, strains of Hep B circulating in Asia at the time could simply have failed to reach the Americas by random chance. 

The Phylogeny Of Old World Hep B Strains

The split between East and Southeast Asian (and Arctic North American) strains of the Hep B virus (B, C and I) and one found in Europe and Africa (A) dates to about 5500 BCE (which predates Austronesian maritime expansion in Oceania and island Southeast Asia), in the Middle Neolithic era but with the expansion of the current strains mostly dating to after 1000 BCE. 

The split between both of these strains and the strain with the most global distribution (D), which is also ancestor to a predominantly African strain (E), dates to about 6500 BCE, which is early to middle Neolithic (again, with a mostly post-Bronze Age expansion).

The Hep B virus strains related to modern human strain G and modern great ape strain J were predominant from the Mesolithic to the Bronze Age in Europe and Anatolia, but are now only a tiny share of the total mix, and might have been missed entirely but for the intensity of medical genetic research using European samples.

The fact that the non-human great ape strain of the Hep B virus (J) has a more recent mutation-dated split, in the Mesolithic era (ca. 13,000 BCE to 10,000 BCE), than the split between the New World and Old World strains is also notable. Given the predominance of human strains of modern Hep B and the timing, human-to-great-ape transmission somewhere in tropical or subtropical sub-Saharan Africa, on at least two separate occasions in the Mesolithic era, seems far more likely than the reverse scenario. These transmissions probably involved great ape exposure to the blood of an infected human rather than intercourse, although there is no way to tell definitively.

Wallacea and Sahul

We can't tell from the available data if Hep B was present and replaced by later strains in Wallacea or the Sahul region in the Upper Paleolithic era, as it was in Asia and North America, or if it arrived later (with the arrival of the dingo sometime ca. 8000 BCE to 3000 BCE, via later Asian maritime travelers, or through later maritime contact with Europeans). 

Given the small founding populations of Wallacea and the Sahul region in the Upper Paleolithic era (ca. 50,000 to 70,000 years ago), which was probably in the low hundreds, it is quite possible that Hep B was absent at that time (although it was present in the Americas which has a similarly small founding population).

But it is also possible that the absence of evidence simply reflects a limited supply of ancient viral DNA from the region and limited whole genome sampling of Hep B cases among aboriginal Australians and indigenous peoples to the east of the Wallace line.
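The small-founding-population argument above can be made concrete with a toy calculation. Both the founder counts and the carrier prevalence below are hypothetical illustrations, not estimates from the paper:

```python
# Toy sketch: probability that a founding population of n people carries
# no Hep B at all, if each founder independently has carrier probability p.
# The numbers used here are hypothetical, purely for illustration.

def prob_virus_absent(n_founders, carrier_prevalence):
    """Chance that no founder is a carrier, assuming independence."""
    return (1.0 - carrier_prevalence) ** n_founders

# With ~200 founders and a 2% prevalence, complete absence is unlikely...
print(round(prob_virus_absent(200, 0.02), 3))  # -> 0.018
# ...but with a few dozen founders it becomes quite plausible.
print(round(prob_virus_absent(40, 0.02), 2))   # -> 0.45
```

The sketch shows why a founding population "in the low hundreds" sits near the boundary: whether the virus comes along is close to a coin flip for plausible prevalence values, which is consistent with Hep B crossing into the Americas but possibly not into Sahul.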

Monday, October 11, 2021

The Story Of A Random High Energy Physics Anomaly

One of the dirty little secrets of high energy physics is that there is an embarrassment of seemingly quite statistically significant discrepancies between theoretical expectations and experimental results (and not infrequently between different theoretical expectations). 

A typical paper describing such a discrepancy without unnecessary hype is this one from today's preprints.

It was made possible because the measurements of the decays at the Belle experiment were significantly more precise than the measurements made at the previous ALICE experiment, which weren't precise enough to cleanly distinguish experimental uncertainty from substantive differences between theoretical predictions and experimental results at a sufficiently high statistical significance.

For the most part, the discrepancies arise because doing quantum chromodynamics (QCD) calculations and predictions (i.e. calculations involving the strong force of the Standard Model) is almost as much an art as a science, since the only calculations that can be done involve approximations of the largely intractable exact equations of QCD, let alone of the full Standard Model with all electroweak corrections. 

It is possible that some results point to "new physics" beyond the Standard Model, but it is hard to know which discrepancies are real and which are flawed predictions. Figuring out which oversimplifications do and don't matter is critical for distinguishing true anomalies from mere artifacts of oversimplified methods used to make theoretical calculations without undue computational effort.

The paper discusses the fraction of decays of some short-lived spin-1/2 baryons (the analogs of protons and neutrons with heavier valence quarks) into particular decay products. There are more than six dozen possible decays of these charmed baryons (meaning that they have exactly one valence charm quark). 

This study focuses on two or three such possibilities in each case, with moderate frequency, in which a charm quark decays to a strange quark while emitting a W+ boson that decays to a charged anti-lepton and a neutrino of the same flavor as the charged anti-lepton. 

The positively charged charmed lambda baryons and neutral charmed xi baryons differ in that the former has an up quark where the latter has a strange quark.

(1) the decay of a positively charged charmed lambda baryon (valence quarks udc, a mass of about 2286 MeV, and a mean lifetime of about 202 femtoseconds) to a neutral lambda baryon (valence quarks uds, a mass of about 1116 MeV, and a mean lifetime of about 26,320 femtoseconds), together with a charged anti-lepton (i.e. a positron, anti-muon or anti-tau lepton) and the counterpart neutrino, and 

(2) the decay of a neutral charmed xi baryon (valence quarks dsc, a mass of about 2468 MeV, and a mean lifetime of about 456 femtoseconds) to a negatively charged xi baryon (valence quarks dss, a mass of about 1322 MeV, and a mean lifetime of about 16,390 femtoseconds), together with a charged anti-lepton (i.e. a positron or anti-muon) and the counterpart neutrino. 
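Using the masses quoted above, a quick check shows how closely the two decays parallel each other in available energy. This is only a naive mass-difference comparison; the actual branching fraction predictions involve form factors and the differing lifetimes, not just phase space:

```python
# Mass differences (Q values, ignoring the lepton masses) for the two
# semileptonic decays discussed above, using the quoted masses in MeV.

M_LAMBDA_C = 2286   # positively charged charmed lambda baryon (udc)
M_LAMBDA = 1116     # neutral lambda baryon (uds)
M_XI_C0 = 2468      # neutral charmed xi baryon (dsc)
M_XI_MINUS = 1322   # negatively charged xi baryon (dss)

q_lambda = M_LAMBDA_C - M_LAMBDA  # energy released in decay (1)
q_xi = M_XI_C0 - M_XI_MINUS       # energy released in decay (2)
print(q_lambda, q_xi)  # -> 1170 1146, within about 2% of each other
```

The near-identical Q values are part of why the naive SU(3)-symmetric expectation of similar branching fractions seemed reasonable before the measurement.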

The strong parallels between these two kinds of decays suggest that their proportions of total decays should be similar to each other. But this naive observation wasn't as successful as expected.

The observed proportion of decays of these types is about a third of the theoretically predicted value in each case using a particularly simplified model (one based on SU(3) symmetry, disregarding differences between the three lighter quark masses and using lambda decays as a benchmark), a discrepancy nominally well over five sigma in two different related decays. But this is not very exceptional, because it probably simply means that the theoretical calculation is off by a factor of three for some reason related to its oversimplification.

The authors then speculate on which oversimplification in their theoretical prediction was most likely the source of the problem. 

After considering several possibilities, they suggest in their conclusion that the most likely issue was their initial neglect of the mass difference between the strange quark and the first generation up and down quarks, especially in the third valence quark position, while the other oversimplifications in their theoretical prediction probably weren't nearly as important.

From the authors' perspective, the paper's takeaway is that including this one additional complicating factor in the calculations changes the predicted outcome by a factor of three. The implication is that this is a factor that one can't afford to ignore.

Another element of the analysis, not focused upon by the authors, is that both the theoretical result and the experimental data support the Standard Model rule of "lepton universality," which holds that electrons, muons and tau leptons differ from each other in only one property, mass, and that their antiparticle counterparts behave the same way, in a decay system with significant similarities to the small number of B meson decays in which lepton universality seems to be violated. In this case, there is no statistically significant difference between the rates of decays into different flavors of charged leptons.

Any theory explaining lepton universality violations has to explain why the vast majority of particle decays are consistent with lepton universality, while a few special cases seemingly are not. In other words, it has to figure out what makes the exceptions special.

The preprint and its abstract are as follows:

Xiao-Gang He, et al., "SU(3) symmetry and its breaking effects in semileptonic heavy baryon decays" arXiv:2110.04179 (October 8, 2021).

Monday, October 4, 2021

Constraints On Primordial Gravitational Waves Strengthened

The latest BICEP experiment analysis, using data through 2018, further constrains the magnitude of any primordial gravitational waves (strictly speaking, the "tensor-to-scalar ratio").

This further narrows the parameter space allowed for cosmological inflation theories, and the data are consistent with the non-existence of primordial gravitational waves. It also continues to be consistent with a universe that isn't quite scale free.

The abstract of the preprint states: 

The likelihood analysis yields the constraint 

r0.05 < 0.036 at 95% confidence. 
Running maximum likelihood search on simulations we obtain unbiased results and find that σ(r)=0.009.
These are the strongest constraints to date on primordial gravitational waves.

The body text notes that:

The BKP analysis yielded a 95% confidence constraint r0.05 < 0.12, which BK14 improved to r0.05 < 0.09, and BK15 improved to r0.05 < 0.07. The BK18 result described in this letter, r0.05 < 0.036, represents a fractional improvement equivalent to the two previous steps combined. The BK18 simulations have a median 95% upper limit of r0.05 < 0.019. 
The distributions of maximum likelihood r values in simulations where the true value of r is zero gave σ(r0.05) = 0.020 for BK15 which is reduced to σ(r0.05) = 0.009 for BK18. . . .

The system is projected to reach σ(r) ∼ 0.003 within five years with delensing in conjunction with SPT3G.
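The "two previous steps combined" claim in the quoted passage is easy to check arithmetically from the successive upper limits it lists:

```python
# The successive 95% upper limits on r0.05 quoted above, and the
# fractional improvement of each step (a smaller ratio means a
# bigger improvement).

limits = {"BKP": 0.12, "BK14": 0.09, "BK15": 0.07, "BK18": 0.036}

step_bk14 = limits["BK14"] / limits["BKP"]   # -> 0.75
step_bk15 = limits["BK15"] / limits["BK14"]  # -> ~0.78
step_bk18 = limits["BK18"] / limits["BK15"]  # -> ~0.51

# BK18's single step (~0.51) roughly matches the product of the two
# previous steps (0.75 * 0.78 ~ 0.58), i.e. "a fractional improvement
# equivalent to the two previous steps combined," as the paper puts it.
print(round(step_bk14, 2), round(step_bk15, 2), round(step_bk18, 2))
```

The same arithmetic shows why the projected σ(r) ~ 0.003 would be another factor-of-three tightening over the current σ(r) = 0.009.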

Previous coverage at this blog on October 6, 2020, July 24, 2017 and March 18, 2014. The chart from October 6, 2020 was as follows:

A short 2019 paper reviewed some of the cosmological inflation models that are still consistent with this data.

Old Modern Human Remains In Indonesia

There aren't a lot of recovered modern human remains in Wallacea dating from prehistoric times, so even a relatively unexceptional find is notable. This report did not (or has not yet) successfully sequenced ancient DNA.

Major gaps remain in our knowledge of the early history of Homo sapiens in Wallacea. By 70–60 thousand years ago (ka), modern humans appear to have entered this distinct biogeographical zone between continental Asia and Australia. Despite this, there are relatively few Late Pleistocene sites attributed to our species in Wallacea. H. sapiens fossil remains are also rare. Previously, only one island in Wallacea (Alor in the southeastern part of the archipelago) had yielded skeletal evidence for pre-Holocene modern humans. 
Here we report on the first Pleistocene human skeletal remains from the largest Wallacean island, Sulawesi. The recovered elements consist of a nearly complete palate and frontal process of a modern human right maxilla excavated from Leang Bulu Bettue in the southwestern peninsula of the island. Dated by several different methods to between 25 and 16 ka, the maxilla belongs to an elderly individual of unknown age and sex, with small teeth (only M1 to M3 are extant) that exhibit severe occlusal wear and related dental pathologies. The dental wear pattern is unusual. This fragmentary specimen, though largely undiagnostic with regards to morphological affinity, provides the only direct insight we currently have from the fossil record into the identity of the Late Pleistocene people of Sulawesi.
Adam Brumm, et al., "Skeletal remains of a Pleistocene modern human (Homo sapiens) from Sulawesi" PLOS One (September 29, 2021).

Wednesday, September 29, 2021

GR v. Newtonian Gravity

Another paper (alas, poorly written due to ESL issues) by an independent author examines the difference between General Relativity and Newtonian gravity in the case of two massive bodies, rather than one massive body and a test particle of negligible mass.

The result, analytically determined, confirms Deur's analysis that the distinction is not immaterial in some circumstances, and that this could be the underlying mechanism of modified Newtonian dynamics.

The metric tensor in the four dimensional flat space-time is represented as the matrix form and then the transformation is performed for successive Lorentz boost. After extending or more generalizations the transformation of metric is derived for the curved space-time, manifested after the synergy of different sources of mass. The transformed metric in linear perturbation interestingly reveals a shift from Newtonian gravity for two or more than two body system.
Shubhen Biswas, "The metric transformations and modified Newtonian gravity" arXiv:2109.13515 (September 28, 2021).

Tuesday, September 28, 2021

Quick Miscellaneous Physics Results

Neutrino Physics 

A new paper from the Neutrino-4 experiment makes the case for a sterile neutrino and also estimates the neutrino masses, which are very high compared to other experiments: an electron neutrino mass of 0.8 eV, a muon neutrino mass of 0.4 eV, a tau neutrino mass of less than 0.6 eV, and a sterile neutrino mass of 2.7 eV. I am highly skeptical of the result, not least because the mass predictions are out of line with other results. This is a screenshot of the abstract in the paper itself, used to preserve the fussy formatting:

* Another new review of the sterile neutrino question can be found here (this update added September 29, 2021):
Two anomalies at nuclear reactors, one related to the absolute antineutrino flux, one related to the antineutrino spectral shape, have drawn special attention to the field of reactor neutrino physics during the past decade. Numerous experimental efforts have been launched to investigate the reliability of flux models and to explore whether sterile neutrino oscillations are at the base of the experimental findings. This review aims to provide an overview on the status of experimental searches at reactors for sterile neutrino oscillations and measurements of the antineutrino spectral shape in mid-2021. 
The individual experimental approaches and results are reviewed. Moreover, global and joint oscillation and spectral shape analyses are discussed. 
Many experiments allow setting constraints on sterile oscillation parameters, but cannot yet cover the entire relevant parameter space. Others find evidence in favour of certain parameter space regions. In contrast, findings on the spectral shape appear to give an overall consistent picture across experiments and allow narrowing down contributions of certain isotopes.

* Neutrino-nucleon collision models still have kinks to be worked out in the low energy, forward muon angle regime where models fail to adequately account for the extent to which events in this part of the parameter space are suppressed. The authors speculate on what might be missing from the models but aren't really sure why the discrepancy arises.

* Neutrino data from experiments and neutrino data from cosmic ray observations are reasonably consistent with each other.

Other Physics

* The charge radius of the proton is measured to be 0.840(4) fm (with conservative rounding assumptions), consistent with prior experimental measurements from muonic hydrogen of 0.840 87(39) fm, and with better recent measurements using ordinary hydrogen, such as a 2019 measurement that found a radius of 0.833(10) fm. 

In 2014, the CODATA average measurement had stated that the charge radius of the proton was 0.8751(61) fm, which has subsequently been determined to be too large due to reliance on older, less accurate experiments with ordinary hydrogen, and on a less correct theoretical analysis of their results. Correctly analyzing the old data would have produced a result of 0.844(7) fm.
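A quick sketch of the discrepancy, in combined standard deviations, between the old CODATA average and the muonic hydrogen value, and of how well the re-analyzed old data agrees (all values taken from the measurements quoted above):

```python
import math

def tension(v1, s1, v2, s2):
    """Discrepancy between two measurements in combined-sigma units."""
    return abs(v1 - v2) / math.sqrt(s1**2 + s2**2)

muonic = (0.84087, 0.00039)     # muonic hydrogen measurement (fm)
codata_2014 = (0.8751, 0.0061)  # old CODATA average (fm)
reanalyzed = (0.844, 0.007)     # old data with corrected analysis (fm)

print(f"CODATA 2014 vs muonic H: {tension(*codata_2014, *muonic):.1f} sigma")
print(f"Reanalyzed vs muonic H:  {tension(*reanalyzed, *muonic):.1f} sigma")
```

The old average is in roughly 5.6 sigma tension with the muonic hydrogen value, while the re-analyzed value agrees to well under one sigma.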

* Non-perturbative and perturbative QCD models need to be used together to get more precise determinations of the QCD coupling constant. Perturbative QCD methods alone have hit their limits.

* Someone argues for a better way to do renormalization (really a better way to apply existing methods) in QCD.

* Someone makes a more accurate prediction of how many Higgs bosons the LHC should produce at its highest energies. This still has more than a 5% uncertainty, however.

* The Paul Scherrer Institute in Switzerland does mesoscale particle physics experiments at lower energies than the LHC, using greater precision to study more practically relevant parts of the Standard Model. A new article provides a nice brief review of the relevant Standard Model physics of the interactions it studies, and of potential beyond the Standard Model tweaks to that physics in this regime. The abstract of the article is useless, so I quote from the introduction.

These experiments either lead to precise determinations of physical parameters required as input for other experiments (e.g., muon life time, pion mass), or search for physics beyond the Standard Model (BSM). The BSM searches proceed along different frontiers. 

One way to search for new physics is to consider physical observables whose Standard Model (SM) contributions either vanish or are too small to be experimentally accessible. In other words, they are identical to zero for practical purposes. Examples are charged lepton-flavor violating (cLFV) muon decays or a permanent neutron electric dipole moment (EDM). To put constraints on the branching ratios of BSM decays, one has to observe a large number of decays. This is, thus, called a search at the intensity frontier. 

Another way to search for new physics is to consider precision observables and search for deviations from the SM expectations. Prominent examples are the precision QED tests with muonium, as well as the precision laser spectroscopy experiments with muonic atoms. These are, thus, called searches at the precision frontier. The low-energy experiments at PSI are complementary to the experiments at LHC, which sit at the energy frontier.

After a general overview of the theoretical methods applied to describe the processes and bound states in Table 5.1, we will, in turn, consider the muon, the proton, nucleons and nuclei, the free neutron, and the pions.

* The significance of the 151 GeV anomaly at the LHC is overstated.

* Experimental evidence continues to disfavor the existence of a light pseudoscalar Higgs boson "A", which is a generic prediction of models, such as supersymmetric ones, with multiple Higgs doublets.

* A group of scientists try to explain the charged lepton and neutrino mass hierarchies, muon g-2, electron g-2, leptogenesis, and dark matter with an inverse seesaw model, which is usually used only to attempt to explain neutrino masses and sometimes dark matter. The effort is notable for its breadth, although I very much doubt that it is a correct explanation. A similar model is proposed here.

* Someone proposes a non-SUSY E6 GUT to explain various outstanding physics anomalies consistent with experimental constraints. It is probably wrong.

* Experimental constraints on the proton lifetime (which the Standard Model assumes is stable) are close to ruling out the simplest supersymmetric SU(5) GUT theory.

Friday, September 24, 2021

Reminder That XENON1T Was A Fail

A new study, led by researchers at the University of Cambridge and reported in the journal Physical Review D, suggests that some unexplained results from the XENON1T experiment in Italy may have been caused by dark energy, and not the dark matter the experiment was designed to detect.
From here.

This post is a friendly reminder that any "New Physics" findings based upon the anomalous results from the XENON1T experiment should not be taken seriously. 

It is known that there were material sources of background noise that were ignored in the XENON1T data analysis that could have impacted the result. And, the experimental apparatus was dismantled before it was possible to analyze it in order to determine if those ignored background sources were creating false positives that looked like New Physics.

Its results remain reliable to the extent that they ruled out New Physics (since those results would merely be weakened by false positives). But some or all of its results that have been attributed to beyond the Standard Model physics were almost surely false positives. So, it is useless for purposes of proving the existence of New Physics.

The Legacy Of Herding

Historical food production practices influence culture and morality long after those practices are gone.
According to the widely known ‘culture of honor’ hypothesis from social psychology, traditional herding practices are believed to have generated a value system that is conducive to revenge-taking and violence. 
We test this idea at a global scale using a combination of ethnographic records, historical folklore information, global data on contemporary conflict events, and large-scale surveys. 
The data show systematic links between traditional herding practices and a culture of honor. First, the culture of pre-industrial societies that relied on animal herding emphasizes violence, punishment, and revenge-taking. Second, contemporary ethnolinguistic groups that historically subsisted more strongly on herding have more frequent and severe conflict today. Third, the contemporary descendants of herders report being more willing to take revenge and punish unfair behavior in the globally representative Global Preferences Survey. In all, the evidence supports the idea that this form of economic subsistence generated a functional psychology that has persisted until today and plays a role in shaping conflict across the globe.
Yiming Cao, et al., "Herding, Warfare, and a Culture of Honor" NBER (September 2021).

Another paper fleshes out the concept a bit more (and has a nice literature review), although its description of the southern United States as historically a herding culture is doubtful. Appalachia was indeed settled by Scotch-Irish herders and does have a culture of honor. But the lowlands of the American South (which also have a culture of honor), where plantation farming became predominant, were settled by lesser English gentry farmers, not by descendants of herders.
A key element of cultures of honor is that men in these cultures are prepared to protect with violence the reputation for strength and toughness. Such cultures are likely to develop where (1) a man's resources can be thieved in full by other men and (2) the governing body is weak and thus cannot prevent or punish theft. 
Todd K. Shackelford, "An Evolutionary Psychological Perspective on Cultures of Honor" Evolutionary Psychology (January 1, 2005) (open access).

The example of the Southern United States suggests that a weak state may be as important a factor in the development of a culture of honor as a herding economy.

The Legacy Of Plough v. Hoe Farming

Parallel hypotheses from the same disciplines associate ancestral heavy plough farming with strongly patriarchal societies with strong differentiation in gender roles, and ancestral hoe farming with less patriarchal and sometimes even matrilineal societies.

The Legacy Of Clan Based Societies

It has also become common in modern political theory to associate weak government approaching anarchy with clan based societies in which women are forced into highly subordinated roles, somewhat in the tradition of Thomas Hobbes ("nasty, brutish and short") as opposed to those who idealize an Eden-like "state of nature." See, e.g., Valerie M. Hudson, et al., "Clan Governance and State Stability: The Relationship between Female Subordination and Political Order" 109(3) American Political Science Review 535-555 (August 2015).

The Legacy Of Cousin Marriage

Also along the same lines, cousin marriage (often common in clan based societies and also among feudal aristocrats) tends to be a practice that undermines democratic government:

Image from here.
How might consanguinity affect democracy? 
Cousin marriages create extended families that are much more closely related than is the case where such marriages are not practiced. To illustrate, if a man’s daughter marries his brother’s son, the latter is then not only his nephew but also his son-in-law, and any children born of that union are more genetically similar to the two grandfathers than would be the case with non-consanguineous marriages. Following the principles of kin selection (Hamilton, 1964) and genetic similarity theory (Rushton, 1989, 2005), the high level of genetic similarity creates extended families with exceptionally close bonds. Kurtz succinctly illustrates this idea in his description of Middle Eastern educational practices:

If, for example, a child shows a special aptitude in school, his siblings might willingly sacrifice their personal chances for advancement simply to support his education. Yet once that child becomes a professional, his income will help to support his siblings, while his prestige will enhance their marriage prospects. (Kurtz, 2002, p. 37).

Such kin groupings may be extremely nepotistic and distrusting of non-family members in the larger society. In this context, non-democratic regimes emerge as a consequence of individuals turning to reliable kinship groupings for support rather than to the state or the free market. It has been found, for example, that societies having high levels of familism tend to have low levels of generalized trust and civic engagement (Realo, Allik, & Greenfield, 2008), two important correlates of democracy. Moreover, to people in closely related kin groups, individualism and the recognition of individual rights, which are part of the cultural idiom of democracy, are perceived as strange and counterintuitive ideological abstractions (Sailer, 2004).

From the body text of the following article whose abstract is also set forth below: 

This article examines the hypothesis that although the level of democracy in a society is a complex phenomenon involving many antecedents, consanguinity (marriage and subsequent mating between second cousins or closer relatives) is an important though often overlooked predictor of it. Measures of the two variables correlate substantially in a sample of 70 nations (r = −0.632, p < 0.001), and consanguinity remains a significant predictor of democracy in multiple regression and path analyses involving several additional independent variables. 
The data suggest that where consanguineous kinship networks are numerically predominant and have been made to share a common statehood, democracy is unlikely to develop. 
Possible explanations for these findings include the idea that restricted gene flow arising from consanguineous marriage facilitates a rigid collectivism that is inimical to individualism and the recognition of individual rights, which are key elements of the democratic ethos. Furthermore, high levels of within-group genetic similarity may discourage cooperation between different large-scale kin groupings sharing the same nation, inhibiting democracy. Finally, genetic similarity stemming from consanguinity may encourage resource predation by members of socially elite kinship networks as an inclusive fitness enhancing behavior.
Michael A. Woodley, Edward Bell, "Consanguinity as a Major Predictor of Levels of Democracy: A Study of 70 Nations" 44(2) Journal of Cross-Cultural Psychology (2013). 
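As a rough sanity check on the reported significance, the t statistic for a Pearson correlation of −0.632 with a sample of 70 can be computed directly (a standard textbook formula, not a calculation taken from the paper itself):

```python
import math

r, n = -0.632, 70  # reported correlation and sample size

# t statistic for a Pearson correlation: t = r * sqrt((n - 2) / (1 - r^2))
t = r * math.sqrt((n - 2) / (1 - r**2))

# With 68 degrees of freedom, |t| above roughly 3.4 corresponds to
# p < 0.001 (two-tailed), so a |t| near 6.7 is comfortably past it.
print(f"t = {t:.2f} with {n - 2} degrees of freedom")
```

The result is consistent with the p < 0.001 claim in the abstract.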

Thursday, September 23, 2021

Another Problem With LambdaCDM

The latest issue with the standard LambdaCDM cosmology is a subtle one, relating to the location and character of the galaxies in parts of the universe that are mostly void. But it is notable because it is largely independent of other problems identified with LambdaCDM, and because it involves the large cosmological scales where LambdaCDM has historically been seen as more successful.

We extract void catalogs from the Sloan Digital Sky Survey Data Release 16 (SDSS DR16) survey and also from the Millennium simulation. We focus our comparison on distribution of galaxies brighter than M(r)<−18 inside voids and study the mean separation of void galaxies, distance from the void center, and the radial density profile.  
We find that mean separation of void galaxies depends on void size, as bigger voids have lower mean separation in both samples. However, void galaxies in the observation sample seem to have generally larger mean-distance than simulated ones at any given void size. In addition, observed void galaxies tend to reside closer to the void center than those in the simulation. This discrepancy is also shown in the density profile of voids. Regardless of the void size, the central densities of real void profiles are higher than the ones in the predicted simulated catalog.
Saeed Tavasoli, "Void Galaxy Distribution: A Challenge for ΛCDM" arXiv:2109.10369 (September 21, 2021) (Accepted in ApJ Letter) DOI: 10.3847/2041-8213/ac1357.

Wednesday, September 22, 2021

A Grab Bag Paper On East Asian Historical Genetics

In the course of looking into the three component story of the formation of the Japanese people that I posted yesterday, I came across a gem of a preprint from March 25, 2020 covering all manner of only vaguely related subjects. I may have previously blogged some of its findings, but it really is all over the place and could have legitimately spawned five distinct articles.
The deep population history of East Asia remains poorly understood due to a lack of ancient DNA data and sparse sampling of present-day people. We report genome-wide data from 191 individuals from Mongolia, northern China, Taiwan, the Amur River Basin and Japan dating to 6000 BCE – 1000 CE, many from contexts never previously analyzed with ancient DNA. We also report 383 present-day individuals from 46 groups mostly from the Tibetan Plateau and southern China. 
We document how 6000-3600 BCE people of Mongolia and the Amur River Basin were from populations that expanded over Northeast Asia, likely dispersing the ancestors of Mongolic and Tungusic languages. 
In a time transect of 89 Mongolians, we reveal how Yamnaya steppe pastoralist ancestry spread from the west by 3300-2900 BCE in association with the Afanasievo culture, although we also document a boy buried in an Afanasievo barrow with ancestry entirely from local Mongolian hunter-gatherers, representing a unique case of someone of entirely non-Yamnaya ancestry interred in this way. The second spread of Yamnaya-derived ancestry came via groups that harbored about a third of their ancestry from European farmers, which nearly completely displaced unmixed Yamnaya-related lineages in Mongolia in the second millennium BCE, but did not replace Afanasievo lineages in western China where Afanasievo ancestry persisted, plausibly acting as the source of the early-splitting Tocharian branch of Indo-European languages. 
Analyzing 20 Yellow River Basin farmers dating to ∼3000 BCE, we document a population that was a plausible vector for the spread of Sino-Tibetan languages both to the Tibetan Plateau and to the central plain where they mixed with southern agriculturalists to form the ancestors of Han Chinese. 
We show that the individuals in a time transect of 52 ancient Taiwan individuals spanning at least 1400 BCE to 600 CE were consistent with being nearly direct descendants of Yangtze Valley first farmers who likely spread Austronesian, Tai-Kadai and Austroasiatic languages across Southeast and South Asia and mixing with the people they encountered, contributing to a four-fold reduction of genetic differentiation during the emergence of complex societies. 
We finally report data from Jomon hunter-gatherers from Japan who harbored one of the earliest splitting branches of East Eurasian variation, and show an affinity among Jomon, Amur River Basin, ancient Taiwan, and Austronesian-speakers, as expected for ancestry if they all had contributions from a Late Pleistocene coastal route migration to East Asia.

Tuesday, September 21, 2021

Penrose's Model For Gravitational Collapse Of Quantum Superpositions Doesn't Work

The way that an observer making an observation triggers a collapse of a quantum physical wave function is a longstanding unsolved problem in physics. 

A recent experimental effort tested whether quantum gravity effects trigger this collapse, in a theory promoted by Roger Penrose but first proposed by Lajos Diósi. It turns out that they do not, so the question remains unsolved.

Roger Penrose proposed that a spatial quantum superposition collapses as a back-reaction from spacetime, which is curved in different ways by each branch of the superposition. In this sense, one speaks of gravity-related wave function collapse. He also provided a heuristic formula to compute the decay time of the superposition—similar to that suggested earlier by Lajos Diósi, hence the name Diósi–Penrose model. The collapse depends on the effective size of the mass density of particles in the superposition, and is random: this randomness shows up as a diffusion of the particles’ motion, resulting, if charged, in the emission of radiation. Here, we compute the radiation emission rate, which is faint but detectable. We then report the results of a dedicated experiment at the Gran Sasso underground laboratory to measure this radiation emission rate. Our result sets a lower bound on the effective size of the mass density of nuclei, which is about three orders of magnitude larger than previous bounds. This rules out the natural parameter-free version of the Diósi–Penrose model.

From Nature Physics, via an article that explains the results as follows:

It's one of the oddest tenets of quantum theory: a particle can be in two places at once—yet we only ever see it here or there. Textbooks state that the act of observing the particle "collapses" it, such that it appears at random in only one of its two locations. But physicists quarrel over why that would happen, if indeed it does. Now, one of the most plausible mechanisms for quantum collapse—gravity—has suffered a setback.

The gravity hypothesis traces its origins to Hungarian physicists Károlyházy Frigyes in the 1960s and Lajos Diósi in the 1980s. The basic idea is that the gravitational field of any object stands outside quantum theory. It resists being placed into awkward combinations, or "superpositions," of different states. So if a particle is made to be both here and there, its gravitational field tries to do the same—but the field cannot endure the tension for long; it collapses and takes the particle with it.

Renowned University of Oxford mathematician Roger Penrose championed the hypothesis in the late 1980s because, he says, it removes the anthropocentric notion that the measurement itself somehow causes the collapse. "It takes place in the physics, and it's not because somebody comes and looks at it." . . . 

In the new study, Diósi and other scientists looked for one of the many ways, whether by gravity or some other mechanism, that a quantum collapse would reveal itself: A particle that collapses would swerve randomly, heating up the system of which it is part. "It is as if you gave a kick to a particle," says co-author Sandro Donadi of the Frankfurt Institute for Advanced Studies.

If the particle is charged, it will emit a photon of radiation as it swerves. And multiple particles subject to the same gravitational lurch will emit in unison. "You have an amplified effect," says co-author Cătălina Curceanu of National Institute for Nuclear Physics in Rome.

To test this idea, the researchers built a detector out of a crystal of germanium the size of a coffee cup. They looked for excess x-ray and gamma ray emissions from protons in the germanium nuclei, which create electrical pulses in the material. The scientists chose this portion of the spectrum to maximize the amplification. They then wrapped the crystal in lead and placed it 1.4 kilometers underground in the Gran Sasso National Laboratory in central Italy to shield it from other radiation sources. Over 2 months in 2014 and 2015, they saw 576 photons, close to the 506 expected from naturally occurring radioactivity, they report today in Nature Physics.

By comparison, Penrose's model predicted 70,000 such photons. "You should see some collapse effect in the germanium experiment, but we don't," Curceanu says. That suggests gravity is not, in fact, shaking particles out of their quantum superpositions. (The experiment also constrained, though did not rule out, collapse mechanisms that do not involve gravity.)
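A back-of-the-envelope Poisson calculation (my own sketch, using the counts quoted above) shows how decisively the data excludes the parameter-free model: if the model's roughly 70,000 signal photons were added to the roughly 506 expected background photons, observing only 576 would be an enormous deficit.

```python
import math

background = 506         # photons expected from natural radioactivity
observed = 576           # photons actually seen over ~2 months
penrose_signal = 70_000  # photons predicted by the parameter-free model

# Expected count if the model were true
expected = background + penrose_signal

# Gaussian approximation to the Poisson deficit, valid for large counts
z = (expected - observed) / math.sqrt(expected)
print(f"deficit of ~{z:.0f} sigma relative to the model's prediction")
```

A deficit of hundreds of sigma is why the paper can flatly rule out the natural parameter-free version of the model.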

Why The Sterile Neutrino Anomaly Isn't A Big Deal

Sabine Hossenfelder's latest blog post talks about the sterile neutrino anomaly seen at the Liquid Scintillator Neutrino Detector (LSND for short), which ran from 1993 to 1998, and again at the Mini Booster Neutrino Experiment (MiniBooNE) at Fermilab, running since 2003, which seemed to show a six sigma anomaly by 2018. She wonders why it isn't a big deal now.

While it is common to talk about a five sigma threshold for discovery of new physics, there are really two more parts of that test: the result needs to be replicated rather than being contradicted by other experiments, and there has to be a plausible physics based theory to explain the result. 

Usually, Sabine is a voice of reason and spot on (I bought her book "Lost in Math" and agree with almost everything that she says in it). But on this score, I don't agree with her.  She states that:
15 years ago, I worked on neutrino mixing for a while, and in my impression back then most physicists thought the LSND data was just wrong and it’d not be reproduced.
But, most physicists still think that the LSND/MiniBooNE data is wrong, and it wasn't reproduced by other experiments. Instead, multiple experiments and astronomy observations using different methods that make their results robust contradict the LSND/MiniBooNE result. 

Equally important, several important independent sources of systematic error were identified in the LSND data and in its successor MiniBooNE experiment's data. Basically, these experiments failed to consider the mix of fuels in the nuclear reactors they were modeling, used a wrong oscillation parameter, and failed to correlate their near and far detector results, in ways that overestimated the number of neutrinos that should have appeared, making it look like more neutrinos were disappearing than actually were.

Thus, there is very strong evidence that the LSND/MiniBooNE apparent detection of a sterile neutrino was wrong. 

Instead, there is strong evidence that there are no sterile neutrinos that oscillate with ordinary neutrinos that have masses of under 10 eV. 

For what it is worth, searches for non-standard neutrino interactions (other than CP violation) have also come up empty so far and severely constrained that possibility. See, e.g., a paper from IceCube, a paper from ANTARES, an analysis of data from Daya Bay, and a summary of results from six other experiments.

Furthermore, there are no beyond the Standard Model active neutrinos with masses of under 10 TeV. This is also an important part of the argument that there are also no fourth generation quarks or charged leptons, because, for reasons of theoretical consistency, each generation of Standard Model fundamental fermions must be complete.

Other Experiments Contradict LSND/MiniBooNE And There Are Plausible Sources Of Systematic Error

The big problem with the reactor anomaly is that these two sets of results, rather than being replicated, were repeatedly contradicted, and a plausible physics based explanation for why they were wrong was established.

Three different recent experiments (STEREO, PROSPECT and DANSS) have contradicted the LSND/MiniBooNE result. And, the anomalies seen at LSND/MiniBooNE were determined to most likely be due to a failure to model the mix of reactor fuels between Uranium-235 and Plutonium-239 properly, resulting in an error in the predicted number of neutrino events that the actual detections were compared to in determining that there was a deficit of neutrinos that could be explained by an oscillation to one or more sterile neutrino flavors. See Matthieu Licciardi "Results of STEREO and PROSPECT, and status of sterile neutrino searches" (May 28, 2021) (Contribution to the 2021 EW session of the 55th Rencontres de Moriond). See also additional analysis of the fuel mix issue, additional results from Moriond 2021 (including IceCube), and the results from the MINOS, MINOS+, Daya Bay, and Bugey-3 Experiments (these may be the same experiments mentioned above with different names) which found in a preprint that was subsequently published in a peer reviewed journal:
Searches for electron antineutrino, muon neutrino, and muon antineutrino disappearance driven by sterile neutrino mixing have been carried out by the Daya Bay and MINOS+ collaborations. This Letter presents the combined results of these searches, along with exclusion results from the Bugey-3 reactor experiment, framed in a minimally extended four-neutrino scenario. Significantly improved constraints on the θ_μe mixing angle are derived that constitute the most stringent limits to date over five orders of magnitude in the sterile mass-squared splitting Δm^2_41, excluding the 90% C.L. sterile-neutrino parameter space allowed by the LSND and MiniBooNE observations at 90% CLs for Δm^2_41 < 5 eV^2. Furthermore, the LSND and MiniBooNE 99% C.L. allowed regions are excluded at 99% CLs for Δm^2_41 < 1.2 eV^2.

A similar conclusion was reached using overlapping data but also data from the Planck cosmic microwave background observations here.

In addition to these issues, an analysis back in 2014 had already noticed data contradicting the sterile neutrino hypothesis at ICARUS and OPERA, and observed that some of the parameters used to make the estimates were off and that using the right ones greatly reduced the statistical significance of the anomaly. See Boris Kayser, "Are There Sterile Neutrinos?" (February 13, 2014). MINOS and Daya Bay had already contradicted the reactor anomaly back in 2014 as well. More recent analysis has likewise downgraded the statistical significance of the anomalies previously reported, although it has not entirely eliminated them.

Cosmology Data Strongly Disfavors Sterile Neutrinos

Cosmology measurements also cap the sum of the neutrino masses at about 0.087 eV or less, in a manner indifferent between sterile neutrinos of less than about 10 eV and active neutrinos, which doesn't leave room for a reactor anomaly sterile neutrino. See Eleonora Di Valentino, Stefano Gariazzo, Olga Mena, "On the most constraining cosmological neutrino mass bounds" arXiv:2106.16267 (June 29, 2021).
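A minimal sketch of why that cap is so constraining, using approximate mass-squared splittings from oscillation data (the splitting values are my own round numbers, not figures from the cited paper): the minimum possible sum of the three active masses already uses up most of the budget, and an eV-scale sterile state would blow far past it.

```python
import math

# Approximate mass-squared splittings from oscillation data (eV^2)
dm2_solar = 7.4e-5
dm2_atm = 2.5e-3

# Minimum-mass normal ordering: lightest mass ~0, the others follow
m1 = 0.0
m2 = math.sqrt(m1**2 + dm2_solar)
m3 = math.sqrt(m1**2 + dm2_atm)
minimum_sum = m1 + m2 + m3  # roughly 0.06 eV

cap = 0.087  # approximate cosmological cap on the sum (eV)
print(f"minimum sum: {minimum_sum:.3f} eV, cap: {cap} eV")
print(f"room left for a sterile state: {cap - minimum_sum:.3f} eV")
```

With only a few hundredths of an eV of headroom, any sterile neutrino in the eV range favored by the reactor anomaly is excluded.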

A far heavier sterile neutrino would not be discernible as a neutrino from cosmology data and instead would look like a type of dark matter particle. But, the LSND/MiniBooNE result was pointing to a sterile neutrino with a mass of under 5 eV, so it would be subject to the cosmology bounds.

Also, there are strict direct detection exclusions on heavier dark matter particles as well, although none of those would bar a truly sterile neutrino with no interactions with ordinary matter other than oscillations with active neutrinos.

The main criticism of reliance on cosmology data is that it is highly model dependent, even though this particular conclusion is quite robust to different cosmology models.

Limits On Active Neutrinos

We can also be comfortable that there are no additional active neutrinos (e.g. a fourth generation neutrino otherwise identical to the three Standard Model neutrinos) with masses of less than about 10 TeV, at a time when direct measurements paired with oscillation data limit the most massive of the three Standard Model neutrinos to not more than 0.9 eV, and cosmology data limits it to not more than 0.09 eV.

Data from W and Z boson decays likewise tightly constrain the number of active neutrinos with masses of less than 45 GeV/c^2 (half the Z boson mass) to exactly three.

Dark matter detection experiments have ruled out particles, making up most of the hypothetical dark matter, that have weak force interaction coupling constants equal to those of Standard Model neutrinos, at masses of up to about 10 TeV (i.e. 10,000 GeV). In the chart below, that cross section is the blue dotted line marked "Z portal C(x)=1", and the exclusions reach cross sections a factor of 1,000,000 below it. So, even if the flux of 45 GeV+ Standard Model neutrinos were a million times smaller than the hypothetical flux of dark matter particles through Earth, they would be ruled out by the direct detection experiments up to about 10 TeV.

The Katrin experiment directly limits the lightest neutrino mass to about 0.8 eV, which means that all of the active neutrino masses have to be less than about 0.9 eV based upon the oscillation data. This means that the sterile neutrino mass predicted by the LSND/MiniBooNE result relative to the active neutrino masses still couldn't have been so massive that it would have evaded the cosmology bounds.
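The arithmetic behind that 0.9 eV figure is straightforward (a sketch using an approximate atmospheric splitting of my own choosing, not a figure from Katrin): at a lightest mass of 0.8 eV, the oscillation splittings barely move the heavier states.

```python
import math

m_lightest = 0.8  # Katrin's approximate direct bound (eV)
dm2_atm = 2.5e-3  # approximate atmospheric mass-squared splitting (eV^2)

# Heaviest mass eigenstate if the lightest sits at the Katrin bound
m_heaviest = math.sqrt(m_lightest**2 + dm2_atm)
print(f"heaviest active mass: {m_heaviest:.3f} eV (well under 0.9 eV)")
```

Because the splittings are so tiny compared to 0.8 eV squared, a quasi-degenerate spectrum follows automatically from the Katrin bound.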

Neutrinoless double beta decay results rule out Majorana mass neutrinos above about 180 meV (according to the body text of the linked paper). The same experiments will soon be able to confirm or rule out the scenario of sterile neutrinos heavier than 10 eV that cosmology tools cannot constrain.