Wednesday, September 29, 2021

GR v. Newtonian Gravity

Another paper (alas, poorly written due to ESL issues) by an independent author examines the difference between General Relativity and Newtonian gravity in the case of two massive bodies, rather than one massive body and a test particle of negligible mass.

The result, determined analytically, confirms Deur's analysis that the distinction is material in some circumstances, and that this could be the underlying mechanism of modified Newtonian dynamics.

The metric tensor in the four dimensional flat space-time is represented as the matrix form and then the transformation is performed for successive Lorentz boost. After extending or more generalizations the transformation of metric is derived for the curved space-time, manifested after the synergy of different sources of mass. The transformed metric in linear perturbation interestingly reveals a shift from Newtonian gravity for two or more than two body system.
Shubhen Biswas, "The metric transformations and modified Newtonian gravity" arXiv:2109.13515 (September 28, 2021).
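For readers who want the gist of what the paper is doing, the basic machinery (my own summary in standard notation, not the paper's) is the transformation of the metric under a Lorentz boost together with the weak-field expansion:

\[
\Lambda^{\mathsf T}\,\eta\,\Lambda=\eta,
\qquad
\eta=\mathrm{diag}(-1,1,1,1),
\qquad
\Lambda_{\text{boost}}(\beta)=
\begin{pmatrix}
\gamma & -\gamma\beta & 0 & 0\\
-\gamma\beta & \gamma & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
\gamma=\frac{1}{\sqrt{1-\beta^{2}}},
\]

\[
g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu},\qquad |h_{\mu\nu}|\ll 1,
\qquad
h\;\longrightarrow\;\Lambda^{\mathsf T}\,h\,\Lambda .
\]

The first relation is simply the statement that boosts preserve the flat metric; the paper's claim is that when the linearized perturbations sourced by two comparable masses are combined and boosted in this way, the result is not the simple superposition of two Newtonian potentials.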

Tuesday, September 28, 2021

Quick Miscellaneous Physics Results

Neutrino Physics 

A new paper from the Neutrino-4 experiment makes the case for a sterile neutrino and also estimates the neutrino masses, which are very high compared to other experiments: an electron neutrino mass of 0.8 eV, a muon neutrino mass of 0.4 eV, a tau neutrino mass of less than 0.6 eV, and a sterile neutrino mass of 2.7 eV. I am highly skeptical of the result, not least because the mass predictions are out of line with other results. A screenshot of the abstract from the paper itself is reproduced below to preserve its fussy formatting:


* Another new review of the sterile neutrino question can be found here (update added September 29, 2021):
Two anomalies at nuclear reactors, one related to the absolute antineutrino flux, one related to the antineutrino spectral shape, have drawn special attention to the field of reactor neutrino physics during the past decade. Numerous experimental efforts have been launched to investigate the reliability of flux models and to explore whether sterile neutrino oscillations are at the base of the experimental findings. This review aims to provide an overview on the status of experimental searches at reactors for sterile neutrino oscillations and measurements of the antineutrino spectral shape in mid-2021. 
The individual experimental approaches and results are reviewed. Moreover, global and joint oscillation and spectral shape analyses are discussed. 
Many experiments allow setting constraints on sterile oscillation parameters, but cannot yet cover the entire relevant parameter space. Others find evidence in favour of certain parameter space regions. In contrast, findings on the spectral shape appear to give an overall consistent picture across experiments and allow narrowing down contributions of certain isotopes.

* Neutrino-nucleon collision models still have kinks to be worked out in the low energy, forward muon angle regime where models fail to adequately account for the extent to which events in this part of the parameter space are suppressed. The authors speculate on what might be missing from the models but aren't really sure why the discrepancy arises.

* Neutrino data from experiments and neutrino data from cosmic ray observations are reasonably consistent with each other.

Other Physics

* The charge radius of the proton is measured to be 0.840(4) fm (with conservative rounding assumptions), consistent with prior experimental measurements from muonic hydrogen of 0.84087(39) fm, and with the better recent measurements using ordinary hydrogen, such as a 2019 measurement that found a radius of 0.833(10) fm. 

In 2014, the CODATA average measurement stated that the charge radius of the proton was 0.8751(61) fm, which has subsequently been determined to be too large due to reliance on older, less accurate experiments with ordinary hydrogen, and on less accurate theoretical analysis of their results. Correctly analyzing the old data theoretically would have produced a result of 0.844(7) fm.
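To put a rough number on why the old value is now considered too large (my arithmetic, not taken from any of the papers): treating the measurements as independent,

\[
\frac{|0.8751-0.840|}{\sqrt{0.0061^{2}+0.004^{2}}}\;\approx\;\frac{0.0351}{0.0073}\;\approx\;4.8\sigma,
\]

while the corrected reanalysis of the old ordinary hydrogen data, 0.844(7) fm, differs from 0.840(4) fm by only about 0.5σ.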

* Non-perturbative and perturbative QCD models need to be used together to get more precise determinations of the QCD coupling constant. Perturbative QCD methods alone have hit their limits.

* Someone argues for a better way to do renormalization (really a better way to apply existing methods) in QCD.

* Someone makes a more accurate prediction of how many Higgs bosons the LHC should produce at its highest energies. This still has more than a 5% uncertainty, however.

* The Paul Scherrer Institute (PSI) in Switzerland does mesoscale particle physics experiments at lower energies, but with greater precision, than the LHC, probing more practically relevant parts of the Standard Model. A new article provides a nice brief review of the relevant Standard Model physics of the interactions studied in this regime and of potential beyond the Standard Model tweaks to it. The abstract of the article is useless, so I quote from the introduction.

These experiments either lead to precise determinations of physical parameters required as input for other experiments (e.g., muon life time, pion mass), or search for physics beyond the Standard Model (BSM). The BSM searches proceed along different frontiers. 

One way to search for new physics is to consider physical observables whose Standard Model (SM) contributions either vanish or are too small to be experimentally accessible. In other words, they are identical to zero for practical purposes. Examples are charged lepton-flavor violating (cLFV) muon decays or a permanent neutron electric dipole moment (EDM). To put constraints on the branching ratios of BSM decays, one has to observe a large number of decays. This is, thus, called a search at the intensity frontier. 

Another way to search for new physics is to consider precision observables and search for deviations from the SM expectations. Prominent examples are the precision QED tests with muonium, as well as the precision laser spectroscopy experiments with muonic atoms. These are, thus, called searches at the precision frontier. The low-energy experiments at PSI are complementary to the experiments at LHC, which sit at the energy frontier.

After a general overview of the theoretical methods applied to describe the processes and bound states in Table 5.1, we will, in turn, consider the muon, the proton, nucleons and nuclei, the free neutron, and the pions.

* The significance of the 151 GeV anomaly at the LHC is overstated.

* Experimental evidence continues to disfavor the existence of a light pseudoscalar Higgs boson "A", which is a generic prediction of models, such as supersymmetric ones, with multiple Higgs doublets.

* A group of scientists tries to explain the charged lepton and neutrino mass hierarchies, muon g-2, electron g-2, leptogenesis, and dark matter with an inverse seesaw model, which is usually only used to attempt to explain neutrino masses and sometimes dark matter. The effort is notable for its breadth, although I very much doubt that it is a correct explanation. A similar model is proposed here.

* Someone proposes a non-SUSY E6 GUT to explain various outstanding physics anomalies consistent with experimental constraints. It is probably wrong.

* Experimental constraints on the proton lifetime (the Standard Model assumes that the proton is stable) are close to ruling out the simplest supersymmetric SU(5) GUT theory.

Friday, September 24, 2021

Reminder That XENON1T Was A Fail

A new study, led by researchers at the University of Cambridge and reported in the journal Physical Review D, suggests that some unexplained results from the XENON1T experiment in Italy may have been caused by dark energy, and not the dark matter the experiment was designed to detect.
From here.

This post is a friendly reminder that any "New Physics" findings based upon the anomalous results from the XENON1T experiment should not be taken seriously. 

It is known that there were material sources of background noise that were ignored in the XENON1T data analysis that could have impacted the result. And, the experimental apparatus was dismantled before it was possible to analyze it in order to determine if those ignored background sources were creating false positives that looked like New Physics.

Its results remain reliable to the extent that they ruled out New Physics (since those results would merely be weakened by false positives). But, some or all of its results that have been attributed to beyond the Standard Model physics were almost surely false positives. So, it is useless for purposes of proving the existence of New Physics.

The Legacy Of Herding

Historical food production practices influence culture and morality long after those practices are gone.
According to the widely known ‘culture of honor’ hypothesis from social psychology, traditional herding practices are believed to have generated a value system that is conducive to revenge-taking and violence. 
We test this idea at a global scale using a combination of ethnographic records, historical folklore information, global data on contemporary conflict events, and large-scale surveys. 
The data show systematic links between traditional herding practices and a culture of honor. First, the culture of pre-industrial societies that relied on animal herding emphasizes violence, punishment, and revenge-taking. Second, contemporary ethnolinguistic groups that historically subsisted more strongly on herding have more frequent and severe conflict today. Third, the contemporary descendants of herders report being more willing to take revenge and punish unfair behavior in the globally representative Global Preferences Survey. In all, the evidence supports the idea that this form of economic subsistence generated a functional psychology that has persisted until today and plays a role in shaping conflict across the globe.
Yiming Cao, et al., "Herding, Warfare, and a Culture of Honor" NBER (September 2021).

Another paper fleshes out the concept a bit more (and has a nice literature review), although its description of the southern United States as historically a herding culture is doubtful. Appalachia was indeed settled by Scotch-Irish herders and does have a culture of honor, but the lowlands of the American South (which also have a culture of honor), where plantation farming became predominant, were settled by lesser English gentry farmers, not by descendants of herders.
A key element of cultures of honor is that men in these cultures are prepared to protect with violence the reputation for strength and toughness. Such cultures are likely to develop where (1) a man's resources can be thieved in full by other men and (2) the governing body is weak and thus cannot prevent or punish theft. 
Todd K. Shackelford, "An Evolutionary Psychological Perspective on Cultures of Honor" Evolutionary Psychology (January 1, 2005) (open access). DOI: https://doi.org/10.1177/147470490500300126

The example of the Southern United States suggests that a weak state may be as important a factor in the development of a culture of honor as a herding economy.

The Legacy Of Plough v. Hoe Farming

Parallel hypotheses from the same disciplines associate ancestral heavy plough farming with strongly patriarchal societies marked by sharp differentiation in gender roles, and ancestral hoe farming with less patriarchal and sometimes even matrilineal societies.

The Legacy Of Clan Based Societies

It has also become common in modern political theory to associate weak government approaching anarchy with clan based societies in which women are forced into highly subordinated roles, somewhat in the tradition of Thomas Hobbes ("nasty, brutish and short") as opposed to those who idealize an Eden-like "state of nature." See, e.g., Valerie M. Hudson, et al., "Clan Governance and State Stability: The Relationship between Female Subordination and Political Order" 109(3) American Political Science Review 535-555 (August 2015).

The Legacy Of Cousin Marriage

Also along the same lines, cousin marriage (often common in clan based societies and also among feudal aristocrats) tends to be a practice that undermines democratic government:



Image from here.
How might consanguinity affect democracy? 
Cousin marriages create extended families that are much more closely related than is the case where such marriages are not practiced. To illustrate, if a man’s daughter marries his brother’s son, the latter is then not only his nephew but also his son-in-law, and any children born of that union are more genetically similar to the two grandfathers than would be the case with non-consanguineous marriages. Following the principles of kin selection (Hamilton, 1964) and genetic similarity theory (Rushton, 1989, 2005), the high level of genetic similarity creates extended families with exceptionally close bonds. Kurtz succinctly illustrates this idea in his description of Middle Eastern educational practices:

If, for example, a child shows a special aptitude in school, his siblings might willingly sacrifice their personal chances for advancement simply to support his education. Yet once that child becomes a professional, his income will help to support his siblings, while his prestige will enhance their marriage prospects. (Kurtz, 2002, p. 37).

Such kin groupings may be extremely nepotistic and distrusting of non-family members in the larger society. In this context, non-democratic regimes emerge as a consequence of individuals turning to reliable kinship groupings for support rather than to the state or the free market. It has been found, for example, that societies having high levels of familism tend to have low levels of generalized trust and civic engagement (Realo, Allik, & Greenfield, 2008), two important correlates of democracy. Moreover, to people in closely related kin groups, individualism and the recognition of individual rights, which are part of the cultural idiom of democracy, are perceived as strange and counterintuitive ideological abstractions (Sailer, 2004).

From the body text of the following article whose abstract is also set forth below: 

This article examines the hypothesis that although the level of democracy in a society is a complex phenomenon involving many antecedents, consanguinity (marriage and subsequent mating between second cousins or closer relatives) is an important though often overlooked predictor of it. Measures of the two variables correlate substantially in a sample of 70 nations (r = −0.632, p < 0.001), and consanguinity remains a significant predictor of democracy in multiple regression and path analyses involving several additional independent variables
The data suggest that where consanguineous kinship networks are numerically predominant and have been made to share a common statehood, democracy is unlikely to develop
Possible explanations for these findings include the idea that restricted gene flow arising from consanguineous marriage facilitates a rigid collectivism that is inimical to individualism and the recognition of individual rights, which are key elements of the democratic ethos. Furthermore, high levels of within-group genetic similarity may discourage cooperation between different large-scale kin groupings sharing the same nation, inhibiting democracy. Finally, genetic similarity stemming from consanguinity may encourage resource predation by members of socially elite kinship networks as an inclusive fitness enhancing behavior.
Michael A. Woodley, Edward Bell, "Consanguinity as a Major Predictor of Levels of Democracy: A Study of 70 Nations" 44(2) Journal of Cross-Cultural Psychology (2013). 
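To make the "more genetically similar" point in the quoted passage concrete, here is a back-of-the-envelope calculation (mine, not the authors'), using standard kinship coefficients and ignoring the child's own inbreeding. If a man's daughter marries his brother's son, his kinship to the resulting grandchild is

\[
f(\text{grandfather},\text{grandchild})
=\tfrac{1}{2}\bigl[f(\text{grandfather},\text{daughter})+f(\text{grandfather},\text{nephew})\bigr]
=\tfrac{1}{2}\bigl[\tfrac{1}{4}+\tfrac{1}{8}\bigr]
=\tfrac{3}{16},
\]

versus the usual 1/8 when the grandchild's parents come from unrelated families, i.e. roughly 50% more expected genetic overlap with the grandfather.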

Thursday, September 23, 2021

Another Problem With LambdaCDM

The latest issue with the standard LambdaCDM cosmology is a subtle one, relating to the location and character of the galaxies in parts of the universe that are mostly void. But it is notable because it is largely independent of other problems identified with LambdaCDM, and because it involves the large scale structure regime where LambdaCDM has historically been seen as more successful.

We extract void catalogs from the Sloan Digital Sky Survey Data Release 16 (SDSS DR16) survey and also from the Millennium simulation. We focus our comparison on distribution of galaxies brighter than M(r)<−18 inside voids and study the mean separation of void galaxies, distance from the void center, and the radial density profile.  
We find that mean separation of void galaxies depends on void size, as bigger voids have lower mean separation in both samples. However, void galaxies in the observation sample seem to have generally larger mean-distance than simulated ones at any given void size. In addition, observed void galaxies tend to reside closer to the void center than those in the simulation. This discrepancy is also shown in the density profile of voids. Regardless of the void size, the central densities of real void profiles are higher than the ones in the predicted simulated catalog.
Saeed Tavasoli, "Void Galaxy Distribution: A Challenge for ΛCDM" arXiv:2109.10369 (September 21, 2021) (Accepted in ApJ Letter) DOI: 10.3847/2041-8213/ac1357.

Wednesday, September 22, 2021

A Grab Bag Paper On East Asian Historical Genetics

In the course of looking into the three component story of the formation of the Japanese people that I posted yesterday, I came across a gem of a preprint from March 25, 2020 covering all manner of only vaguely related subjects. I may have previously blogged some of its findings, but it really is all over the place and could have legitimately spawned five distinct articles.
The deep population history of East Asia remains poorly understood due to a lack of ancient DNA data and sparse sampling of present-day people. We report genome-wide data from 191 individuals from Mongolia, northern China, Taiwan, the Amur River Basin and Japan dating to 6000 BCE – 1000 CE, many from contexts never previously analyzed with ancient DNA. We also report 383 present-day individuals from 46 groups mostly from the Tibetan Plateau and southern China. 
We document how 6000-3600 BCE people of Mongolia and the Amur River Basin were from populations that expanded over Northeast Asia, likely dispersing the ancestors of Mongolic and Tungusic languages. 
In a time transect of 89 Mongolians, we reveal how Yamnaya steppe pastoralist spread from the west by 3300-2900 BCE in association with the Afanasievo culture, although we also document a boy buried in an Afanasievo barrow with ancestry entirely from local Mongolian hunter-gatherers, representing a unique case of someone of entirely non-Yamnaya ancestry interred in this way. The second spread of Yamnaya-derived ancestry came via groups that harbored about a third of their ancestry from European farmers, which nearly completely displaced unmixed Yamnaya-related lineages in Mongolia in the second millennium BCE, but did not replace Afanasievo lineages in western China where Afanasievo ancestry persisted, plausibly acting as the source of the early-splitting Tocharian branch of Indo-European languages. 
Analyzing 20 Yellow River Basin farmers dating to ∼3000 BCE, we document a population that was a plausible vector for the spread of Sino-Tibetan languages both to the Tibetan Plateau and to the central plain where they mixed with southern agriculturalists to form the ancestors of Han Chinese. 
We show that the individuals in a time transect of 52 ancient Taiwan individuals spanning at least 1400 BCE to 600 CE were consistent with being nearly direct descendants of Yangtze Valley first farmers who likely spread Austronesian, Tai-Kadai and Austroasiatic languages across Southeast and South Asia and mixing with the people they encountered, contributing to a four-fold reduction of genetic differentiation during the emergence of complex societies. 
We finally report data from Jomon hunter-gatherers from Japan who harbored one of the earliest splitting branches of East Eurasian variation, and show an affinity among Jomon, Amur River Basin, ancient Taiwan, and Austronesian-speakers, as expected for ancestry if they all had contributions from a Late Pleistocene coastal route migration to East Asia.

Tuesday, September 21, 2021

Penrose's Model For Gravitational Collapse Of Quantum Superpositions Doesn't Work

The way that an observer making an observation triggers a collapse of a quantum physical wave function is a longstanding unsolved problem in physics. 

A recent experiment tested whether quantum gravity effects trigger this collapse, in a theory promoted by Roger Penrose but first proposed by Lajos Diósi. It turns out that this mechanism is not the answer, so the question remains unsolved.

Roger Penrose proposed that a spatial quantum superposition collapses as a back-reaction from spacetime, which is curved in different ways by each branch of the superposition. In this sense, one speaks of gravity-related wave function collapse. He also provided a heuristic formula to compute the decay time of the superposition—similar to that suggested earlier by Lajos Diósi, hence the name Diósi–Penrose model. The collapse depends on the effective size of the mass density of particles in the superposition, and is random: this randomness shows up as a diffusion of the particles’ motion, resulting, if charged, in the emission of radiation. Here, we compute the radiation emission rate, which is faint but detectable. We then report the results of a dedicated experiment at the Gran Sasso underground laboratory to measure this radiation emission rate. Our result sets a lower bound on the effective size of the mass density of nuclei, which is about three orders of magnitude larger than previous bounds. This rules out the natural parameter-free version of the Diósi–Penrose model.

From Nature Physics via Science.org, which explains the results as follows:

It's one of the oddest tenets of quantum theory: a particle can be in two places at once—yet we only ever see it here or there. Textbooks state that the act of observing the particle "collapses" it, such that it appears at random in only one of its two locations. But physicists quarrel over why that would happen, if indeed it does. Now, one of the most plausible mechanisms for quantum collapse—gravity—has suffered a setback.

The gravity hypothesis traces its origins to Hungarian physicists Károlyházy Frigyes in the 1960s and Lajos Diósi in the 1980s. The basic idea is that the gravitational field of any object stands outside quantum theory. It resists being placed into awkward combinations, or "superpositions," of different states. So if a particle is made to be both here and there, its gravitational field tries to do the same—but the field cannot endure the tension for long; it collapses and takes the particle with it.

Renowned University of Oxford mathematician Roger Penrose championed the hypothesis in the late 1980s because, he says, it removes the anthropocentric notion that the measurement itself somehow causes the collapse. "It takes place in the physics, and it's not because somebody comes and looks at it." . . . 

In the new study, Diósi and other scientists looked for one of the many ways, whether by gravity or some other mechanism, that a quantum collapse would reveal itself: A particle that collapses would swerve randomly, heating up the system of which it is part. "It is as if you gave a kick to a particle," says co-author Sandro Donadi of the Frankfurt Institute for Advanced Studies.

If the particle is charged, it will emit a photon of radiation as it swerves. And multiple particles subject to the same gravitational lurch will emit in unison. "You have an amplified effect," says co-author Cătălina Curceanu of National Institute for Nuclear Physics in Rome.

To test this idea, the researchers built a detector out of a crystal of germanium the size of a coffee cup. They looked for excess x-ray and gamma ray emissions from protons in the germanium nuclei, which create electrical pulses in the material. The scientists chose this portion of the spectrum to maximize the amplification. They then wrapped the crystal in lead and placed it 1.4 kilometers underground in the Gran Sasso National Laboratory in central Italy to shield it from other radiation sources. Over 2 months in 2014 and 2015, they saw 576 photons, close to the 506 expected from naturally occurring radioactivity, they report today in Nature Physics.

By comparison, Penrose's model predicted 70,000 such photons. "You should see some collapse effect in the germanium experiment, but we don't," Curceanu says. That suggests gravity is not, in fact, shaking particles out of their quantum superpositions. (The experiment also constrained, though did not rule out, collapse mechanisms that do not involve gravity.)
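For reference, the heuristic at the heart of the model (stated here in my own words, not quoted from the paper) is that the collapse time of a superposition of two mass configurations with densities ρ₁ and ρ₂ is set by the gravitational self-energy of the difference between them:

\[
\tau\;\sim\;\frac{\hbar}{E_{G}},
\qquad
E_{G}\;\propto\;G\int d^{3}x\,d^{3}y\;
\frac{\bigl[\rho_{1}(\mathbf{x})-\rho_{2}(\mathbf{x})\bigr]\bigl[\rho_{1}(\mathbf{y})-\rho_{2}(\mathbf{y})\bigr]}{|\mathbf{x}-\mathbf{y}|},
\]

up to a convention-dependent numerical factor. E_G diverges for point masses, which is why the model needs an effective size R₀ over which each nucleus's mass density is smeared; a smaller R₀ means a larger E_G, a faster collapse, more random "kicks," and more emitted radiation. The null result at Gran Sasso therefore translates into the lower bound on R₀ (the "effective size of the mass density of nuclei") reported in the abstract, which is what rules out the "natural parameter-free version" of the model.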

Why The Sterile Neutrino Anomaly Isn't A Big Deal

Sabine Hossenfelder's latest blog post talks about the sterile neutrino anomaly seen at the Liquid Scintillator Neutrino Detector (LSND for short), which ran from 1993 to 1998, and again at the Mini Booster Neutrino Experiment (MiniBooNE) at Fermilab, running since 2003, which together seemed to show a six sigma anomaly by 2018. She wonders why it isn't a bigger deal now.

While it is common to talk about a five sigma threshold for discovery of new physics, there are really two more parts of that test: the result needs to be replicated rather than being contradicted by other experiments, and there has to be a plausible physics based theory to explain the result. 

Usually, Sabine is a voice of reason and spot on (I bought her book "Lost in Math" and agree with almost everything that she says in it). But on this score, I don't agree with her.  She states that:
15 years ago, I worked on neutrino mixing for a while, and in my impression back then most physicists thought the LSND data was just wrong and it’d not be reproduced.
But, most physicists still think that the LSND/MiniBooNE data is wrong, and it wasn't reproduced by other experiments. Instead, multiple experiments and astronomy observations using different methods that make their results robust contradict the LSND/MiniBooNE result. 

Equally important, several independent sources of systematic error were identified in the LSND data and its successor MiniBooNE experiment's data. Basically, these experiments failed to consider the mix of fuels in the nuclear reactors they were modeling, used a wrong oscillation parameter, and failed to correlate their near and far detector results, in ways that overestimated the number of neutrino events that should appear, which made it look like there were more neutrinos disappearing than there actually were.

Thus, there is very strong evidence that the LSND/MiniBooNE apparent detection of a sterile neutrino was wrong. 

Instead, there is strong evidence that there are no sterile neutrinos that oscillate with ordinary neutrinos that have masses of under 10 eV. 

For what it is worth, searches for non-standard neutrino interactions (other than CP violation) have also come up empty so far, severely constraining that possibility. See, e.g., a paper from IceCube, a paper from ANTARES, an analysis of data from Daya Bay, and a summary of results from six other experiments.

Furthermore, there are no beyond the Standard Model active neutrinos with masses of under 10 TeV. This is also an important part of the argument that there are also no fourth generation quarks or charged leptons, because, for reasons of theoretical consistency, each generation of Standard Model fundamental fermions must be complete.

Other Experiments Contradict LSND/MiniBooNE And There Are Plausible Sources Of Systematic Error

The big problem with the reactor anomaly is that these two sets of results, rather than being replicated, were repeatedly contradicted, and a plausible physics-based explanation for why they were wrong was established.

Three different recent experiments (STEREO, PROSPECT and DANSS) have contradicted the LSND/MiniBooNE result. And, the anomalies seen at LSND/MiniBooNE were determined to most likely be due to a failure to model the mix of reactor fuels between Uranium-235 and Plutonium-239 properly, resulting in an error in the predicted number of neutrino events that the actual detections were compared to in determining that there was a deficit of neutrinos that could be explained by an oscillation to one or more sterile neutrino flavors. See Matthieu Licciardi "Results of STEREO and PROSPECT, and status of sterile neutrino searches" arxiv.org (May 28, 2021) (Contribution to the 2021 EW session of the 55th Rencontres de Moriond). See also additional analysis of the fuel mix issue, additional results from Moriond 2021 (including IceCube), and the results from the MINOS, MINOS+, Daya Bay, and Bugey-3 Experiments (these may be the same experiments mentioned above with different names) which found in a preprint that was subsequently published in a peer reviewed journal:
Searches for electron antineutrino, muon neutrino, and muon antineutrino disappearance driven by sterile neutrino mixing have been carried out by the Daya Bay and MINOS+ collaborations. This Letter presents the combined results of these searches, along with exclusion results from the Bugey-3 reactor experiment, framed in a minimally extended four-neutrino scenario. Significantly improved constraints on the θ_μe mixing angle are derived that constitute the most stringent limits to date over five orders of magnitude in the sterile mass-squared splitting Δm^2_41, excluding the 90% C.L. sterile-neutrino parameter space allowed by the LSND and MiniBooNE observations at 90% CLs for Δm^2_41 < 5 eV^2. Furthermore, the LSND and MiniBooNE 99% C.L. allowed regions are excluded at 99% CLs for Δm^2_41 < 1.2 eV^2.
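For context, the quantity being constrained (standard 3+1 scenario in the two-flavor approximation; my gloss, not a quotation) is the short-baseline appearance probability

\[
P(\nu_{\mu}\rightarrow\nu_{e})\;\simeq\;\sin^{2}(2\theta_{\mu e})\,
\sin^{2}\!\left(\frac{1.27\,\Delta m^{2}_{41}\,[\mathrm{eV^{2}}]\;L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right),
\qquad
\sin^{2}(2\theta_{\mu e})=4\,|U_{e4}|^{2}\,|U_{\mu 4}|^{2},
\]

so reactor limits on |U_e4| (electron antineutrino disappearance) and accelerator limits on |U_μ4| (muon neutrino disappearance) jointly squeeze the appearance signal that LSND and MiniBooNE reported, which is how a combined disappearance analysis can exclude an appearance anomaly.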

A similar conclusion was reached using overlapping data but also data from the Planck cosmic microwave background observations here.


In addition to these issues, an analysis back in 2014 had already noticed data contradicting the sterile neutrino hypothesis at ICARUS and OPERA, and observed that some of the parameters used to make the estimates were off and that using the right ones greatly reduced the statistical significance of the anomaly. See Boris Kayser "Are There Sterile Neutrinos" (February 13, 2014). MINOS and Daya Bay had already contradicted the reactor anomaly back in 2014 as well. More recent analysis has likewise downgraded the statistical significance of the anomalies previously reported, although it has not entirely eliminated them.

Cosmology Data Strongly Disfavors Sterile Neutrinos


Cosmology also places a cap on neutrino mass, including a bound on the sum of the neutrino masses of about 0.087 eV or less, in a manner that is indifferent between sterile neutrinos lighter than about 10 eV and active neutrinos, which doesn't leave room for a reactor anomaly sterile neutrino. See Eleonora Di Valentino, Stefano Gariazzo, Olga Mena "On the most constraining cosmological neutrino mass bounds" arXiv:2106.16267 (June 29, 2021).
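For scale (my arithmetic, using approximate standard values of the oscillation splittings, Δm²₂₁ ≈ 7.5×10⁻⁵ eV² and |Δm²₃₁| ≈ 2.5×10⁻³ eV², rather than anything in the cited paper), the oscillation data alone require

\[
\sum m_{\nu}\;\gtrsim\;\sqrt{\Delta m^{2}_{21}}+\sqrt{|\Delta m^{2}_{31}|}
\;\approx\;0.009\ \mathrm{eV}+0.050\ \mathrm{eV}\;\approx\;0.06\ \mathrm{eV}
\]

in the normal mass ordering (about 0.10 eV in the inverted ordering), so a cap of roughly 0.087 eV leaves essentially no room for an additional eV-scale state that mixes with the active neutrinos.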

A far heavier sterile neutrino would not be discernible as a neutrino from cosmology data and instead would look like a type of dark matter particle. But, the LSND/MiniBooNE result was pointing to a sterile neutrino with a mass of under 5 eV, so it would be subject to the cosmology bounds.

Also, there are strict direct detection exclusions on heavier dark matter particles as well, although none of those would bar a truly sterile neutrino with no interactions with ordinary matter other than oscillations with active neutrinos.

The main criticism of reliance on cosmology data is that it is highly model dependent, even though this particular conclusion is quite robust to different cosmology models.

Limits On Active Neutrinos

We can also be comfortable that there are no additional active neutrinos (e.g. a fourth generation neutrino otherwise identical to the three Standard Model neutrinos) with masses of less than about 10 TeV, given that direct measurements paired with oscillation data limit the most massive of the three Standard Model neutrinos to not more than 0.9 eV, and cosmology data limits it to not more than about 0.09 eV.

Data from W and Z boson decays likewise tightly constrain the number of active neutrinos with masses of less than 45,000,000,000,000 meV/c^2 (i.e. 45 GeV, about half the Z boson mass) to exactly three.

Dark matter direct detection experiments have ruled out particles making up most of the hypothetical dark matter with weak force couplings equal to those of Standard Model neutrinos at masses of up to about 10 TeV (i.e. 10,000 GeV). In the chart below, that cross section is the blue dotted line marked "Z portal C(x)=1", and the experimental exclusions fall below it by a factor of roughly 1,000,000. So, even if the flux of 45 GeV+ Standard Model neutrinos were a million times smaller than the hypothetical flux of dark matter particles through Earth, they would be ruled out by the direct detection experiments up to about 10 TeV.


The Katrin experiment directly limits the lightest neutrino mass to about 0.8 eV or less, which means that all of the active neutrino masses have to be less than about 0.9 eV based upon the oscillation data. This means that the sterile neutrino mass predicted by the LSND/MiniBooNE result, which is defined relative to the active neutrino masses, still couldn't have been so massive that it would have evaded the cosmology bounds.
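The arithmetic behind the ~0.9 eV ceiling is simple (my gloss, using |Δm²₃₁| ≈ 2.5×10⁻³ eV²): even if the lightest mass eigenstate saturates the Katrin-scale bound, the oscillation splittings add almost nothing to the heaviest state,

\[
m_{\text{heaviest}}\;\approx\;\sqrt{m_{\text{lightest}}^{2}+|\Delta m^{2}_{31}|}
\;\lesssim\;\sqrt{(0.8\ \mathrm{eV})^{2}+2.5\times 10^{-3}\ \mathrm{eV^{2}}}
\;\approx\;0.80\ \mathrm{eV},
\]

comfortably below 0.9 eV.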

Neutrinoless double beta decay results rule out Majorana mass neutrinos above about 180 meV (according to the body text of the linked paper). The same experiments will soon be able to confirm or rule out the scenario of sterile neutrinos heavier than 10 eV that cosmology tools cannot constrain.

Japanese Ethnogenesis Arose From Three Components Not Two

Background

The conventional story of Japanese ethnogenesis is that the Jomon hunter-gatherer-fishing culture arrived in Japan in the Upper Paleolithic era, a few thousand years after the Last Glacial Maximum, and persisted until around 1000 BCE, when the Yayoi, a wet rice farming, cavalry warrior people, arrived from Korea and conquered the Jomon, receiving substantial genetic admixture from them but little linguistic or cultural influence. The admixture event was notable particularly for involving more male than female admixture into Yayoi society, which is the opposite of the usual pattern when one people conquers another.

The Main Findings Of A New Paper

While this story, pieced together from archaeology, linguistics, and ancient and modern population genetic analysis, isn't wrong, it misses two key points which a new paper, studying a small but representative set of ancient DNA samples with state of the art methods and large comparison data sets, reveals.

The big point is that after the Yayoi conquered the Jomon and admixed with them ca. 1000 BCE, there was a second wave of migration to Japan during the Kofun period of Japanese history, ca. 300 CE to 538 CE, by a population similar to the modern Han Chinese people, which is the source of more than 60% of the resulting population's autosomal DNA. Since that second migration event, there has been only modest introgression of additional Han Chinese-like admixture into the Japanese gene pool.


As Wikipedia explains at the link above:
The Kofun period (古墳時代, Kofun jidai) is an era in the history of Japan from about 300 to 538 AD (the date of the introduction of Buddhism), following the Yayoi period. The Kofun and the subsequent Asuka periods are sometimes collectively called the Yamato period. This period is the earliest era of recorded history in Japan, but studies depend heavily on archaeology since the chronology of historical sources tends to be distorted.

It was a period of cultural import. Continuing from the Yayoi period, the Kofun period is characterized by a strong influence from the Korean Peninsula; archaeologists consider it a shared culture across the southern Korean Peninsula, Kyūshū and Honshū.  
The word kofun is Japanese for the type of burial mound dating from this era, and archaeology indicates that the mound tombs and material culture of the elite were similar throughout the region. 
From China, Buddhism and the Chinese writing system were introduced near the end of the period. The Kofun period recorded Japan's earliest political centralization, when the Yamato clan rose to power in southwestern Japan, established the Imperial House, and helped control trade routes across the region.
Incidentally, this influx of East Asians into Japan started at the fall of the Han dynasty (the Three Kingdoms period), during a chaotic time of northern barbarian uprisings and invasions when dynasties were being formed in the north and the south (in addition to the Yamato dynasty of Japan).

The second point is relatively minor. The Yayoi people had a Northeast Asian genetic affinity akin, for example, to Manchurians, rather than to Han Chinese people. This insight emerges from the big discovery that the non-Jomon component of Japanese population genetics can be broken down into two distinct waves of migrations, which, when separated, are clearly derived from genetically distinct populations.

The timing of the waves of migration is supported by genetic admixture data estimates. As the body text of the new paper explains:
[W]e find support for a two-pulse model from our dating of the admixture in the Kofun individuals by DATES.

A single admixture event with the intermediate population (i.e., YR_LBIA) is estimated to have occurred 1840 ± 213 years before the present (B.P.), which is much later than the onset of the Yayoi period (~3 ka ago).

In contrast, if two separate admixture events with two distinct sources are assumed, the resulting estimates reasonably fit the timings consistent with the beginning of the Yayoi and Kofun periods (3448 ± 825 years B.P. for the admixture between Jomon and Northeast Asian ancestry and 1748 ± 175 years B.P. for Jomon and East Asian ancestry). These genetic findings are further supported by both the archaeological evidence and the historical records, which document the arrival of new people from the continent during the period.
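As a rough sanity check on those fits (my conversion, assuming "before present" is anchored at 1950 CE as in the radiocarbon convention, which may not be exactly the convention DATES uses):

\[
1950-(1748\pm 175)\ \text{yr}\;\approx\;200\ \mathrm{CE}\pm 175\ \text{yr}
\quad(\text{roughly 25–375 CE}),
\qquad
1950-(3448\pm 825)\ \text{yr}\;\approx\;1500\ \mathrm{BCE}\pm 825\ \text{yr},
\]

which respectively bracket the conventional ~300 CE start of the Kofun period and the ~1000 BCE onset of the Yayoi migrations.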
Historical Context

The background material in the introduction, is, as is often the case, very helpful:
The Japanese archipelago has been occupied by humans for at least 38,000 years. However, its most radical cultural transformations have only occurred within the past 3000 years, during which time its inhabitants quickly transitioned from foraging to widespread rice farming to a technologically advanced imperial state. These rapid changes, coupled with geographical isolation from continental Eurasia, make Japan a unique microcosm in which to study the migratory patterns that accompanied agricultural spread and economic intensification in Asia. Before the arrival of farming cultures, the archipelago was occupied by diverse hunter-gatherer-fisher groups belonging to the Jomon culture, characterized by their use of pottery. The Jomon period began during the Oldest Dryas that followed the Last Glacial Maximum (LGM), with the earliest pottery shards dating to ~16,500 years ago (ka ago), making these populations some of the oldest users of ceramics in the world. Jomon subsistence strategies varied and population densities fluctuated through space and time, with trends toward sedentism. This culture continued until the beginning of the Yayoi period (~3 ka ago), when the arrival of paddy field rice cultivation led to an agricultural revolution in the archipelago. This was followed by the Kofun period, starting ~1.7 ka ago, which saw the emergence of political centralization and the imperial reign that came to define the region. 
An enduring hypothesis on the origin of modern Japanese populations proposes a dual-structure model, in which Japanese populations are the admixed descendants of the indigenous Jomon and later arrivals from the East Eurasian continent during the Yayoi period. This hypothesis was originally proposed on the basis of morphological data but has been widely tested and evaluated across disciplines. Genetic studies have identified population stratifications within present-day Japanese populations, supporting at least two waves of migrations to the Japanese archipelago. Previous ancient DNA studies have also illustrated the genetic affinity of Jomon and Yayoi individuals to Japanese populations today. Still, the demographic origins and impact of the agricultural transition and later state formation phase are largely unknown. From a historical linguistic standpoint, the arrival of proto-Japonic language is theorized to map to the development of Yayoi culture and the spread of wet rice cultivation. However, archaeological contexts and their continental affiliations are distinct between the Yayoi and Kofun periods; whether the spread of knowledge and technology was accompanied by major genetic exchange remains elusive.

Significance 

The mechanism described in this paper does a lot to explain why the Jomon weren't simply crushed demographically the way hunter-gatherers in other parts of the world were upon encountering farming societies. 

More generally, knowing that there were sequential Manchurian and Chinese contributions to the formation of the Japanese People, and their order and distance from each other in time, also helps in reconstructing puzzles about how Japanese culture came to be the way that it is now. 

The paper demonstrates that the current population genetic mix in Japan is very recent, on the order of 1600 years old. In contrast, the population genetic mix of Europe was largely fixed by the end of the Bronze Age about 3200 years ago.

This paper sheds far more specific light on the context in which the Japanese language arose, which strengthens the Altaic hypothesis and more generally favors some theories about historical Japanese linguistics while disfavoring others.

The paper calls attention to a larger Tungusic linguistic and cultural area, now almost forgotten, that was a social, political, and military reality which peaked at a time on the boundary between history and prehistory.

The paper's findings regarding the relative homogeneity of Jomon genetics over thousands of years and the entire expanse of Japanese territory support the hypothesis that the Jomon may have shared a single language family, presumably one close to Ainu, while disfavoring deep schisms adding structure that divides this population.

The Paper

The paper and its abstract are:
Prehistoric Japan underwent rapid transformations in the past 3000 years, first from foraging to wet rice farming and then to state formation. A long-standing hypothesis posits that mainland Japanese populations derive dual ancestry from indigenous Jomon hunter-gatherer-fishers and succeeding Yayoi farmers. However, the genomic impact of agricultural migration and subsequent sociocultural changes remains unclear. We report 12 ancient Japanese genomes from pre- and postfarming periods. Our analysis finds that the Jomon maintained a small effective population size of ~1000 over several millennia, with a deep divergence from continental populations dated to 20,000 to 15,000 years ago, a period that saw the insularization of Japan through rising sea levels. Rice cultivation was introduced by people with Northeast Asian ancestry. Unexpectedly, we identify a later influx of East Asian ancestry during the imperial Kofun period. These three ancestral components continue to characterize present-day populations, supporting a tripartite model of Japanese genomic origins.
Niall P. Cooke, et al., "Ancient genomics reveals tripartite origins of Japanese populations" 7(38) Science Advances (September 17, 2021) DOI: 10.1126/sciadv.abh2419

Genetic Details

The genetic data was summarized as follows in the body text with uniparental genetic haplogroups considered:
Mitochondrial haplogroups for all Jomon individuals belong to the N9b or M7a clades, which are strongly associated with this population and rare outside of Japan today. The three Jomon males belong to the Y chromosome haplogroup D1b1, which is present in modern Japanese populations but almost absent in other East Asians. 

In contrast, the Kofun individuals all belong to mitochondrial haplogroups that are common in present-day East Asians, while the single Kofun male has the O3a2c Y chromosome haplogroup, which is also found throughout East Asia, particularly in mainland China.



Further Analysis Of Each Of The Three Components

The discussion section of the paper spells out a narrative and clarifies the geographic origin and timing of each population genetic wave of migration to Japan.

The Jomon
The lineage ancestral to Jomon is proposed to have originated in Southeast Asia with a deep divergence from other ancient and present-day East Asians. The timing of this divergence was previously estimated to be between 18 and 38 ka ago; our modeling with the ROH profile of the 8.8-ka-old Jomon individual narrows this date to a lower limit within the range of 20 to 15 ka ago. The Japanese archipelago had become accessible through the Korean Peninsula at the beginning of the LGM (28 ka ago), enabling population movements between the continent and archipelago. The subsequent widening of the Korea Strait 17 to 16 ka ago due to rising sea levels may have led to the isolation of the Jomon lineage from the rest of the continent and also coincides with the oldest evidence of Jomon pottery production. Our ROH modeling also shows that the Jomon maintained a small effective population size of ~1000 during the Initial Jomon period, and we observe very little changes to their genomic profile in subsequent periods or across the different islands of the archipelago.

A TreeMix analysis places the Jomon as an offshoot of the Hoabinhian people (a Mesolithic wave of people in Southeast Asia and Southern China ca. 12,000 to 10,000 BCE), with the Kusunda people (hunter-gatherers in Western Nepal who historically spoke a language isolate and were religiously animistic) as an intermediate population. 

Y-DNA haplogroup D has a cryptic distribution found in isolated pockets across Asia including Siberia and Tibet that tends to favor a Northern route origin. 

The mtDNA haplogroups N9b and M7a also tell a story so deep in history (both are very basal in the Eurasian mtDNA tree and derived from African mtDNA haplogroup L3) that it is hard to reconstruct. Both mtDNA M and mtDNA N show distributions that tend to favor a Southeast Asian route to Japan, but perhaps this is because the northern bearers of these haplogroups went extinct and were then almost fully replaced around the Last Glacial Maximum. 

See also this paper on Jomon and Ainu mtDNA, noting that by the Edo period, 29/94 of Ainu people in Hokkaido had characteristically Jomon mtDNA, i.e. mtDNA N9b1, M7a2, and G1b*; 33/94 had Okhotsk mtDNA (a NE Asian population with Y1 and C5a2b); 6/94 had Siberian mtDNA (D4o1, G1b1 and Z1a); and 26/94 had "mainland" Japan mtDNA (D4xD4o1, M7b1a1a1, F1b1a, N9a, M7a1a7, A5a, and A5c) (a classification I'm not inclined to fully agree with).

The Yayoi 

The description of the Yayoi as a distinct wave tends to support the hypothesis that Japanese is part of an Altaic macro-language family, with the Korean and Japanese languages probably having the closest affinity to the Tungusic languages of Eastern Siberia and Manchuria (shown in red below) prior to heavy borrowing from Chinese (each of the three maps below is from Wikipedia). As the Altaic language link above explains:

With fewer speakers than Mongolic or Turkic languages, Tungusic languages are distributed across most of Eastern Siberia (including the Sakhalin Island), northern Manchuria and extending into some parts of Xinjiang and Mongolia. Some Tungusic languages are extinct or endangered languages as a consequence of language shift to Chinese and Russian. In China, where the Tungusic population is over 10 million, just 46,000 still retain knowledge of their ethnic languages.

Scholars have yet to reach agreement on how to classify the Tungusic languages but two subfamilies have been proposed: South Tungusic (or Manchu) and North Tungusic (Tungus). Jurchen (now extinct; Da Jin 大金), Manchu (critically endangered; Da Qing 大清), Sibe (Xibo 锡伯) and other minor languages comprise the Manchu group.

The Northern Tungusic languages can be reclassified even further into the Siberian Tungusic languages (Evenki, Lamut, Solon and Negidal) and the Lower Amur Tungusic languages (Nanai, Ulcha, Orok to name a few).

Significant disagreements remain, not only about the linguistic sub-classifications but also some controversy around the Chinese names of some ethnic groups, like the use of Hezhe (赫哲) for the Nanai people.




The spread of agriculture is often marked by population replacement, as documented in the Neolithic transition throughout most of Europe, with only minimal contributions from hunter-gatherer populations observed in many regions. However, we find genetic evidence that the agricultural transition in prehistoric Japan involved the process of assimilation, rather than replacement, with almost equal genetic contributions from the indigenous Jomon and new immigrants at the Kyushu site. This implies that at least some parts of the archipelago supported a Jomon population of comparable size to the agricultural immigrants at the beginning of the Yayoi period, as it is reflected in the high degree of sedentism practiced by some Jomon communities. 
The continental component inherited by the Yayoi is best represented in our dataset by the Middle Neolithic and Bronze Age individuals from the West Liao River basin with a high level of Amur River ancestry (i.e., WRL_BA_o and HMMH_MN). Populations from this region are genetically heterogeneous in time and space. The Middle-to-Late Neolithic transition (i.e., between 6.5 and 3.5 ka ago) is characterized with an increase in Yellow River ancestry from 25 to 92% but a decrease in Amur River ancestry from 75 to 8% over time, which can be linked to an intensification of millet farming. However, the population structure changes again in the Bronze Age, which started around 3.5 ka ago, due to an apparent influx of people from the Amur River basin. This coincides with the beginning of intensive language borrowing between Transeurasian and Sinitic linguistic subgroups. Excess affinity to the Yayoi is observable in the individuals who are genetically close to ancient Amur River populations or present-day Tunguisic-speaking populations. Our findings imply that wet rice farming was introduced to the archipelago by people who lived somewhere around the Liaodong Peninsula but who derive a major component of their ancestry from populations further north, although the spread of rice agriculture originated south of the West Liao River basin. 

Further linguistic analysis can be found in another recent paper referenced in this one, which argues that the original macro-Altaic homeland was an early Neolithic one in the West Liao River basin.

The Kofun

The Kofun wave of migration was Han Chinese-like and from the Southern Korean peninsula. 

The most noticeable archaeological characteristic of Kofun culture is the custom of burying the elite in keyhole-shaped mounds, the size of which reflect hierarchical rank and political power. The three Kofun individuals sequenced in this study were not buried in those tumuli, which suggests that they were lower-ranking people. Their genomes document the arrival of people with majority East Asian ancestry to Japan and their admixture with the Yayoi population. This additional ancestry is best represented in our analysis by Han, who have multiple ancestral components. A recent study has reported that people became morphologically homogeneous in the continent from the Neolithic onward, which implies that migrants during the Kofun period were already highly admixed. 
Several lines of archaeological evidence support the introduction of new large settlements to Japan, most likely from the southern Korean peninsula, during the Yayoi-Kofun transition. Strong cultural and political affinity between Japan, Korea, and China is also observable from several imports, including Chinese mirrors and coins, Korean raw materials for iron production, and Chinese characters inscribed on metal implements (e.g., swords). Access to these resources from overseas brought about intensive competition between communities within the archipelago; this facilitated political contact with polities in the continent, such as the Yellow Sea coast, for dominance. Therefore, continuous migration and continental impacts are evident throughout the Kofun period. Our findings provide strong support for the genetic exchange involved in the appearance of new social, cultural, and political traits in this state formation phase.
The Ancient DNA Samples Used

The paper looks at nine new Jomon ancient DNA samples in addition to three previously published Jomon ancient DNA samples (in all, four men and eight women from ca. 6819 BCE to 569 BCE), two previously published Yayoi ancient DNA samples (a man from ca. 44 CE and a woman), and three new Kofun ancient DNA samples (a man and two women from ca. 622 CE to 675 CE) (although the dating would suggest that they are actually from the Asuka era).

The paper further explains its samples as follows:

Here, we report 12 newly sequenced ancient Japanese genomes spanning 8000 years of the archipelago’s pre- and protohistory. To our knowledge, this is the largest set of time-stamped genomes from the archipelago, including the oldest Jomon individual and the first genomic data from the imperial Kofun period. We also include five published prehistoric Japanese genomes in our analysis: three Jomon individuals (F5 and F23 from the Late Jomon period and IK002 from the Final Jomon period), as well as two 2000-year-old individuals associated with the Yayoi culture from the northwestern part of Kyushu Island, where skeletal remains exhibit Jomon-like characters rather than immigrant types but other archaeological materials clearly support their association with the Yayoi culture. Despite this morphological assessment, these two Yayoi individuals show an increased genetic affinity to present-day Japanese populations compared with the Jomon, implying that admixture with continental groups was already advanced by the Late Yayoi period.

The paper later notes that:

Our kinship analysis confirms that all pairs of individuals are unrelated. 

The conclusion of the paper notes that the main limitation of its findings is the small number of ancient DNA samples for each period, which may not capture geographic diversity and population structure within each time period (especially the Yayoi and Kofun periods).

Monday, September 20, 2021

Another Voice In the Gravity And Tully-Fisher Conversation (And More)

None of this is unfamiliar to me, but it is nice to see more people having this epiphany. Of course, the next step that this author needs to take after this initial baby step is to imagine what kind of physics would be necessary to produce this kind of structure.
The flattening of spiral-galaxy rotation curves is unnatural in view of the expectations from Kepler's third law and a central mass. It is interesting, however, that the radius-independence velocity is what one expects in one less dimension. In our three-dimensional space, the rotation curve is natural if, outside the galaxy's center, the gravitational potential corresponds to that of a very prolate ellipsoid, filament, string, or otherwise cylindrical structure perpendicular to the galactic plane. While there is observational evidence (and numerical simulations) for filamentary structure at large scales, this has not been discussed at scales commensurable with galactic sizes. If, nevertheless, the hypothesis is tentatively adopted, the scaling exponent of the baryonic Tully--Fisher relation due to accretion of visible matter by the halo comes out to reasonably be 4. At a minimum, this analytical limit would suggest that simulations yielding prolate haloes would provide a better overall fit to small-scale galaxy data.
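The point in the abstract can be seen with a one-line Newtonian estimate (mine, not the paper's): for a point mass M the circular velocity falls off in the Keplerian way, while for an infinite line (cylindrical) mass of linear density λ the field falls only as 1/r and the circular velocity is flat:

\[
\frac{v^{2}}{r}=\frac{GM}{r^{2}}\;\Rightarrow\;v=\sqrt{\frac{GM}{r}}\propto r^{-1/2},
\qquad
\frac{v^{2}}{r}=\frac{2G\lambda}{r}\;\Rightarrow\;v=\sqrt{2G\lambda}=\text{constant}.
\]

This is the sense in which a flat rotation curve is what one expects "in one less dimension": the effective source looks like a line rather than a point.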

UPDATE September 21, 2021

A couple more articles in the same vein. 

The first is very akin to Deur's effort to infer dark sector phenomena from an analysis of General Relativity that grapples with the removal of simplifying assumptions often used to make it possible to obtain a clean analytic solution, although this approach is inspired by statistical mechanics rather than by quantum chromodynamics.
Inspired by the statistical mechanics of an ensemble of interacting particles (BBGKY hierarchy), we propose to account for small-scale inhomogeneities in self-gravitating astrophysical fluids by deriving a non-ideal Virial theorem and non-ideal Navier-Stokes equations. These equations involve the pair radial distribution function (similar to the two-point correlation function used to characterize the large-scale structures of the Universe), similarly to the interaction energy and equation of state in liquids. Within this framework, small-scale correlations lead to a non-ideal amplification of the gravitational interaction energy, whose omission leads to a missing mass problem, e.g., in galaxies and galaxy clusters. 
We propose to use a decomposition of the gravitational potential into a near- and far-field component in order to account for the gravitational force and correlations in the thermodynamics properties of the fluid. Based on the non-ideal Virial theorem, we also propose an extension of the Friedmann equations in the non-ideal regime and use numerical simulations to constrain the contribution of these correlations to the expansion and acceleration of the Universe. 
We estimate the non-ideal amplification factor of the gravitational interaction energy of the baryons to lie between 5 and 20, potentially explaining the observed value of the Hubble parameter (since the uncorrelated energy account for ∼ 5%). Within this framework, the acceleration of the expansion emerges naturally because of the increasing number of sub-structures induced by gravitational collapse, which increases their contribution to the total gravitational energy. A simple estimate predicts a non-ideal deceleration parameter q(ni) ≃ -1; this is potentially the first determination of the observed value based on an intuitively physical argument. We show that another consequence of the small-scale gravitational interactions in bound structures (spiral arms or local clustering) yields a transition to a viscous regime that can lead to flat rotation curves. This transition can also explain the dichotomy between (Keplerian) LSB elliptical galaxy and (non-Keplerian) spiral galaxy rotation profiles. Overall, our results demonstrate that non-ideal effects induced by inhomogeneities must be taken into account, potentially with our formalism, in order to properly determine the gravitational dynamics of galaxies and the larger scale universe. 
P. Tremblin, et al., "Non-ideal self-gravity and cosmology: the importance of correlations in the dynamics of the large-scale structures of the Universe" arXiv:2109.09087 (September 19, 2021) (submitted to A&A, original version submitted in 2019).

The second generalizes modified gravity approaches to explaining dark matter in galaxies within the traditional geometric paradigm.
We obtain more straightforwardly some features of dark matter distribution in the halos of galaxies by considering the spherically symmetric space-time, which satisfies the flat rotational curve condition, and the geometric equation of state resulting from the modified gravity theory. In order to measure the equation of state for dark matter in the galactic halo, we provide a general formalism taking into account the modified f(X) gravity theories. Here, f(X) is a general function of X∈{R, G, T}, where R, G, and T are the Ricci scalar, the Gauss-Bonnet scalar and the torsion scalar, respectively. These theories yield that the flat rotation curves appear as a consequence of the additional geometric structure accommodated by those of modified gravity theories. Constructing a geometric equation of state w(X)≡p(X)/ρ(X) and inspiring by some values of the equation of state for the ordinary matter, we infer some properties of dark matter in galactic halos of galaxies.
Ugur Camci, "On Dark Matter As A Geometric Effect in the Galactic Halo" arXiv:2109.09466, 366 Astrophys. Space Sci. 91 (September 17, 2021) DOI: 10.1007/s10509-021-03997-5

While not in the same vein, so it doesn't get lost in the shuffle, I also note a new paper which identifies a potential source of systematic error that could help explain the discrepancy between the Hubble constant measurements at high z and low z. 

While papers with new physics explanations for the Hubble constant anomaly abound, given the history of prior Hubble constant anomalies, and of prior anomalies in fundamental physics generally, papers identifying potential sources of systematic error deserve outsized attention, because most anomalies in fundamental physics are ultimately resolved by discovering such errors.
The bias in the determination of the Hubble parameter and the Hubble constant in the modern Universe is discussed. It could appear due to statistical processing of data on galaxies redshifts and estimated distances based on some statistical relations with limited accuracy. This causes a number of effects leading to either underestimation or overestimation of the Hubble parameter when using any methods of statistical processing, primarily the least squares method (LSM). The value of the Hubble constant is underestimated when processing a whole sample; when the sample is constrained by distance, especially when constrained from above, it is significantly overestimated due to data selection. The bias significantly exceeds the values of the error the Hubble constant calculated by the LSM formulae.

These effects are demonstrated both analytically and using Monte Carlo simulations, which introduce deviations in both velocities and estimated distances to the original dataset described by the Hubble law. The characteristics of the deviations are similar to real observations. Errors in estimated distances are up to 20%. They lead to the fact that when processing the same mock sample using LSM, it is possible to obtain an estimate of the Hubble constant from 96% of the true value when processing the entire sample to 110% when processing the subsample with distances limited from above.

The impact of these effects can lead to a bias in the Hubble constant obtained from real data and an overestimation of the accuracy of determining this value. This may call into question the accuracy of determining the Hubble constant and significantly reduce the tension between the values obtained from the observations in the early and modern Universe, which were actively discussed during the last year.
S.L. Parnovsky, "Bias of the Hubble constant value caused by errors in galactic distance indicators" arXiv:2109.09645 (September 20, 2021) (accepted for publication in Ukr. J. Phys.).
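The selection effect described in the abstract is easy to reproduce with a toy Monte Carlo. The sketch below is my own illustration, not the paper's code: the survey depth, the 20% log-normal distance errors, and the distance cut are made-up parameters, and peculiar velocities are omitted for simplicity. It fits the Hubble constant by least squares through the origin, first on the whole mock sample and then on a subsample truncated from above in estimated distance, and it shows the same qualitative underestimate and overestimate the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(42)
H0_TRUE = 70.0          # km/s/Mpc, true value used to build the mock
N_GAL = 200_000
D_MAX = 150.0           # Mpc, depth of the mock survey (assumed)
SIGMA_LND = 0.2         # ~20% log-normal scatter in estimated distances
D_CUT = 100.0           # Mpc, upper limit imposed on *estimated* distances

# True distances uniform in volume; velocities follow the Hubble law exactly.
d_true = D_MAX * rng.random(N_GAL) ** (1.0 / 3.0)
v = H0_TRUE * d_true

# Estimated distances carry multiplicative (log-normal) errors.
d_est = d_true * rng.lognormal(mean=0.0, sigma=SIGMA_LND, size=N_GAL)

def lsm_hubble(vel, dist):
    """Least-squares slope of v = H * d through the origin."""
    return np.sum(vel * dist) / np.sum(dist ** 2)

h_full = lsm_hubble(v, d_est)           # whole sample: biased low
sel = d_est < D_CUT
h_cut = lsm_hubble(v[sel], d_est[sel])  # distance-limited subsample: biased high

print(f"full sample:       H = {h_full:5.1f}  ({100 * h_full / H0_TRUE:.0f}% of true)")
print(f"d_est < {D_CUT:.0f} Mpc:   H = {h_cut:5.1f}  ({100 * h_cut / H0_TRUE:.0f}% of true)")
```

With these assumed parameters the whole-sample fit lands a few percent below the true value, while the distance-limited fit lands above it, because galaxies whose distances are underestimated preferentially scatter into the truncated subsample.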

Sunday, September 19, 2021

More Galaxy Dynamics Data And Modified Gravity Theories

There are several new papers examining galactic dynamics which are influenced by dark matter and/or gravity modification (or novel ways of operationalizing non-linear general relativity effects).

The most notable one finds that ultra-diffuse galaxies in the relatively strong gravitational fields within galaxy clusters, which the external field effect of the MOND paradigm would be expected to cause to behave more as if they had no dark matter or gravity modification, actually seem to behave like ordinary galaxies, yet another quirk of galaxy cluster behavior where the MOND paradigm is missing something.

The next four explore methods to extend analysis of the baryonic Tully-Fisher relation (which MOND predicts, but which could also arise by other means, and which is observed in all larger galaxies) to galaxies that are small, remote in time, or otherwise hard to measure. The relation holds in these circumstances once the challenges of dealing with these observations are properly addressed.

The last measures an apparent radial acceleration relation (RAR) in galaxy clusters with a tight fit, but with a characteristic acceleration constant that is roughly a factor of ten higher than in galaxies and not perfectly consistent between that paper and other prior work in the field.

The Missing EFE In The Coma Cluster

The external field effect is not showing up in ultra-diffuse galaxies in the Coma cluster, even though galaxy clusters are a context in which MOND has previously underestimated dark matter effects. 
The tight radial acceleration relation (RAR) obeyed by rotationally supported disk galaxies is one of the most successful a priori prediction of the modified Newtonian dynamics (MOND) paradigm on galaxy scales. 
Another important consequence of MOND as a classical modification of gravity is that the strong equivalence principle (SEP) - which requires the dynamics of a small free-falling self-gravitating system to not depend on the external gravitational field in which it is embedded - should be broken. Multiple tentative detections of this so-called external field effect (EFE) of MOND have been made in the past, but the systems that should be most sensitive to it are galaxies with low internal gravitational accelerations residing in galaxy clusters, within a strong external field. 
Here, we show that ultra-diffuse galaxies (UDGs) in the Coma cluster do lie on the RAR, and that their velocity dispersion profiles are in full agreement with isolated MOND predictions, especially when including some degree of radial anisotropy. However, including a breaking of the SEP via the EFE seriously deteriorates this agreement. 
We discuss various possibilities to explain this within the context of MOND, including a combination of tidal heating and higher baryonic masses. We also speculate that our results could mean that the EFE is screened in cluster UDGs. The fact that this would happen precisely within galaxy clusters, where classical MOND fails, could be especially relevant for the nature of the residual MOND missing mass in clusters of galaxies.

The account of MOND in Sections 1.1 and 1.3 of the paper is fairly well done:
This approach, known as Modified Newtonian Dynamics (MOND; see, e.g., Sanders & McGaugh 2002; Famaey & McGaugh 2012; Milgrom 2014, for reviews) postulates that the gravitational acceleration g approaches √ g(N)a(0) when the Newtonian gravitational acceleration g(N) falls below a characteristic acceleration scale a(0) ≈ 10^−10 m s^−2 , but remains Newtonian above this threshold. This allows one to directly predict the dynamics of galaxies from their baryonic mass distribution alone. This empirical modification of the gravitational law was initially proposed (Milgrom 1983a,b,c) to solve the missing mass problem in the high surface brightness galaxies known at the time, in particular their asymptotically flat circular velocity curves (e.g., Bosma 1978; Rubin et al. 1978; Faber & Gallagher 1979). It is particularly intriguing that this simple recipe has survived almost 40 years of scrutiny at galactic scales, as it has been able to predict the dynamics of a wide variety of galaxies (e.g., Begeman et al. 1991; Sanders 1996; McGaugh & de Blok 1998; Sanders & Verheijen 1998; de Blok & McGaugh 1998; Sanders & Noordermeer 2007; Gentile et al. 2007b; Swaters et al. 2010; Gentile et al. 2011; Famaey & McGaugh 2012; Milgrom 2012; McGaugh & Milgrom 2013a,b; Sanders 2019), including low surface brightness and dwarf galaxies where internal accelerations can be well below a0 such that the MOND acceleration should a priori deviate significantly from the Newtonian acceleration. This was a core prediction of the original MOND papers, and one of its most intriguing successes. MOND can also provide possible answers to various other puzzles in galaxy dynamics, such as the prevalence of bulgeless disks (Combes 2014) and of fast bars (Tiret & Combes 2007, 2008; Roshan et al. 2021), or the detailed kinematics of polar ring galaxies (Lüghausen et al. 2013). Generally speaking, there now appears to be a clear and direct connection between the baryonic mass distribution and the rotation curve in most disk galaxies, known as the radial acceleration relation (RAR; McGaugh et al. 2016; Lelli et al. 2017), and this empirical relation is actually indistinguishable from the original MOND prescription (Li et al. 2018). Evaluating whether the RAR holds for all types of galaxies and in all environments is thus of high importance to assess the viability of MOND as an alternative to particle DM in galaxies.

However, it is important to note that it was also originally predicted that the non-linearity of the MOND acceleration should typically lead to a violation of the strong equivalence principle of GR, according to which the internal dynamics of a self-gravitating system embedded in a constant gravitational field should not depend on the external field strength. Within MOND, systems embedded in an external field stronger than their internal one should experience an ‘external field effect’ (EFE; Milgrom 1983c; Bekenstein & Milgrom 1984; Famaey & McGaugh 2012; McGaugh & Milgrom 2013a,b; Milgrom 2014; Wu & Kroupa 2015; Haghi et al. 2019b) whose consequence is notably that the deviations from Newtonian dynamics are suppressed if the external field is strong enough, and in particular if it is larger than a(0). Its influence can be important for the stability and secular evolution of galaxies even when it is weak (Banik et al. 2020), and it can create interesting features such as asymmetric tidal tails of globular clusters (Thomas et al. 2018). The EFE is an observational necessity to allow, e.g., the dynamics of wide binary stars to remain consistent with MOND (Pittordis & Sutherland 2019, Banik 2019, although see also Hernandez et al. 2021). Because of this EFE, a rotationally supported (pressure-supported) system in isolation is expected to have a higher rotational velocity (velocity dispersion) than the same system around a massive host (e.g. Wu et al. 2007; Gentile et al. 2007a; McGaugh & Milgrom 2013a,b; Pawlowski & McGaugh 2014; Pawlowski et al. 2015; McGaugh 2016; Hees et al. 2016; Haghi et al. 2016; Müller et al. 2019; Chae et al. 2020). In particular, the latter should not follow the RAR, contrary to the more isolated systems that should lie on the RAR. This breaking of the strong equivalence principle should be a smoking gun of MOND, and it is therefore important to test it for galaxies with internal gravitational accelerations lower than the external field in which they are embedded: this is the focus of the present work.

The need for DM within GR is of course not limited to galaxies. Expanding MOND predictions to the cosmological regime needs a relativistic framework for the paradigm. In order to retain the success of the standard ΛCDM cosmological model on large scales, some hybrid models have for instance been proposed, where GR is retained but gravity is effectively modified in galaxies through some exotic properties of DM itself, such as in dipolar DM (Blanchet & Le Tiec 2009; Bernard & Blanchet 2015; Blanchet & Heisenberg 2015) or superfluid DM (Khoury 2015; Berezhiani & Khoury 2015; Berezhiani et al. 2018, 2019). More traditional relativistic MOND theories rely on a multi-field framework (typically with a scalar and a vector field in addition to the metric), as originally proposed by Bekenstein (2004), but adapted to pass the most recent constraints from gravitational waves (Skordis & Złosnik 2019). It has recently been shown, as a proof of concept, how the angular power spectrum of the Cosmic Microwave Background (CMB) could be reproduced in such a framework (Skordis & Złosnik 2020): the scalar field, which gives rise to the MOND behaviour in the quasi-static limit, also plays the role of DM in the time-dependent cosmological regime, thereby providing an analogue to cosmological DM for the CMB.

However, the real challenge for such an approach, and for MOND in general, is to explain the mass discrepancy in galaxy clusters. It has indeed long been known that applying the MOND recipe to galaxy clusters yields a residual missing mass problem in these objects (e.g., Sanders 1999, 2003; Pointecouteau & Silk 2005; Natarajan & Zhao 2008; Angus et al. 2008). This is essentially because, contrary to the case of galaxies, there is observationally a need for DM even where the observed acceleration is larger than a(0), meaning that the MOND prescription is not enough to explain the observed discrepancy. In the central parts of clusters, the ratio of MOND dynamical mass to observed baryonic mass can reach a value of 10. This cluster missing mass problem extends to giant ellipticals residing at the center of clusters (Bílek et al. 2019b). It is also clear that this residual missing mass must be collisionless (Clowe et al. 2006; Angus et al. 2007), and it has hence been proposed that it could be made of cold, dense molecular gas clouds (Milgrom 2008) or some form of hot dark matter (HDM) such as sterile neutrinos, which would not condense on galaxy scales (Angus et al. 2010; Haslbauer et al. 2020). In such cases, the residual missing mass should be an important gravitational source contributing to the EFE acting on galaxies residing in clusters. On the other hand, if the residual MOND missing mass problem would itself be a gravitational phenomenon, it would then not necessarily contribute as a source to the EFE. Therefore, studying the dynamics of galaxies residing in galaxy clusters, and in particular whether the EFE can be detected there, should provide powerful constraints for relativistic model-building in the MOND context, as well as illuminate our understanding of scaling relations with environment in the cold dark matter (CDM) paradigm. Galaxies with a very low internal gravity, hence ultra-diffuse ones, are best suited for such a study. . . . 
1.3. UDGs in MOND

UDGs in clusters provide a testing ground for MOND and the EFE given the singularly low internal accelerations stemming from their low surface brightness and the strong external field. The small velocity dispersion observed in the two group UDGs NGC 1052-DF2 and NGC 1052-DF4, inferring dynamical masses close to their stellar masses, was initially interpreted as a challenge for MOND (van Dokkum et al. 2018, 2019a). Indeed, the dynamical effect attributed to DM in the CDM model, and to a modification of the gravitational law within MOND in isolation, would be absent. But taking the EFE into account removes or significantly lessens the tension (Famaey et al. 2018; Kroupa et al. 2018; Müller et al. 2019; Haghi et al. 2019b). On the other hand, the large velocity dispersion of the Coma cluster UDG DF44 (van Dokkum et al. 2016, 2019b) and its relative agreement with the isolated MOND prediction without EFE has been used to place constraints on its distance from the cluster center within MOND, or on a potential need for an additional baryonic mass (Bílek et al. 2019a; Haghi et al. 2019a).

Different approaches have been used to take the EFE into account in this context: Kroupa et al. (2018) and Haghi et al. (2019b) use fitting functions for the one-dimensional line-of-sight velocity dispersion in an external field stemming from MOND N-body simulations by Haghi et al. (2009); Famaey et al. (2018) and Müller et al. (2019) use the one-dimensional analytical expression for the acceleration field in the presence of an external field from Famaey & McGaugh (2012), Eq. (59), together with the Wolf et al. (2010) relation for the line-of-sight velocity dispersion. Bílek et al. (2019a) and Haghi et al. (2019a) do not quantitatively assess the EFE on the velocity dispersion for DF44, since the data required as little of it as possible. We propose here to examine more quantitatively the question of the EFE in this galaxy, and to expand the study to a larger sample of UDGs.
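For concreteness, the MOND prescription and the RAR fitting function mentioned in the quoted passage can be written down in a few lines. The sketch below is my own illustration, not something taken from this particular paper: the "simple" interpolating function and the value a(0) = 1.2*10^-10 m/s^2 are common conventional choices (consistent with the ≈10^-10 m/s^2 quoted above). It shows how the predicted acceleration approaches √(g(N)a(0)) deep in the low-acceleration regime and reverts to nearly Newtonian behavior well above a(0).

```python
import numpy as np

A0 = 1.2e-10  # m/s^2, commonly quoted MOND acceleration scale (assumed value)

def g_mond_simple(g_newton):
    """MOND acceleration using the 'simple' interpolating function mu(x) = x/(1+x).
    Solving g * mu(g/a0) = g_N for g gives a closed-form expression."""
    return 0.5 * g_newton + np.sqrt(0.25 * g_newton ** 2 + g_newton * A0)

def g_rar(g_newton):
    """Empirical RAR fitting function of McGaugh, Lelli & Schombert (2016):
    g_obs = g_N / (1 - exp(-sqrt(g_N / g_dagger))), with g_dagger ~ a0."""
    return g_newton / (1.0 - np.exp(-np.sqrt(g_newton / A0)))

for gN in (1e-8, 1e-10, 1e-12):  # high, intermediate, and deep-MOND accelerations
    print(f"g_N = {gN:.0e}  simple-mu: {g_mond_simple(gN):.2e}"
          f"  RAR fit: {g_rar(gN):.2e}  sqrt(g_N*a0): {np.sqrt(gN * A0):.2e}")
```

At g(N) = 10^-12 m/s^2 both functions return roughly 1.1*10^-11 m/s^2, essentially √(g(N)a(0)), while at g(N) = 10^-8 m/s^2 they are nearly Newtonian, which is the behavior the quoted passage describes.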
The paper considers several ways to resolve the discrepancy it observes. The highlighted fifth possibility, which is also identified in the abstract, is particularly notable.
We discuss hereafter different possible interpretations for the tension between the measurements and the MOND prediction with EFE, constraining either MOND itself or the formation and evolution of UDGs within this theory:

(1) the observed UDGs are further away from the cluster center than they seem, have fallen inside the cluster relatively recently, and/or are disrupted by tides in the cluster environment;

(2) they have higher stellar mass-to-light ratios than assumed here or are surrounded by additional baryonic dark matter haloes;

(3) the EFE varies from one galaxy to another depending on its individual history;

(4) the characteristic acceleration scale of MOND varies with the environment, being higher in clusters;

(5) the cluster environment shuts down the EFE within the parent relativistic theory of MOND.

Alternatively, MOND being an effective dark matter scaling relation of course also remains a serious possibility: in that context, the fact that cluster UDGs obey the same scaling relation as field spirals, despite their very different environments and likely different formation scenarios, is still particularly intriguing, irrespective of the underlying theoretical framework.
Another possibility not discussed is that the method used to convert velocity dispersion to equivalent circular velocity is flawed and uses too low a conversion coefficient.

The discussion of the fifth possibility is interesting:
5.5. Screening the EFE in galaxy clusters? 

We note that the apparent absence of EFE happens precisely within galaxy clusters, where classical MOND fails to explain the overall dynamics of the cluster, and we conjecture here that these two facts might possibly be related. In the case of EMOND, this would be explained by an effective increase of the MOND acceleration constant, not by a screening of the EFE itself. But another possibility is that the EFE is severely damped in galaxy clusters. 
In a theory like that of Skordis & Złosnik (2020), the action harbours a free-function, playing the role of the MOND interpolating function, depending both on the spatial gradient squared of the scalar field |∇ϕ| 2 (with a 3/2 exponent, characteristic of MOND actions) and on its temporal derivative having a non-zero minimum leading to gravitating “dust". It is this time dependent term which allows to reproduce a reasonable angular power spectrum for the CMB, and one could therefore speculate that it can possibly also give rise to additional gravitating “dust" inside galaxy clusters, to explain the residual missing mass of MOND. However, it is not clear that, if the scalar field is dominated by this “dust" component inside the cluster itself, it would couple to the scalar field within the UDG in the same way as in the fully quasi-static limit. Therefore, one could imagine that, precisely because the residual missing mass in galaxy clusters would be caused by the same scalar field as that creating the MOND effect inside the UDG, the EFE could be effectively screened within clusters. Note that this is especially relevant for any model that would try to explain away the residual MOND missing mass in clusters of galaxies, as such an explanation would not work if the residual missing mass is made of additional hot DM like light sterile neutrinos. 
In this context of EFE screening, one could imagine two possibilities: one where the EFE would be solely produced by the baryonic mass of the cluster, and one where it would be almost fully screened, the UDG living in its MOND bubble effectively decorrelated from the dynamics of the cluster itself. We test here the first hypothesis by redoing the analysis of Section 4 using only the Coma cluster hot gas mass distribution M(gas) (Eq. (33)) derived from the β-model of Eq. (16) as a source of EFE, instead of the mass distribution MC(r) inferred from hydrostatic equilibrium. This mass is about 1 dex below MC at a distance of 1 Mpc but reaches MC at 10 Mpc. As a consequence, the resulting velocity dispersions at distances smaller than 10 Mpc are higher than in Fig. 9, as shown in Fig. E.1. However, the difference is not sufficient to significantly alter our conclusion on the mismatch between the observed velocity dispersions and the predictions with EFE at the average d(mean). 
This means that, to explain our results with the nominal values of the stellar mass-to-light ratios, the EFE should be almost fully screened for the UDGs residing inside clusters. This is actually also the case in some hybrid versions of MOND such as the superfluid DM theory (Khoury 2015; Berezhiani & Khoury 2015; Berezhiani et al. 2018, 2019). As discussed in detail in Sect. IX.B of Berezhiani et al. (2018), the superfluid core would be rather small in galaxy clusters (of the order of a few hundreds kpc at most) and no EFE would be expected for cluster UDGs, contrary to the case of satellite galaxies orbiting within the superfluid core of their host, where the EFE would be expected to be similar to the MOND case.

Deur Compared

The authors hypothesize a screening mechanism which isn't too far from how Alexandre Deur models gravitational field self-interactions that can look like dark matter effects in clusters (e.g., to explain the Bullet Cluster). I'll recap my summaries of that analysis (quoting myself from the sidebar's permanent page on Deur) below.
Isolated Point Masses

For two significant point masses with nothing else nearby, self-interactions cause the system to reduce from a three dimensional one to a flux tube, so the force between them remains nearly constant without regard to distance.

Disk-Like Masses

If the mass is confined to a disk, the self-interactions cause the system to reduce from a three dimensional one to a two dimensional one, causing the force to have a 1/r form that we see in the MONDian regime of spiral galaxies.

In the geometries where Deur's approach approximates MOND, the following formula approximates the effective gravitational force including the self-interaction term:

F(G) = G(N)M/r^2 + c^2 (a π G(N) M)^(1/2) / (2√2 r)

where F(G) is the effective gravitational force, G(N) is Newton's constant, c is the speed of light, M is the ordinary baryonic mass of the gravitational source, r is the distance between the source mass and the place where the gravitational force is measured, and a is a physical constant that is the counterpart of a(0) in MOND (and that should in principle be possible to derive from Newton's constant), equal to about 4*10^−44 m^−3 s^2.

Thus, the self-interaction term that modifies Newtonian gravity is proportional to (G(N)M)^(1/2)/r. So, it is initially much smaller than the first order Newtonian gravity term, but it declines more slowly with distance than the Newtonian term, until it predominates.
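Taking the formula above at face value, a rough back-of-the-envelope check (my own sketch; the Milky Way-scale baryonic mass of 10^11 solar masses is chosen purely for illustration) shows where the self-interaction term overtakes the Newtonian term. It comes out at roughly the ten-kiloparsec scale where rotation curves start to flatten, and the acceleration there is close to the MOND scale a(0).

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2, Newton's constant
C = 2.998e8        # m/s, speed of light
A = 4e-44          # m^-3 s^2, Deur's constant 'a' as quoted above
M_SUN = 1.989e30   # kg
KPC = 3.086e19     # m

M = 1e11 * M_SUN   # illustrative Milky Way-scale baryonic mass (assumption)

def newton_term(r):
    """First-order Newtonian term G(N)M/r^2 (as an acceleration, per unit test mass)."""
    return G * M / r ** 2

def self_interaction_term(r):
    """Self-interaction term c^2 (a pi G(N) M)^(1/2) / (2 sqrt(2) r), per the formula above."""
    return C ** 2 * np.sqrt(A * np.pi * G * M) / (2.0 * np.sqrt(2.0) * r)

# Radius where the two terms are equal, obtained by setting the expressions above equal:
r_cross = 2.0 * np.sqrt(2.0) * np.sqrt(G * M / (A * np.pi)) / C ** 2
print(f"crossover radius      ~ {r_cross / KPC:.0f} kpc")
print(f"acceleration there    ~ {newton_term(r_cross):.1e} m/s^2")  # comes out near 1e-10
```

With these inputs the crossover lands at roughly 10 kpc and an acceleration of about 1.3*10^-10 m/s^2, numerically close to the MOND a(0) scale, which is an encouraging consistency check on the quoted value of a.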

Spherically Symmetric Masses

If the mass is spherically symmetric, the self-interactions cancel out and the system remains three dimensional, causing the force to have the 1/r^2 form that we associate with Newtonian gravity.
Why do galactic clusters have so much more apparent dark matter than spiral galaxies?

Because geometrically, they are closer to the two point particle scenario, in which galaxies within the cluster are the point particles that exert a distance independent force upon each other (analogous to flux tubes in QCD), rather than being spherically symmetric or disk-like.

Why does the Bullet Cluster behave as it does?

Since gas dominates the visible mass of a cluster, the observation that most of the total (dark) mass did not stay with the gas appears to rule out modifications of gravity as an alternative to dark matter. But, actually, this isn't the case in a self-interacting graviton scenario.

Because the gaseous component is more or less spherically symmetric, it has little apparent dark matter, while the galaxy components come close to the two point mass flux tube paradigm, which is equivalent to a great amount of inferred dark matter. So, the gaseous portion and the core galaxy components are offset from each other. The apparent dark matter tracks the galaxy cores and not the interstellar gas medium between them.

A good place to review this analysis from the source is A. Deur, “Implications of Graviton-Graviton Interaction to Dark Matter” (May 6, 2009) (published at 676 Phys. Lett. B 21 (2009)).

The Makeup Of The Coma Cluster

Applying Deur's analysis to the Coma Cluster requires more of an understanding of its geometry than is commonly considered relevant. Here is what Wikipedia has to say about the Coma Cluster (emphasis mine):

The Coma Cluster (Abell 1656) is a large cluster of galaxies that contains over 1,000 identified galaxies. Along with the Leo Cluster (Abell 1367), it is one of the two major clusters comprising the Coma Supercluster. It is located in and takes its name from the constellation Coma Berenices.

The cluster's mean distance from Earth is 99 Mpc (321 million light years). Its ten brightest spiral galaxies have apparent magnitudes of 12–14 that are observable with amateur telescopes larger than 20 cm. The central region is dominated by two supergiant elliptical galaxies: NGC 4874 and NGC 4889. The cluster is within a few degrees of the north galactic pole on the sky. Most of the galaxies that inhabit the central portion of the Coma Cluster are ellipticals. Both dwarf and giant ellipticals are found in abundance in the Coma Cluster.

As is usual for clusters of this richness, the galaxies are overwhelmingly elliptical and S0 galaxies, with only a few spirals of younger age, and many of them probably near the outskirts of the cluster.

The full extent of the cluster was not understood until it was more thoroughly studied in the 1950s by astronomers at Mount Palomar Observatory, although many of the individual galaxies in the cluster had been identified previously.

The Coma Cluster is one of the first places where observed gravitational anomalies were considered to be indicative of unobserved mass. In 1933 Fritz Zwicky showed that the galaxies of the Coma Cluster were moving too fast for the cluster to be bound together by the visible matter of its galaxies. Though the idea of dark matter would not be accepted for another fifty years, Zwicky wrote that the galaxies must be held together by "...some dunkle Materie."

About 90% of the mass of the Coma cluster is believed to be in the form of dark matter. The distribution of dark matter throughout the cluster, however, is poorly constrained. . . . 

The Coma cluster contains about 800 galaxies within a 100 x 100 arc-min area of the celestial sphere.

NASA in an explanation of a Hubble image of the Coma cluster has more to say (emphasis mine):

The Hubble's Advanced Camera for Surveys viewed a large portion of the cluster, spanning several million light-years across. The entire cluster contains thousands of galaxies in a spherical shape more than 20 million light-years in diameter.

Also known as "Abell 1656," the Coma Cluster is more than 300 million light-years away. The cluster, named after its parent constellation Coma Berenices, is near the Milky Way's north pole. This places the Coma Cluster in an area unobscured by dust and gas from the plane of the Milky Way, and easily visible by Earth viewers.

Most of the galaxies that inhabit the central portion of the Coma Cluster are ellipticals. These featureless "fuzz-balls" are pale goldish brown in color and contain populations of old stars. Both dwarf, as well as giant ellipticals, are found in abundance in the Coma Cluster.

Farther out from the center of the cluster are several spiral galaxies. These galaxies have clouds of cold gas that are giving birth to new stars. Spiral arms and dust lanes "accessorize" these bright bluish-white galaxies that show a distinctive disk structure.

In between the ellipticals and spirals is a morphological class of objects known as S0 (S-zero) galaxies. They are made up of older stars and show little evidence of recent star formation, however, they do show some assemblage of structure -- perhaps a bar or a ring, which may give rise to a more disk-like feature.

The Coma cluster has a volume of about 4.2*10^21 cubic light years, for a mean density of about one galaxy per 4.2*10^18 cubic light years (although the density is much higher towards the core and lower at the fringes), suggesting a mean separation between galaxies on the order of one to a few million light years (by comparison, the Milky Way galaxy has a radius of about 50,000 light years).
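A quick numerical check of that arithmetic (my own sketch; the sphere diameter and galaxy count come from the quoted NASA and Wikipedia descriptions above):

```python
import math

diameter_ly = 20e6   # light years, "more than 20 million light-years in diameter"
n_galaxies = 1000    # "over 1,000 identified galaxies"

volume = (4.0 / 3.0) * math.pi * (diameter_ly / 2.0) ** 3   # cubic light years
volume_per_galaxy = volume / n_galaxies
mean_separation = volume_per_galaxy ** (1.0 / 3.0)          # light years

print(f"cluster volume    ~ {volume:.1e} cubic light years")            # ~4.2e21
print(f"volume per galaxy ~ {volume_per_galaxy:.1e} cubic light years") # ~4.2e18
print(f"mean separation   ~ {mean_separation:.1e} light years")         # ~1.6e6
```

The cube root of the volume per galaxy comes out at roughly 1.6 million light years, consistent with the "one to a few million light years" figure above.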

Applying Deur's Analysis of the Coma cluster 

If Deur's analysis is right, the predominantly elliptical galaxies of the Coma cluster have little apparent internal dark matter (which MOND would also predict), but, unlike in MOND, they wouldn't be producing abnormally large gravitational fields external to the galaxies either. 

The external diffuse gravitational fields of the 1,000 or so galaxies in the Coma cluster may also largely cancel each other out, since they are approximately spherically symmetric and have so many sources. So, there may indeed be less of a net external gravitational field on ultra-diffuse galaxies in clusters than one might expect (particularly in a MOND regime).

Instead, the apparent dark matter phenomena in clusters like the Coma cluster might largely arise from flux tubes of enhanced gravity in a spiderweb of enhanced gravitational lines between galaxies that are close to each other within the cluster.

Figuring Out How To Compare Dwarf Spheroidal Pressure Supported Galaxies With Other Galaxies

Stacy McGaugh, a major researcher in the MOND paradigm and lead author of the paper below, also discusses it at his Triton Station blog. His latest paper is about extending the scope of observational confirmation of the Baryonic Tully-Fisher relation (which is implied by MOND but possible to derive by other means as well) to a class of galaxies where the usual way of measuring it doesn't work very well.

In particular, the paper tries to figure out how to use velocity dispersion values (sigma), which should be proportional to the circular velocity of stars around a galaxy (V) under certain weak assumptions, because it is easier to measure velocity dispersion than circular velocity in dwarf spheroidal pressure supported galaxies, whose stars have highly eccentric orbits.

Under idealized circumstances the conversion factor β would be 1.73 in Newtonian gravity, and 2.12 in MOND. Dark matter particle theories would also elevate the conversion factor β above 1.73. (Some systematic error is also present because the radius at which the measurement is made in the pressure supported dwarf spheroidal galaxies studied is a place where the flat rotation curve is still emerging rather than fully in place.)

With the conversion factor determined, the fit to the Baryonic Tully-Fisher Relation remains firmly intact for these small galaxies:

We explore the Baryonic Tully-Fisher Relation in the Local Group. Rotationally supported Local Group galaxies adhere precisely to the relation defined by more distant galaxies. For pressure supported dwarf galaxies, we determine the scaling factor βc that relates their observed velocity dispersion to the equivalent circular velocity of rotationally supported galaxies of the same mass such that Vo=βcσ. For a typical mass-to-light ratio Υ=2M/L in the V-band, we find that βc=2. More generally, logβc=0.25logΥ+0.226. This provides a common kinematic scale relating pressure and rotationally supported dwarf galaxies.
Stacy McGaugh, et al., "The Baryonic Tully-Fisher Relation in the Local Group and the Equivalent Circular Velocity of Pressure Supported Dwarfs" arXiv:2109.03251 (September 7, 2021) (accepted for publication in the Astronomical Journal).
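As a quick check of the quoted scaling relation (my own arithmetic, not the paper's): for Υ = 2 in solar units in the V-band, log βc = 0.25 log(2) + 0.226 ≈ 0.075 + 0.226 ≈ 0.301, so βc ≈ 10^0.301 ≈ 2.0, matching the βc = 2 quoted above. For comparison, the idealized Newtonian value of 1.73 mentioned earlier is presumably just √3 ≈ 1.732.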

RAR in Small Galaxies

This isn't the only recent study to confirm the radial acceleration relation in smaller galaxies. The paper below did so this past June using KiDS-1000 data, but does also find a split between two different kinds of galaxies. My intuition is that the distinction is probably related to the different geometries of older and newer type galaxies.
We present measurements of the radial gravitational acceleration around isolated galaxies, comparing the expected gravitational acceleration given the baryonic matter with the observed gravitational acceleration, using weak lensing measurements from the fourth data release of the Kilo-Degree Survey. 
These measurements extend the radial acceleration relation (RAR) by 2 decades into the low-acceleration regime beyond the outskirts of the observable galaxy. We compare our RAR measurements to the predictions of two modified gravity (MG) theories: MOND and Verlinde's emergent gravity. 
We find that the measured RAR agrees well with the MG predictions. 
In addition, we find a difference of at least 6σ between the RARs of early- and late-type galaxies (split by Sérsic index and u−r colour) with the same stellar mass. Current MG theories involve a gravity modification that is independent of other galaxy properties, which would be unable to explain this behaviour. The difference might be explained if only the early-type galaxies have significant (Mgas≈M∗) circumgalactic gaseous haloes. The observed behaviour is also expected in ΛCDM models where the galaxy-to-halo mass relation depends on the galaxy formation history. 
We find that MICE, a ΛCDM simulation with hybrid halo occupation distribution modelling and abundance matching, reproduces the observed RAR but significantly differs from BAHAMAS, a hydrodynamical cosmological galaxy formation simulation. Our results are sensitive to the amount of circumgalactic gas; current observational constraints indicate that the resulting corrections are likely moderate. Measurements of the lensing RAR with future cosmological surveys will be able to further distinguish between MG and ΛCDM models if systematic uncertainties in the baryonic mass distribution around galaxies are reduced.
Margot M. Brouwer, et al., "The Weak Lensing Radial Acceleration Relation: Constraining Modified Gravity and Cold Dark Matter theories with KiDS-1000" arXiv:2106.11677, 650 Astronomy & Astrophysics ArticleID A113 (June 22, 2021) DOI: 10.1051/0004-6361/202040108

A Billion Years Of Tully-Fisher

The baryonic Tully-Fisher relation, which is implied by MOND, shows no sign of evolving in a sample of galaxies spanning the last billion years, and the galaxies tightly fit the relation.
Using a sample of 67 galaxies from the MIGHTEE Survey Early Science data we study the HI-based baryonic Tully-Fisher relation (bTFr), covering a period of ∼one billion years (0≤z≤0.081). We consider the bTFr based on two different rotational velocity measures: the width of the global HI profile and V(out), measured as the outermost rotational velocity from the resolved HI rotation curves. 
Both relations exhibit very low intrinsic scatter orthogonal to the best-fit relation (σ⊥=0.07±0.01), comparable to the SPARC sample at z≃0. The slopes of the relations are similar and consistent with the z≃0 studies (3.66 +0.35/−0.29 for W50 and 3.47 +0.37/−0.30 for V(out)). 
We find no evidence that the bTFr has evolved over the last billion years, and all galaxies in our sample are consistent with the same relation independent of redshift and the rotational velocity measure. Our results set up a reference for all future studies of the HI-based bTFr as a function of redshift that will be conducted with the ongoing deep SKA pathfinders surveys.
Anastasia A. Ponomareva, et al. "MIGHTEE-HI: The baryonic Tully-Fisher relation over the last billion years" arXiv:2109.04992 (September 10, 2021) (accepted for publication in MNRAS).

How To Analyze Very Faint Galaxies With Limited Data

Finally, there is this new paper, which examines a way to bootstrap limited data to measure the baryonic Tully-Fisher relation in very faintly detected galaxies.
We present a novel 2D flux density model for observed HI emission lines combined with a Bayesian stacking technique to measure the baryonic Tully-Fisher relation below the nominal detection threshold. We simulate a galaxy catalogue, which includes HI lines described either with Gaussian or busy function profiles, and HI data cubes with a range of noise and survey areas similar to the MeerKAT International Giga-Hertz Tiered Extragalactic Exploration (MIGHTEE) survey. 
With prior knowledge of redshifts, stellar masses and inclinations of spiral galaxies, we find that our model can reconstruct the input baryonic Tully-Fisher parameters (slope and zero point) most accurately in a relatively broad redshift range from the local Universe to z=0.3 for all the considered levels of noise and survey areas, and up to z=0.55 for a nominal noise of 90μJy/channel over 5 deg2. Our model can also determine the M(HI)−M⋆ relation for spiral galaxies beyond the local Universe, and account for the detailed shape of the HI emission line, which is crucial for understanding the dynamics of spiral galaxies. Thus, we have developed a Bayesian stacking technique for measuring the baryonic Tully-Fisher relation for galaxies at low stellar and/or HI masses and/or those at high redshift, where the direct detection of HI requires prohibitive exposure times.
Hengxing Pan, et al., "Measuring the baryonic Tully-Fisher relation below the detection threshold" arXiv:2109.04273 (September 9, 2021) (Accepted for publication in MNRAS).

RAR For Clusters

This past July, a new paper confirmed that the apparent overall magnitude of dark matter effects in galaxy clusters is about ten times what it is in isolated galaxies, as noted in the body text of the lead article in this post.
We carry out a test of the radial acceleration relation (RAR) for a sample of 10 dynamically relaxed and cool-core galaxy clusters imaged by the Chandra X-ray telescope, which was studied in Giles et al. For this sample, we observe that the best-fit RAR shows a very tight residual scatter equal to 0.09 dex. We obtain an acceleration scale of 1.59×10^−9m/s^2, which is about an order of magnitude higher than that obtained for galaxies. Furthermore, the best-fit RAR parameters differ from those estimated from some of the previously analyzed cluster samples, which indicates that the acceleration scale found from the RAR could be of an emergent nature, instead of a fundamental universal scale.
S. Pradyumna, Shantanu Desai, "A test of Radial Acceleration Relation for the Giles et al Chandra cluster sample" arXiv:2107.05845 33 Physics of the Dark Universe 100854 (July 13, 2021). DOI: 10.1016/j.dark.2021.100854
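For reference (my own arithmetic), the quoted acceleration scale of 1.59*10^-9 m/s^2 is roughly ten to fifteen times the a(0) ≈ 10^-10 m/s^2 scale quoted for galaxies earlier in this post, consistent with the "about an order of magnitude higher" characterization in the abstract.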

UPDATE September 27, 2021

Triton Station has a new plot of the Baryonic Tully-Fisher Relation showing very heavy superspirals and very light dwarf galaxies, spanning a factor of a million in scale: