Wednesday, September 20, 2023

More On Wide Binary Stars

This somewhat mixed result is the latest episode in the wide binary star dynamics debate. It shows strong signs of non-Newtonian behavior, although not necessarily MOND-like. This is important because dark matter particle models shouldn't produce non-Newtonian dynamics in wide binary stars.
It is found that Gaia DR3 binary stars selected with stringent requirements on astrometric measurements and radial velocities naturally satisfy Newtonian dynamics without hidden close companions when the projected separation s < 2 kau, showing that pure binaries can be selected. It is then found that pure binaries selected with the same criteria show a systematic deviation from the Newtonian expectation when s > 2 kau. 
When both proper motions and parallaxes are required to have precision better than 0.003 and radial velocities better than 0.2, I obtain 1558 statistically pure binaries within a 'clean' G-band absolute magnitude range. From this sample, I obtain an observed to Newtonian predicted kinematic acceleration ratio of γ(g) = g(obs)/g(pred) = 1.43 (+0.23/−0.19) for acceleration <10^−10 m s^−2, in excellent agreement with a recent finding of 1.43±0.06 for a much larger general sample with the amount of hidden close companions self-calibrated. I also investigate the radial profile of stacked sky-projected relative velocities without a deprojection to the 3D space. The observed profile matches the Newtonian predicted profile for s < 2 kau without any free parameters but shows a clear deviation at a larger separation with a significance of 4.6σ. The projected velocity boost factor for s > 8 kau is measured to be γ(v(p)) = 1.18±0.06, matching √γ(g). 
Finally, for a small sample of 23 binaries with exceptionally precise radial velocities (precision <0.0043) the directly measured relative velocities in the 3D space also show a boost at larger separations. These results robustly confirm the recently reported gravitational anomaly at low acceleration for a general sample.
Kyu-Hyun Chae, "Robust Evidence for the Breakdown of Standard Gravity at Low Acceleration from Statistically Pure Binaries Free of Hidden Companions" arXiv:2309.10404 (September 19, 2023) (submitted to ApJ; this new work complements the paper ApJ 952, 128 (arXiv:2305.04613) in an important way).
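As a quick consistency check on the numbers in the abstract, here is a minimal sketch (the ±0.06 treatment below is a naive symmetric-error comparison, not the paper's analysis): if kinematic accelerations are boosted by γ(g), the stacked velocities should be boosted by roughly √γ(g).

```python
import math

gamma_g = 1.43    # observed-to-Newtonian acceleration ratio, g(obs)/g(pred)
gamma_vp = 1.18   # projected velocity boost factor for s > 8 kau
sigma_vp = 0.06   # quoted uncertainty on gamma_vp

# Accelerations boosted by gamma_g imply velocities boosted by ~sqrt(gamma_g).
expected_vp = math.sqrt(gamma_g)
pull = (expected_vp - gamma_vp) / sigma_vp

print(f"sqrt(gamma_g) = {expected_vp:.3f}")       # ~1.196
print(f"measured gamma_vp = {gamma_vp} +/- {sigma_vp}")
print(f"difference = {pull:.2f} sigma")            # ~0.3 sigma, i.e. consistent
```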

Tuesday, September 19, 2023

CP Violation In The Standard Model Quantified And More

There are two components of the Standard Model of Particle Physics that violate charge parity (CP) conservation, which (via the CPT theorem) is to say that the laws of physics are asymmetric between interactions running forward in time and the same interactions running backward in time. One is the CP-violating phase of the four-parameter CKM matrix, which applies to W boson mediated changes in quark flavor and is conventionally probed through the unitarity-triangle angle beta (β). The other is the CP-violating phase of the PMNS matrix, which governs neutrino flavor oscillations; it has been measured only crudely to date but is very likely to be non-zero given current measurements.

A new measurement of the CKM matrix CP violation parameter using the decays of electromagnetically neutral B mesons has been made by the LHCb experiment  at the Large Hadron Collider (LHC). This is "the most precise single measurement of the CKM angle β to date and is more precise than the current world average." The status quo leading up to this new measurement was as follows:

Measurements of CP violation in neutral meson decays to charmonium final states have thus resulted in a high degree of precision for the angle β of the CKM matrix: sin(2β) = 0.699 ± 0.017. The first observation of CP violation in the B-meson system was reported in the B0 → J/ψ K0_S channel by the BaBar and Belle collaborations. The measurement of the CP-violation parameter sin(2β) has been updated several times by these experiments, and more recently by the LHCb and Belle II collaborations.

The new paper, regrettably, doesn't actually report its measured value of β but does provide a formula to convert a parameter that it does measure to β. Assuming no beyond the Standard Model physics:
The parameter S can be related to the CKM angle β as S = sin(2β + ∆ϕ_d + ...). . . . Contributions from penguin topologies to the decay amplitude that cause an additional phase shift ∆ϕ_d are CKM suppressed, hence deviations of S from sin(2β) are expected to be small in the Standard Model.

The bottom-line value of S, from a simultaneous fit to data from three different decay modes, is S = 0.717 ± 0.013 (stat) ± 0.008 (syst).
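To make the connection to β concrete, here is a minimal back-of-the-envelope conversion, assuming the penguin shift ∆ϕ_d is negligible (which the quoted passage says is expected in the Standard Model); the paper's own extraction is more careful than this sketch.

```python
import math

S = 0.717               # LHCb combined fit: S = sin(2*beta + delta_phi_d + ...)
sin2beta_world = 0.699  # world average sin(2*beta) quoted above

# Neglecting delta_phi_d, S ~ sin(2*beta), so beta ~ asin(S)/2.
beta_lhcb = 0.5 * math.degrees(math.asin(S))
beta_world = 0.5 * math.degrees(math.asin(sin2beta_world))

print(f"beta from S = 0.717:       {beta_lhcb:.1f} degrees")   # ~22.9 degrees
print(f"beta from world avg 0.699: {beta_world:.1f} degrees")  # ~22.2 degrees
```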

Another new paper recaps the latest and greatest measurements of the masses of the five hadronizing Standard Model quarks and the strong force coupling constant.

Monday, September 18, 2023

When And Why Was The Sahara Green?

We've made lots of progress in understanding African Paleoclimates. 

The two images above are from the paper cited below. The image below is from Wikipedia (see also a list of notable climate events here and here).

There is widespread evidence that the Sahara was periodically vegetated in the past, with the proliferation of rivers, lakes and water-dependent animals such as hippos, before it became what is now desert. These North African Humid Periods may have been crucial in providing vegetated corridors out of Africa, allowing the dispersal of various species, including early humans, around the world.

The so-called ‘greenings’ are thought to have been driven by changes in Earth’s orbital conditions, specifically Earth’s orbital precession. Precession refers to how Earth wobbles on its axis, which influences seasonality (i.e. the seasonal contrast) over an approximate 21,000-year cycle. These changes in precession determine the amount of energy received by the Earth in different seasons, which in turn controls the strength of the African Monsoon and the spread of vegetation across this vast region.

A major barrier to understanding these events is that the majority of climate models have been unable to simulate the amplitude of these humid periods, so the specific mechanisms driving them have remained uncertain.

This study deployed a recently developed climate model to simulate the North African Humid Periods, greatly advancing understanding of their driving mechanisms.

The results confirm the North African Humid Periods occurred every 21,000 years and were determined by changes in Earth’s orbital precession. This caused warmer summers in the Northern Hemisphere, which intensified the strength of the West African Monsoon system and increased Saharan precipitation, resulting in the spread of savannah-type vegetation across the desert.

The findings also show the humid periods did not occur during the ice ages, when there were large glacial ice sheets covering much of the high latitudes. This is because these vast ice sheets cooled the atmosphere and suppressed the tendency for the African monsoon system to expand. This highlights a major teleconnection between these distant regions, which may have restricted the dispersal of species, including humans, out of Africa during the glacial periods of the last 800,000 years.

From a Science Daily press release

The paper and its abstract are as follows:

The Sahara region has experienced periodic wet periods over the Quaternary and beyond. These North African Humid Periods (NAHPs) are astronomically paced by precession which controls the intensity of the African monsoon system. However, most climate models cannot reconcile the magnitude of these events and so the driving mechanisms remain poorly constrained. Here, we utilise a recently developed version of the HadCM3B coupled climate model that simulates 20 NAHPs over the past 800 kyr which have good agreement with NAHPs identified in proxy data.
Our results show that precession determines NAHP pacing, but we identify that their amplitude is strongly linked to eccentricity via its control over ice sheet extent. During glacial periods, enhanced ice-albedo driven cooling suppresses NAHP amplitude at precession minima, when humid conditions would otherwise be expected.
This highlights the importance of both precession and eccentricity, and the role of high latitude processes in determining the timing and amplitude of the NAHPs. This may have implications for the out of Africa dispersal of plants and animals throughout the Quaternary.
Edward Armstrong, Miikka Tallavaara, Peter O. Hopcroft, Paul J. Valdes, "North African humid periods over the past 800,000 years." 14(1) Nature Communications (2023) (open access) DOI: 10.1038/s41467-023-41219-4

Monday, September 11, 2023

The Evolutionary Biology Of The Uncanny Valley


The obvious real-world candidates for an evolutionary source of the uncanny valley effect would be other archaic hominin species like Neanderthals, Denisovans, Homo erectus, and Homo floresiensis (a.k.a. "Hobbits"). This reaction may have evolved in our pre-modern human ancestors because there were many species of the genus Homo in existence at that time, some of which would have interacted with each other in Africa.

Less obviously, it could be something that developed to recognize when other people were suffering from physical and/or mental diseases, or to trigger you not to trust what you see when you are under the influence of a hallucinogen.

It could also be a side effect of the cognitive abilities we developed for recognizing and evaluating other people, e.g., distinguishing people from another race or region, a mechanism that produces weird results when operating at the fringe of its domain of applicability.

Wikipedia notes at least nine theories to explain the psychological quirk, none of which is really dominant in the academic community. They are:
Mate selection: Automatic, stimulus-driven appraisals of uncanny stimuli elicit aversion by activating an evolved cognitive mechanism for the avoidance of selecting mates with low fertility, poor hormonal health, or ineffective immune systems based on visible features of the face and body that are predictive of those traits.

Mortality salience: Viewing an "uncanny" robot elicits an innate fear of death and culturally supported defenses for coping with death's inevitability.... [P]artially disassembled androids...play on subconscious fears of reduction, replacement, and annihilation: (1) A mechanism with a human façade and a mechanical interior plays on our subconscious fear that we are all just soulless machines. (2) Androids in various states of mutilation, decapitation, or disassembly are reminiscent of a battlefield after a conflict and, as such, serve as a reminder of our mortality. (3) Since most androids are copies of actual people, they are doppelgängers and may elicit a fear of being replaced, on the job, in a relationship, and so on. (4) The jerkiness of an android's movements could be unsettling because it elicits a fear of losing bodily control.

Pathogen avoidance: Uncanny stimuli may activate a cognitive mechanism that originally evolved to motivate the avoidance of potential sources of pathogens by eliciting a disgust response. "The more human an organism looks, the stronger the aversion to its defects, because (1) defects indicate disease, (2) more human-looking organisms are more closely related to human beings genetically, and (3) the probability of contracting disease-causing bacteria, viruses, and other parasites increases with genetic similarity." The visual anomalies of androids, robots, and other animated human characters cause reactions of alarm and revulsion, similar to corpses and visibly diseased individuals.

Sorites paradoxes: Stimuli with human and nonhuman traits undermine our sense of human identity by linking qualitatively different categories, human and nonhuman, by a quantitative metric: degree of human likeness.

Violation of human norms: If an entity looks sufficiently nonhuman, its human characteristics are noticeable, generating empathy. However, if the entity looks almost human, it elicits our model of a human other and its detailed normative expectations. The nonhuman characteristics are noticeable, giving the human viewer a sense of strangeness. In other words, a robot stuck inside the uncanny valley is no longer judged by the standards of a robot doing a passable job at pretending to be human, but is instead judged by the standards of a human doing a terrible job at acting like a normal person. This has been linked to perceptual uncertainty and the theory of predictive coding.

Conflicting perceptual cues: The negative effect associated with uncanny stimuli is produced by the activation of conflicting cognitive representations. Perceptual tension occurs when an individual perceives conflicting cues to category membership, such as when a humanoid figure moves like a robot, or has other visible robot features. This cognitive conflict is experienced as psychological discomfort (i.e., "eeriness"), much like the discomfort that is experienced with cognitive dissonance. Several studies support this possibility. Mathur and Reichling found that the time subjects took to gauge a robot face's human- or mechanical-resemblance peaked for faces deepest in the uncanny valley, suggesting that perceptually classifying these faces as "human" or "robot" posed a greater cognitive challenge. However, they found that while perceptual confusion coincided with the uncanny valley, it did not mediate the effect of the uncanny valley on subjects' social and emotional reactions—suggesting that perceptual confusion may not be the mechanism behind the uncanny valley effect. Burleigh and colleagues demonstrated that faces at the midpoint between human and non-human stimuli produced a level of reported eeriness that diverged from an otherwise linear model relating human-likeness to affect. Yamada et al. found that cognitive difficulty was associated with negative affect at the midpoint of a morphed continuum (e.g., a series of stimuli morphing between a cartoon dog and a real dog). Ferrey et al. demonstrated that the midpoint between images on a continuum anchored by two stimulus categories produced a maximum of negative affect, and found this with both human and non-human entities. Schoenherr and Burleigh provide examples from history and culture that evidence an aversion to hybrid entities, such as the aversion to genetically modified organisms ("Frankenfoods"). Finally, Moore developed a Bayesian mathematical model that provides a quantitative account of perceptual conflict. There has been some debate as to the precise mechanisms that are responsible. It has been argued that the effect is driven by categorization difficulty, configural processing, perceptual mismatch, frequency-based sensitization, and inhibitory devaluation. 
Threat to humans' distinctiveness and identity: Negative reactions toward very humanlike robots can be related to the challenge that this kind of robot leads to the categorical human – non-human distinction. Kaplan stated that these new machines challenge human uniqueness, pushing for a redefinition of humanness. Ferrari, Paladino and Jetten found that the increase of anthropomorphic appearance of a robot leads to an enhancement of threat to the human distinctiveness and identity. The more a robot resembles a real person, the more it represents a challenge to our social identity as human beings.

Religious definition of human identity: The existence of artificial but humanlike entities is viewed by some as a threat to the concept of human identity. An example can be found in the theoretical framework of psychiatrist Irvin Yalom. Yalom explains that humans construct psychological defenses to avoid existential anxiety stemming from death. One of these defenses is 'specialness', the irrational belief that aging and death as central premises of life apply to all others but oneself. The experience of the very humanlike "living" robot can be so rich and compelling that it challenges humans' notions of "specialness" and existential defenses, eliciting existential anxiety. In folklore, the creation of human-like, but soulless, beings is often shown to be unwise, as with the golem in Judaism, whose absence of human empathy and spirit can lead to disaster, however good the intentions of its creator.

Uncanny valley of the mind or AI: Due to rapid advancements in the areas of artificial intelligence and affective computing, cognitive scientists have also suggested the possibility of an "uncanny valley of mind". Accordingly, people might experience strong feelings of aversion if they encounter highly advanced, emotion-sensitive technology. Among the possible explanations for this phenomenon, both a perceived loss of human uniqueness and expectations of immediate physical harm are discussed by contemporary research.

What Does A Theoretical Physicist's Office Look Like?

 

The Imjin Wars In Korea

Incredibly destructive wars are nothing new.
[T]he most significant destruction on the Korean Peninsula was wrought by the Japanese invasions of the late sixteenth century. Nearly two million Koreans, a staggering 20 percent of the population, perished during the Imjin Wars, Toyotomi Hideyoshi’s campaigns of 1592-1598 to subjugate the Korean Peninsula. Hideyoshi’s object was the conquest of Ming China (1368-1644) but the result was to turn Korea into a ruined land.

Friday, September 8, 2023

Strengthening Evidence Of Another Predicted Higgs Boson Decay Channel

This is now reasonably strong (3.4 sigma) evidence from the LHC of Higgs boson decays to a Z boson and a photon at a rate consistent with the Standard Model predicted branching fraction for decays of this kind.

This was first hinted at in April of 2022, and this report reiterates results announced in May of this year. 
The first evidence for the Higgs boson decay to a Z boson and a photon is presented, with a statistical significance of 3.4 standard deviations. The result is derived from a combined analysis of the searches performed by the ATLAS and CMS Collaborations with proton-proton collision data sets collected at the CERN Large Hadron Collider (LHC) from 2015 to 2018. These correspond to integrated luminosities of around 140 fb−1 for each experiment, at a center-of-mass energy of 13 TeV. The measured signal yield is 2.2±0.7 times the Standard Model prediction, and agrees with the theoretical expectation within 1.9 standard deviations.
ATLAS, CMS Collaborations, "Evidence for the Higgs boson decay to a Z boson and a photon at the LHC" arXiv:2309.03501 (September 7, 2023).
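As a rough sanity check on the quoted numbers, here is a naive symmetric-Gaussian estimate (the 3.4σ and 1.9σ figures in the paper come from the full profile-likelihood analysis, so they differ somewhat from this back-of-the-envelope version):

```python
mu = 2.2      # measured signal strength relative to the Standard Model prediction
sigma = 0.7   # quoted uncertainty on the signal strength

print(f"naive significance vs. no signal: {mu / sigma:.1f} sigma")          # ~3.1 (paper: 3.4)
print(f"naive tension with the SM (mu=1): {(mu - 1.0) / sigma:.1f} sigma")  # ~1.7 (paper: 1.9)
```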

Thursday, September 7, 2023

Evidence Of Warm Dark Matter Decay Undermined

A new study fails to replicate the findings of five out of six papers that claim to have seen a 3.5 keV radiation line, which arguably is the footprint of dark matter decay, using the same underlying data. 

The new study, with multiple authors, argues that the backgrounds were not correctly modeled and also identifies other methodological flaws in those papers. This greatly weakens one line of evidence in support of particle dark matter that can decay into photons and other ordinary matter, which, if the data were more solid, would support a popular version of a warm dark matter particle model.

At least at face value, this is a rather stunning refutation of the work of the authors of the previous papers.

The 3.5 keV line is a purported emission line observed in galaxies, galaxy clusters, and the Milky Way whose origin is inconsistent with known atomic transitions and has previously been suggested to arise from dark matter decay. We systematically re-examine the bulk of the evidence for the 3.5 keV line, attempting to reproduce six previous analyses that found evidence for the line. Surprisingly, we only reproduce one of the analyses; in the other five we find no significant evidence for a 3.5 keV line when following the described analysis procedures on the original data sets. For example, previous results claimed 4σ evidence for a 3.5 keV line from the Perseus cluster; we dispute this claim, finding no evidence for a 3.5 keV line. We find evidence for background mismodeling in multiple analyses. We show that analyzing these data in narrower energy windows diminishes the effects of mismodeling but returns no evidence for a 3.5 keV line. We conclude that there is little robust evidence for the existence of the 3.5 keV line. Some of the discrepancy of our results from those of the original works may be due to the earlier reliance on local optimizers, which we demonstrate can lead to incorrect results. For ease of reproducibility, all code and data are publicly available.  
Christopher Dessert, Joshua W. Foster, Yujin Park, Benjamin R. Safdi, "Was There a 3.5 keV Line?" arXiv:2309.03254 (September 6, 2023).
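The paper's point about local optimizers is easy to illustrate. The sketch below is a toy example (not the paper's X-ray spectral likelihood): a one-dimensional function with two minima, where a local optimizer started in the wrong basin reports the shallow minimum while a cheap multi-start search finds the true one.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Toy "likelihood": global minimum near x ~ -1.04, shallower local minimum near x ~ +0.96.
    x = np.atleast_1d(x)[0]
    return (x**2 - 1.0)**2 + 0.3 * x

# A single local optimization from a poor starting point gets stuck in the shallow basin.
local = minimize(objective, x0=[0.9], method="Nelder-Mead")

# A crude multi-start search over the same interval finds the global minimum.
starts = np.linspace(-2.0, 2.0, 21)
runs = [minimize(objective, x0=[s], method="Nelder-Mead") for s in starts]
best = min(runs, key=lambda r: r.fun)

print(f"single local fit:  x = {local.x[0]:+.3f}, f = {local.fun:+.3f}")
print(f"multi-start best:  x = {best.x[0]:+.3f}, f = {best.fun:+.3f}")
```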

Near Hominin Extinction About 870,000 Years Ago?


I have no doubt that there was a serious bottleneck in hominin populations at roughly the time claimed. But effective population size is a tricky statistic whose technical meaning is further from the intuitive notion of a head count than most people realize, so don't take the absolute magnitude of the bottleneck, or naive inferences about the census population of these archaic hominins at the time, too literally. As the New York Times explains:
[O]utside experts said they were skeptical of the novel statistical methods that the researchers used for the study. “It is a bit like inferring the size of a stone that falls into the middle of the large lake from only the ripples that arrive at the shore some minutes later,” said Stephan Schiffels, a population geneticist at Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.

The researchers also put too much faith in a model that assumes a single universal mutation rate for genetic evolution, when there is good research to suggest that some parts of the genome evolve at faster rates than other parts of the genome. On the scale of several tens of thousands of generations, those fine points could become important.

Modern humans evolved around 300,000 years ago, and the speciation event that the authors suggest might coincide with this population bottleneck would have given rise to the common ancestor of modern humans, Neanderthals, and Denisovans.

The circumstances driving this 117,000 year period in which hominins may have come close to extinction aren't entirely clear. The editor's summary states:
The model detected a reduction in the population size of our ancestors from about 100,000 to about 1000 individuals, which persisted for about 100,000 years. The decline appears to have coincided with both major climate change and subsequent speciation events.
The paper and its abstract are as follows:
Population size history is essential for studying human evolution. However, ancient population size history during the Pleistocene is notoriously difficult to unravel. 
In this study, we developed a fast infinitesimal time coalescent process (FitCoal) to circumvent this difficulty and calculated the composite likelihood for present-day human genomic sequences of 3154 individuals. 
Results showed that human ancestors went through a severe population bottleneck with about 1280 breeding individuals between around 930,000 and 813,000 years ago. The bottleneck lasted for about 117,000 years and brought human ancestors close to extinction. 
This bottleneck is congruent with a substantial chronological gap in the available African and Eurasian fossil record. Our results provide new insights into our ancestry and suggest a coincident speciation event.
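A toy calculation helps explain why effective population size is such a tricky statistic and why a relatively brief crash dominates long-run estimates. The sketch below is an illustration using the paper's round numbers (not the FitCoal method itself, and the 800,000-year window is arbitrary): the long-term coalescent-effective size is roughly the harmonic mean of the per-generation effective size, which the 117,000-year bottleneck drags far below the "normal" figure.

```python
GEN_YEARS = 29              # generation time assumed in the paper
window_years = 800_000      # illustrative window
bottleneck_years = 117_000  # duration of the inferred bottleneck
ne_normal = 100_000         # approximate pre/post-bottleneck effective size (editor's summary)
ne_bottleneck = 1_280       # effective size during the bottleneck (abstract)

gens_total = window_years / GEN_YEARS
gens_bottleneck = bottleneck_years / GEN_YEARS
gens_normal = gens_total - gens_bottleneck

# Harmonic mean of Ne over generations, roughly what long-term coalescent
# estimates respond to: the brief bottleneck dominates the result.
harmonic_ne = gens_total / (gens_bottleneck / ne_bottleneck + gens_normal / ne_normal)
print(f"long-term effective size ~ {harmonic_ne:,.0f}")   # ~8,100, not ~100,000
```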

One very basic methodological issue with this study's suggestion that hominins nearly went extinct around 870,000 years ago, for example, is that genetic information from currently living modern humans showing a population bottleneck only tells us about our direct ancestors. 

As of 870,000 years ago, there was at least one species of the genus Homo, Homo erectus, that had already dispersed from Africa to Eurasia. There is good reason to believe that there may actually have been more than one at that point, because the most plausible characterization of Homo floresiensis, on the island of Flores in Indonesia, and of similar archaic hominins in the Philippines, is that they derive from a hominin lineage even more archaic than Homo erectus.

Therefore, it is possible that these archaic hominin species suffered less severe bottleneck effects somewhere in Eurasia or Oceania, outside of Africa, and that subsequent events, such as the expansion of modern humans, Neanderthals, and Denisovans into Eurasia, later climate catastrophes, or a combination of causes, led to their complete extinction at some later time, even though they had weathered the circumstances of 870,000 years ago better than our direct ancestors did.

One can imagine a narrative, for example, in which Homo erectus in Southeast Asia wasn't hit nearly so hard as African Homo erectus around 870,000 years ago, but was then driven to extinction there by the one-two punch of the Toba eruption and the modern human expansion into Southeast Asia in the wake of that eruption around 70,000 years ago. But if hominins had gone extinct in Africa, the second prong of this one-two punch would never have wiped out Southeast Asian Homo erectus, and events might have played out differently. Southeast Asian Homo erectus might have back-migrated to Africa 120,000 years or so after Africa experienced the conditions that drove hominins there to near extinction, once those conditions abated.

Further New York Times discussion of the new study (at the same link) notes that:
After decades of fossil hunting, the record of ancient human relatives remains relatively scarce in Africa in the period between 950,000 and 650,000 years ago. The new study offers a potential explanation: there just weren’t enough people to leave behind many remains, Dr. Hu said.

Brenna Henn, a geneticist at the University of California, Davis, who was not involved in the new study, said that a bottleneck was “one plausible interpretation.” But today’s genetic diversity might have been produced by a different evolutionary history, she added.

For example, humans might have diverged into separate populations then come together again. “It would be more powerful to test alternative models,” Dr. Henn said. 
Dr. Hu and his colleagues propose that a global climate shift produced the population crash 930,000 years ago. They point to geological evidence that the planet became colder and drier right around the time of their proposed bottleneck. Those conditions may have made it harder for our human ancestors to find food. 
But Nick Ashton, an archaeologist at the British Museum, noted that a number of remains of ancient human relatives dating to the time of the bottleneck have been found outside Africa. 
If a worldwide disaster caused the human population in Africa to collapse, he said, then it should have made human relatives rarer elsewhere in the world. 
“The number of sites in Africa and Eurasia that date to this period suggests that it only affected a limited population, who may have been ancestors of modern humans,” he said. 

Wednesday, September 6, 2023

Fifth Forces And The Hubble Tension

Gravity modification theories seem to solve the Hubble tension more easily than alternative approaches do.
Fifth forces are ubiquitous in modified theories of gravity. In this paper, we analyze their effect on the Cepheid-calibrated cosmic distance ladder, specifically with respect to the inferred value of the Hubble constant (H0). We consider a variety of effective models where the strength, or amount of screening, of the fifth force is estimated using proxy fields related to the large-scale structure of the Universe. For all models considered, the local distance ladder and the Planck value for H0 agrees with a probability ≳20%, relieving the tension compared to the concordance model with data being excluded at 99% confidence. The alleviated discrepancy comes partially at the cost of an increased tension between distance estimates from Cepheids and the tip of the red-giant branch (TRGB). Demanding also that the consistency between Cepheid and TRGB distance estimates is not impaired, some fifth force models can still accommodate the data with a probability ≳20%. This provides incentive for more detailed investigations of fundamental theories on which the effective models are based, and their effect on the Hubble tension.
Marcus Högås, Edvard Mörtsell, "The Hubble tension and fifth forces: a cosmic screenplay" arXiv:2309.01744 (September 4, 2023).

Friday, September 1, 2023

The Hubble Tension Is Hard To Resolve

The simplest solutions to the disparities in measurements of the Hubble constant at early and late times in the history of the universe probably won't work.
The Hubble tension has now grown to a level of significance which can no longer be ignored and calls for a solution which, despite a huge number of attempts, has so far eluded us. Significant efforts in the literature have focused on early-time modifications of ΛCDM, introducing new physics operating prior to recombination and reducing the sound horizon. 
In this opinion paper I argue that early-time new physics alone will always fall short of fully solving the Hubble tension. I base my arguments on seven independent hints, related to 1) the ages of the oldest astrophysical objects, 2) considerations on the sound horizon-Hubble constant degeneracy directions in cosmological data, 3) the important role of cosmic chronometers, 4) a number of ``descending trends'' observed in a wide variety of low-redshift datasets, 5) the early integrated Sachs-Wolfe effect as an early-time consistency test of ΛCDM, 6) early-Universe physics insensitive and uncalibrated cosmic standard constraints on the matter density, and finally 7) equality wavenumber-based constraints on the Hubble constant from galaxy power spectrum measurements. 
I argue that a promising way forward should ultimately involve a combination of early- and late-time (but non-local -- in a cosmological sense, i.e. at high redshift) new physics, as well as local (i.e. at z∼0) new physics, and I conclude by providing reflections with regards to potentially interesting models which may also help with the S8 tension.
Sunny Vagnozzi, "Seven hints that early-time new physics alone is not sufficient to solve the Hubble tension" arXiv:2308.16628 (August 31, 2023) (accepted for publication in Universe).

Tuesday, August 29, 2023

Fuzzy Dark Matter Ruled Out

This paper essentially rules out the remainder of the viable fuzzy dark matter parameter space. Fuzzy dark matter (FDM) had been one of the more viable ultra-light dark matter theories. 

Tatyana Shevchuk, Ely D. Kovetz, Adi Zitrin, "New Bounds on Fuzzy Dark Matter from Galaxy-Galaxy Strong-Lensing Observations" arXiv:2308.14640 (August 28, 2023).

Wednesday, August 23, 2023

What Would Dark Matter Have To Be Like To Fit Our Observations?

Stacy McGaugh at his Triton Station blog (with some typographical errors due to the fact that he's dictating because he recently broke his wrist) engages with the question of what properties dark matter would have to have to fit our astronomy observations.

Cosmology considerations like the observed cosmic background radiation (after astronomy observations ruled out some of the baryonic matter contenders like brown dwarfs and black holes) suggest that dark matter should be nearly collisionless, lack interactions with ordinary matter other than gravity, and should be non-baryonic (i.e. not made up of Standard Model particles or composites of them).

But observations of galaxies show that the dark matter halos actually inferred from the data differ from the halos that dark matter with the cosmology driven properties described above would form. Astronomy observations of galaxies show us that inferred dark matter distributions intimately track the distributions of ordinary matter in a galaxy, which a MOND-like modification of Newtonian gravity can explain on its own.

As his post explains after motivating his comments with the historical background of the dark matter theoretical paradigm, the problem is as follows (I have corrected his dictation software related errors without attribution. The bold and underlined emphasis is mine):

If we insist on dark matter, what this means is that we need, for each and every galaxy, to precisely look like MOND. 
I wrote the equation for the required effects of dark matter in all generality in McGaugh (2004). The improvements in the data over the subsequent decade enable this to be abbreviated to:
[equation from the original post; see the sketch following the quoted passage]
This is in McGaugh et al. (2016), which is a well known paper (being in the top percentile of citation rates). 
So this should be well known, but the implication seems not to be, so let’s talk it through. g(DM) is the force per unit mass provided by the dark matter halo of a galaxy. This is related to the mass distribution of the dark matter – its radial density profile – through the Poisson equation. The dark matter distribution is entirely stipulated by the mass distribution of the baryons, represented here by g(bar). That’s the only variable on the right hand side, a(0) being Milgrom’s acceleration constant. So the distribution of what you see specifies the distribution of what you can’t.

This is not what we expect for dark matter. It’s not what naturally happens in any reasonable model, which is an NFW halo. That comes from dark matter-only simulations; it has literally nothing to do with g(bar). So there is a big chasm to bridge right from the start: theory and observation are speaking different languages. Many dark matter models don’t specify g(bar), let alone satisfy this constraint. Those that do only do so crudely – the baryons are hard to model. Still, dark matter is flexible; we have the freedom to make it work out to whatever distribution we need. But in the end, the best a dark matter model can hope to do is crudely mimic what MOND predicted in advance. If it doesn’t do that, it can be excluded. Even if it does do that, should we be impressed by the theory that only survives by mimicking its competitor?

The observed MONDian behavior makes no sense whatsoever in terms of the cosmological constraints in which the dark matter has to be non-baryonic and not interact directly with the baryons. The equation above implies that any dark matter must interact very closely with the baryons – a fact that is very much in the spirit of what earlier dynamicists had found, that the baryons and the dynamics are intimately connected. If you know the distribution of the baryons that you can see, you can predict what the distribution of the unseen stuff has to be.

And so that’s the property that galaxies require that is pretty much orthogonal to the cosmic requirements. There needs to be something about the nature of dark matter that always gives you MONDian behavior in galaxies. Being cold and non-interacting doesn’t do that. 
Instead, galaxy phenomenology suggests that there is a direct connection – some sort of direct interaction – between dark matter and baryons. That direct interaction is anathema to most ideas about dark matter, because if there’s a direct interaction between dark matter and baryons, it should be really easy to detect dark matter. They’re out there interacting all the time.

There have been a lot of half solutions. These include things like warm dark matter and self interacting dark matter and fuzzy dark matter. These are ideas that have been motivated by galaxy properties. But to my mind, they are the wrong properties. They are trying to create a central density core in the dark matter halo. That is at best a partial solution that ignores the detailed distribution that is written above. The inference of a core instead of a cusp in the dark matter profile is just a symptom. The underlying disease is that the data look like MOND.

MONDian phenomenology is a much higher standard to try to get a dark matter model to match than is a simple cored halo profile. We should be honest with ourselves that mimicking MOND is what we’re trying to achieve. Most workers do not acknowledge that, or even seem to be aware that this is the underlying issue.

There are some ideas to try to build-in the required MONDian behavior while also satisfying the desires of cosmology. One is Blanchet’s dipolar dark matter. He imagined a polarizable dark medium that does react to the distribution of baryons so as to give the distribution of dark matter that gives MOND-like dynamics. Similarly, Khoury’s idea of superfluid dark matter does something related. It has a superfluid core in which you get MOND-like behavior. At larger scales it transitions to a non-superfluid mode, where it is just particle dark matter that reproduces the required behavior on cosmic scales.

I don’t find any of these models completely satisfactory. It’s clearly a hard thing to do. You’re trying to mash up two very different sets of requirements. With these exceptions, the galaxy-motivated requirement that there is some physical aspect of dark matter that somehow knows about the distribution of baryons and organizes itself appropriately is not being used to inform the construction of dark matter models. The people who do that work seem to be very knowledgeable about cosmological constraints, but their knowledge of galaxy dynamics seems to begin and end with the statement that rotation curves are flat and therefore we need dark matter. That sufficed 40 years ago, but we’ve learned a lot since then. It’s not good enough just to have extra mass. That doesn’t cut it.
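For readers who want the relation he is describing in explicit form: the radial acceleration relation of McGaugh, Lelli & Schombert (2016) can be written as g_obs = g_bar / (1 − e^(−√(g_bar/a0))), so the implied halo contribution is g_DM = g_obs − g_bar. Below is a minimal numerical sketch of that relation (my transcription for illustration, not code from the post), using a0 ≈ 1.2 × 10^−10 m s^−2:

```python
import numpy as np

A0 = 1.2e-10  # Milgrom's acceleration constant, m/s^2

def g_obs(g_bar):
    """Radial acceleration relation (McGaugh, Lelli & Schombert 2016)."""
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / A0)))

def g_dm(g_bar):
    """Acceleration attributed to the dark matter halo, fixed entirely by the baryons."""
    return g_obs(g_bar) - g_bar

for gb in [1e-8, 1e-10, 1e-12]:   # high, intermediate, and low baryonic accelerations
    print(f"g_bar = {gb:.0e}  ->  g_DM = {g_dm(gb):.2e}  (g_obs = {g_obs(gb):.2e})")

# In the high-acceleration limit g_DM -> 0 (purely Newtonian); in the
# low-acceleration limit g_obs -> sqrt(g_bar * a0), the deep-MOND behavior.
```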

This analysis is the main reason that I'm much more inclined to favor gravity based explanations for dark matter phenomena than particle based explanations.

Direct dark matter detection experiments pretty much rule out dark matter particles with masses in the 1 GeV to 1000 GeV range that interact with ordinary matter with sufficient strength (one GeV is 1,000,000,000 eV). 

Collider experiments pretty much rule out dark matter particles that interact in any way with ordinary matter at sufficient strength with masses in the low single digit thousands of GeVs or less. These experiments are certainly valid down to something less than the mass scale of the electron (which has a mass of about 511,000 eV). 

Astronomy observations used to rule out MACHOs such as brown dwarfs, and large primordial black holes (PBHs), pretty much rule out dark matter lumps of asteroid size or greater (from micro-lensing for larger lumps, and from solar system dynamics for asteroid sized lumps), whether or not the dark matter interacts non-gravitationally with ordinary matter. 

This leaves a gap between about 1000 GeV and asteroid masses, but the wave-like nature of dark matter phenomena inferred from astronomy observations pretty much rules out dark matter particles of more than 10,000 eV.

Direct dark matter detection experiments can't directly rule out these low mass dark matter candidates because they're not sensitive enough. 

Colliders could conceivably miss particles that interact only feebly with ordinary matter and have very low masses themselves, although nuclear physics was able to detect the feebly interacting and very low mass neutrinos back in the 1950s with far more primitive equipment than we have now. 

Even light dark matter candidates like axions, warm dark matter, and fuzzy dark matter still can't reproduce the observed tight fit between ordinary matter distributions and dark matter distributions within dark matter halos, however, if they have no non-gravitational interactions with ordinary matter.

All efforts to directly detect axions (which would have some interactions with ordinary matter that can be theoretically modeled) have had null results.

Furthermore, because the MOND equations that dark matter phenomena follow in galaxies are tied specifically to the amount of Newtonian-like gravitational acceleration that objects in the galaxy experience from the galaxy, envisioning these phenomena as arising from a modification of gravity makes more sense than envisioning them as arising from an entirely novel fifth force, unrelated to gravity, between dark matter particles and ordinary matter.

If you take the dark matter particle candidates to explain dark matter phenomena off the field for these reasons, you can narrow down the plausible possible explanations for dark matter phenomena dramatically.

We also know that toy model MOND itself isn't quite the right solution. 

The right solution needs to be embedded in a relativistic framework that addresses strong field gravitational phenomena and solar system scale gravitational phenomena more or less exactly as Einstein's General Relativity does, up to the limits of current observational precision and accuracy, which are considerable.

The right solution also needs to have a greater domain of applicability than toy-model MOND, by correctly dealing with galaxy cluster level phenomena (which display a different but similar scaling law to the Tully-Fisher relation, which can be derived directly from MOND), the behavior of particles near spiral galaxies that are outside the main galactic disk, and the behavior of wide binary stars (which is still unknown), and it must be generalized to address cosmology phenomena like the cosmic background radiation and the timing of galaxy formation.
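For reference, the scaling law connection mentioned above is a one-line calculation: in the deep-MOND (low-acceleration) regime the effective acceleration is g = √(g_N a0), where g_N = GM/r² is the Newtonian value, so for a circular orbit:

```latex
% Deep-MOND regime (g_N << a_0): g = sqrt(g_N a_0), with g_N = GM/r^2.
\frac{v^2}{r} = \sqrt{\frac{G M a_0}{r^2}} = \frac{\sqrt{G M a_0}}{r}
\quad\Longrightarrow\quad
v^4 = G M a_0 .
```

The radius drops out, so rotation curves are asymptotically flat and v⁴ scales with the baryonic mass M, which is the baryonic Tully-Fisher relation; galaxy clusters obey a different but analogous scaling, as noted above.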

Fortunately, several attempts using MOND-variants, Moffat's MOG theory, and Deur's gravitational field self-interaction model, have shown that this is possible in principle to achieve. All three approaches have reproduced the cosmic microwave background to high precision and modified gravity theories generically produce more rapid galaxy formation than the LambdaCDM dark matter particle paradigm.

I wouldn't put money on Deur's approach being fully consistent with General Relativity, which a recent paper claimed to disprove, albeit without engaging with Deur's key insight that non-perturbative modeling of the non-Abelian aspects of gravity is necessary. 

But Deur's approach, even if it is actually a modification of GR, remains the only one that secures a complete range of applicability in a gravitational explanation of both dark matter and dark energy, from a set of theoretical assumptions very similar to those of general relativity and generically assumed in quantum gravity theories, in an extremely parsimonious and elegant manner. 

MOND doesn't have the same theoretical foundation or level of generality, and some of its relativistic generalizations like TeVeS don't meet certain observational tests. 

MOG requires a scalar-vector-tensor theory, while Deur manages to get the same results with a single tensor field.

Deur claims that he is introducing no new physically measured fundamental constants beyond Newton's constant G, but he doesn't actually carry out that derivation for the constant he determines empirically for spiral galaxies, which plays the role of a(0). So that conclusion, if true, would be an additional remarkable accomplishment, but I take it with a grain of salt.

Deur's explanation for dark energy phenomena also sets it apart. It dispenses with the need for the cosmological constant (thus preserving global conservation of mass-energy), in a way that is clever, motivated by conservation of energy principles at the galaxy scale related to the dark matter phenomena explanation of the theory, and is not used by any other modified gravity theories of which I am aware. It also provides an explanation for the apparent observation that  the Hubble constant hasn't remained constant over the life of the universe, which flows naturally from Deur's theory and is deeply problematic in a theory with a simple cosmological constant.

So, I think that it is highly likely that Deur's resolution of dark matter and dark energy phenomena, or a theory that looks very similar, is the right solution to these unsolved problems in astrophysics and fundamental physics.

A Recap Of What We Know About Neutrino Mass

This post about the state of research on the neutrino masses was originally made (with minor modifications from it for this blog post) at Physics Forums. Some of this material borrows heavily from prior posts at this blog tagged "neutrino".

Lower Bounds On Neutrino Mass Eigenstates From Neutrino Oscillation

The lower bound comes from the minimum sum of neutrino masses implied by the oscillation numbers (about 66 meV for a normal ordering of neutrino masses and about 106 meV for an inverted ordering of neutrino masses). See, e.g., here and here.


The 95% confidence interval minimum value of the mass difference between the second and third neutrino mass eigenstates is 48.69 meV, and the corresponding value of the mass difference between the first and second neutrino mass eigenstates is 8.46 meV. This implies that with a first neutrino mass eigenstate of 0.01 meV, the sum of the three neutrino masses is 0.01 + 8.47 + 57.16 = 65.64 meV in a normal hierarchy and 0.01 + 48.70 + 57.16 = 105.87 meV in an inverted hierarchy. The often quoted figures of 0.06 eV for the minimum sum of the neutrino masses in a normal ordering and 0.1 eV (or 110 meV) for the minimum sum of the neutrino masses in an inverted ordering are just order of magnitude approximations (or may reflect outdated measurements).

The sum of the three neutrino masses could be greater than these minimums. If the sum of the three masses is greater than these minimums, the smallest neutrino mass is equal to a third of the amount by which the relevant minimum is exceeded to the extent that it is not due to uncertainty in measurements of the two mass differences.

So, for example, if the lightest of the three neutrino masses is 10 meV, then the sum of the three neutrino masses is about 96 meV in a normal mass ordering and about 136 meV in an inverted mass ordering.
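Packaging the arithmetic above into a few lines (a sketch following the post's simple linear-addition convention; strictly speaking, the oscillation constraints are on squared-mass differences, so exact sums differ slightly when the lightest mass is not negligible):

```python
DM_SMALL_MIN = 8.46   # meV, minimum value of the small splitting (95% CL, per the post)
DM_BIG_MIN = 48.69    # meV, minimum value of the large splitting (95% CL, per the post)

def mass_sum(m_lightest, ordering="normal"):
    """Sum of the three neutrino masses in meV, using the post's linear-addition shortcut."""
    if ordering == "normal":
        m2 = m_lightest + DM_SMALL_MIN   # small splitting sits just above the lightest state
        m3 = m2 + DM_BIG_MIN
    else:                                # inverted: large splitting sits above the lightest state
        m2 = m_lightest + DM_BIG_MIN
        m3 = m2 + DM_SMALL_MIN
    return m_lightest + m2 + m3

print(mass_sum(0.01), mass_sum(0.01, "inverted"))   # ~65.6 and ~105.9 meV
print(mass_sum(10.0), mass_sum(10.0, "inverted"))   # ~96 and ~136 meV

# Inverting the logic: a cosmological bound on the sum caps the lightest mass
# at roughly (bound - minimum_sum) / 3, as used later in this post.
for bound in (87.0, 120.0):
    print(f"sum < {bound:.0f} meV -> lightest < ~{(bound - mass_sum(0.0)) / 3:.0f} meV")
```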

The latest measurement of neutrino properties from T2K from March of this year favors a normal ordering of neutrino masses strongly but not decisively. We should be able to know the neutrino mass ordering more definitively in less than a decade, according to a Snowmass 2021 paper released in December of 2022 [figure not reproduced here].
We have made significant progress since neutrino mass was first confirmed experimentally (also from the Snowmass 2021 paper) [figure not reproduced here].

Upper Bounds On Neutrino Mass From Direct Measurement

Direct measurement bounds the lightest neutrino mass at not more than about 800 meV, which isn't very constraining. This is potentially reducible to 200 meV within a few years according to physics conference presentations, which also isn't competitive with cosmology based bounds set forth below.

The tightest proposed constraints from cosmology (see below) are that this absolute mass value is actually 7 meV or less (with 95% confidence), although many cosmology based estimates are more conservative and would allow for a value of this as high as 18 meV or more (with 95% confidence). The one sigma (68% confidence) values are approximately 3.5 meV or less, and 9 meV or less, respectively.

Direct measurements of the neutrino masses are not anticipated to be meaningfully competitive with other means of determining the neutrino masses for the foreseeable future.

Upper Bounds On Neutrino Mass From Cosmology

The upper bound on the sum of the three neutrino masses is cosmology based. As the Snowmass 2021 paper explains:
Cosmological measurements of the cosmic microwave background temperature and polarization information, baryon acoustic oscillations, and local distance ladder measurements lead to an estimate that Σm(i) < 90 meV at 90% CL, which mildly disfavors the inverted ordering over the normal ordering since Σm(i) ≥ 60 meV in the NO and ≥ 110 meV in the IO; although these results depend on one’s choice of prior for the absolute neutrino mass scale.

Significant improvements are expected to reach the σ(Σm(ν)) ∼ 0.04 eV level with upcoming data from DESI and VRO, see the CF7 report, which should be sufficient to test the results of local oscillation data in the early universe at high significance, depending on the true values.
According to Eleonora di Valentino, Stefano Gariazzo, Olga Mena, "Model marginalized constraints on neutrino properties from cosmology" arXiv:2207.05167 (July 11, 2022), cosmology data favors a sum of three neutrino masses of not more than 87 meV (nominally ruling out an inverted mass hierarchy at the 95% confidence level, a preference for the normal ordering that neutrino oscillation data alone also show at the 2-2.7σ level), implying a lightest neutrino mass eigenstate of about 7 meV or less. 

Other estimates have put the cosmological upper bound on the sum of the three neutrino masses at 120 meV, implying a lightest neutrino mass eigenstate of about 18 meV or less.

The upper bound from cosmology is model dependent, but it is also quite robust to a wide variety of assumptions in those models. Of course, if future cosmology data implies that the sum of the three neutrino masses is lower than the lower bound from neutrino oscillation data (since all cosmology bounds to date are upper bounds), then there is a contradiction which would tend to cast doubt on the cosmology model used to estimate the sum of the three neutrino masses.

Upper Bounds On Majorana Neutrino Mass

There is also an upper bound on the Majorana mass of neutrinos, if they have Majorana mass, from the non-observation of neutrinoless double beta decay. 

As of July of 2022 (from here (arXiv 2207.07638)), the non-detection of neutrinoless double beta decay in a state of the art experiment established, with 90% confidence, a minimum half-life for the process of 8.3 × 10^25 years. 

As explained by this source, an inverted mass hierarchy for neutrinos (with purely Majorana mass) is ruled out at a half-life of about 10^29 years (an improvement by a factor of 1200 in the excluded neutrinoless double beta decay half-life over the current state of the art measurement). Exclusively Majorana mass becomes problematic even in a normal mass hierarchy at about 10^32 or 10^33 years (an improvement by a factor of 1.2 million to 12 million over the current state of the art). These limitations, however, are quite model dependent, in addition to being not very constraining.
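A quick arithmetic check of the improvement factors quoted above (a minimal sketch using the half-life numbers as given):

```python
current_limit = 8.3e25          # years: current 90% CL lower bound on the 0vbb half-life
io_exclusion = 1e29             # years: rough scale excluding purely Majorana mass in the IO
no_problematic = (1e32, 1e33)   # years: scale where purely Majorana mass gets difficult even in the NO

print(f"factor to reach IO exclusion: ~{io_exclusion / current_limit:,.0f}")   # ~1,200
for t in no_problematic:
    print(f"factor to reach {t:.0e} years: ~{t / current_limit:,.0f}")         # ~1.2 million and ~12 million
```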

On the other hand, if one is a supporter of the Majorana neutrino mass hypothesis, it is somewhat reassuring to know that we shouldn't have been able to see neutrinoless double beta decay yet if the neutrino masses are as small as neutrino oscillation data and cosmology data suggests.

Is There Any Theoretical Reason Forbidding Oscillations Between An Eigenstate Of Nonzero Rest Mass And An Eigenstate Of Exactly Zero Rest Mass?

Not a strong one, although there are suggestive reasons why it would make more sense for the lightest eigenstate to have a tiny, but non-zero, rest mass.

All neutrinos do interact directly via the weak force, and every single other Standard Model particle with a non-zero rest mass also interacts directly via the weak force, while all Standard Model particles that do not interact directly via the weak force (i.e., photons and gluons, as well as the hypothetical graviton, which doesn't carry weak force "charge") have zero rest mass. Similarly, all other Standard Model fermions have rest mass.

Possibly, the weak force self-interaction of the neutrinos ought to give rise to some rest mass. If the electron and lightest neutrino mass eigenstate both reflected predominantly the self-interactions of these particles via Standard Model forces (as some papers have suggested), a lightest neutrino mass eigenstate of the right order of magnitude given the combination of neutrino oscillation and cosmology bounds would flow from the relative values of the electromagnetic force and weak force coupling constants.

A massless neutrino would always travel at precisely the speed of light and would not experience the passage of time internally, while a massive neutrino would travel at a speed slightly less than the speed of light depending upon its kinetic energy due to special relativity, and would experience the passage of time internally, which makes more sense for a particle whose oscillations are not direction of time symmetric (because the PMNS matrix appears to have a non-zero CP violating term).

But none of this is ironclad theoretical proof that the lightest neutrino mass eigenstate can't be zero.

How Do Oscillations Work Between Mass Eigenstates If The Total Energy Is Smaller Than The Mass Of An Eigenstate Potentially Involved In The Oscillation?

There is no reason that virtual particles in a series of neutrino oscillations shouldn't be possible, but the end states of any interaction need to conserve mass-energy.

In practice, we generally don't observe neutrinos with exceedingly low kinetic energy, from either reactors or nuclear decays or cosmic sources. We don't have the tools to do so, and don't know of processes that should give rise to them that we can observe.

All observed neutrinos have relativistic kinetic energy (i.e. kinetic energy comparable to or in excess of their rest mass), even though very low energy neutrinos are theoretically possible. Observations of relic neutrinos with very low kinetic energy are a scientific goal rather than a scientific achievement.

Tuesday, August 22, 2023

Old But Interesting

We show that, in the application of Riemannian geometry to gravity, there exists a superpotential Vij of the Riemann-Christoffel tensor which is the tensor generalization of Poisson's classical potential. Leaving open the question of a zero or nonzero rest mass k of the graviton we show that, in the latter case, k^2 Vij is an energy momentum density, or “Maxwell-like tensor,” of the gravity field itself, adding to the “material tensor” in the right-hand sides of both the (generalized) Poisson equation and the Einstein gravity equation, but that, nevertheless, Einstein's requirement of geodesic motion of a point particle is rigorously preserved. 
Two interesting possibilities are thus opened: a tentative explanation of the cosmological “missing mass” and quantization of the Riemannian gravity field along a standard procedure.

O. Costa de Beauregard, "Massless or massive graviton?" 3 Foundations of Physics Letters 81-85 (1990).

Wednesday, August 16, 2023

Ötzi the Iceman’s DNA Revisited

A new paper reveals that a 2012 analysis of Ötzi the Iceman's genome was contaminated and that, rather than having steppe ancestry, he was an almost pure European Neolithic farmer with quite dark skin (as was typical at the time).
In 2012, scientists compiled a complete picture of Ötzi’s genome; it suggested that the frozen mummy found melting out of a glacier in the Tyrolean Alps had ancestors from the Caspian steppe . . . The Iceman is about 5,300 years old. Other people with steppe ancestry didn’t appear in the genetic record of central Europe until about 4,900 years ago. Ötzi “is too old to have that type of ancestry,” says archaeogeneticist Johannes Krause of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. The mummy “was always an outlier.” 
Krause and colleagues put together a new genetic instruction book for the Iceman. The old genome was heavily contaminated with modern people’s DNA, the researchers report August 16 in Cell Genomics. The new analysis reveals that “the steppe ancestry is completely gone.”

About 90 percent of Ötzi’s genetic heritage comes from Neolithic farmers, an unusually high amount compared with other Copper Age remains. . . The Iceman’s new genome also reveals he had male-pattern baldness and much darker skin than artistic representations suggest. Genes conferring light skin tones didn’t become prevalent until 4,000 to 3,000 years ago when early farmers started eating plant-based diets and didn’t get as much vitamin D from fish and meat as hunter-gatherers did. . . .“People that lived in Europe between 40,000 years ago and 8,000 years ago were as dark as people in Africa. . . .“We have always imagined that [Europeans] became light-skinned much faster. But now it seems that this happened actually quite late in human history.”
From Science News. The paper and its abstract are as follows:
The Tyrolean Iceman is known as one of the oldest human glacier mummies, directly dated to 3350–3120 calibrated BCE. A previously published low-coverage genome provided novel insights into European prehistory, despite high present-day DNA contamination. Here, we generate a high-coverage genome with low contamination (15.3×) to gain further insights into the genetic history and phenotype of this individual. Contrary to previous studies, we found no detectable Steppe-related ancestry in the Iceman. Instead, he retained the highest Anatolian-farmer-related ancestry among contemporaneous European populations, indicating a rather isolated Alpine population with limited gene flow from hunter-gatherer-ancestry-related populations. Phenotypic analysis revealed that the Iceman likely had darker skin than present-day Europeans and carried risk alleles associated with male-pattern baldness, type 2 diabetes, and obesity-related metabolic syndrome. These results corroborate phenotypic observations of the preserved mummified body, such as high pigmentation of his skin and the absence of hair on his head.
K. Wang et al. "High-coverage genome of the Tyrolean Iceman reveals unusually high Anatolian farmer ancestry." Cell Genomics (August 16, 2023). doi: 10.1016/j.xgen.2023.100377.

The open access paper states in  the body text that:
We found that the Iceman derives 90% ± 2.5% ancestry from early Neolithic farmer populations when using Anatolia_N as the proxy for the early Neolithic-farmer-related ancestry and WHGs as the other ancestral component (Figure 3; Table S4). When testing with a 3-way admixture model including Steppe-related ancestry as the third source for the previously published and the high-coverage genome, we found that our high-coverage genome shows no Steppe-related ancestry (Table S5), in contrast to ancestry decomposition of the previously published Iceman genome. We conclude that the 7.5% Steppe-related ancestry previously estimated for the previously published Iceman genome is likely the result of modern human contamination. . . . 
Compared with the Iceman, the analyzed contemporaneous European populations from Spain and Sardinia (Italy_Sardinia_C, Italy_Sardinia_N, Spain_MLN) show less early Neolithic-farmer-related ancestry, ranging from 27.2% to 86.9% (Figure 3A; Table S4). Even ancient Sardinian populations, who are located further south than the Iceman and are geographically separate from mainland Europe, derive no more than 85% ancestry from Anatolia_N (Figure 3; Table S4). The higher levels of hunter-gatherer ancestry in individuals from the 4th millennium BCE have been explained by an ongoing admixture between early farmers and hunter-gatherers in the Middle and Late Neolithic in various parts of Europe, including western Europe (Germany and France), central Europe, Iberia, and the Balkans.

Only individuals from Italy_Broion_CA.SG found to the south of the Alps present similarly low hunter-gatherer ancestry as seen in the Iceman.

We conclude that the Iceman and Italy_Broion_CA.SG might both be representatives of specific Chalcolithic groups carrying higher levels of early Neolithic-farmer-related ancestry than any other contemporaneous European group. This might indicate less gene flow from groups that are more admixed with hunter-gatherers or a smaller population size of hunter-gatherers in that region during the 5th and 4th millennium BCE. . . .
We estimated the admixture date between the early Neolithic-farmer-related (using Anatolia_N as proxy) and WHG-related ancestry sources using DATES to be 56 ± 21 generations before the Iceman’s death, which corresponds to 4880 ± 635 calibrated BCE assuming 29 years per generation (Figure 3B; Table S7) and considering the mean C14 date of this individual. Alternatively, using Germany_EN_LBK as the proxy for early Neolithic-farmer-related ancestry, we estimated the admixture date to be 40 ± 15 generations before his death (Table S7), or 4400 ± 432 calibrated BCE, overlapping with estimates from nearby Italy_Broion_CA.SG, who locate to the south of the Alps (Figure 3B).

While compared with the admixture time between early Neolithic farmers and hunter-gatherers in other parts of southern Europe, for instance in Spain and southern Italy, we found that, particularly, the admixture with hunter-gatherers as seen in the Iceman and Italy_Broion_CA.SG is more recent (Figure 3B; Table S3), suggesting a potential longer survival of hunter-gatherer-related ancestry in this geographical region.
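As a cross-check of the generation-to-calendar conversion in the quoted passage, here is a rough sketch (the paper anchors the conversion to the Iceman's mean C14 date, so using the midpoint of the 3350-3120 BCE calibrated range gives a slightly different central value, and the paper's quoted uncertainties also fold in the dating uncertainty):

```python
GEN_YEARS = 29                  # years per generation assumed in the paper
death_bce = (3350 + 3120) / 2   # midpoint of the calibrated date range, ~3235 BCE

dates_estimates = [(56, 21, "Anatolia_N proxy"), (40, 15, "Germany_EN_LBK proxy")]
for gens, err, label in dates_estimates:
    mid = death_bce + gens * GEN_YEARS      # generations before death -> calendar years BCE
    spread = err * GEN_YEARS
    print(f"{label}: {gens} +/- {err} generations -> ~{mid:.0f} +/- {spread:.0f} BCE")

# -> ~4859 +/- 609 BCE and ~4395 +/- 435 BCE, close to the paper's quoted
#    4880 +/- 635 and 4400 +/- 432 calibrated BCE.
```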

Climate And Archaic Hominins

John Hawks has an intriguing analysis of a new paper on how the range and interactions of Neanderthals and Denisovans may have had a climate component. We know from the existence of genetic evidence showing Neanderthal-Denisovan admixture that there was some interaction.

He is skeptical of some aspects of the paper, including the hypothesis that Denisovans were systematically more cold tolerant, and the underlying concept that there was a geographic range of occupation by particular species that was stable over time, with frontiers that were rarely crossed. He acknowledges, however, that there is a wide geographic range where there could have been Neanderthal-Denisovan interaction. He also notes that:

The conclusion I draw from Ruan and colleagues' study is that no strong east-west climate barriers could have kept these populations apart for the hundreds of thousands of years of their evolution. That leaves open the possibility that other aspects of the environment besides temperature, rainfall, and general biome composition could have shaped their evolution. The alternative is that the survival and local success of hominin groups was itself so patchy over the long term that only a handful of lineages could persist.

One hypothesis that I've advanced over the years is that the jungles and hominin occupants of mainland Southeast Asia, formed a barrier to Neanderthal and modern human expansion until the Toba eruption at least temporarily removed that barrier.

I reproduce two images he borrows from papers he discusses below: 

One issue with the Denisovan habitat range shown is that Denisovan admixture in modern humans is strongest to the east of the Wallace line, which, together with residual (albeit greatly diluted) Denisovan admixture in Southeast Asia and East Asia, suggests a much greater warm-climate range for these ancient hominins, in both island and mainland Southeast Asia, than the chart above indicates.