Tuesday, August 29, 2023

Fuzzy Dark Matter Ruled Out

This paper essentially rules out the remainder of the viable fuzzy dark matter parameter space. FDM had been one of the more viable ultra-light dark matter theories. 

Tatyana Shevchuk, Ely D. Kovetz, Adi Zitrin, "New Bounds on Fuzzy Dark Matter from Galaxy-Galaxy Strong-Lensing Observations" arXiv:2308.14640 (August 28, 2023).

Wednesday, August 23, 2023

What Would Dark Matter Have To Be Like To Fit Our Observations?

Stacey McGaugh, at his Triton Station blog (with some typographical errors because he is dictating after recently breaking his wrist), engages with the question of what properties dark matter would have to have to fit our astronomy observations.

Cosmology considerations, like the observed cosmic microwave background radiation (after astronomy observations ruled out some of the baryonic matter contenders like brown dwarfs and black holes), suggest that dark matter should be nearly collisionless, should lack interactions with ordinary matter other than gravity, and should be non-baryonic (i.e. not made up of Standard Model particles or composites of them).

But observations of galaxies show that dark matter with these cosmology-driven properties would form halos quite different from the ones actually inferred. Astronomy observations of galaxies show us that inferred dark matter distributions intimately track the distributions of ordinary matter in a galaxy, so closely that Newtonian-like gravitational interactions sourced by the ordinary matter alone can account for the observed dynamics.

As his post explains, after motivating his comments with the historical background of the dark matter theoretical paradigm, the problem is as follows (I have corrected his dictation-software-related errors without attribution; the bold and underlined emphasis is mine):

If we insist on dark matter, what this means is that we need each and every galaxy to precisely look like MOND. 
I wrote the equation for the required effects of dark matter in all generality in McGaugh (2004). The improvements in the data over the subsequent decade enable this to be abbreviated to:

g(DM) = g(bar) / [exp(√(g(bar)/a(0))) − 1]
This is in McGaugh et al. (2016), which is a well known paper (being in the top percentile of citation rates). 
So this should be well known, but the implication seems not to be, so let’s talk it through. g(DM) is the force per unit mass provided by the dark matter halo of a galaxy. This is related to the mass distribution of the dark matter – its radial density profile – through the Poisson equation. The dark matter distribution is entirely stipulated by the mass distribution of the baryons, represented here by g(bar). That’s the only variable on the right hand side, a(0) being Milgrom’s acceleration constant. So the distribution of what you see specifies the distribution of what you can’t.

This is not what we expect for dark matter. It’s not what naturally happens in any reasonable model, which is an NFW halo. That comes from dark matter-only simulations; it has literally nothing to do with g(bar). So there is a big chasm to bridge right from the start: theory and observation are speaking different languages. Many dark matter models don’t specify g(bar), let alone satisfy this constraint. Those that do only do so crudely – the baryons are hard to model. Still, dark matter is flexible; we have the freedom to make it work out to whatever distribution we need. But in the end, the best a dark matter model can hope to do is crudely mimic what MOND predicted in advance. If it doesn’t do that, it can be excluded. Even if it does do that, should we be impressed by the theory that only survives by mimicking its competitor?

The observed MONDian behavior makes no sense whatsoever in terms of the cosmological constraints in which the dark matter has to be non-baryonic and not interact directly with the baryons. The equation above implies that any dark matter must interact very closely with the baryons – a fact that is very much in the spirit of what earlier dynamicists had found, that the baryons and the dynamics are intimately connected. If you know the distribution of the baryons that you can see, you can predict what the distribution of the unseen stuff has to be.

And so that’s the property that galaxies require that is pretty much orthogonal to the cosmic requirements. There needs to be something about the nature of dark matter that always gives you MONDian behavior in galaxies. Being cold and non-interacting doesn’t do that. 
Instead, galaxy phenomenology suggests that there is a direct connection – some sort of direct interaction – between dark matter and baryons. That direct interaction is anathema to most ideas about dark matter, because if there’s a direct interaction between dark matter and baryons, it should be really easy to detect dark matter. They’re out there interacting all the time.

There have been a lot of half solutions. These include things like warm dark matter and self interacting dark matter and fuzzy dark matter. These are ideas that have been motivated by galaxy properties. But to my mind, they are the wrong properties. They are trying to create a central density core in the dark matter halo. That is at best a partial solution that ignores the detailed distribution that is written above. The inference of a core instead of a cusp in the dark matter profile is just a symptom. The underlying disease is that the data look like MOND.

MONDian phenomenology is a much higher standard to try to get a dark matter model to match than is a simple cored halo profile. We should be honest with ourselves that mimicking MOND is what we’re trying to achieve. Most workers do not acknowledge that, or even seem to be aware that this is the underlying issue.

There are some ideas to try to build in the required MONDian behavior while also satisfying the desires of cosmology. One is Blanchet’s dipolar dark matter. He imagined a polarizable dark medium that does react to the distribution of baryons so as to give the distribution of dark matter that gives MOND-like dynamics. Similarly, Khoury’s idea of superfluid dark matter does something related. It has a superfluid core in which you get MOND-like behavior. At larger scales it transitions to a non-superfluid mode, where it is just particle dark matter that reproduces the required behavior on cosmic scales.

I don’t find any of these models completely satisfactory. It’s clearly a hard thing to do. You’re trying to mash up two very different sets of requirements. With these exceptions, the galaxy-motivated requirement that there is some physical aspect of dark matter that somehow knows about the distribution of baryons and organizes itself appropriately is not being used to inform the construction of dark matter models. The people who do that work seem to be very knowledgeable about cosmological constraints, but their knowledge of galaxy dynamics seems to begin and end with the statement that rotation curves are flat and therefore we need dark matter. That sufficed 40 years ago, but we’ve learned a lot since then. It’s not good enough just to have extra mass. That doesn’t cut it.
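McGaugh's abbreviated relation can be sketched numerically. A minimal sketch, assuming the McGaugh et al. (2016) form g(DM) = g(bar)/[exp(√(g(bar)/a(0))) − 1] and Milgrom's constant a(0) ≈ 1.2 × 10^-10 m/s²:

```python
import math

A0 = 1.2e-10  # Milgrom's acceleration constant a(0), in m/s^2

def g_dm(g_bar):
    """Dark matter force per unit mass implied by the baryonic
    acceleration g_bar (m/s^2) under the radial acceleration relation."""
    return g_bar / math.expm1(math.sqrt(g_bar / A0))

# Inner-galaxy (high acceleration) regime: the implied dark matter
# contribution is negligible compared to the baryonic one.
print(g_dm(1e-8))

# Outer-galaxy (low acceleration) regime: the implied dark matter
# contribution approaches sqrt(a(0) * g_bar), the deep-MOND limit.
print(g_dm(1e-13), math.sqrt(A0 * 1e-13))
```

The point of the sketch is the one McGaugh makes: the right-hand side depends only on the baryon distribution, so the distribution of what you see fully specifies the distribution of what you can't.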

This analysis is the main reason that I'm much more inclined to favor gravity based explanations for dark matter phenomena than particle based explanations.

Direct dark matter detection experiments pretty much rule out dark matter particles that interact with ordinary matter with sufficient strength, with masses in the 1 GeV to 1000 GeV range (one GeV is 1,000,000,000 eV). 

Collider experiments pretty much rule out dark matter particles that interact in any way with ordinary matter at sufficient strength, with masses in the low single digit thousands of GeVs or less. These experiments are certainly valid down to something less than the mass scale of the electron (which has a mass of about 511,000 eV). 

Astronomy observations, of the kind used to rule out MACHOs such as brown dwarfs and large primordial black holes (PBHs), pretty much rule out dark matter lumps of asteroid size or greater (from micro-lensing for larger lumps, and from solar system dynamics for asteroid sized lumps), whether or not they interact non-gravitationally with ordinary matter. 

This leaves a gap between about 1000 GeV and asteroid masses, but the wave-like nature of dark matter phenomena inferred from astronomy observations pretty much rules out dark matter particles of more than 10,000 eV.

Direct dark matter detection experiments can't directly rule out these low mass dark matter candidates because they're not sensitive enough. 

Colliders could conceivably miss particles that interact only feebly with ordinary matter and have very low mass themselves, although nuclear physics was able to detect the feebly interacting and very low mass neutrino as early as 1956 with far more primitive equipment than we have now. 

Even light dark matter candidates like axions, warm dark matter, and fuzzy dark matter, however, still can't reproduce the observed tight fit between ordinary matter distributions and dark matter distributions within dark matter halos if they have no non-gravitational interactions with ordinary matter.

All efforts to directly detect axions (which would have some interactions with ordinary matter that can be theoretically modeled) have had null results.

Furthermore, the MOND equations that dark matter phenomena follow in galaxies are tied specifically to the amount of Newtonian-like gravitational acceleration that objects in the galaxy experience from it. This makes it more sensible to envision these phenomena as arising from a modification of gravity than as an entirely novel fifth force, unrelated to gravity, between dark matter particles and ordinary matter.

If you take the dark matter particle candidates to explain dark matter phenomena off the field for these reasons, you can narrow down the plausible possible explanations for dark matter phenomena dramatically.

We also know that toy model MOND itself isn't quite the right solution. 

The right solution needs to be embedded in a relativistic framework that reproduces strong-field gravitational phenomena and solar system scale gravitational phenomena more or less exactly as Einstein's General Relativity does, up to the limitations of current observational precision and accuracy, which are considerable.

The right solution also needs a greater domain of applicability than toy-model MOND. It must correctly deal with galaxy cluster level phenomena (which display a different but similar scaling law to the Tully-Fisher relation that can be derived directly from MOND), with the behavior of particles near spiral galaxies that are outside the main galactic disk, and with the behavior of wide binary stars (which is still currently unknown), and it must be generalized to address cosmology phenomena like the cosmic background radiation and the timing of galaxy formation.

Fortunately, several attempts, using MOND variants, Moffat's MOG theory, and Deur's gravitational field self-interaction model, have shown that this is possible in principle. All three approaches have reproduced the cosmic microwave background to high precision, and modified gravity theories generically produce more rapid galaxy formation than the LambdaCDM dark matter particle paradigm.

I wouldn't put money on Deur's approach being fully consistent with General Relativity, which a recent paper claimed to disprove, albeit without engaging with Deur's key insight that non-perturbative modeling of the non-Abelian aspects of gravity is necessary. 

But Deur's approach, even if it is actually a modification of GR, remains the only one that secures a complete range of applicability in a gravitational explanation of both dark matter and dark energy, from a set of theoretical assumptions very similar to those of general relativity and generically assumed in quantum gravity theories, in an extremely parsimonious and elegant manner. 

MOND doesn't have the same theoretical foundation or level of generality, and some of its relativistic generalizations like TeVeS don't meet certain observational tests. 

MOG requires a scalar-vector-tensor theory, while Deur manages to get the same results with a single tensor field.

Deur claims to introduce no new physically measured fundamental constants beyond Newton's constant G, but he doesn't carry out this derivation for the constant, analogous to a(0), that he determines empirically for spiral galaxies. That conclusion, if true, would be an additional remarkable accomplishment, but I take it with a grain of salt.

Deur's explanation for dark energy phenomena also sets his theory apart. It dispenses with the need for the cosmological constant (thus preserving global conservation of mass-energy), in a way that is clever, motivated by conservation of energy principles at the galaxy scale related to the theory's dark matter phenomena explanation, and not used by any other modified gravity theory of which I am aware. It also provides an explanation for the apparent observation that the Hubble constant hasn't remained constant over the life of the universe, which flows naturally from Deur's theory and is deeply problematic in a theory with a simple cosmological constant.

So, I think that it is highly likely that Deur's resolution of dark matter and dark energy phenomena, or a theory that looks very similar, is the right solution to these unsolved problems in astrophysics and fundamental physics.

A Recap Of What We Know About Neutrino Mass

This post about the state of research on the neutrino masses was originally made (with minor modifications from it for this blog post) at Physics Forums. Some of this material borrows heavily from prior posts at this blog tagged "neutrino".

Lower Bounds On Neutrino Mass Eigenstates From Neutrino Oscillation

The lower bound comes from the minimum sum of neutrino masses implied by the oscillation numbers (about 66 meV for a normal ordering of neutrino masses and about 106 meV for an inverted hierarchy of neutrino masses). See, e.g., here and here.

The 95% confidence interval minimum value of the mass difference between the second and third neutrino mass eigenstates is 48.69 meV, and the corresponding value of the mass difference between the first and second neutrino mass eigenstates is 8.46 meV. This implies that, with a first neutrino mass eigenstate of 0.01 meV, the sum of the three neutrino masses is 0.01 + 8.47 + 57.16 = 65.64 meV in a normal hierarchy and 0.01 + 48.70 + 57.16 = 105.87 meV in an inverted hierarchy. The often quoted figures of 0.06 eV for the minimum sum of the neutrino masses in a normal ordering and 0.1 eV (i.e. 110 meV) for the minimum sum in an inverted ordering are just order of magnitude approximations (or may reflect outdated measurements).

The sum of the three neutrino masses could be greater than these minimums. If it is, the smallest neutrino mass is equal to a third of the amount by which the relevant minimum is exceeded, to the extent that the excess is not due to uncertainty in the measurements of the two mass differences.

So, for example, if the lightest of the three neutrino masses is 10 meV, then the sum of the three neutrino masses is about 96 meV in a normal mass ordering and about 136 meV in an inverted mass ordering.
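The arithmetic above can be sketched in a few lines, using the post's linearized mass differences (actual fits use the squared-mass splittings, so this reproduces only the approximation used in this post):

```python
DM21 = 8.46   # meV, linearized gap between first and second eigenstates
DM32 = 48.69  # meV, linearized gap between second and third eigenstates

def mass_sum(lightest, ordering="normal"):
    """Sum of the three neutrino masses (meV) for a given lightest
    eigenstate mass, stacking the linearized mass gaps."""
    # In the inverted ordering the large gap sits below the small one.
    gaps = (DM21, DM32) if ordering == "normal" else (DM32, DM21)
    m_a = lightest
    m_b = m_a + gaps[0]
    m_c = m_b + gaps[1]
    return m_a + m_b + m_c

print(mass_sum(0.01))              # minimum sum, normal ordering (~65.64 meV)
print(mass_sum(0.01, "inverted"))  # minimum sum, inverted ordering (~105.87 meV)
print(mass_sum(10))                # ~96 meV
print(mass_sum(10, "inverted"))    # ~136 meV
```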

The latest measurement of neutrino properties from T2K, from March of this year, favors a normal ordering of neutrino masses strongly, but not decisively. According to a Snowmass 2021 paper released in December of 2022, we should be able to know the neutrino mass ordering more definitively in less than a decade. We have made significant progress since neutrino mass was first confirmed experimentally (also per the Snowmass 2021 paper).

Upper Bounds On Neutrino Mass From Direct Measurement

Direct measurement bounds the lightest neutrino mass at not more than about 800 meV, which isn't very constraining. This is potentially reducible to 200 meV within a few years, according to physics conference presentations, which still wouldn't be competitive with the cosmology based bounds set forth below.

The tightest proposed constraints from cosmology (see below) are that this absolute mass value is actually 7 meV or less (with 95% confidence), although many cosmology based estimates are more conservative and would allow for a value of this as high as 18 meV or more (with 95% confidence). The one sigma (68% confidence) values are approximately 3.5 meV or less, and 9 meV or less, respectively.

Direct measurements of the neutrino masses are not anticipated to be meaningfully competitive with other means of determining the neutrino masses for the foreseeable future.

Upper Bounds On Neutrino Mass From Cosmology

The upper bound on the sum of the three neutrino masses is cosmology based. As the Snowmass 2021 paper explains:
Cosmological measurements of the cosmic microwave background temperature and polarization information, baryon acoustic oscillations, and local distance ladder measurements lead to an estimate that Σ m(i) < 90 meV at 90% CL, which mildly disfavors the inverted ordering over the normal ordering since Σ m(i) ≥ 60 meV in the NO and Σ m(i) ≥ 110 meV in the IO; although these results depend on one’s choice of prior on the absolute neutrino mass scale.

Significant improvements are expected to reach the σ(Σ m(ν)) ∼ 0.04 eV level with upcoming data from DESI and VRO (see the CF7 report), which should be sufficient to test the results of local oscillation data in the early universe at high significance, depending on the true values.
According to Eleonora di Valentino, Stefano Gariazzo, Olga Mena, "Model marginalized constraints on neutrino properties from cosmology" arXiv:2207.05167 (July 11, 2022), cosmology data favors a sum of three neutrino masses of not more than 87 meV (nominally ruling out an inverted mass hierarchy at the 95% confidence interval level, which oscillation data alone favor at a 2-2.7σ level), implying a lightest neutrino mass eigenstate of about 7 meV or less. 

Other estimates have put the cosmological upper bound on the sum of the three neutrino masses at 120 meV, implying a lightest neutrino mass eigenstate of about 18 meV or less.
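The implied bounds on the lightest eigenstate follow from spreading the excess over the roughly 66 meV normal-ordering minimum across the three states, as a quick sketch shows:

```python
NO_MIN = 65.64  # meV, minimum sum of the three masses, normal ordering

def lightest_bound(sum_bound):
    """Upper bound (meV) on the lightest eigenstate implied by a
    cosmological upper bound on the sum of the three masses (meV)."""
    return max(sum_bound - NO_MIN, 0.0) / 3

print(lightest_bound(87))   # ~7 meV
print(lightest_bound(120))  # ~18 meV
```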

The upper bound from cosmology is model dependent, but it is also quite robust to a wide variety of assumptions in those models. Of course, if future cosmology data implies that the sum of the three neutrino masses is lower than the lower bound from neutrino oscillation data (since all cosmology bounds to date are upper bounds), then there is a contradiction which would tend to cast doubt on the cosmology model used to estimate the sum of the three neutrino masses.

Upper Bounds On Majorana Neutrino Mass

There is also an upper bound on the Majorana mass of neutrinos, if they have Majorana mass, from the non-observation of neutrinoless double beta decay. 

As of July of 2022 (arXiv:2207.07638), the non-detection of neutrinoless double beta decay in a state of the art experiment established, with 90% confidence, a minimum half-life for the process of 8.3 * 10^25 years. 

As explained by this source, an inverted mass hierarchy for neutrinos (with purely Majorana mass) is ruled out at a half-life of about 10^29 years (an improvement by a factor of about 1,200 in the excluded neutrinoless double beta decay half-life over the current state of the art measurement). Exclusively Majorana mass becomes problematic even in a normal mass hierarchy at about 10^32 or 10^33 years (an improvement by a factor of 1.2 million to 12 million over the current state of the art). These limits, however, are quite model dependent, in addition to being not very constraining.
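The quoted improvement factors are just ratios of the target half-life reach to the current 8.3 * 10^25 year limit:

```python
CURRENT_LIMIT = 8.3e25  # years, current 90% CL half-life limit

def improvement(target):
    """Factor by which the half-life reach must improve to probe
    a given target half-life (in years)."""
    return target / CURRENT_LIMIT

print(improvement(1e29))  # ~1,200: tests a purely Majorana inverted hierarchy
print(improvement(1e32))  # ~1.2 million
print(improvement(1e33))  # ~12 million
```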

On the other hand, if one is a supporter of the Majorana neutrino mass hypothesis, it is somewhat reassuring to know that we shouldn't have been able to see neutrinoless double beta decay yet if the neutrino masses are as small as neutrino oscillation data and cosmology data suggest.

Is There Any Theoretical Reason Forbidding Oscillations Between An Eigenstate Of Nonzero Rest Mass And An Eigenstate Of Exactly Zero Rest Mass?

Not a strong one, although there are suggestive reasons why it would make more sense for the lightest neutrino mass eigenstate to have a tiny, but non-zero, rest mass.

All neutrinos do interact directly via the weak force, and every other Standard Model particle with a non-zero rest mass also interacts directly via the weak force, while all Standard Model particles that do not interact directly via the weak force (i.e. photons and gluons, plus the hypothetical graviton, which doesn't have weak force "charge") have zero rest mass. Similarly, all other Standard Model fermions have rest mass.

Possibly, the weak force self-interaction of the neutrinos ought to give rise to some rest mass. If the electron and lightest neutrino mass eigenstate both reflected predominantly the self-interactions of these particles via Standard Model forces (as some papers have suggested), a lightest neutrino mass eigenstate of the right order of magnitude given the combination of neutrino oscillation and cosmology bounds would flow from the relative values of the electromagnetic force and weak force coupling constants.

A massless neutrino would always travel at precisely the speed of light and would not experience the passage of time internally. A massive neutrino would travel at a speed slightly less than the speed of light, depending upon its kinetic energy, and would experience the passage of time internally, which makes more sense for a particle whose oscillations are not symmetric under time reversal (because the PMNS matrix appears to have a non-zero CP violating term).

But none of this is ironclad theoretical proof that the lightest neutrino mass eigenstate can't be zero.

How Do Oscillations Work Between Mass Eigenstates If The Total Energy Is Smaller Than The Mass Of An Eigenstate Potentially Involved In The Oscillation?

There is no reason that virtual particles in a series of neutrino oscillations shouldn't be possible, but the end states of any interaction need to conserve mass-energy.

In practice, we generally don't observe neutrinos with exceedingly low kinetic energy, from either reactors or nuclear decays or cosmic sources. We don't have the tools to do so, and don't know of processes that should give rise to them that we can observe.

All observed neutrinos have relativistic kinetic energy (i.e. kinetic energy comparable to or in excess of their rest mass), even though very low energy neutrinos are theoretically possible. Observations of relic neutrinos with very low kinetic energy are a scientific goal rather than a scientific achievement.

Tuesday, August 22, 2023

Old But Interesting

We show that, in the application of Riemannian geometry to gravity, there exists a superpotential Vij of the Riemann-Christoffel tensor which is the tensor generalization of Poisson's classical potential. Leaving open the question of a zero or nonzero rest mass k of the graviton, we show that, in the latter case, k^2 Vij is an energy momentum density, or “Maxwell-like tensor,” of the gravity field itself, adding to the “material tensor” in the right-hand sides of both the (generalized) Poisson equation and the Einstein gravity equation, but that, nevertheless, Einstein's requirement of geodesic motion of a point particle is rigorously preserved. 
Two interesting possibilities are thus opened: a tentative explanation of the cosmological “missing mass” and quantization of the Riemannian gravity field along a standard procedure.

O. Costa de Beauregard, "Massless or massive graviton?" 3 Foundations of Physics Letters 81-85 (1990).

Wednesday, August 16, 2023

Ötzi the Iceman’s DNA Revisited

A new paper reveals that a 2012 analysis of Ötzi the Iceman's genome was contaminated and that, rather than having steppe ancestry, he was an almost pure European Neolithic farmer with quite dark skin (as was typical at the time).
In 2012, scientists compiled a complete picture of Ötzi’s genome; it suggested that the frozen mummy found melting out of a glacier in the Tyrolean Alps had ancestors from the Caspian steppe . . . The Iceman is about 5,300 years old. Other people with steppe ancestry didn’t appear in the genetic record of central Europe until about 4,900 years ago. Ötzi “is too old to have that type of ancestry,” says archaeogeneticist Johannes Krause of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. The mummy “was always an outlier.” 
Krause and colleagues put together a new genetic instruction book for the Iceman. The old genome was heavily contaminated with modern people’s DNA, the researchers report August 16 in Cell Genomics. The new analysis reveals that “the steppe ancestry is completely gone.”

About 90 percent of Ötzi’s genetic heritage comes from Neolithic farmers, an unusually high amount compared with other Copper Age remains. . . The Iceman’s new genome also reveals he had male-pattern baldness and much darker skin than artistic representations suggest. Genes conferring light skin tones didn’t become prevalent until 4,000 to 3,000 years ago when early farmers started eating plant-based diets and didn’t get as much vitamin D from fish and meat as hunter-gathers did. . . .“People that lived in Europe between 40,000 years ago and 8,000 years ago were as dark as people in Africa. . . .“We have always imagined that [Europeans] became light-skinned much faster. But now it seems that this happened actually quite late in human history.”
From Science News. The paper and its abstract are as follows:
The Tyrolean Iceman is known as one of the oldest human glacier mummies, directly dated to 3350–3120 calibrated BCE. A previously published low-coverage genome provided novel insights into European prehistory, despite high present-day DNA contamination. Here, we generate a high-coverage genome with low contamination (15.3×) to gain further insights into the genetic history and phenotype of this individual. Contrary to previous studies, we found no detectable Steppe-related ancestry in the Iceman. Instead, he retained the highest Anatolian-farmer-related ancestry among contemporaneous European populations, indicating a rather isolated Alpine population with limited gene flow from hunter-gatherer-ancestry-related populations. Phenotypic analysis revealed that the Iceman likely had darker skin than present-day Europeans and carried risk alleles associated with male-pattern baldness, type 2 diabetes, and obesity-related metabolic syndrome. These results corroborate phenotypic observations of the preserved mummified body, such as high pigmentation of his skin and the absence of hair on his head.
K. Wang et al. "High-coverage genome of the Tyrolean Iceman reveals unusually high Anatolian farmer ancestry." Cell Genomics (August 16, 2023). doi: 10.1016/j.xgen.2023.100377.

The open access paper states in the body text that:
We found that the Iceman derives 90% ± 2.5% ancestry from early Neolithic farmer populations when using Anatolia_N as the proxy for the early Neolithic-farmer-related ancestry and WHGs as the other ancestral component (Figure 3; Table S4). When testing with a 3-way admixture model including Steppe-related ancestry as the third source for the previously published and the high-coverage genome, we found that our high-coverage genome shows no Steppe-related ancestry (Table S5), in contrast to ancestry decomposition of the previously published Iceman genome. We conclude that the 7.5% Steppe-related ancestry previously estimated for the previously published Iceman genome is likely the result of modern human contamination. . . . 
Compared with the Iceman, the analyzed contemporaneous European populations from Spain and Sardinia (Italy_Sardinia_C, Italy_Sardinia_N, Spain_MLN) show less early Neolithic-farmer-related ancestry, ranging from 27.2% to 86.9% (Figure 3A; Table S4). Even ancient Sardinian populations, who are located further south than the Iceman and are geographically separate from mainland Europe, derive no more than 85% ancestry from Anatolia_N (Figure 3; Table S4). The higher levels of hunter-gatherer ancestry in individuals from the 4th millennium BCE have been explained by an ongoing admixture between early farmers and hunter-gatherers in the Middle and Late Neolithic in various parts of Europe, including western Europe (Germany and France), central Europe, Iberia, and the Balkans.

Only individuals from Italy_Broion_CA.SG found to the south of the Alps present similarly low hunter-gatherer ancestry as seen in the Iceman.

We conclude that the Iceman and Italy_Broion_CA.SG might both be representatives of specific Chalcolithic groups carrying higher levels of early Neolithic-farmer-related ancestry than any other contemporaneous European group. This might indicate less gene flow from groups that are more admixed with hunter-gatherers or a smaller population size of hunter-gatherers in that region during the 5th and 4th millennium BCE. . . .
We estimated the admixture date between the early Neolithic-farmer-related (using Anatolia_N as proxy) and WHG-related ancestry sources using DATES to be 56 ± 21 generations before the Iceman’s death, which corresponds to 4880 ± 635 calibrated BCE assuming 29 years per generation (Figure 3B; Table S7) and considering the mean C14 date of this individual. Alternatively, using Germany_EN_LBK as the proxy for early Neolithic-farmer-related ancestry, we estimated the admixture date to be 40 ± 15 generations before his death (Table S7), or 4400 ± 432 calibrated BCE, overlapping with estimates from nearby Italy_Broion_CA.SG, who locate to the south of the Alps (Figure 3B).

Compared with the admixture time between early Neolithic farmers and hunter-gatherers in other parts of southern Europe, for instance in Spain and southern Italy, we found that the admixture with hunter-gatherers seen in the Iceman and Italy_Broion_CA.SG is more recent (Figure 3B; Table S3), suggesting a potentially longer survival of hunter-gatherer-related ancestry in this geographical region.
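The generation-to-calendar-date conversion in the quoted passage is simple arithmetic: the admixture date is roughly the Iceman's mean death date plus the number of generations times 29 years. A sketch (taking ~3235 BCE as the mean of the 3350-3120 cal BCE range; the paper's DATES estimates also propagate the radiocarbon uncertainty, so the error bars differ):

```python
DEATH_DATE_BCE = 3235  # approximate mean of the 3350-3120 cal BCE range
YEARS_PER_GENERATION = 29

def admixture_date_bce(generations):
    """Calendar date (BCE) implied by an admixture time measured in
    generations before the Iceman's death."""
    return DEATH_DATE_BCE + generations * YEARS_PER_GENERATION

print(admixture_date_bce(56))  # close to the paper's 4880 +/- 635 cal BCE
print(admixture_date_bce(40))  # close to the paper's 4400 +/- 432 cal BCE
```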

Climate And Archaic Hominins

John Hawks has an intriguing analysis of a new paper on how the range and interactions of Neanderthals and Denisovans may have had a climate component. We know from the existence of genetic evidence showing Neanderthal-Denisovan admixture that there was some interaction.

He is skeptical of some aspects of the paper, including the hypothesis that Denisovans were systematically more cold tolerant, and the underlying concept that there was a geographic range of occupation by particular species that was stable over time, with frontiers that were rarely crossed. He acknowledges, however, that there is a wide geographic range where there could have been Neanderthal-Denisovan interaction. He also notes that:

The conclusion I draw from Ruan and colleagues' study is that no strong east-west climate barriers could have kept these populations apart for the hundreds of thousands of years of their evolution. That leaves open the possibility that other aspects of the environment besides temperature, rainfall, and general biome composition could have shaped their evolution. The alternative is that the survival and local success of hominin groups was itself so patchy over the long term that only a handful of lineages could persist.

One hypothesis that I've advanced over the years is that the jungles and hominin occupants of mainland Southeast Asia formed a barrier to Neanderthal and modern human expansion until the Toba eruption at least temporarily removed that barrier.

I reproduce two images he borrows from papers he discusses below: 

One issue with the Denisovan habitat range shown is that Denisovan admixture in modern humans is strongest to the east of the Wallace line. Together with residual (albeit greatly diluted) Denisovan admixture in Southeast Asia and East Asia, this suggests a much greater warm-climate range for these ancient hominins, in both island and mainland Southeast Asia, than the chart above indicates.

Tuesday, August 15, 2023

A New Higgs Boson Mass Measurement

The current Particle Data Group global average measurement for the Higgs boson mass is 125.25 ± 0.17 GeV. 

The previous combined ATLAS Higgs boson mass measurement (via the Particle Data Group) was 124.86 ± 0.27 GeV from 2018 (using Run 2 data), and the previous combined CMS Higgs boson mass measurement (from the same source) was 125.46 ± 0.16 GeV from 2020 (using Run 2 data). These measurements were consistent with each other at the 1.9 sigma level. The Run 1 measurement from ATLAS and CMS combined (from the same source) was 125.09 ± 0.24 GeV.

The new ATLAS diphoton decay channel Higgs boson mass measurement is 125.17 ± 0.14 GeV. The new ATLAS combined Higgs boson mass measurement is 125.22 ± 0.14 GeV.  

The new ATLAS combined measurement is consistent with the old CMS combined Run-2 measurement at the 1.1 sigma level. 

The new measurement should pull up the global average measurement of the Higgs boson mass to about 125.27 GeV and should also reduce the uncertainty in the global average measurement to ± 0.13 GeV or less. This is an uncertainty of roughly one part per thousand.
The mass of the Higgs boson is measured in the H→γγ decay channel, exploiting the high resolution of the invariant mass of photon pairs reconstructed from the decays of Higgs bosons produced in proton-proton collisions at a centre-of-mass energy √s = 13 TeV. The dataset was collected between 2015 and 2018 by the ATLAS detector at the Large Hadron Collider, and corresponds to an integrated luminosity of 140 fb−1. The measured value of the Higgs boson mass is 125.17 ± 0.11 (stat.) ± 0.09 (syst.) GeV and is based on an improved energy scale calibration for photons, whose impact on the measurement is about four times smaller than in the previous publication. A combination with the corresponding measurement using 7 and 8 TeV pp collision ATLAS data results in a Higgs boson mass measurement of 125.22 ± 0.11 (stat.) ± 0.09 (syst.) GeV. With an uncertainty of 1.1 per mille, this is currently the most precise measurement of the mass of the Higgs boson from a single decay channel.
ATLAS Collaboration, "Measurement of the Higgs boson mass with H→γγ decays in 140 fb−1 of √s = 13 TeV pp collisions with the ATLAS detector" arXiv:2308.07216 (August 14, 2023) (submitted to Phys. Lett. B).
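The tension and combination figures above follow from standard Gaussian error propagation. A minimal Python sketch (a naive illustration that assumes uncorrelated uncertainties; the official ATLAS/CMS and Particle Data Group combinations account for correlations, so the naive combined value below is only indicative):

```python
from math import sqrt

def tension_sigma(m1, s1, m2, s2):
    """Tension between two independent measurements, in standard deviations."""
    return abs(m1 - m2) / sqrt(s1**2 + s2**2)

def combine(m1, s1, m2, s2):
    """Inverse-variance weighted average of two independent measurements."""
    w1, w2 = 1 / s1**2, 1 / s2**2
    return (w1 * m1 + w2 * m2) / (w1 + w2), 1 / sqrt(w1 + w2)

# Old ATLAS vs. old CMS combined Run-2 Higgs mass measurements (GeV):
print(round(tension_sigma(124.86, 0.27, 125.46, 0.16), 1))  # 1.9
# New ATLAS combined vs. old CMS combined Run-2 measurement:
print(round(tension_sigma(125.22, 0.14, 125.46, 0.16), 1))  # 1.1
# Naive two-input combination of those two latest results:
mean, err = combine(125.22, 0.14, 125.46, 0.16)
print(f"{mean:.2f} ± {err:.2f}")  # 125.32 ± 0.11
```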

Monday, August 14, 2023

Herculaneum Scrolls May Be Recoverable

In AD 79, Mt. Vesuvius, a volcano in Italy, erupted, burying several nearby Roman towns, including Pompeii and Herculaneum.

With new advanced imaging techniques, it may be possible to recover many scrolls from an ancient library that has been found in Herculaneum. This is a big deal because 99% of ancient writings have been lost, so this could result in the recovery of large numbers of ancient texts that were previously thought lost.

Thursday, August 10, 2023

An Improved Muon g-2 Measurement

Fermilab's new August 10, 2023 paper and its abstract describe its latest improved measurement of muon g-2:

[Screenshot: the abstract of the new Fermilab muon g-2 paper]

The new paper doesn't delve in depth into the theoretical prediction issues even to the level addressed in today's live streamed presentation. It says only:
A comprehensive prediction for the Standard Model value of the muon magnetic anomaly was compiled most recently by the Muon g−2 Theory Initiative in 2020 [20], using results from [21–31]. The leading order hadronic contribution, known as hadronic vacuum polarization (HVP), was taken from e+e− → hadrons cross section measurements performed by multiple experiments. However, a recent lattice calculation of HVP by the BMW collaboration [30] shows significant tension with the e+e− data. Also, a new preliminary measurement of the e+e− → π+π− cross section from the CMD-3 experiment [32] disagrees significantly with all other e+e− data. There are ongoing efforts to clarify the current theoretical situation [33].
While a comparison between the Fermilab result from Run-1/2/3 presented here, aµ(FNAL), and the 2020 prediction yields a discrepancy of 5.0σ, an updated prediction considering all available data will likely yield a smaller and less significant discrepancy.
This is 5 sigma from the partially data-based 2020 White Paper's Standard Model prediction, but much closer to the 2020 BMW lattice QCD based prediction, which is 1.8 sigma from the experimental result (i.e. consistent at the 2 sigma level) and has been corroborated by essentially all other partial lattice QCD calculations since the last announcement. It is also close to a prediction made using the subset of the data behind the partially data-based prediction that is closest to the experimental result.

This is shown in the YouTube screen shot from their presentation this morning (below):

[Screenshot from the presentation: comparison of experimental measurements and theoretical predictions of muon g-2]

As the screenshot makes visually very clear, there is now much more uncertainty in the theoretically calculated Standard Model predicted value of muon g-2 than there is in the experimental measurement itself.

For those of you who aren't visual learners, the measured and predicted values (in units of 10^-11) are:

World Experimental Average (2023): 116,592,059(22)
Fermilab Run 1+2+3 data (2023): 116,592,055(24)
Fermilab Run 2+3 data(2023): 116,592,057(25)
Combined measurement (2021): 116,592,061(41)
Fermilab Run 1 data (2021): 116,592,040(54)
Brookhaven's E821 (2006): 116,592,089(63)

Theory Initiative calculation: 116,591,810(43)
BMW calculation: 116,591,954(55)

It is important to note that every single experimental measurement and every single theoretical prediction agrees exactly when rounded to the nearest 10^-8: each is 116,592 × 10^-8 (the raw number before conversion from g to g-2 form is 2.00233184, which has nine significant digits). The spread from the highest best-fit experimental value to the lowest best-fit theoretical prediction spans only 279 × 10^-11, which is equivalent to a plus or minus two sigma uncertainty of 70 × 10^-11 from the midpoint of that range. So, all of the experimental and theoretical values are ultra-precise.
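The quoted tensions can be checked directly from these numbers. A minimal sketch (all values in units of 10^-11; the uncertainties are treated as independent and Gaussian):

```python
from math import sqrt

def tension_sigma(m1, s1, m2, s2):
    """Tension between two values with independent Gaussian errors, in sigma."""
    return abs(m1 - m2) / sqrt(s1**2 + s2**2)

fnal_run123 = (116_592_055, 24)  # Fermilab Run 1+2+3 (2023)
world_avg   = (116_592_059, 22)  # world experimental average (2023)
theory_2020 = (116_591_810, 43)  # Theory Initiative 2020 White Paper
bmw         = (116_591_954, 55)  # BMW lattice QCD calculation

print(round(tension_sigma(*fnal_run123, *theory_2020), 1))  # 5.0
print(round(tension_sigma(*world_avg, *bmw), 1))            # 1.8
# Spread from the highest experimental best fit (Brookhaven) to the
# lowest theoretical best fit (Theory Initiative):
print(116_592_089 - 116_591_810)                            # 279
```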

The experimental value is already twice as precise as the theoretical prediction of its value in the Standard Model, and is expected to ultimately be about four times more precise than the current best available theoretical predictions, as illustrated below.

The completed Runs 4 and 5 and the in-progress Run 6 are anticipated to reduce the uncertainty in the experimental measurement by about 50% over the next two or three years, mostly from Run 4, which should release its results sometime around October of 2024. The additional experimental precision anticipated from Run 5 and Run 6 is expected to be fairly modest.

It is likely that the true uncertainty in the 2020 White Paper result is too low, quite possibly because of understated systematic error in some of the underlying electron-positron collision data upon which it relies. The introduction to the CMD-3 paper also identifies a problem with the quality of the data that the Theory Initiative relies upon:
The π+π− channel gives the major part of the hadronic contribution to the muon anomaly, 506.0 ± 3.4 × 10−10 out of the total aHVPµ = 693.1 ± 4.0 × 10−10 value. It also determines (together with the light-by-light contribution) the overall uncertainty ∆aµ = ±4.3 × 10−10 of the standard model prediction of muon g−2 [5].
To conform to the ultimate target precision of the ongoing Fermilab experiment [16,17], ∆aexpµ[E989] ≈ ±1.6 × 10−10, and the future J-PARC muon g-2/EDM experiment [18], the π+π− production cross section needs to be known with a relative overall systematic uncertainty of about 0.2%.
Several sub-percent precision measurements of the e+e− → π+π− cross section exist. The energy scan measurements were performed at the VEPP-2M collider by the CMD-2 experiment (with a systematic precision of 0.6–0.8%) [19,20,21,22] and by the SND experiment (1.3%) [23]. These results have somewhat limited statistical precision. There are also measurements based on the initial-state radiation (ISR) technique by KLOE (0.8%) [24,25,26,27], BABAR (0.5%) [28] and BES-III (0.9%) [29]. Due to the high luminosities of these e+e− factories, the accuracy of the results from these experiments is less limited by statistics; meanwhile, they are not fully consistent with each other within the quoted systematic uncertainties.
One of the main goals of the CMD-3 and SND experiments at the new VEPP-2000 e+e− collider at BINP, Novosibirsk, is to perform a new high precision, high statistics measurement of the e+e− → π+π− cross section.
Recently, the first SND result, based on about 10% of the collected statistics, was presented with a systematic uncertainty of about 0.8% [30].
In short, there is no reason to doubt that the Fermilab measurement of muon g-2 is every bit as solid as claimed, but the various calculations of the predicted Standard Model value of the QCD part of muon g-2 are in strong tension with each other.

It appears that the correct Standard Model prediction is closer to the experimental result than the 2020 White Paper calculation (which mixed lattice QCD for parts of the calculation with experimental data in lieu of QCD calculations for other parts), although the exact source of the issue is only starting to be pinned down.

Side Point: The Hadronic Light By Light Calculation

The hadronic QCD component is the sum of two parts: the hadronic vacuum polarization (HVP) and the hadronic light-by-light (HLbL) components. In the Theory Initiative analysis, the total QCD amount is 6937(44) × 10^-11, which is broken out as HVP = 6845(40) × 10^-11, a 0.6% relative error, and HLbL = 92(18) × 10^-11, a 20% relative error.
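As a quick consistency check, the two hadronic pieces combine by simple addition, with their uncertainties added in quadrature (values in units of 10^-11; HLbL taken as 92(18), the pre-adjustment Theory Initiative value):

```python
from math import sqrt

hvp, hvp_err = 6845, 40    # hadronic vacuum polarization
hlbl, hlbl_err = 92, 18    # hadronic light-by-light

total = hvp + hlbl
total_err = sqrt(hvp_err**2 + hlbl_err**2)
print(total, round(total_err))         # 6937 44

# Relative errors quoted in the text:
print(round(100 * hvp_err / hvp, 1))   # 0.6 (percent)
print(round(100 * hlbl_err / hlbl))    # 20 (percent)
```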

The presentation doesn't note it, but there was also an adjustment to the hadronic light-by-light calculation (the smaller of the two QCD contributions to the total value of muon g-2, and one not included in the BMW calculation) that brings the result closer to the experimental value; it was announced on the same day as the previous data announcement. The new calculation increases the hadronic light-by-light contribution to muon g-2 from 92(18) × 10^-11 to 106.8(14.7) × 10^-11.

As the precision of the measurements and of the Standard Model prediction improves, a 14.8 × 10^-11 shift in the hadronic light-by-light portion of the calculation becomes more material.

Why Care?

Muon g-2 is an experimental observable that implicates all three Standard Model forces and serves as a global test of the consistency of the Standard Model with experiment.

If there really were a five sigma discrepancy between the Standard Model prediction and the experimental result, this would imply new physics at fairly modest energies that could probably be reached at next generation colliders (since muon g-2 is an observable that is more sensitive to low energy new physics than high energy new physics).

On the other hand, if the Standard Model prediction and the experimental result are actually consistent with each other, then low energy new non-gravitational physics is strongly disfavored at foreseeable new high energy physics experiments, except in very specific scenarios that cancel out in a muon g-2 calculation.

This post overlaps heavily, but not exactly, with my posts at the Physics Forums.

Blog Format Only Editorial Commentary:

I have no serious doubt that when the dust settles, tweaks to hadronic vacuum polarization and hadronic light-by-light calculations will cause the theoretical prediction of the Standard Model expected value of muon g-2 to be consistent with the experimentally measured value, a measurement which might get as precise as 7 times 10^-11.

When, and not if, this happens, non-gravitational new physics that can contribute via any Standard Model force to muon g-2 at energies well in excess of the electroweak scale (ca. 246 GeV) will be almost entirely precluded on a global basis. Indeed, it will probably be a pretty tight constraint even up to tens of TeVs energies.

There will always be room for tiny "God of the gaps" modifications to the Standard Model in places in the parameter space of the model where the happenstance of how our experiments are designed leaves little loopholes, but none will be supported by positive evidence and none will be well motivated.

The Standard Model isn't quite a completely solved problem. 

Many of its parameters, particularly the quark masses and the neutrino physics parameters, need to be measured more precisely. We still don't really understand the mechanism for neutrino mass, although I very much doubt that either Majorana mass or a see-saw mechanism, the two leading proposals to explain it, is right. We still haven't confirmed the Standard Model predictions of glue balls or sphalerons. We still don't have a good, solid, predictive description of why we have the scalar mesons and axial vector mesons that we do, nor do we really understand their inner structure. There is plenty of room for improvement in how we do QCD calculations. Most of the free parameters of the Standard Model can probably be derived from a much smaller set of free parameters with just a small number of additional rules for determining them, but we haven't cracked that code yet. We are just on the brink of starting to derive the parton distribution functions of the hadrons from first principles, even though we know everything necessary to do so in principle. We still haven't fully derived the properties of atoms and nuclear physics and neutron stars from the first principles of the Standard Model. We still haven't reworked the beta functions of the Standard Model's free parameters to reflect gravity in addition to the Standard Model particles. And we still haven't solved the largely theoretical issues arising from the point particle approximation of particles in the Standard Model that string theory sought to address, in part because we haven't found any experimental data that points to a need to do so.

But let's get real. 

What we don't know about Standard Model physics is mostly esoterica with no engineering applications. There is little room for any really significant breakthroughs in high energy physics left. The things we don't know mostly relate to ephemeral particles only created in high energy particle colliders, to the fine details of properties of ghostly neutrinos that barely interact with anything else, to a desire to make physics prettier, and to increased precision in measurements of physical constants that we already mostly know with adequate precision. 

Our prospects for coming up with a "Grand Unified Theory" (GUT) of Standard Model physics, or a "Theory of Everything" (TOE), which was string theory's siren song for most of my life, look dim. But, while a GUT or TOE that could fit on a t-shirt would have aesthetic appeal, replacing a little book full of measured constants that could probably be worked out from first principles doesn't really change what we can do with the Standard Model.

The biggest missing piece of the laws of Nature, which does require real new physics or a major reimagining of existing laws of physics, is our understanding of phenomena attributed to dark matter and dark energy (and to a lesser extent cosmological inflation). 

I am fairly confident that these can be resolved with subtle tweaks to General Relativity (like considering non-perturbative effects or imposing conformal symmetry), and then this classical theory's reformulation as a quantum gravity theory. Searches for dark matter particles will all come up empty. Whether or not Deur is right on that score, the solution to those problems will look a lot like his solution (if not something even more elegant like emergent gravity). 

Then, the final "dark era" of physics will be over, and we will have worked out all of the laws of Nature. Thus straitjacketed into a complete set of laws of Nature, we may finally develop a more sensible cosmology as well. Science will go on, but "fundamental physics" will become a solved problem and the universe will look comparatively simple again.

With a little luck, this could all be accomplished, if not in my lifetime, in the lives of my children or grandchildren - less than a century. 

Many of these problems can be solved even without much more data, so long as we secure the quantum computing power and AI systems to finally overcome overwhelming calculation barriers in QCD and asymmetric systems in General Relativity. And, the necessary developments in quantum computing and AI are developments that are very likely to actually happen.

We could make a lot of progress, for example, by simply reverse engineering the proton and the neutron and a few other light hadrons, whose properties have been measured with exquisite precision, with improved QCD calculations that are currently too cumbersome for even our most powerful networks of supercomputers to crunch.

The problems of dark matter and dark energy have myriad telescope-like instruments and powerful computer models directed at them, generating and processing a torrent of new data. Sooner or later, if by no means other than sheer brute force, we ought to be able to solve these problems, and I feel like we've made great progress on them just in the last few decades.

Monday, August 7, 2023

What Kind of Hominin Is The Latest Chinese Find?

Chinese scientists have found fairly complete remains of a hominin jaw and skull from about 300,000 years ago at Hualongdong (HLD), East China, with a mix of archaic and modern features. The specimen appears to be a pre-pubescent child of about 12-13 years of age. The correct classification of the species of the hominin bones is uncertain, in the absence of ancient DNA (trade press article here; journal article here). 

If the specimen is correctly dated, it shouldn't be a modern human, as the Homo sapiens species was just barely emerging in Africa at the time and should not have reached Asia by that point.

But, the specimen also seems different in important respects from Homo erectus in its characteristics. As such, the specimen is a candidate for an East Asian Denisovan, a sister clade to the Neanderthals for which no sufficiently complete type fossil has been secured, despite the fact that ancient DNA samples have been obtained and that genetic traces of ancient admixture between Denisovans and modern humans are well established in Asia, Australia, and Oceania. The specimen could also be from a Denisovan-Homo erectus hybrid individual, or from some previously unknown hominin species.

Time For A New Dark Matter Phenomena Paradigm

The authors are willing to make the leap that the ΛCDM model has utterly failed and needs to be abandoned. But they stubbornly refuse to consider how far a quite simple gravity-based explanation can go towards resolving the problem, and they don't make the leap to an alternative.
The phenomenon of the Dark matter baffles the researchers: the underlying dark particle has escaped so far the detection and its astrophysical role appears complex and entangled with that of the standard luminous particles. We propose that, in order to act efficiently, alongside with abandoning the current ΛCDM scenario, we need also to shift the Paradigm from which it emerged.
Fabrizio Nesti, Paolo Salucci, Nicola Turini, "The Quest for the Nature of the Dark Matter: The Need of a New Paradigm" arXiv:2308.02004 (August 3, 2023) (published at 2023(2) Astronomy 90-104).

Thursday, August 3, 2023

How Do People Decide Which Scientists To Believe?

Examining and resolving in my own mind disputes between scientists is pretty much the essence of what I do on a daily basis, especially, but not only, at this blog. So, this study caught my attention. I suspect that my methods are more analytical and sourced than average, and view myself as kindred to "superforecasters" in my methods.
Uncertainty that arises from disputes among scientists seems to foster public skepticism or noncompliance. Communication of potential cues to the relative performance of contending scientists might affect judgments of which position is likely more valid. We used actual scientific disputes—the nature of dark matter, sea level rise under climate change, and benefits and risks of marijuana—to assess Americans’ responses (n = 3150). 
Seven cues—replication, information quality, the majority position, degree source, experience, reference group support, and employer—were presented three cues at a time in a planned-missingness design. The most influential cues were majority vote, replication, information quality, and experience. Several potential moderators—topical engagement, prior attitudes, knowledge of science, and attitudes toward science—lacked even small effects on choice, but cues had the strongest effects for dark matter and weakest effects for marijuana, and general mistrust of scientists moderately attenuated top cues’ effects. 
Risk communicators can take these influential cues into account in understanding how laypeople respond to scientific disputes, and improving communication about such disputes.
Branden B. Johnson, Marcus Mayorga, Nathan F. Dieckmann, "How people decide who is correct when groups of scientists disagree" Risk Analysis (July 28, 2023).

Wednesday, August 2, 2023

A Strict Experimental Bound On A Quantum Gravity Effect From IceCube.

Most quantum gravity theories insert randomness into the structure of space-time that should cause neutrino oscillations to lose coherence over long distances. The IceCube Neutrino Observatory at the South Pole measures neutrinos from space that would be expected to reveal this effect if many quantum gravity theories are correct. But, so far, this effect hasn't been observed, setting strict limits on this hypothesized quantum gravity effect.
Neutrino oscillations at the highest energies and longest baselines provide a natural quantum interferometer with which to study the structure of spacetime and test the fundamental principles of quantum mechanics. If the metric of spacetime has a quantum mechanical description, there is a generic expectation that its fluctuations at the Planck scale would introduce non-unitary effects that are inconsistent with the standard unitary time evolution of quantum mechanics. Neutrinos interacting with such fluctuations would lose their quantum coherence, deviating from the expected oscillatory flavor composition at long distances and high energies. 
The IceCube South Pole Neutrino Observatory is a billion-ton neutrino telescope situated in the deep ice of the Antarctic glacier. Atmospheric neutrinos detected by IceCube in the energy range 0.5-10 TeV have been used to test for coherence loss in neutrino propagation. No evidence of anomalous neutrino decoherence was observed, leading to the strongest experimental limits on neutrino-quantum gravity interactions to date, significantly surpassing expectations from natural Planck-scale models. The resulting constraint on the effective decoherence strength parameter within an energy-independent decoherence model is Γ0≤1.17×10^−15 eV, improving upon past limits by a factor of 30. For decoherence effects scaling as E^2, limits are advanced by more than six orders of magnitude beyond past measurements.
R. Abbasi, et al., "Searching for Decoherence from Quantum Gravity at the IceCube South Pole Neutrino Observatory" arXiv:2308.00105 (July 25, 2023).

A (Weak) Challenge To Relativistic MOND

MOND is a non-relativistic toy-model modification of Newtonian gravity, devised by Mordehai Milgrom in 1983, that accurately models the dynamics of galaxies of all sizes without dark matter. Several attempts have been made to generalize the theory relativistically. TeVeS, a tensor-vector-scalar theory (whose name is also a play on a Hebrew word) by Jacob Bekenstein (who died in 2015), was the first notable attempt. RMOND, discussed in the paper below, is another effort to develop a relativistic MOND theory.

The new paper observes that RMOND does not exactly replicate "the expansion history of the Λ cold dark matter (ΛCDM) universe filled solely with dust-like matter," although it can do so with additional degrees of freedom in the matter sector, and is an exact match to the ΛCDM model in the trivial case of a vacuum solution for the evolution of the universe.

In and of itself, this isn't a huge deal. RMOND isn't intended to be identical to the ΛCDM Standard Model of Cosmology. 

For example, one of the generic predictions of MOND cosmologies, and indeed of essentially any model that replaces dark matter with gravitational effects, is that galaxy formation in the early history of the universe proceeds more quickly.

Of course, the James Webb Space Telescope has established observationally that galaxy formation in the real universe does not match the predictions of the ΛCDM universe because it occurs too soon (as MOND and other modified gravity theories generically predict). 

Likewise, there are tensions between early-universe measurements of the expansion rate, as quantified by Hubble's constant, from observations of the cosmic microwave background, and late-time measurements of Hubble's constant by other means. Late-time measurements of Hubble's constant are significantly larger than the CMB based measurements.

The EDGES 21cm background radiation measurements also differ dramatically from the ΛCDM model predictions. The EDGES finding has not been replicated yet, however, and there are plausible methodological reasons to question their accuracy.

These observations are examples of data suggesting a real possibility that a simple six parameter ΛCDM model is not an accurate description of the universe. If at least some of the observations contradicting the ΛCDM model's cosmology predictions are correct, then to match what is actually observed, you don't want a theory that exactly matches the expansion history of the ΛCDM model.

But, while the abstract of the new paper doesn't actually say so, the implication is that the ΛCDM model is still a pretty good fit to the expansion history of the real universe, even if it isn't perfect. So, a significant deviation from the ΛCDM model, which can be established without any observations at all, raises a yellow flag that RMOND, without further elaboration, might also differ materially from the expansion history of the real universe.
In this paper, we present several explicit reconstructions for a novel relativistic theory of modified Newtonian dynamics (RMOND) derived from the background of Friedmann-Lemaître-Robertson-Walker cosmological evolution. It is shown that the Einstein-Hilbert Lagrangian with a positive cosmological constant is the only Lagrangian capable of accurately replicating the exact expansion history of the Λ cold dark matter (ΛCDM) universe filled solely with dust-like matter, and the only way to achieve this expansion history for the RMOND theory is to introduce additional degrees of freedom to the matter sectors. Besides, we find that the ΛCDM era also can be replicated without any real matter field within the framework of the RMOND theory, and the cosmic evolution exhibited by both the power-law and de Sitter solutions also can be obtained.
Qi-Ming Fu, Meng-Ci He, Tao-Tao Sui, Xin Zhang, "Reconstruction of relativistic modified Newtonian dynamics for various cosmological scenarios" arXiv:2308.00342 (August 1, 2023).