Wednesday, October 23, 2024

SUSY And String Theory Advocates Half-Admit Their Problem

So close, and yet so far. The goal posts have been moved again. SUSY is no longer well-motivated (and with it SUSY WIMPs as dark matter candidates are also no longer well-motivated). 

The non-detection of weak scale SUSY is also dragging string theory down with it. 

The signals hypothesized would be dubious and subject to varying interpretations even in the unlikely event that they appeared.
Experimental searches for supersymmetry (SUSY) are entering a new era. The failure to observe signals of sparticle production at the Large Hadron Collider (LHC) has eroded the central motivation for SUSY breaking at the weak scale. 
However, String Theory requires SUSY at the fundamental scale M(s) and hence SUSY could be broken at some high scale below M(s). Actually, if this were the case, the lack of experimental evidence for low-energy SUSY could have been anticipated, because most stringy models with high-scale SUSY breaking predict that sparticles would start popping up above about 10 TeV, well beyond the reach of current LHC experiments. 
We show that using next generation LHC experiments currently envisioned for the Forward Physics Facility (FPF) we could search for signals of neutrino-modulino oscillations to probe models with string scale in the grand unification region and SUSY breaking driven by sequestered gravity in gauge mediation. This is possible because of the unprecedented flux of neutrinos to be produced as secondary products in LHC collisions during the high-luminosity era and the capability of FPF experiments to detect and identify their flavors.
Luis A. Anchordoqui, Ignatios Antoniadis, Karim Benakli, Jules Cunat, Dieter Lust, "SUSY at the FPF", arXiv:2410.16342 (October 21, 2024).

Monday, October 21, 2024

Ethiopian Genetics

Razib Khan has a new piece out on Ethiopian genetics in which he analyzes a sample of modern whole genomes and compares those from Ethiopia with those from elsewhere.

Bottom line: Ethiopian genetics are distinct both from sub-Saharan Bantus, Pygmies and Khoi-San, and from the people of the Levant and Southern Europe.

Standardized genetic distance from Ethiopia's culturally and historically dominant Amhara people.


In a four population ancestry analysis model, Ethiopians have significant Arabian ancestry, but little Iranian or Bantu ancestry.

Friday, October 18, 2024

Beginnings And What We Don't Know In Physics

This post lays out the fact that we know almost all of the fundamental laws of physics, and describes what I see as the most plausible resolutions to the ones for which we don't have consensus answers.

We Understand The Particle Physics Of Everything But The First Fraction Of A Second After The Big Bang

In the standard chronology of the universe in cosmology, there is a state change from quark-gluon plasma to confined hadrons (mostly protons and neutrons) within the first second after the Big Bang. 

This is only a rough approximation, however. The quark-gluon plasma temperatures recreated at the Large Hadron Collider (about 5.5 trillion degrees Kelvin) correspond to roughly the first few microseconds after the Big Bang, and the TeV scale collision energies probed there correspond to conditions at about 10^-12 seconds after the Big Bang in the standard chronology of the universe. This dramatically shrinks the time period in which we don't fully understand the relevant particle physics from the first second to the first trillionth of a second. The trillionth of a second in the history of the universe where the Standard Model has not been experimentally confirmed is one part per 10^30 of the time that the universe has existed. 
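For those who want to check the arithmetic, here is a quick back-of-the-envelope calculation (a Python sketch, assuming an age of the universe of about 13.8 billion years):

```python
# Back-of-the-envelope check of the "one part per 10^30" claim.
SECONDS_PER_YEAR = 3.156e7
age_universe_s = 13.8e9 * SECONDS_PER_YEAR   # ~4.4e17 seconds
t_untested = 1e-12                           # LHC-tested conditions reach back this far
print(t_untested / age_universe_s)           # ~2.3e-30, about one part in 10^30
```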

The Standard Model is mathematically consistent and sound for at least another twenty-two orders of magnitude beyond that point (i.e. up to the GUT scale), to one part per 10^52 of the time that the universe has existed, but hasn't been experimentally tested in the higher energy parts of that domain. 

The hypothetical Planck epoch, the point where the Standard Model equations might break down, sits about three orders of magnitude higher in energy scale (and correspondingly earlier in time after the Big Bang). The Planck time, a theoretically possible minimum unit of time derived from dimensional analysis, is about 10^-43 seconds and would be characterized by energy scales of 10^19 GeV or greater. Classical general relativity predicts a gravitational singularity before this point, although a quantum gravity theory might not have a singularity.
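The Planck time and Planck energy quoted above follow directly from dimensional analysis of the fundamental constants; a minimal sketch of that standard textbook calculation (using scipy's constant values):

```python
# Planck time and Planck energy from dimensional analysis.
from scipy.constants import G, c, e, hbar

t_planck = (hbar * G / c**5) ** 0.5          # ~5.4e-44 seconds
E_planck = (hbar * c**5 / G) ** 0.5          # Planck energy, in joules
print(t_planck, E_planck / e / 1e9)          # ~5.4e-44 s, ~1.2e19 GeV
```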

This chronology assumes that Big Bang Nucleosynthesis with the newly formed protons and neutrons begins about ten seconds after the Big Bang and lasts for about sixteen and a half minutes. The particle and nuclear physics of Big Bang Nucleosynthesis are scientifically well understood, and the predictions of the BBN model are confirmed by observations, subject to some modest discrepancies in lithium abundances, which recent astronomy observations have tended to resolve by finding some of the missing lithium and by better modeling lithium production and destruction in stellar nuclear reactions during the 13.7 billion years between the end of BBN and the present.

So, any "new physics" that arises at energies above those that the Large Hadron Collider can reach is restricted to some fraction of the first second of the Universe. We fully understand the laws of physics (except dark matter and dark energy and possibly quantum gravity) that apply in the circumstances found in the universe at all times after that.

There are three main kinds of possible "new physics" motivated by astrophysics that scientists are looking for, plus one kind of Standard Model physics that hasn't been observed because attainable energies are too low, all of which could be confined to the high energies found only in the first second after the Big Bang. They are, respectively: cosmological inflation, baryogenesis, leptogenesis, and baryon number and lepton number non-conserving (but B-L preserving) sphaleron interactions.

Sphaleron interactions, while undiscovered and theoretically interesting, aren't enough to explain baryogenesis or leptogenesis, or any other phenomena in the world later than a fraction of a second after the Big Bang, so they are basically a side curiosity. They also require temperatures about 100 times the hottest temperature reached at the LHC, so the time frame in which they could have occurred is significantly less than a trillionth of a second.

The cosmology narrative that I find most plausible, mirror cosmology, simultaneously eliminates the need for cosmic inflation, explains the baryon asymmetry of the universe without post-Big Bang new physics, and at least partially answers the question "how could something come out of nothing" by making the Big Bang no longer violate mass-energy conservation. 

Deur's approach to gravity, meanwhile, eliminates the need for either dark matter or dark energy, solves all or many of the problems with the LambdaCDM Standard Model of Cosmology, and has been explored preliminarily not just at the scale of galaxies and galaxy clusters, but also in cosmology applications. It also eliminates the mass-energy conservation exception associated with dark energy (the only such exception in physics other than the Big Bang itself) by a means that I've not seen utilized in any other theory in astrophysics.

Baryogenesis and Leptogenesis

The baryon asymmetry of the universe (i.e. the vast excess of protons and neutrons over anti-protons and anti-neutrons) requires one of two things: 

(1) the large positive net baryon number of the Universe (i.e. the excess of protons and neutrons over anti-protons and anti-neutrons) was present essentially at the outset of the Big Bang, or 

(2) the Big Bang started with matter and antimatter in perfect balance, and sometime in the first second after the Big Bang, a baryon number non-conserving process, violating CP conservation much more strongly than any process in the Standard Model and operating at energies higher than those of the Standard Model, generated the asymmetry seen today.

Mirror cosmology, in which there is an anti-matter universe before the Big Bang in time and our matter universe after it, explains baryogenesis and leptogenesis without new physics. It basically allows the Universe as a whole to have equal amounts of matter and antimatter, while not requiring a new charge parity (CP) violating process beyond the Standard Model at high energies (apart from a time bias between matter and antimatter in pair production of quarks and leptons at the time of the Big Bang). 

As quarks and antiquarks are photoproduced at the Big Bang, the quarks disproportionately end up on our side of the Big Bang, and the antiquarks disproportionately end up before the Big Bang in time. 

As charged leptons and charged antileptons are photo-produced at the Big Bang, the charged leptons disproportionately end up on our side of the Big Bang and the anti-leptons disproportionately end up before the Big Bang in time. 

Neutrinos, by the way, aren't photoproduced because they don't have an electromagnetic charge. So the neutrinos in our post-Big Bang observable universe would all have been created after the Big Bang in lepton number conserving interactions. And while beta decay can create an electron and an anti-neutrino, given what we know about the frequency of beta decay and the proportion of protons and neutrons in the universe, the net lepton number of the universe should be at least similar to the number of charged leptons in the universe (which is almost identical to the number of protons), and the number of anti-neutrinos should only slightly and imperceptibly exceed the number of neutrinos in our observable universe. 

Cosmological Inflation

Cosmological inflation is a theory of exponential expansion of space in the early universe. The inflationary epoch lasted from 10^−36 seconds after the conjectured Big Bang singularity to some time between 10^−33 and 10^−32 seconds after the singularity. Following the inflationary period, the universe continued to expand, but at a slower rate.

Inflation presumes an expansion of space at faster than the speed of light in this time period. At the speed of light, in 10^−32 seconds, light moves 10^−24 meters (by comparison a proton is about 10^−15 meters across). But, in one typical inflation theory, space expands from a 10^-50 meter radius at 10^-35 seconds after the Big Bang to about a 1 meter radius at 10^-34 seconds.
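A quick sketch of the two numbers being compared above (the light travel distance, and the expansion factor in the illustrative inflation scenario mentioned):

```python
# The two distances compared in the paragraph above.
c = 2.998e8                      # speed of light, m/s
print(c * 1e-32)                 # ~3e-24 m: light travel distance in 1e-32 s

# One typical inflation scenario: radius 1e-50 m at 1e-35 s -> ~1 m at 1e-34 s.
r_initial, r_final = 1e-50, 1.0
print(r_final / r_initial)       # expansion factor of ~1e50 in ~1e-34 s
```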

These new inflation physics purportedly appear so early in this scenario that the entire universe would fit in my bathroom with room to spare. The energy scale at which this is supposed to happen is the GUT scale, i.e. about 10^15 GeV to 10^16 GeV.

Put another way, cosmological inflation is a one time fix that ends when the universe has a one meter radius in the first 10^-34 seconds after the Big Bang, or so, so it is functionally just a set of hypothesized initial conditions of the universe from that moment slightly after the Big Bang.

The Wikipedia article on Cosmic Inflation explains:

It explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.

The detailed particle physics mechanism responsible for inflation is unknown. The basic inflationary paradigm is accepted by most physicists, as a number of inflation model predictions have been confirmed by observation; however, a substantial minority of scientists dissent from this position. The hypothetical field thought to be responsible for inflation is called the inflaton.

There are literally hundreds of proposed inflation theories (a 300 page summary of the various inflation theories can be found here) and astronomy observations have time and again narrowed the parameter space of potential inflation theories to rule out a great many of them. See, e.g., prior posts from this blog on March 21, 2013, March 18, 2014, July 24, 2017, and October 4, 2021. In the first of those posts, I noted that:

Planck [cosmic microwave background observations] strongly disfavors power law inflation, the simplest hybrid inflationary models, simple monomial models with n > 2, single fast roll inflation scenarios, multiple stage inflation scenarios, and inflation scenarios with flat or concave potentials; dynamical dark energy and time variations of the fine structure constant are also strongly disfavored. Any theory that would create non-Gaussian statistics of the CMB anisotropies, non-flat universes, tensor modes, or statistically discernible deviations from isotropy at ℓ > 50 is ruled out.

The summary chart from the 2021 paper is as follows:

Only the blue part of the parameter space remains open, so "natural inflation" is ruled out.

Many people (including many serious astrophysicists and me) are skeptical that cosmic inflation is really necessary. In an October 31, 2016 post, I noted that even one of inflation theory's original creators doubts that it really exists. 

Paul Steinhardt gave a colloquium at Fermilab last month with the title Simply Wrong vs. Simple. In it he explained “why the big bang inflationary picture fails as a scientific theory” (it doesn’t work as promised, is not self-consistent and not falsifiable).

Deur's gravitational work is agnostic on the subject.  

An abstract of a paper on the topic of mirror cosmology also explains the basic astronomy motivation for cosmic inflation, and why the mirror cosmology model dispenses with the need for it.

Observations indicate that the early Universe was strikingly simple: a fraction of a second after the Big Bang, the Universe was radiation-dominated, almost perfectly homogeneous, isotropic, and spatially flat; with tiny (around 10^−5) deviations from perfect symmetry also taking a highly economical form: random, statistically gaussian, nearly scale-invariant, adiabatic, growing mode density perturbations.
Although we cannot see all the way back to the bang, we have this essential observational hint: the further back we look (all the way back to a fraction of a second), the simpler and more regular the Universe gets. This is the central clue in early Universe cosmology: the question is what it is trying to tell us.

In the standard (inflationary) theory of the early Universe one regards this observed trend as illusory: one imagines that, if one could look back even further, one would find a messy, disordered state, requiring a period of inflation to transform it into the cosmos we observe.

An alternative approach is to take the fundamental clue at face value and imagine that, as we follow it back to the bang, the Universe really does approach the ultra-simple radiation-dominated state described above (as all observations so far seem to indicate).

Then, although we have a singularity in our past, it is extremely special. Denoting the conformal time by τ, the scale factor a(τ) is ∝ τ at small τ, so the metric g_{µν} ∼ a(τ)² η_{µν} has an analytic, conformal zero through which it may be extended to a "mirror-reflected" universe at negative τ.

[W]e point out that, by taking seriously the symmetries and complex analytic properties of this extended two-sheeted spacetime, we are led to elegant and testable new explanations for many of the observed features of our Universe including: . . . (ii) the absence of primordial gravitational waves, vorticity, or decaying mode density perturbations; (iii) the thermodynamic arrow of time (i.e. the fact that entropy increases away from the bang); and (iv) the homogeneity, isotropy and flatness of the Universe, among others.

The Dark Sector

Deur's work on gravity purports to explain the phenomena attributed to dark matter and dark energy with non-perturbative weak field gravitational effects. Other gravity based approaches (e.g. here) likewise seek to explain dark matter and dark energy without new particles or substances in various ways, one of the most notable of which adds conformal symmetry to the constraints of general relativity. 

In my opinion, the balance of the evidence strongly favors either some gravitational explanation for dark matter or an ultralight bosonic dark matter particle with a mass-energy on the same order of magnitude as a graviton that looks a lot like a fifth force. 

One of the strongest pieces of evidence that dark matter, if it exists, must be very light is Alfred Amruth, "Einstein rings modulated by wavelike dark matter from anomalies in gravitationally lensed images" Nature Astronomy (April 20, 2023) https://doi.org/10.1038/s41550-023-01943-9 (Open access copy available at arxiv).

And, in my humble opinion, the gravitational approach is better motivated and has been more rigorously confronted with the evidence.

Deur's approach to gravity is more intuitive in a graviton based quantum gravity context, but he claims that it works in unmodified classical general relativity (without a cosmological constant) as well if non-perturbative effects are considered.

If Deur is correct, there is about 95% less mass-energy in the universe than expected in the LambdaCDM Standard Model of Cosmology, and there is no observational motivation for stable particles not explained by the Standard Model of Particle Physics (other than gravitons) to exist.

Mass-Energy Conservation

By eliminating the need for dark energy, Deur also removes the one exception to conservation of mass-energy in general relativity (and physics generally) that dark energy creates, apart from the "something created out of nothing" issue at the moment of the Big Bang itself, which mirror cosmology solves.

Quantum Gravity

Another benefit of removing the cosmological constant (i.e. the Lambda term in Einstein's field equations, which is itself a gravitational explanation for dark energy) from general relativity is that doing so makes it easier to formulate a quantum gravity theory.

This isn't the biggest barrier to a quantum gravity theory, however. The biggest barrier is that general relativity isn't renormalizable, at least not perturbatively in the way that the three Standard Model forces are, and renormalizability is a property that any usable and non-pathological theory of quantum gravity should have.

But, it looks like the same approach that Deur took to explaining dark matter and dark energy phenomena, non-perturbative effects in general relativity, also makes general relativity non-perturbatively renormalizable. This suggests a roadmap for finally crossing the seemingly insurmountable barrier of mathematically formulating a quantum gravity theory.

There are, however, quite strict observational constraints on quantum gravity effects.

If quantum gravity does exist, the gravitational coupling constant has a beta function that explains how it runs with energy scale, which could also provide useful insight and ought to be possible to devise from first principles. One effort to do so is mentioned here.

The existence of quantum gravity would also slightly tweak the beta functions of the other experimentally measured physical constants of the Standard Model, which might have interesting implications at very high energies.
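For orientation, here is the generic one-loop running pattern that a beta function encodes, as a minimal Python sketch (the coefficient b0 below is schematic, a QED-like toy with a single electron loop, not Deur's gravitational result or a full Standard Model calculation):

```python
import math

def one_loop_running(alpha0, mu0, mu, b0):
    """Generic one-loop running coupling:
    d(alpha)/d(ln mu) = (b0 / (2 pi)) * alpha**2, integrated exactly."""
    return alpha0 / (1.0 - (b0 / (2.0 * math.pi)) * alpha0 * math.log(mu / mu0))

# Toy example: the electromagnetic coupling grows slowly with energy scale.
alpha_me = 1 / 137.036                                # coupling at the electron mass
print(one_loop_running(alpha_me, 0.000511, 91.19, b0=4.0 / 3.0))  # at ~M_Z, in GeV
```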

Neutrinos

Science still needs to pin down some of the physical constants associated with neutrinos (which is just a matter of brute force effort and isn't theoretically problematic) and the nature of neutrino mass (Majorana or Dirac). I strongly suspect that neutrinos have Dirac mass and have some ideas about where it comes from (basically, through W boson and/or Z boson mediated interactions and self-interactions).

Explaining the Standard Model Constants

The only other thing we don't know in physics that would be nice to know (although it isn't strictly necessary to explain the observed universe) is how the physical constants of the Standard Model arise: the fifteen experimentally measured masses (which amount to only fourteen degrees of freedom, because the W and Z boson masses are functionally related to each other in the electroweak theory of the Standard Model), the eight parameters of the CKM matrix and the PMNS matrix, and the three Standard Model coupling constants (as well as Newton's constant, Planck's constant, and the speed of light). The electromagnetic and weak force coupling constants are also functionally related to each other and to the W and Z boson masses. 

There is good reason to believe that "within the Standard Model" theories could explain a great many of these physical constants as derived values rather than just experimentally measured inputs. 

The masses of the Standard Model fundamental particles, which have origins in the electroweak sector of the model, seem particularly susceptible to being explained this way, as I've explored in prior posts at this blog. Basically, I think that Koide's rule and extensions of it point to the Yukawa couplings to the Higgs field having their roots in a dynamic balancing of these values through W boson interactions. The LP & C relation suggests that the Higgs vacuum expectation value (which is a function of the W boson mass and the weak force coupling constant) is the source of the overall mass scale of the fundamental particles of the Standard Model.
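Koide's rule itself is easy to verify numerically (a quick sketch using PDG central values for the charged lepton masses):

```python
# Koide's rule check with PDG charged lepton masses (in MeV).
m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86
Q = (m_e + m_mu + m_tau) / (m_e**0.5 + m_mu**0.5 + m_tau**0.5) ** 2
print(Q)   # ~0.66666, remarkably close to Koide's predicted 2/3
```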

It might also be possible to reduce the number of non-derived CKM matrix parameters from four to two with a little more theoretical work, and maybe someday, similar progress could be made with the four PMNS matrix parameters. 

These efforts wouldn't bring the number of experimentally measured fundamental physical constants in the Standard Model to zero or even to one, but it might be possible to get them down, perhaps, from twenty-five to eight (plus the speed of light, Planck's constant, and Newton's constant).

Fourth Generation (Or Greater) Standard Model Fermions

It is very unlikely that there are fourth generation fundamental fermions. Instead, there are exactly three generations of Standard Model fermions. 

For theoretical reasons, there has to be a full set of an up-type quark, down-type quark, charged lepton, and neutrino in each generation. 

Direct searches and cosmology observations rule out these possibilities up to masses that are high relative to the third generation. This is particularly true of a fourth generation active Standard Model neutrino, which has been ruled out up to at least 45 GeV (i.e. 45,000,000,000 eV, while the heaviest of the three Standard Model neutrino masses is not more than 2 eV by direct measurements and neutrino oscillation data, and is less than 0.1 eV based upon cosmology bounds). Fourth generation down-type quarks are ruled out up to 3,000 GeV by direct searches (the b quark mass is about 4.2 GeV). Fourth generation up-type quarks are ruled out up to 1,500 GeV by direct searches (the t quark mass is about 173 GeV). Fourth generation charged leptons are ruled out up to 100.8 GeV by direct searches (the tau lepton mass is about 1.78 GeV).

Also, from a theoretical perspective, a fourth generation or higher Standard Model fermion more massive than the top quark is ruled out because its expected mean lifetime would be shorter than the mean lifetime of the W boson that effects its decay. The mean lifetimes of the W and Z bosons, which are the most short-lived particles ever observed, are each about 3*10^-25 seconds. The mean lifetime of the top quark is about 5*10^-25 seconds, which is the shortest of the Standard Model fermions and is so short that top quarks decay before they can hadronize.
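The lifetime figures quoted above follow from measured total decay widths via τ = ħ/Γ; a quick check (using approximate PDG width values):

```python
# Mean lifetime from a measured total decay width: tau = hbar / Gamma.
HBAR_GEV_S = 6.582e-25   # hbar in GeV * seconds

for name, width_gev in [("W boson", 2.085), ("Z boson", 2.495), ("top quark", 1.42)]:
    print(name, HBAR_GEV_S / width_gev, "s")
# W: ~3.2e-25 s, Z: ~2.6e-25 s, top: ~4.6e-25 s
```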

Furthermore, the existence of fourth generation fundamental Standard Model fermions would cause the decays of the Standard Model Higgs boson to differ greatly from what is observed.

The Strong CP Problem

I also have my own heuristic answer to the strong CP problem: since the gluons that mediate the strong force are massless and thus don't experience time in their own reference frame, neither gluons nor any other massless particles can experience CP violation, which (given CPT symmetry) is equivalent to time symmetry violation.

This makes a hypothetical axion particle to suppress CP violation in strong force interactions unnecessary.

Thursday, October 17, 2024

Quote Of The Day

This paper is both novel and correct, but the novel part is not correct and the correct part is not novel.
From a peer review of an academic journal article attributed to physicist Wolfgang Pauli.

Wednesday, October 16, 2024

Papuan Demographic History From Modern Genomes

A new pre-print at bioRxiv disputes the status of Papuans (presumably together with aboriginal Australians) as an outgroup to both European and Asian populations. Instead, it positions them as a sister population of other Asian populations.
The demographic history of the Papua New Guinean population is a subject of significant interest due to its early settlement in New Guinea, at least 50 thousand years ago, and its relative isolation compared to other out of Africa populations. This isolation, combined with substantial Denisovan ancestry, contributes to the unique genetic makeup of the Papua New Guinean population. Previous research suggested the possibility of admixture with an early diverged modern human population, but the extent of this contribution remains debated. 
This study re-examines the demographic history of the Papua New Guinean population using newly published samples and advanced analytical methods. Our findings demonstrate that the observed shifts in relative cross coalescent rate curves are unlikely to result from technical artefacts or contributions from an earlier out of Africa population. Instead, they are likely due to a significant bottleneck and slower population growth rate within the Papua New Guinean population. Our analysis positions the Papua New Guinean population as a sister group to other Asian populations, challenging the notion of Papua New Guinean as an outgroup to both European and Asian populations.
This study provides new insights into the complex demographic history of the Papua New Guinean population and underscores the importance of considering population-specific demographic events in interpreting relative cross coalescent rate curves.
Mayukh Mondal, et al., "Resolving out of Africa event for Papua New Guinean population using neural network" bioRxiv (September 23, 2024) https://doi.org/10.1101/2024.09.19.613861

The introduction to the paper explains that:
The Papua New Guinean (PNG) population is among the most fascinating in the world, owing to its unique demographic history. Following the Out Of Africa (OOA) event, modern humans populated New Guinea at a remarkably early date, at least 50 thousand years ago. Since then, the population has remained relatively isolated compared to other OOA populations (such as European and Asian populations) and has gone through a strong bottleneck. The substantial Denisovan ancestry within the PNG population and the strong correlation between Denisovan and Papuan ancestries, contribute to the genetic distinctiveness of the PNG population. 
Researchers have suggested that the genomes of PNG populations contain evidence of admixture with a modern human population that might have diverged from African populations, around 120 thousand years ago, much earlier than the proclaimed primary divergence between African and OOA populations. However, the extent to which this early diverged population contributed to the genome of PNG populations remains a subject of ongoing debate. Interestingly, this early migration hypothesis is more widely accepted by archeologists. 
Pagani et al supports this hypothesis, notably through Relative Cross-Coalescent Rate (RCCR) analysis. This RCCR analysis suggests that the PNG population diverged from African populations significantly earlier than other OOA populations. They argued that this earlier divergence indicated by the RCCR curve might reflect a contribution from an earlier OOA population specific to PNG. While this shift in the RCCR curve is well-documented, some researchers attribute it to technical artefacts such as low sample sizes and phasing errors rather than genuine demographic events. 
The origins of the primary lineage of the PNG population have also been contested. Some researchers propose that the PNG population is closely related to the Asia-Pacific populations and serves as a sister group to other Asian populations. Conversely, other researchers argue that the PNG population is an outgroup to both European and Asian populations. 
Recent advancements in analytical methods may provide new insights into these debates. For example, Approximate Bayesian Computation with Deep learning and sequential Monte Carlo (ABC-DLS) allows for the use of any summary statistics derived from simulations to train neural networks, which can then predict the most likely demographic models and parameters based on empirical data. Additionally, the Relate software enhances RCCR analysis by employing a modified version of the hidden Markov model, initially used in the Multiple Sequentially Markovian Coalescent (MSMC) method, allowing for the analysis of thousands of individuals with greater robustness. 
In this paper, we re-examine the demographic history of the PNG population using newly published samples combined with data from the 1000 Genome Project and cutting edge methods. This approach has enabled us to address these longstanding questions with greater precision. We first generate new empirical RCCR curves and demonstrate that the previously observed shift is unlikely to be the result of low sample size or phasing errors. Through simulations, we further show that the PNG population is indeed a sister group to other Asian populations and this shift is probably not due to contributions from an earlier OOA population. Instead, it is likely a consequence of a significant bottleneck and slower population growth in the PNG population.

The paper then defines the demographic models that the paper analyzed at a broad brush level:

To explore the demographic processes causing the observed RCCR shift, we tested five plausible demographic scenarios labelled A, O, M, AX and OX. In Model A, the PNG and East Asian populations are sister groups. Model O positions the PNG population as an outgroup to both European and East Asian populations. Model M combines elements of both A and O, suggesting that the PNG population arose from admixture between a sister group of the Asian population and an outgroup of European and Asian populations. Model AX postulates that the PNG population is a sister group to the Asian population but received input from an earlier OOA population. Finally, in Model OX, the PNG population receives a contribution from an earlier OOA population, while remaining ancestry came from an outgroup to the European and East Asian populations. . . .  
The best-fitting parameters for Model A largely correspond with the previously established OOA model, with some deviations specific to the inclusion of the PNG population. 

Our model suggests that all OOA populations, including PNG, diverged from African populations (represented by Yoruba) around 62.4 (62-62.8) thousand years ago, experiencing a significant bottleneck. Approximately 52 (51.6-52.8) thousand years ago, Neanderthals contributed around 3.7% (3.59-3.85%) of the genome to these OOA populations. Shortly thereafter, Europeans and East Asians diverged from the PNG populations around 51.2 (50.8-51.6) and 46.2 (45.9-46.5) thousand years ago, respectively. The PNG population then mixed with Denisovans around 31.2 (31.1-31.5) thousand years ago, contributing approximately 3.16% (3.05-3.21%) to the genome of PNG. 

Our analysis also shows that the PNG population experienced a more severe bottleneck (674 [663-689] of effective population size) than other OOA populations (i.e. Europeans 3512 [3423-3589] and East Asians 1771 [1730-1799] of effective population size), with growth rates significantly lower than those of other OOA populations, consistent with previously published data. 

While our parameter inference is generally robust within the individual model, substantial changes occur when the underlying model is altered. Given that determining the precise demographic model for human populations is an ongoing effort, parameter estimates should be considered supplementary to the model rather than independent results. 
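For readers who want a concrete sense of Model A, here is a minimal coalescent simulation sketch of its topology using the point estimates quoted above (msprime is my choice of tool here, not the paper's ABC-DLS pipeline; the ancestral population sizes, the 29-year generation time, and the omission of the archaic admixture pulses and growth rates are all simplifying assumptions):

```python
import msprime

GEN = 29  # years per generation (an assumption; the paper's value may differ)

def kya(t):
    """Convert kiloyears ago to generations ago."""
    return t * 1000 / GEN

dem = msprime.Demography()
dem.add_population(name="YRI", initial_size=25_000)    # placeholder size
dem.add_population(name="CEU", initial_size=3_512)     # paper's estimate
dem.add_population(name="EAS", initial_size=1_771)     # paper's estimate
dem.add_population(name="PNG", initial_size=674)       # paper's bottleneck size
dem.add_population(name="EA_PNG", initial_size=2_000)  # placeholder size
dem.add_population(name="OOA", initial_size=2_000)     # placeholder size
dem.add_population(name="ANC", initial_size=25_000)    # placeholder size

# Model A topology: PNG is a sister group of East Asians.
dem.add_population_split(time=kya(46.2), derived=["EAS", "PNG"], ancestral="EA_PNG")
dem.add_population_split(time=kya(51.2), derived=["CEU", "EA_PNG"], ancestral="OOA")
dem.add_population_split(time=kya(62.4), derived=["YRI", "OOA"], ancestral="ANC")

# Simulate a 10 Mb chromosome for two diploid samples per population.
ts = msprime.sim_ancestry(
    samples={"YRI": 2, "CEU": 2, "EAS": 2, "PNG": 2},
    demography=dem, sequence_length=10_000_000,
    recombination_rate=1e-8, random_seed=1,
)
print(ts.num_trees)
```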

The concluding discussion of the results notes that:

We successfully replicated the shift observed by Pagani et al., confirming its presence in both physically mapped and statistically phased sequences, which involved over 100 PNG samples. This consistency suggests that the shift is reproducible, though its underlying cause may differ from the original interpretation of Pagani et al. 

Our analysis using ABC-DLS supports a simpler demographic model for PNG populations, proposing them as a sister group to Asians with no substantial detectable contribution from an earlier OOA population. Interestingly, our simulated models reveal that a stronger bottleneck with a lower growth rate could produce a similar shift in RCCR analysis and potentially be misinterpreted as a signal of an earlier population separation. While RCCR is a valuable proxy for estimating the separation time between populations, it is not without biases. The shift could result from various factors, including earlier divergence times, admixture with earlier diverged populations, or even a bottleneck in one of the populations, as demonstrated in our study. Interestingly this demographic history of stronger bottleneck with slower growth rate was also experienced by the Andamanese population, which explains the shift found in the Andamanese population as well. Thus, using RCCR analysis to rebuild the tree of divergence might need to be revised.

The observed shift in the RCCR curve suggests that a recent bottleneck can impact estimates of effective population size in the distant past. Notably, in our simulations, the Papua New Guinean bottleneck occurred much later (around 46.2 thousand years ago) than the observed shift (peaking around 100 thousand years ago) with a population (Yoruba) that separated a long time ago. This finding implies that the estimation of effective population size and cross-coalescent rates may not be entirely independent, potentially affecting RCCR analysis in its current form. Further analysis suggests that the estimation of coalescent rate was affected earlier than true changes of effective population size, which shifts the RCCR curve as RCCR is a ratio of coalescent rates. Additionally, this shift was absent in simulations involving populations that separated 300 thousand years ago, akin to the San population, indicating that the bottleneck effect diminishes over longer separation times.

Our results also reveal that when the contribution from an earlier OOA population is between 1-5%, our ABC-DLS analysis tends to misclassify Model AX as Model A at a higher rate. A similar issue arises with Model M, where a low contribution (less than 5%) from an outgroup Eurasian population can still be misclassified as Model A. Thus our analysis does not work for less than 5% contribution from these unknown ghost populations, though Model OX does not show a similar phenomenon with Model A misclassification. While we cannot completely rule out the possibility of a small contribution from these populations, our analysis suggests that such models are not necessary to explain the RCCR shift as previously proposed.

Interestingly, our results position PNG as a sister group to Asian populations rather than an outgroup of European and Asian populations. The primary difference between those models and ours lies in the migration rates between populations. Previous models that incorporated significant migration rates between populations were found to have confounded results, leading us to avoid including migration rates in our models. Without migration, our Model O closely resembles the previous models of PNG. Given that those models used substantial migration rates, they are not directly comparable to our models without migration rate. Indeed, with high migration rates, our approach failed to distinguish between Model A and O with high certainty. Still, our work suggests that the main lineage of PNG comes from a sister group of Asia, and was not confounded by convoluted migration rate patterns between populations.

Our parameter estimation suggests that the PNG population separated from other populations around 46.2 (45.9 - 46.5) thousand years ago, a timeline that aligns with archaeological estimates of when the ancestors of PNG reached the ancient continent of Sahul, the landmass that once connected New Guinea and Australia. 

Additionally, our Relate analysis indicates that the separation time between PNG and European populations was the longest observed among OOA populations. However, as our model suggests, this is likely a bias caused by the bottleneck of PNG. This bottleneck may lead to an overestimation of the separation time, particularly in RCCR analysis. In reality, it is more likely that PNG and East Asian populations separated later than the divergence between PNG and European populations. 

In conclusion, our study provides compelling evidence that the unique demographic events—specifically, a significant bottleneck and slower population growth—within the PNG population are key factors influencing the observed shifts in RCCR curves. These findings not only refine our understanding of PNG's demographic history but also emphasise the necessity of accounting for population-specific demographic events when interpreting RCCR curves. 

Semi-Recessive Genes

Some genes classified as recessive, which have their main phenotypic effects only when two copies of the variant are present, are actually only "semi-recessive" and have much milder versions of the same phenotypic effects in "carriers" with only one copy.

A new pre-print at bioRxiv demonstrates this by looking at 1,929 genes considered recessive in the UK Biobank database, which includes 378,751 unrelated European individuals, singling out as an example carriers of recessive variants associated with intellectual disabilities, who themselves exhibit below average cognitive measures.

The abstract explains that:

The genetic landscape of human Mendelian diseases is shaped by mutation and selection. Selection is mediated by phenotypic effects which interfere with health and reproductive success. Although selection on heterozygotes is well-established in autosomal dominant disorders, convincing evidence for selection in carriers of pathogenic variants associated with recessive conditions is limited, with only a few specific cases documented.

We studied heterozygous pathogenic variants in 1,929 genes, which cause recessive diseases when bi-allelic, in a cohort of 378,751 unrelated European individuals from the UK Biobank. We assessed the impact of these pathogenic variants on reproductive success. We find evidence for fitness effects in heterozygous carriers for recessive genes, especially for variants in constrained genes across a broad range of diseases. Our data suggest reproductive effects at the population level, and hence natural selection, for autosomal recessive disease variants. We further show that variants in genes that underlie intellectual disability are associated with reduced cognition measures in carriers. In concordance with this, we observe an altered genetic landscape, characterized by a threefold reduction in the calculated frequency of biallelic intellectual disability in the population relative to other recessive disorders. The existence of phenotypic and selective effects of pathogenic variants in constrained recessive genes is consistent with a gradient of heterozygote effects, rather than a strict dominant-recessive dichotomy.
Hila Fridman, et al., "Reproductive and cognitive effects in carriers of recessive pathogenic variants" bioRxiv (October 1, 2024). https://doi.org/10.1101/2024.09.30.615774
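To illustrate the kind of analysis the abstract describes, here is a toy sketch (with invented carrier rates and effect sizes, not UK Biobank data) of detecting a small carrier fitness effect with a Poisson regression of child counts on carrier status:

```python
# Toy illustration: can a ~3% fitness deficit in ~2% of a cohort be detected?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300_000
carrier = (rng.random(n) < 0.02).astype(float)         # ~2% carriers (made up)
children = rng.poisson(np.where(carrier == 1, 1.74, 1.80))  # made-up mean counts

X = sm.add_constant(carrier)
fit = sm.GLM(children, X, family=sm.families.Poisson()).fit()
print(fit.params[1], fit.pvalues[1])   # carrier coefficient ~ log(1.74/1.80) < 0
```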

Dark Matter Is Still Probably The Wrong Answer

Stacy McGaugh has a reaction blog post to the Scientific American article "What if We Never Find Dark Matter?" by Slatyer & Tait.

It nicely sums up the sociological conundrum in astrophysics that has led the discipline to throw a lot of weight and support behind a deeply flawed dark matter particle hypothesis, with a particle that hasn't been detected, no hypothetical particle that can fit the astronomy observations, and no theory that has made many significant ex ante predictions, rather than behind MOND and modified gravity, which fit the astronomy observations much better and have made many significant ex ante predictions.

He is spot on. Some good quotes:
In the 1980s, cold dark matter was motivated by both astronomical observations and physical theory. Absent the radical thought of modifying gravity, we had a clear need for unseen mass. Some of that unseen mass could simply have been undetected normal matter, but most of it needed to be some form of non-baryonic dark matter that exceeded the baryon density allowed by Big Bang Nucleosynthesis and did not interact directly with photons. That meant entirely new physics from beyond the Standard Model of particle physics: no particle in the known stable of particles suffices. This new physics was seen as a good thing, because particle physicists already had the feeling that there should be something more than the Standard Model. There was a desire for Grand Unified Theories (GUTs) and supersymmetry (SUSY). SUSY naturally provides a home for particles that could be the dark matter, in particular the Weakly Interacting Massive Particles (WIMPs) that are the prime target for the vast majority of experiments that are working to achieve the exceptionally difficult task of detecting them. So there was a confluence of reasons from very different perspectives to make the search for WIMPs very well motivated.

That was then. Fast forward a few decades, and the search for WIMPs has failed. Repeatedly. Continuing to pursue it is an example of the sunk cost fallacy. We keep doing it because we’ve already done so much of it that surely we should keep going. So I feel the need to comment on this seemingly innocuous remark:

although many versions of supersymmetry predict WIMP dark matter, the converse isn’t true; WIMPs are viable dark matter candidates even in a universe without supersymmetry.

Strictly speaking, this is correct. It is also weak sauce. The neutrino is an example of a weakly interacting particle that has some mass. We know neutrinos exist, and they reside in the Standard Model – no need for supersymmetry. We also know that they cannot be the dark matter, so it would be disingenuous to conflate the two. Beyond that, it is possible to imagine a practically infinite variety of particles that are weakly interacting but not part of supersymmetry. That’s just throwing mud at the wall. SUSY WIMPs were extraordinarily well motivated, with the WIMP miracle being the beautiful argument that launched a thousand experiments. But lacking SUSY – which seems practically dead at this juncture – WIMPs as originally motivated are dead along with it. The motivation for more generic WIMPs is lacking, so the above statement is nothing more than an assertion that runs interference for the fact that we no longer have good reason to expect WIMPs at all. . . . 
I can save everyone a lot of time, effort, and expense. It ain’t WIMPs and it ain’t axions. Nor is the dark matter any of the plethora of other ideas illustrated in the eye-watering depiction of the landscape of particle possibilities in the article. These simply add mass while providing no explanation of the observed MOND phenomenology. This phenomenology is fundamental to the problem, so any approach that ignores it is doomed to failure. I’m happy to consider explanations based on dark matter, but these need to have a direct connection to baryons baked-in to be viable. None of the ideas they discuss meet this minimum criterion.

Of course it could be that MOND – either as modified gravity or modified inertia, an important possibility that usually gets overlooked – is essentially correct and that’s why it keeps having predictions come true. That’s what motivates considering it now: repeated and sustained predictive success, particularly for phenomena that dark matter does not provide a satisfactory explanation for. . . . 
The equation coupling dark to luminous matter I wrote down in all generality in McGaugh (2004) and again in McGaugh et al. (2016). The latter paper is published in Physical Review Letters, arguably the most prominent physics journal, and is in the top percentile of citation rates, so it isn’t some minuscule detail buried in an obscure astronomical journal that might have eluded the attention of particle physicists.

Bonus quote from the comments:

It’s exactly the same crap as with string theory, and supersymmetry, and inflation, and dark sectors, and many other research bubbles in the foundations of physics. It is mathematical fiction; it’s nothing to do with reality any more.
- Sabine Hossenfelder (YouTube link).

A New Published Koide Triple Paper

Arivero at the Physics Forums (who comments here from time to time) has gotten his article on Koide Triples published:
I had put in a preprint some calculations of Koide masses using the original composite idea and they have happened to be published as Eur. Phys. J. C 84, 1058 (2024). https://doi.org/10.1140/epjc/s10052-024-13368-3, so as a collateral effect we now have another published paper that mentions:
  • the waterfall, in a footnote.
  • the tuples (0ds), (scb) and (cbt).
  • the relation sum(scb) = 3 sum (leptons).

It is nice to see this promising line of inquiry advanced. The preprint linked and its abstract are as follows:

We propose an interpretation for the adjoint representation of the SO(32) group to classify the scalars of a generic Supersymmetric Standard Model having just three generations of particles, via a flavour group SU(5). 
We show that this same interpretation arises from a simple postulate of self-consistence of composites for these scalars. The model looks only for colour and electric charge, and it pays the cost of an additional chiral +4/3 quark per generation.
Alejandro Rivero, "An interpretation of scalars in SO(32)" arXiv:2407.05397 (July 7, 2024). The published version is open access and was published on October 15, 2024.
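The sum rule in the last bullet above is easy to check numerically (a sketch using PDG central values in GeV, with the usual caveat that quark masses are scheme- and scale-dependent, so this is only indicative):

```python
# Quick numeric check of sum(s, c, b) ~ 3 * sum(e, mu, tau).
m_s, m_c, m_b = 0.0935, 1.27, 4.18              # PDG quark masses, GeV
m_e, m_mu, m_tau = 0.000511, 0.105658, 1.77686  # charged lepton masses, GeV
print(m_s + m_c + m_b)                          # ~5.54 GeV
print(3 * (m_e + m_mu + m_tau))                 # ~5.65 GeV, agreeing to ~2%
```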

Bonus: The article contains a cute "Turtles All The Way Down" illustration, which we here on Turtle Island, appreciate.

Atomic Nuclei Described At Quark-Gluon Level


Context

One of the things that physicists can do, even if our understanding of the laws of physics is complete, is to formally derive the properties of more complex structures, like atoms, from the fundamental laws of physics found in the Standard Model. 

We understand the structure of atoms mostly in the context of a simplified proton-neutron-electron model that is used in chemistry and even, for the most part, in nuclear physics. In this picture, a simplified (mostly experimentally fit) binding energy description captures how protons and neutrons are held together in atomic nuclei by the residual strong force, which we know is mediated mostly by composite pions as force carriers, and to a secondary extent by force carrying composite kaons, rather than directly by gluons. 

The Standard Model of particle physics provides a more fundamental description of protons, neutrons, and their interactions in terms of quarks and gluons, via its model of the strong force, mediated by force carrying gluons, that binds quarks and gluons to each other. This theory is called quantum chromodynamics (QCD) because the strong force analog of electric charge is called "color charge". Quarks can have one of three color charges, and gluons come in eight combinations that involve pairs of colors and anticolors.
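In group theory language (a standard result, included here only for orientation), the gluon counting comes from combining a color with an anticolor:

```latex
\mathbf{3} \otimes \bar{\mathbf{3}} = \mathbf{8} \oplus \mathbf{1}
```

The nine color-anticolor combinations decompose into an octet, the eight gluons, plus a color singlet that does not carry net color charge.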

The New Paper

Half a century after the Standard Model was devised, a new paper has made a major breakthrough at advancing the unfinished project of explaining atomic nuclei in terms of quarks and gluons, rather than in terms of composite protons and neutrons bound by the residual strong force.

The new paper accurately reproduces the structure of 18 different atomic nuclei with quantum chromodynamics (the theory of the Standard Model strong force that binds quarks and gluons to each other).

The parton distribution functions (PDFs) describe the structure of a composite particle in terms of quarks and gluons. PDFs can be calculated, in theory, from first principles in the Standard Model without any experimental input beyond the values of the two dozen or so experimentally measured physical constants of the Standard Model. 

But until less than a decade ago, in practice, parton distribution functions were almost always created by fitting a mathematical function to vast statistical data dumps from billions of collisions, with fits very particular to particular particles at particular energy scales. These fits were updated from time to time with new data from more collisions. 
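To make the contrast concrete, here is a toy version of that kind of fit (a sketch, not the nCTEQ machinery): the generic shape x·f(x) = A·x^a·(1-x)^b, fitted to mock data standing in for real cross-section measurements:

```python
# Toy PDF "global fit": recover the parameters of a generic PDF shape.
import numpy as np
from scipy.optimize import curve_fit

def xf(x, A, a, b):
    """Toy PDF parameterization x*f(x) = A * x**a * (1 - x)**b."""
    return A * x**a * (1.0 - x) ** b

x = np.linspace(0.01, 0.9, 40)
rng = np.random.default_rng(1)
y = xf(x, 2.0, 0.7, 3.0) * (1 + 0.03 * rng.standard_normal(x.size))  # mock data

params, _ = curve_fit(xf, x, y, p0=[1.0, 0.5, 2.0])
print(params)   # recovers roughly (2.0, 0.7, 3.0)
```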

When it has been done previously from first principles, this has mostly been confined to individual protons, neutrons, or other simple hadrons such as pions (hadrons are composite particles whose constituents are bound by gluons), not to multi-nucleon atomic nuclei.

The new paper makes some big leaps beyond that, advancing the project of rigorously demonstrating what we had merely assumed (for some good reasons) for the last fifty years: that the structure of atomic nuclei can be described fully from first principles using the Standard Model.

I quote at length from a secondary source account of what the paper is doing, because the paper itself is too technical for a general audience and what the paper is doing is sufficiently technical that I don't want to mangle it in my paraphrased retelling of it.

The atomic nucleus is made up of protons and neutrons, particles that exist through the interaction of quarks bonded by gluons. It would seem, therefore, that it should not be difficult to reproduce all the properties of atomic nuclei hitherto observed in nuclear experiments using only quarks and gluons. However, it is only now that an international team of physicists has succeeded in doing this. . . .

This long-standing deadlock has only now been broken, in a paper published in Physical Review Letters. Its main authors are scientists from the international nCTEQ collaboration on quark-gluon distributions.

"Until now, there have been two parallel descriptions of atomic nuclei, one based on protons and neutrons which we can see at low energies, and another, for high energies, based on quarks and gluons. In our work, we have managed to bring these two so far separated worlds together," says Dr. Aleksander Kusina, one of the three theoreticians from the Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) participating in the research. . . .

Experiments . . . show that when electrons have relatively low energies, atomic nuclei behave as if they were made of nucleons (i.e. protons and neutrons), whereas at high energies, partons (i.e. quarks and gluons) are "visible" inside the atomic nuclei.

The results of colliding atomic nuclei with electrons have been reproduced quite well using models assuming the existence of nucleons alone to describe low-energy collisions, and partons alone for high-energy collisions. However, so far these two descriptions have not been able to be combined into a coherent picture.

In their work, physicists from the IFJ PAN used data on high-energy collisions, including those collected at the LHC accelerator at CERN laboratory in Geneva. The main objective was to study the partonic structure of atomic nuclei at high energies, currently described by parton distribution functions (PDFs).

These functions are used to map how quarks and gluons are distributed inside protons and neutrons and throughout the atomic nucleus. With PDF functions for the atomic nucleus, it is possible to determine experimentally measurable parameters, such as the probability of a specific particle being created in an electron or proton collision with the nucleus.

From the theoretical point of view, the essence of the innovation proposed in this paper was the skillful extension of parton distribution functions, inspired by those nuclear models used to describe low-energy collisions, where protons and neutrons were assumed to combine into strongly interacting pairs of nucleons: proton-neutron, proton-proton and neutron-neutron.

The novel approach allowed the researchers to determine, for the 18 atomic nuclei studied, parton distribution functions in atomic nuclei, parton distributions in correlated nucleon pairs and even the numbers of such correlated pairs.

The results confirmed the observation known from low-energy experiments that most correlated pairs are proton-neutron pairs (this result is particularly interesting for heavy nuclei, e.g. gold or lead). Another advantage of the approach proposed in this paper is that it provides a better description of the experimental data than the traditional methods used to determine parton distributions in atomic nuclei.

"In our model, we made improvements to simulate the phenomenon of pairing of certain nucleons. This is because we recognized that this effect could also be relevant at the parton level. Interestingly, this allowed for a conceptual simplification of the theoretical description, which should in future enable us to study parton distributions for individual atomic nuclei more precisely," explains Dr. Kusina.

The agreement between theoretical predictions and experimental data means that, using the parton model and data from the high-energy region, it has been possible for the first time to reproduce the behavior of atomic nuclei so far explained solely by nucleonic description and data from low-energy collisions. The results of the described studies open up new perspectives for a better understanding of the structure of the atomic nucleus, unifying its high- and low-energy aspects.

From Phys.org. The paper and its abstract are as follows:

We extend the QCD Parton Model analysis using a factorized nuclear structure model incorporating individual nucleons and pairs of correlated nucleons. Our analysis of high-energy data from lepton deep-inelastic scattering, Drell-Yan, and 𝑊 and 𝑍 boson production simultaneously extracts the universal effective distribution of quarks and gluons inside correlated nucleon pairs, and their nucleus-specific fractions. Such successful extraction of these universal distributions marks a significant advance in our understanding of nuclear structure properties connecting nucleon- and parton-level quantities.
A. W. Denniston, et al, "Modification of Quark-Gluon Distributions in Nuclei by Correlated Nucleon Pairs", 133 Physical Review Letters 152502 (October 11, 2024). DOI: 10.1103/PhysRevLett.133.152502

An earlier pre-print related to this paper can be found at arXiv, but the open access version of this paper is not yet available at arXiv.

Monday, October 14, 2024

Ancient South African DNA

A new paper on South African ancient DNA isn't paradigm shifting but does pin down more precisely when outsiders from different places began to admix with local hunter-gatherers and the extent to which that mixture involved mostly male outsiders who replaced a significant share of local hunter-gatherer men in the gene pool. 

Most admixture between South African hunter-gatherers, West African derived Bantus, and East African herders, took place starting around roughly 900 CE to 1500 CE (i.e. during the European Middle Ages). European admixture with Southern Africans begins in earnest around 1800 CE. All of this admixture with outsiders was male outsider dominated.

Joscha Gretzinger, et al., "9,000 years of genetic continuity in southernmost Africa demonstrated at Oakhurst rockshelter" (October 2024) provides ancient DNA from 10,000 to 1,300 years old (based upon radiocarbon dates) from the Southern tip of South Africa. (Hat tip to Bernard).


The Khoi-San hunter-gatherers of Southern Africa have largely been forced north into the Kalahari desert, but they still show a north-south genetic cline. These ancient DNA samples fall toward the southern edge of that cline, as one might expect, although not at its very bottom, and show the greatest similarity to the ǂKhomani San (who belong to the Tuu linguistic family).


Bernard recounts from the paper (as translated from the French by Google) that:
None of these individuals shows any genetic affinity with other African populations, even the most recent, dated to 1,300 years ago, suggesting that the non-San component arrived in South Africa more recently than 1,300 years ago. On the other hand, all of the ancient Oakhurst individuals older than 4,000 years, as well as the other ancient South Africans from St. Helena, Faraoskop and Ballito Bay, dated between 2,200 and 1,300 years ago, are genetically indistinguishable. 
The authors then highlight a genetic discontinuity between 1,300 and 400 years ago, linked to the arrival of populations from northeastern Africa in South Africa. All of these results point to very strong genetic continuity in South Africa between 10,000 and 1,300 years ago, while the Oakhurst population shows no sign of particular isolation, as shown by the analysis of heterozygous segments.

The authors then investigated recent genetic admixture in South Africa involving the Northeast African and West African components. To do this, they applied the qpAdm software to contemporary San, Khoe and Bantu individuals with no European ancestry. They found that the Northeast African component arrived earlier (1,068 years ago on average) than the West African component (808 years ago on average among Bantu populations and 578 years ago among San and Khoe populations). This last result suggests either several waves of Bantu migration into South Africa, or a continuous inflow of this population over several centuries.

On the other hand, the authors highlighted a sex bias in these admixture events. The San, Khoe and Bantu populations carry more of the South African component on the X chromosome than on the autosomes, implying that there were more San women than San men among the ancestors of contemporary populations, and correspondingly more men than women from Northeast Africa or West Africa. The ratio of San women to San men among the ancestors of current populations is estimated at 1.4 for the Damara population, 2.28 for the ǂHoan population, 4 for the Shua population, 5.2 for the Haiǁom population and 2.1 for the Bantu population of South Africa.

Finally, the authors highlighted a recent sex bias corresponding to the arrival of men from Northwestern Europe in South Africa, dated on average to 199 years ago.
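The female-to-male ratios quoted above come from comparing X-chromosome ancestry with autosomal ancestry. Here is a minimal sketch of the standard single-pulse approximation behind such estimates (my own illustration with made-up numbers, not the authors' actual pipeline): autosomes are inherited equally from both parents, while two-thirds of X chromosomes reside in females, so the two ancestry fractions jointly determine the female and male contributions from each source.

```python
# Minimal sketch of sex-biased admixture inference from X-chromosome
# vs. autosomal ancestry fractions, under a simple one-pulse model.
# The input numbers are hypothetical, not the paper's estimates.

def sex_specific_contributions(p_auto, p_x):
    """Solve for the female (p_f) and male (p_m) contributions of a
    source, given its autosomal ancestry fraction p_auto and its
    X-chromosome ancestry fraction p_x:
        (p_f + p_m) / 2   = p_auto  (autosomes: half from each parent)
        (2*p_f + p_m) / 3 = p_x     (X: 2/3 of copies are in females)
    """
    p_f = 3 * p_x - 2 * p_auto
    p_m = 4 * p_auto - 3 * p_x
    return p_f, p_m

# Hypothetical example: 60% San ancestry on the autosomes, 70% on the X.
p_f, p_m = sex_specific_contributions(p_auto=0.60, p_x=0.70)
print(f"female San contribution: {p_f:.2f}")        # 0.90
print(f"male San contribution:   {p_m:.2f}")        # 0.30
print(f"female-to-male ratio:    {p_f / p_m:.1f}")  # 3.0
```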

As another recent paper demonstrated, Southern African hunter-gatherers were genetically isolated from other modern humans for about 300,000 years (until about 1,100 years ago as indicated by this paper). 

Thursday, October 10, 2024

Indirect Constraints On Dark Matter-Ordinary Matter Interactions

This very clever analysis strongly constrains the existence of non-gravitational long range interactions between dark matter and ordinary matter. 

Yet the strong correlation between ordinary matter distributions and inferred dark matter halos, which follows necessarily from, among other things, the radial acceleration relation (RAR), requires exactly such a force. So this result supports the conclusion that dark matter particles do not exist and that dark matter phenomena must instead be a function of modified gravity or a fifth force.
Dark matter's existence is known thanks to its gravitational interaction with Standard Model particles, but it remains unknown whether this is the only force present between them. While many searches for such new interactions with dark matter focus on short-range, contact-like interactions, it is also possible that there exist weak, long-ranged forces between dark matter and the Standard Model. In this work, we present two types of constraints on such new interactions. 
First, we consider constraints arising from the fact that such a force would also induce long range interactions between Standard Model particles themselves, as well as between dark matter particles themselves. Combining the constraints on these individual forces generally sets the strongest constraints available on new Standard Model-dark matter interactions. 
Second, we consider the possibility of constraining new long-ranged interactions between dark matter and the Standard Model using the effects of dynamical friction in ultrafaint dwarf galaxies, especially Segue I. Such new interactions would accelerate the transfer of kinetic energy from stars to their surrounding dark matter, slowly reducing their orbits; the present-day stellar half-light radius of Segue I therefore allows us to exclude new forces which would have reduced stars' orbital radii below this scale by now.
Zachary Bogorad, Peter Graham, Harikrishnan Ramani, "Constraints on Long-Ranged Interactions Between Dark Matter and the Standard Model", arXiv:2410.07324 (October 9, 2024).

Wednesday, October 9, 2024

The Nobel Prize In Chemistry In 2024

Commentators have noted the AI trend in both the physics and the chemistry awards this year. 

The Nobel Prize in Chemistry was awarded on Wednesday to three scientists for discoveries that show the potential of advanced technology, including artificial intelligence, to predict the shape of proteins, life’s chemical tools, and to invent new ones.

The laureates are: Demis Hassabis and John Jumper of Google DeepMind, who used A.I. to predict the structure of millions of proteins; and David Baker of the University of Washington, who used computer software to invent a new protein.

From the New York Times

Tuesday, October 8, 2024

Does Gravity Screen UV Divergences?

This article falls into the category of technical but potentially important. And the fact that it is framed as an homage to Stanley Deser, one of the leading late 20th century general relativity theorists, means that it could gain traction in the face of sociologically rooted resistance to paradigm-breaking ideas in astrophysics.

Ultraviolet (i.e. high energy) divergences are a major barrier to quantum gravity theories, rendering gravity non-renormalizable. This article explores the possibility that non-perturbative approaches to general relativity could remove that roadblock. Deur's approach to explaining dark matter and dark energy phenomena also relies on non-perturbative GR effects.
In the peculiar manner by which physicists reckon descent, this article is by a "child" and "grandchild" of the late Stanley Deser. We begin by sharing reminiscences of Stanley from over 40 years. Then we turn to a problem which was dear to his heart: the prospect that gravity might nonperturbatively screen its own ultraviolet divergences and those of other theories. After reviewing the original 1960 work by ADM, we describe a cosmological analogue of the problem and then begin the process of implementing it in gravity plus QED.
R. P. Woodard and B. Yesilyurt, "The Other ADM", arXiv:2410.05213 (October 7, 2024).

The introduction begins as follows (in selected parts):
Stanley Deser was the Grand Old Man of quantum gravity. Everyone in the field knew him, and the vast majority of us loved him. His life was a testament to the persistence of scientific inquiry, optimism and simple humanity over the course of a turbulent century. . . . He graduated college at 18, and took a Ph.D from Harvard at age 22. In a career spanning seven decades he is credited with hundreds of publications, including 8 papers and a book written after the age of 90. 
. . . 

Gravity needed him: despite the lovely geometrical formulation of its early days, general relativity was not then a proper field theory. There was no canonical formalism, with its careful enumeration of the degrees of freedom and their contribution to the total energy. Hence there was no way to prove classical stability, no way to develop numerical integration techniques and no way to even begin thinking about quantization. In a memorable sequence of papers with Dick Arnowitt and Charlie Misner, Stanley sorted out the gauge issue, identified canonical variables and defined an energy functional for asymptotically flat geometries. Stanley would return to the problem of gravitational energy later on in his career. 

With Claudio Teitelboim (now Bunster) he established the stability of supergravity. And he collaborated with Larry Abbott to prove the classical stability of gravity with a positive cosmological constant. Stanley was a consummate collaborator: 

• With David Boulware he showed that endowing the graviton with a mass inevitably results in a ghost mode, provided that the theory has a smooth perturbative limit. 

• With Peter van Nieuwenhuizen he extended the work of ’t Hooft and Veltman to show that renormalizability is lost at one loop when general relativity is combined with either electromagnetism, Yang-Mills theory, or Dirac fermions. 

• With Bruno Zumino he showed that consistently coupling a spin 3/2 gravitino to gravity produces a locally supersymmetric theory. The two then applied their formalism to string theory and to the breaking of supersymmetry. 

• With Mike Duff and Chris Isham he identified the first true conformal anomaly, which led to a classification scheme with Adam Schwimmer. 

• With Roman Jackiw and Stephen Templeton he showed that adding a dimensionally-reduced Chern-Simons term to Yang-Mills or gravity results in massive particles of spin 1 and 2, respectively. 

• With Gerard ’t Hooft and Roman Jackiw he explained how to understand general relativity in 2 + 1 dimensions. 

• With Cedric Deffayet and Gilles Esposito-Farese he extended flat space Galileons to curved space. . . .

Then the two authors turn to a problem which long fascinated Stanley: the possibility that quantum gravity might regulate its own ultraviolet divergence problem and even those of other theories. We first review ADM’s prescient and thought-provoking demonstration that classical gravitation cancels the self-energy divergence of a point charge. This is “the other ADM” of our title. We then describe a quantum field theoretic analogue of the same basic effect in the context of inflationary cosmology. The next section discusses the prospects for implementing the ADM mechanism in general relativity plus QED on an asymptotically flat background. 

Part 3 of the paper explains this ADM mechanism:

The idea that gravity might regulate divergences is based on the fact that gravitational interaction energy is negative. 

For example, this makes the mass of the Earth-Moon system slightly smaller than the sum of their masses, even when one includes the kinetic energy of their orbital motion, 

M(EM)=M(E)+M(M)−(GM(E)M(M)/(2c^2R)). (1) 

The decrease works out to about 6 × 10^11 kilograms, which makes for a fractional reduction of 10^−13. Note that the fractional reduction becomes larger as the orbital radius R decreases. In 1960 Arnowitt, Deser and Misner quantified the mechanism in the context of a classical (in the sense of non-quantum) charged and gravitating point particle. Although they solved the full general relativistic constraints and then computed the ADM mass, their result can be understood using a simple model that they devised. 
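[Ed. Equation (1) is easy to check with rounded textbook values; the back-of-the-envelope script below is my own, not the paper's:]

```python
# Back-of-the-envelope check of the Earth-Moon gravitational mass
# deficit in equation (1), using rounded textbook constants.

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M_E = 5.972e24   # Earth mass, kg
M_M = 7.342e22   # Moon mass, kg
R = 3.844e8      # mean Earth-Moon distance, m

delta_M = G * M_E * M_M / (2 * c**2 * R)
print(f"mass deficit: {delta_M:.2e} kg")             # ~4e11 kg
print(f"fractional:   {delta_M / (M_E + M_M):.1e}")  # ~7e-14
```

[Ed. These rounded inputs give a deficit of roughly 4 × 10^11 kg, the same order of magnitude as the paper's quoted 6 × 10^11 kg, and a fractional reduction of order 10^−13, as stated.]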

Suppose the particle has a bare mass M(0) and charge Q, and is regulated as a spherical shell of radius R. Then its rest mass energy might be expressed as, 

M(R)c^2 = M(0)c^2 + Q^2/(8πε(0)R)−(GM^2(R)/(2R)), (2) 

where the single concession to relativity is that the Newtonian gravitational interaction energy has been evaluated using the total mass. Of course the quadratic equation (2) can be solved to give, 

M(R) = c^2R/G(√(1+(2GM(0)/(Rc^2)) + (GQ^2/(4πε(0)R^2c^4))) − 1). (3)

The unregulated limit is finite and independent of the bare mass,  

lim R→0 M(R) = √(Q^2/(4πε(0)G)). (4)

Three crucial points about the result (4) deserve mention: 

• It is finite; 

• It is independent of the bare mass M(0), as long as that is finite; and 

• It is nonperturbative. 
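[Ed. All three points are easy to verify numerically. The short script below, my own sketch rather than anything from the paper, evaluates the exact solution (3) for two bare masses that differ by a factor of a million as the regulator radius R shrinks, and shows both converging to the same finite ADM value (4). Convergence sets in only at absurdly small, sub-Planckian radii, a hint of why the effect is invisible at any order in perturbation theory:]

```python
import math

# Numerical check of the ADM point-charge result: equation (3) for the
# regulated mass M(R), converging to the bare-mass-independent limit (4).
# SI units; Q is set to the electron charge purely for concreteness.

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
eps0 = 8.854e-12  # F/m
Q = 1.602e-19     # C

def M_exact(R, M0):
    """Equation (3): exact regulated mass of a charged shell."""
    x = 2 * G * M0 / (R * c**2) + G * Q**2 / (4 * math.pi * eps0 * R**2 * c**4)
    return (R * c**2 / G) * (math.sqrt(1 + x) - 1)

M_adm = math.sqrt(Q**2 / (4 * math.pi * eps0 * G))  # equation (4)

# Two bare masses differing by a factor of a million; both converge.
for R in (1e-45, 1e-55, 1e-65):
    print(f"R = {R:.0e} m:  M0 = 1 kg -> {M_exact(R, 1.0):.6e} kg,  "
          f"M0 = 1e6 kg -> {M_exact(R, 1e6):.6e} kg")
print(f"ADM limit (4): {M_adm:.6e} kg")
```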

Of course finiteness results from the fact that gravitational interaction energy is negative. This is evident from expression (2). The Q^2/(8πε(0)R) term means that compressing a shell of charge costs energy; however, the −GM^2/(2R) term signals that gravity is able to pay the bill, no matter how high. 

The fact that any fixed M(0) drops out is also evident from expression (2). Note that this is not at all how a conventional particle physicist would have approached the problem. Our conventional colleague would have regarded the total mass M as a measured quantity and then required the bare mass to depend upon the regulating parameter R so as to force the result to agree with measurement, 

M(0)(R) = M(meas) − Q^2/(8πε(0)Rc^2) + GM(meas)^2/(2Rc^2) . (5) 

That is how renormalization works. It is unavoidable without gravity, but the presence of gravity opens up the fascinating prospect of computing fundamental particle masses from first principles. Setting Q = e in expression (4) gives an impossibly large result for the electron, 

√(e^2/(4πε(0)G)) = √(e^2/(4πε(0)ℏc)) × √(ℏc/G) = √α × M(Planck). (6)

However, it is well known that quantum field theoretic effects often soften the linear self-energy divergence of a classical electron to a logarithmic divergence. 

This opens the possibility of the true relation containing exponentials. For example, one gets within a factor of four with, 

M(electron) = √α × M(Planck) × exp(−1/(e^1 × α)) ≈ 0.134 MeV, (7)

[Ed. the measured value of the electron mass is 0.51099895000 ± 0.00000000015 MeV]

where e^1≈2.71828 is the base of the natural logarithm. The electron also carries weak charge, which should enter at some level. Perhaps all fundamental particle masses can be computed from first principles? One might even hope that the mysterious generations of the Standard Model appear as “excited states” in such a picture.
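[Ed. Equation (7) is easy to verify numerically with rounded CODATA values; the check below is mine:]

```python
import math

# Numerical check of equation (7):
# M(electron) ~ sqrt(alpha) * M(Planck) * exp(-1 / (e * alpha)).

alpha = 7.2973525693e-3    # fine structure constant
M_planck_MeV = 1.22089e22  # Planck mass in MeV/c^2

m_e_estimate = math.sqrt(alpha) * M_planck_MeV * math.exp(-1 / (math.e * alpha))
print(f"estimate: {m_e_estimate:.3f} MeV")  # ~0.134 MeV
print(f"measured: 0.511 MeV, a factor of {0.511 / m_e_estimate:.1f} larger")
```

[Ed. The estimate indeed comes out to about 0.134 MeV, within a factor of four of the measured electron mass, as the authors say.]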

The nonperturbative nature of the ADM mechanism is evident from the fact that (4) goes like the square root of the fine structure constant and actually diverges as Newton’s constant goes to zero. The perturbative result comes from expanding the square root of (3) in powers of Q^2 and G, 

M(R) = (M(0) + Q^2/(8πε(0)Rc^2)) × [1
− (1/4)(2GM(0)/(Rc^2) + GQ^2/(4πε(0)R^2c^4))
+ (1/8)(2GM(0)/(Rc^2) + GQ^2/(4πε(0)R^2c^4))^2
− (5/64)(2GM(0)/(Rc^2) + GQ^2/(4πε(0)R^2c^4))^3
+ ...]. (8) 

This is a series of ever-higher divergences. Of course perturbation theory becomes invalid for large values of the expansion parameter, 

2GM(0)/(Rc^2) + GQ^2/(4πε(0)R^2c^4). (9) 

Perhaps the same problem invalidates the use of perturbation theory in quantum general relativity, which would show cancellations like (4) if only we could devise a better approximation scheme? It is hopeless trying to perform a genuinely nonperturbative computation. However, a glance at the expansion (8) shows what goes wrong with conventional perturbation theory: gravity has no chance to “keep up” with the gauge sector. The lowest electromagnetic divergence is Q^2/(8πε(0)Rc^2), whereas gravity’s first move to cancel comes at order −[Q^2/(4πε(0)Rc^2)]^2 × G/(8Rc^2).
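[Ed. The breakdown of the series is easy to exhibit numerically. The sketch below, my own illustration reusing the constants from the earlier script, compares the exact result (3) with successive truncations of the expansion (8) at a radius small enough that the expansion parameter (9) is huge:]

```python
import math

# Compare the exact ADM mass (3) with truncations of the perturbative
# series (8). Once the expansion parameter (9) exceeds 1, the series
# terms grow without bound while the exact answer stays finite.

G, c, eps0, Q = 6.674e-11, 2.998e8, 8.854e-12, 1.602e-19
M0 = 1.0e-9   # illustrative bare mass, kg
R = 1e-40     # regulator radius, m, chosen deep in the x >> 1 regime

x = 2 * G * M0 / (R * c**2) + G * Q**2 / (4 * math.pi * eps0 * R**2 * c**4)
exact = (R * c**2 / G) * (math.sqrt(1 + x) - 1)

# Taylor series: sqrt(1+x) - 1 = x/2 - x^2/8 + x^3/16 - 5*x^4/128 + ...
coeffs = [0.5, -1.0 / 8.0, 1.0 / 16.0, -5.0 / 128.0]
partial = 0.0
print(f"expansion parameter (9): x = {x:.3e}")
print(f"exact M(R) from (3):     {exact:.3e} kg")
for n, a in enumerate(coeffs, start=1):
    partial += a * x**n
    print(f"series through x^{n}:      {(R * c**2 / G) * partial:.3e} kg")
```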

What is needed is a reorganization of perturbation theory in which the gravitational response comes at the “same order” as the gauge sector response. Several studies have searched for such an expansion without success. 

The paper concludes with the material quoted below:

Stanley Deser was a great physicist and a good man who left the world a better place. Section 1 reviews some of his most important contributions to physics while section 2 presents personal reminiscences from one of his students. The remainder of the paper is devoted to one motivation for Deser’s early fascination with quantum gravity: the possibility that it might regulate its own divergences and those of other theories. This possibility arises because the gravitational interaction energy is negative, and sourced by the same sectors which diverge.

Section 3 reviews the example ADM discovered of how classical (that is, non-quantum) general relativity cancels the famous linear divergence of a point charged particle. The final result (4) is not only finite but also independent of the bare mass, as long as that is finite. This raises the fascinating prospect of not only solving the problem of quantum gravity but also computing fundamental particle masses from first principles. It is impossible to overstate the revolution this would work on our perception of quantum gravity. From a sterile issue of logical consistency, without observable consequences at ordinary energies, and only perturbatively small effects even at the fantastic scales of primordial inflation, quantum gravity would be thrust to center stage. Every measurement of a fundamental particle mass would represent a sensitive check. One might even hope that the mysterious 2nd and 3rd generations of the Standard Model emerged as excited states of the 1st generation.

Of course there is a catch: one must make the calculation nonperturbatively in an interacting quantum field theory. This is evident from how its classical limit (4) depends upon α and G. There seems little hope of ever being able to perform an exact computation in an interacting 3 + 1 dimensional quantum field theory. What is needed instead is a way of reorganizing conventional perturbation theory so that the negative energy constrained degrees of freedom have a chance to “keep up” with the positive energy, unconstrained degrees of freedom. The key to this seems to be solving the Hamiltonian Constraint. 

Section 4 describes how one accomplishes just that in the theory of primordial inflation. Fittingly, the solution (24) is given using ADM variables. Although it is not clear if this form regulates the usual ultraviolet divergences of gravity with a scalar, the weak field expansion (25) of the gauge-fixed and constrained action does show an ADM-like erasure of the scalar perturbation except for those parts protected by the nonzero first slow roll parameter ε.

Section 5 represents our initial attempt to implement the ADM mechanism for Quantum Electrodynamics + General Relativity. Although it is clear that the Hamiltonian Constraint can be solved exactly, a number of issues remain, the most important of which is how to extract “background” parts of the kinetic and potential energies so as to keep quantum corrections small. We look forward to further study of this system.