Thursday, July 30, 2020

Another Blow To Primordial Black Hole Dark Matter

Primordial black hole dark matter theories are dying a death of a thousand cuts at the hands of observational evidence. This is one example of that process playing out. 

The odd normative phrasing of the final sentence of the abstract is probably just a "lost in translation" artifact of a paper written by non-native speakers of English.
The frequent detection of binary mergers of ∼30M⊙ black holes (BHs) by the Laser Interferometer Gravitational-Wave Observatory (LIGO) rekindled researchers' interest in primordial BHs (PBHs) being dark matter (DM). In this work, we looked at PBHs distributed as DM with a monochromatic mass of 30M⊙ and examined the encounter-capture scenario of binary formation, where the densest central region of DM halo dominates. 
Thus, we paid special attention to the tidal effect by the supermassive black hole (SMBH) present. In doing so, we discovered a necessary tool called loss zone that complements the usage of loss cone. We found that the tidal effect is not prominent in affecting binary formation, which also turned out insufficient in explaining the totality of LIGO's event rate estimation, especially due to a microlensing event constraining the DM fraction in PBH at the mass of interest from near unity to an order smaller. Meanwhile, early-universe binary formation scenario proves so prevailing that the LIGO signal in turn constrains the PBH fraction below one percent. Thus, people should put more faith in alternative PBH windows and other DM candidates.

Neutrino Physics Hints Go Away

The data pointing to the neutrino mass ordering, to its CP violating parameter, and to tensions between two different ways of measuring one of the neutrino oscillation parameters have gotten weaker.

To recap, the big open questions in neutrino physics are: 

1. Is there a normal ordering or inverted ordering of neutrino masses? 

The downgraded 2.7 sigma preference (which includes the Super-Kamiokande atmospheric neutrino data) implies that there is roughly a 98% chance that the mass ordering is normal rather than inverted.

2. What is the rest mass of the lightest neutrino mass eigenstate? 

There is good reason to believe that it is significantly less than the mass difference between the lightest and second lightest neutrino mass eigenstates (which in a normal ordering is about 8.66 milli-electron volts, a.k.a. meV) and is greater than zero.
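
As a sanity check on that figure, here is the arithmetic as a minimal sketch, assuming the lightest mass eigenstate has exactly zero mass and taking a representative global-fit value of the solar mass splitting Δm²₂₁:

```latex
% Illustrative only: if m_1 = 0, the second lightest mass eigenstate has mass
% sqrt(Delta m^2_21), using a representative solar mass splitting value.
\Delta m^2_{21} \approx 7.5 \times 10^{-5}\ \mathrm{eV}^2
\quad\Longrightarrow\quad
m_2 \approx \sqrt{\Delta m^2_{21}} \approx 8.66\ \mathrm{meV}.
```

If the lightest eigenstate instead has a non-zero mass m₁, the second lightest mass is √(m₁² + Δm²₂₁), and the mass difference between the two is somewhat smaller than this maximum value.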

3. Is PMNS matrix parameter θ23 a little more than 45º or a little less (with the same magnitude of difference from 45º in either case)?  

There is roughly an 85% chance that it is more, and roughly a 15% chance that it is less, given the 1.6 sigma preference for the higher value.
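
The rough percentages quoted in these answers are a rounded reading of the significance levels reported in the paper. As an illustration of how such conversions are often done (a sketch that treats an n-sigma preference as a two-sided Gaussian confidence level, which is itself a simplifying assumption), the standard conversion looks like this:

```python
# Rough illustration: map an n-sigma preference onto a two-sided Gaussian
# confidence level. The exact probability attached to a sigma value depends
# on assumptions (one- vs. two-sided, how the chi-squared difference is read),
# so these figures are only indicative.
from scipy.stats import norm

def two_sided_confidence(n_sigma):
    """Probability mass within +/- n_sigma of a standard normal distribution."""
    return norm.cdf(n_sigma) - norm.cdf(-n_sigma)

for sigma in (1.6, 2.7):
    print(f"{sigma} sigma ~ {100 * two_sided_confidence(sigma):.1f}%")
# 1.6 sigma ~ 89.0%
# 2.7 sigma ~ 99.3%
```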

4. What is the CP violating phase of the PMNS matrix? 

The available observations suggest that CP violation due to the CP violating phase of the PMNS matrix is more likely to happen than not, and could be maximal, but the measurement uncertainty is too great to be very specific. There are myriad theoretical predictions for this phase, which collectively span almost every conceivable value of the parameter.

5. Are there non-sphaleron processes like neutrinoless double beta decay involving neutrinos that do not conserve lepton number? 

The Standard Model answer to this question is "no", but the existing experimental tests aren't powerful enough to resolve the question because the predicted number of neutrinoless double beta decay events is very small for neutrino masses of the magnitude that the combined available evidence, especially from cosmology, favors. In most of the beyond the Standard Model theories that are discussed in published scientific journal articles, the answer is "yes".

6. By what means do neutrinos acquire their rest mass? 

There is not a Standard Model answer to this question. The other fundamental particles of the Standard Model acquire their mass via interactions with the Higgs field, but due to the absence of a right handed neutrino and left handed anti-neutrino in the Standard Model, an extension of this mechanism to the neutrinos is not obvious or straightforward. Until neutrino oscillation and neutrino mass were confirmed, the Standard Model assumed that neutrinos were massless. There is still some semantic dispute over whether neutrino mass and neutrino oscillation are actually part of the Standard Model of Particle Physics in the narrow sense of that term.

7. Do sterile neutrinos that oscillate with ordinary neutrinos exist? 

The Standard Model answer to the question is "no", and quite a bit of evidence from multiple sources (both cosmology and terrestrial experiment data) supports this answer. But there is some experimental evidence from neutrino oscillation data using neutrinos emitted by nuclear reactors (with the data from different experiments being somewhat inconsistent) that supports the existence of at least one sterile neutrino that oscillates with ordinary active neutrinos and such a neutrino is a dark matter particle candidate.

8. What is the ratio of neutrinos to antineutrinos in the Universe? 

We don't have a reliable measurement of any great precision. The electron neutrino asymmetry could be on the order of 3% and the muon and tau neutrino asymmetry could be on the order of 50%.

9. Is some other aspect of the Standard Model description of neutrinos incorrect? 

The Standard Model answer is obviously "no" and there is no strong evidence to suggest otherwise. The most plausible deviations from the Standard Model are discussed in the questions above. Experimental searches for "non-standard interactions" (NSI) of neutrinos have not produced results that differ in a statistically significant manner from the null hypothesis.

10. Finally, it is always desirable to measure each of the seven experimentally measured neutrino property parameters of the Standard Model more precisely.

The CP violating phase of the PMNS matrix is the least precisely measured experimentally determined constant in the entire Standard Model (or, for that matter, in general relativity, the other part of "Core Theory").

11. Bonus: Why do the seven experimentally measured neutrino property parameters take the values that they do? 

There is not a Standard Model answer to this question, and the Standard Model does not aspire to provide one. It does not explain why any of its experimentally measured fundamental constants take on the values that they do, except to demonstrate that a few of them (like the electromagnetic coupling constant, the weak force coupling constant, the W boson mass, the Z boson mass, and the Higgs vev) are functionally related to each other.
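
For example, at tree level (and neglecting radiative corrections), the electroweak sector ties several of those constants together, so that measuring a sufficient subset fixes the rest at that level of approximation:

```latex
% Tree-level electroweak relations (radiative corrections neglected):
e = g \sin\theta_W = g' \cos\theta_W, \qquad
m_W = \tfrac{1}{2}\, g\, v, \qquad
m_Z = \tfrac{1}{2}\, v \sqrt{g^2 + g'^2} = \frac{m_W}{\cos\theta_W}, \qquad
\frac{G_F}{\sqrt{2}} = \frac{1}{2 v^2}.
```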

The preprint and its abstract are as follows:

Our herein described combined analysis of the latest neutrino oscillation data presented at the Neutrino2020 conference shows that previous hints for the neutrino mass ordering have significantly decreased, and normal ordering (NO) is favored only at the 1.6σ level. Combined with the χ2 map provided by Super-Kamiokande for their atmospheric neutrino data analysis the hint for NO is at 2.7σ. 

The CP conserving value δCP=180∘ is within 0.6σ of the global best fit point. Only if we restrict to inverted mass ordering, CP violation is favored at the ∼3σ level. 

We discuss the origin of these results - which are driven by the new data from the T2K and NOvA long-baseline experiments -, and the relevance of the LBL-reactor oscillation frequency complementarity. 

The previous 2.2σ tension in Δm^2(21) preferred by KamLAND and solar experiments is also reduced to the 1.1σ level after the inclusion of the latest Super-Kamiokande solar neutrino results. 

Finally we present updated allowed ranges for the oscillation parameters and for the leptonic Jarlskog determinant from the global analysis.

Ivan Esteban, M.C. Gonzalez-Garcia, Michele Maltoni, Thomas Schwetz, Albert Zhou "The fate of hints: updated global analysis of three-flavor neutrino oscillations" arXiv (July 27, 2020). This is largely in accord with another recent review of the same matters by different authors.


Other conclusions from the body text:


Despite slightly different tendencies in some parameter regions, T2K, NOvA and reactor experiments are statistically in very good agreement with each other. We have performed tests of various experiment and analysis combinations, which all show consistency at a CL below 2σ. 
We obtain a very mild preference for the second octant of θ23, with the best fit point located at sin^2(θ23) = 0.57 (slightly more non-maximal than the best fit of 0.56 in NuFIT 4.1), but with the local minimum in the first octant at sin^2(θ23) = 0.455 at a ∆χ² = 0.53 (2.2) without (with) SK-atm. Maximal mixing (sin^2(θ23) = 0.5) is disfavored with ∆χ² = 2.4 (3.9) without (with) SK-atm. 
The best fit for the complex phase is at δCP = 195◦ . Compared to previous results (e.g., NuFIT 4.1), the allowed range is pushed towards the CP conserving value of 180◦ , which is now allowed at 0.6σ with or without SK-atm. If we restrict to IO, the best fit of δCP remains close to maximal CP violation, with CP conservation being disfavored at around 3σ.

LambdaCDM Can't Explain Why The Milky Way's Satellite Galaxies Are In A Plane

The Standard Model of Cosmology also known as the lambdaCDM model is repeatedly in conflict with galaxy scale phenomena. The "Planes of Satellite Galaxies Problem" is one example of this reality. This result is expected and predicted, however, in many modified gravity theories.

Comparing satellite dwarf galaxies with ΛCDM simulations also results in numerous other small-scale problems (Missing Satellites, Core-Cusp, Too-Big-To-Fail) all of which are affected by baryonic physics.

We study the correlation of orbital poles of the 11 classical satellite galaxies of the Milky Way, comparing results from previous proper motions with the independent data by Gaia DR2. Previous results on the degree of correlation and its significance are confirmed by the new data. A majority of the satellites co-orbit along the Vast Polar Structure, the plane (or disk) of satellite galaxies defined by their positions. The orbital planes of eight satellites align to <20∘ with a common direction, seven even orbit in the same sense. Most also share similar specific angular momenta, though their wide distribution on the sky does not support a recent group infall or satellites-of-satellites origin. 
The orbital pole concentration has continuously increased as more precise proper motions were measured, as expected if the underlying distribution shows true correlation that is washed out by observational uncertainties. The orbital poles of the up to seven most correlated satellites are in fact almost as concentrated as expected for the best-possible orbital alignment achievable given the satellite positions. 
Combining the best-available proper motions substantially increases the tension with ΛCDM cosmological expectations: <0.1 per cent of simulated satellite systems in IllustrisTNG contain seven orbital poles as closely aligned as observed. 
Simulated systems that simultaneously reproduce the concentration of orbital poles and the flattening of the satellite distribution have a frequency of <0.1 per cent for any number of k > 3 combined orbital poles, indicating that these results are not affected by a look-elsewhere effect. 
This compounds the Planes of Satellite Galaxies Problem.
Marcel S. Pawlowski, Pavel Kroupa "The Milky Way's Disk of Classical Satellite Galaxies in Light of Gaia DR2" arXiv (November 12, 2019) (Accepted for publication in MNRAS).

Antimatter Does Not Have Negative Gravitational Mass

Banik and Kroupa come up with a clever way to rule out a theory rooted in the notion that antimatter has negative gravitational mass without directly measuring it, using solar system based constraints.
The gravitational dipole theory of Hajdukovic (2010) is based on the hypothesis that antimatter has a negative gravitational mass and thus falls upwards on Earth. 
Astrophysically, the model is similar to but more fundamental than Modified Newtonian Dynamics (MOND), with the Newtonian gravity gN towards an isolated point mass boosted by the factor ν=1+(α/x)tanh(√x/α), where x≡gN/a0 and a0=1.2×10^−10 m/s² is the MOND acceleration constant. We show that α must lie in the range 0.4−1 to acceptably fit galaxy rotation curves. 
In the Solar System, this interpolating function implies an extra Sunwards acceleration of αa0. This would cause Saturn to deviate from Newtonian expectations by 7000(α/0.4) km over 15 years, starting from known initial position and velocity on a near-circular orbit. 
We demonstrate that this prediction should not be significantly altered by the postulated dipole haloes of other planets due to the rather small region in which each planet's gravity dominates over that of the Sun. The orbit of Saturn should similarly be little affected by a possible ninth planet in the outer Solar System and by the Galactic gravity causing a non-spherical distribution of gravitational dipoles several kAU from the Sun. 
Radio tracking of the Cassini spacecraft orbiting Saturn yields a 5σ upper limit of 160 metres on deviations from its conventionally calculated trajectory. These measurements imply a much more stringent upper limit on α than the minimum required for consistency with rotation curve data. Therefore, no value of α can simultaneously match all available constraints, falsifying the gravitational dipole theory in its current form at extremely high significance.
Indranil Banik, Pavel Kroupa "Solar System limits on gravitational dipoles" arXiv (June 10, 2020) (Accepted for publication by the Monthly Notices of the Royal Astronomical Society.)
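
To make the abstract's interpolating function concrete, here is an illustrative sketch (not code from the paper) of the boost factor ν(x) = 1 + (α/x)·tanh(√x/α) in its two limits: deep in the MOND regime (x ≪ 1) it is close to the usual 1/√x boost that fits rotation curves, while in the strong-gravity regime (x ≫ 1) the extra acceleration tends to the constant α·a0 Sunwards pull that the Cassini tracking data constrain.

```python
# Illustrative sketch (not from the paper): the interpolating function from the
# abstract, nu(x) = 1 + (alpha/x) * tanh(sqrt(x)/alpha), with x = g_N / a0.
import numpy as np

a0 = 1.2e-10      # MOND acceleration constant, m/s^2
alpha = 0.4       # lower end of the range said to fit rotation curves

def nu(x, alpha=alpha):
    return 1.0 + (alpha / x) * np.tanh(np.sqrt(x) / alpha)

# Deep-MOND regime (x << 1): nu is close to the usual 1/sqrt(x) boost.
x_small = 1e-4
print(nu(x_small), 1.0 / np.sqrt(x_small))        # ~101 vs. ~100

# Strong-gravity regime (x >> 1): the extra acceleration (nu - 1) * g_N
# approaches the constant alpha * a0 that drives the Saturn-orbit test.
x_large = 1e8
g_N = x_large * a0
print((nu(x_large) - 1.0) * g_N, alpha * a0)      # both ~4.8e-11 m/s^2
```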

Supermassive Black Holes Form Early

In this scenario, supermassive black hole seeds can form within about 100 to 200 million years of the first stars, i.e., within the first few hundred million years after the Big Bang (the Universe is now about 13.8 billion years old).
The observation of quasars at very high redshift such as Poniuaena is a challenge for models of super-massive black hole (SMBH) formation. This work presents a study of SMBH formation via known physical processes in star-burst clusters formed at the onset of the formation of their hosting galaxy. 
While at the early stages hyper-massive star-burst clusters reach the luminosities of quasars, once their massive stars die, the ensuing gas accretion from the still forming host galaxy compresses its stellar black hole (BH) component to a compact state overcoming heating from the BH--BH binaries such that the cluster collapses, forming a massive SMBH-seed within about a hundred Myr. Within this scenario the SMBH--spheroid correlation emerges near-to-exactly. The highest-redshift quasars may thus be hyper-massive star-burst clusters or young ultra-compact dwarf galaxies (UCDs), being the precursors of the SMBHs that form therein within about 200 Myr of the first stars. 
For spheroid masses <10^9.6 Msun a SMBH cannot form and instead only the accumulated nuclear cluster remains. The number evolution of the quasar phases with redshift is calculated and the possible problem of missing quasars at very high redshift is raised. SMBH-bearing UCDs and the formation of spheroids are discussed critically in view of the high redshift observations. 
A possible tension is found between the high star-formation rates (SFRs) implied by downsizing and the observed SFRs, which may be alleviated within the IGIMF theory and if the downsizing times are somewhat longer.
Pavel Kroupa, Ladislav Subr, Tereza Jerabkova, Long Wang "Very high redshift quasars and the rapid emergence of super-massive black holes" arXiv (July 28, 2020) (MNRAS, in press).

Footnote: Demerits to Professor Kroupa (whose work overall is excellent) for the incredibly long run-on sentence in the abstract (part of which is highlighted above).


Wednesday, July 29, 2020

Larger Stonehenge Stones Came From 15 Miles Away

A long-standing mystery related to Stonehenge has been solved, and the answer is less fabulous than we might have hoped. 
The mineral origin of Stonehenge is an ancient mystery now solved, thanks to the solving of a more contemporary one: who absconded with core samples from a crumbling standing stone, drilled out in 1959 so it could be reinforced with rebar? One of the three cores was recently returned by an 89-year-old worker from the diamond company that performed the work six decades ago, and tests show that it came from a quarry only 15 miles north of the monument. 
. . . Stonehenge's smaller bluestones were already proven to be from a quarry in Wales. Their 142-mile journey probably involved a leg by boat, a logistical feat that would have been much more challenging for the larger cuts.
From here

ATLAS Finds No Evidence Of Lepton Universality Violations In W-Boson Decays

In the Standard Model of Particle Physics, electrons, muons and tau leptons have identical properties except rest mass. But there is mixed experimental evidence from W boson mediated decays of B mesons (i.e., two quark composite particles containing bottom quarks) suggesting that decays to the different generations of charged leptons do not occur at the relative rates that the Standard Model predicts (loosely analogous to the way that quarks of different generations mix pursuant to the CKM matrix). 

This study is the first and only experimental data point from the Large Hadron Collider addressing the question from the perspective of the ratio of W boson decays to tau leptons versus muons. It does so in a "cleaner" experimental design with less room for unexpected systematic errors or theoretical issues than the B meson decay experiments that have shown signs of lepton universality violations. 

The results support the Standard Model rule known as "lepton universality", with a result within less than one standard deviation of the Standard Model expectation. There are multiple models for what could cause lepton universality violations in B meson decays, however, and this measurement can't rule out all of them definitively.
The Standard Model of particle physics encapsulates our current best understanding of physics at the smallest scales. A fundamental axiom of this theory is the universality of the couplings of the different generations of leptons to the electroweak gauge bosons. 
The measurement of the ratio of the rate of decay of W bosons to τ-leptons and muons, R(τ/μ)=B(W→τντ)/B(W→μνμ), constitutes an important test of this axiom. A measurement of this quantity with a novel technique using di-leptonic tt¯ events is presented based on 139 fb^−1 of data recorded with the ATLAS detector in proton--proton collisions at √s=13 TeV. Muons originating from W bosons and those originating from an intermediate τ-lepton are distinguished using the lifetime of the τ-lepton, through the muon transverse impact parameter, and differences in the muon transverse momentum spectra. 
The value of R(τ/μ) is found to be 0.992±0.013[±0.007(stat)±0.011(syst)] and is in agreement with the hypothesis of universal lepton couplings as postulated in the Standard Model. This is the most precise measurement of this ratio, and the only such measurement from the Large Hadron Collider, to date.
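
As a quick numerical sanity check on those quoted figures (an illustrative sketch, not anything from the paper itself): the total uncertainty is just the statistical and systematic components combined in quadrature, and the LEP tension quoted in the body text further below is the deviation of the LEP value from unity in units of the LEP uncertainty.

```python
# Illustrative sanity checks on the quoted numbers.
from math import sqrt

stat, syst = 0.007, 0.011
total = sqrt(stat**2 + syst**2)                    # combine in quadrature
print(f"ATLAS total uncertainty: {total:.3f}")     # ~0.013

# LEP combination quoted in the body text: R(tau/mu) = 1.070 +/- 0.026,
# versus the Standard Model expectation of ~1.
lep_value, lep_err = 1.070, 0.026
print(f"LEP deviation from unity: {(lep_value - 1) / lep_err:.1f} sigma")   # ~2.7
```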

The body text of the paper provides the following background to this experiment:
It is a fundamental axiom and remarkable feature of the Standard Model (SM) that the couplings of the electroweak gauge bosons (W, Z) to charged leptons, g(l) (l = e, µ, τ), are independent of the mass of the leptons. This fundamental assumption is referred to as lepton-flavour universality and is tested in this paper by measuring the ratio of the fraction of on-shell W boson decays, branching ratios (B), to τ-leptons and muons, R(τ/µ) = B(W → τντ)/B(W → µνµ). The measurement exploits the large number of top and anti-top quark pair (tt¯) events produced in proton-proton (pp) collisions at the Large Hadron Collider (LHC). Given the large B(t → W q), close to 100%, this gives a very large sample of W boson pairs. These are used in a tag and probe technique to obtain a large sample of clean and unbiased W boson decays to muons and τ-leptons. The τ-leptons are identified through their decay to muons. The displacement of the τ decay vertex and the different muon transverse momentum (pT) spectra are used to distinguish between muons from the W → τντ → µνµντ ντ and W → µνµ processes, to extract R(τ/µ). This is achieved by utilising the precise reconstruction of muon tracks obtainable by the ATLAS experiment. 
Previously, R(τ/µ) has been measured by the four experiments at the Large Electron–Positron Collider (LEP), yielding a combined value of 1.070 ± 0.026. This deviates from the SM expectation of unity[1] by 2.7σ, motivating a precise measurement of this ratio at the LHC. Other experimental measurements of the ratio B(W → τν(τ))/B(W → lν(l)), where l is either an electron or a muon, have not yet reached the precision of the LEP results. 
The equivalent ratio for the two light generations, B(W → µνµ)/B(W → eνe), has been accurately measured by the LEP, LHCb and ATLAS experiments, and is found to be consistent with the SM prediction at the 1% level. Additionally, while most low-energy experiments show good agreement, to very high precision, with the hypothesis of universality of lepton couplings, recent results from LHCb, Belle and BaBar show some tension with the SM, further motivating this analysis. 
This measurement relies on precise knowledge of the branching ratio of τ-leptons decaying to muons to extrapolate to the full W → τντ branching ratio. The value of (17.39 ± 0.04)% measured by the LEP experiments is used in the analysis. The relative uncertainty of 0.23% is included in the measured value of R(τ/µ) and is a subdominant component of the overall uncertainty.
[1] The phase space effects due to the masses of the decay products on this ratio are very small (∼ 5 × 10−4 ) and hence can be neglected [2]. 

Previous coverage of the question of lepton universality violations at this blog:

Monday, July 27, 2020

Viking Era Smallpox DNA

Somebody decided that "the Viking Age" sounded like a better time descriptor to put in the title of their paper than that time period's other common name, "the Dark Ages." 

Reading between the lines, one of the big questions being asked is whether smallpox suddenly evolved to become more deadly and virulent sometime in the 1600s.
Scientists have discovered extinct strains of smallpox in the teeth of Viking skeletons -- proving for the first time that the killer disease plagued humanity for at least 1400 years.
Smallpox spread from person to person via infectious droplets, killed around a third of sufferers and left another third permanently scarred or blind. Around 300 million people died from it in the 20th century alone before it was officially eradicated in 1980 through a global vaccination effort -- the first human disease to be wiped out.
Now an international team of scientists have sequenced the genomes of newly discovered strains of the virus after it was extracted from the teeth of Viking skeletons from sites across northern Europe. . . .
Smallpox was eradicated throughout most of Europe and the United States by the beginning of the 20th century but remained endemic throughout Africa, Asia, and South America. The World Health Organisation launched an eradication programme in 1967 that included contact tracing and mass communication campaigns -- all public health techniques that countries have been using to control today's coronavirus pandemic. But it was the global roll out of a vaccine that ultimately enabled scientists to stop smallpox in its tracks.
Historians believe smallpox may have existed since 10,000 BC but until now there was no scientific proof that the virus was present before the 17th century. It is not known how it first infected humans but, like Covid-19, it is believed to have come from animals. . . . 
The team of researchers found smallpox -- caused by the variola virus -- in 11 Viking-era burial sites in Denmark, Norway, Russia, and the UK. They also found it in multiple human remains from Öland, an island off the east coast of Sweden with a long history of trade. The team were able to reconstruct near-complete variola virus genomes for four of the samples. . . .
"The early version of smallpox was genetically closer in the pox family tree to animal poxviruses such as camelpox and taterapox, from gerbils. It does not exactly resemble modern smallpox which show that virus evolved. We don't know how the disease manifested itself in the Viking Age -- it may have been different from those of the virulent modern strain which killed and disfigured hundreds of millions."
Dr Terry Jones, one of the senior authors leading the study, a computational biologist based at the Institute of Virology at Charité -- Universitätsmedizin Berlin and the Centre for Pathogen Evolution at the University of Cambridge, said: "There are many mysteries around poxviruses. To find smallpox so genetically different in Vikings is truly remarkable. No one expected that these smallpox strains existed. It has long been believed that smallpox was in Western and Southern Europe regularly by 600 AD, around the beginning of our samples.
"We have proved that smallpox was also widespread in Northern Europe. Returning crusaders or other later events have been thought to have first brought smallpox to Europe, but such theories cannot be correct. While written accounts of disease are often ambiguous, our findings push the date of the confirmed existence of smallpox back by a thousand years."
From Science Daily.
Viking smallpox diversity 
Humans have a notable capacity to withstand the ravages of infectious diseases. Smallpox killed millions of people but drove Jenner's invention of vaccination, which eventually led to the annihilation of this virus, declared in 1980. 
To investigate the history of smallpox, Mühlemann et al. obtained high-throughput shotgun sequencing data from 1867 human remains ranging from >31,000 to 150 years ago (see the Perspective by Alcamí). Thirteen positive samples emerged, 11 of which were northern European Viking Age people (6th to 7th century CE). Although the sequences were patchy and incomplete, four could be used to infer a phylogenetic tree. This showed distinct Viking Age lineages with multiple gene inactivations. The analysis pushes back the date of the earliest variola infection in humans by ∼1000 years and reveals the existence of a previously unknown virus clade.
Science, this issue p. eaaw8977; see also p. 376 
Structured Abstract 
INTRODUCTION 
Variola virus (VARV), the causative agent of smallpox, is estimated to have killed between 300 million and 500 million people in the 20th century and was responsible for widespread mortality and suffering for at least several preceding centuries. Humans are the only known host of VARV, and smallpox was declared eradicated in 1980. The timeline of the emergence of smallpox in humans is unclear. Based on sequence data up to 360 years old, the most recent common ancestor of VARV has been dated to the 16th or 17th century. This contrasts with written records of possible smallpox infections dating back at least 3000 years and mummified remains suggestive of smallpox dating to 3570 years ago. 
RATIONALE 
Ancient virus sequences recovered from archaeological remains provide direct molecular evidence of past infections, give detail of genetic changes that have occurred during the evolution of the virus, and can reveal viable virus sequence diversity not currently present in modern viruses. In the case of VARV, ancient sequences may also reduce the gap between the written historical record of possible early smallpox infections and the dating of the oldest available VARV sequences. We therefore screened high-throughput shotgun sequencing data from skeletal and dental remains of 1867 humans living in Eurasia and the Americas between ~31,630 and ~150 years ago for the presence of sequences matching VARV. 
RESULTS 
VARV sequences were recovered from 13 northern European individuals, including 11 dated to ~600–1050 CE, overlapping the Viking Age, and we reconstructed near-complete VARV genomes for four of them. The samples predate the earliest confirmed smallpox cases by ~1000 years. Eleven of the recovered sequences fall into a now-extinct sister clade of the modern VARVs in circulation prior to the eradication of smallpox, while two sequences from the 19th century group with modern VARV. The inferred date of the most recent common ancestor of VARV is ~1700 years ago.
The number of functional genes is generally reduced in orthopoxviruses with narrow host ranges. A comparison of the gene content of the Viking Age sequences shows great contrast with that of modern VARV. 
Three genes that are active in all modern VARV sequences were inactive over 1000 years ago in some or all ancient VARV. Among 10 genes inactive in modern and Viking Age VARV, the mutations causing the inactivations are different and the genes are predicted to be active in the ancestor of both clades, suggesting parallel evolution. Fourteen genes inactivated in modern VARV are active in some or all of the ancient sequences, eight of which encode known virulence factors or immunomodulators. 
The active gene counts of the four higher-coverage Viking Age viral genomes provide snapshots from an ~350-year period, showing the reduction of gene content during the evolution of VARV. These genomes support suggestions that orthopoxvirus species derive from a common ancestor containing all genes present in orthopoxviruses today, with the reduction in active gene count conjectured to be the result of long-term adaptation within host species.
CONCLUSION 
The Viking Age sequences reported here push the definitive date of the earliest VARV infection in humans back by ~1000 years. These sequences, combined with early written records of VARV epidemics in southern and western Europe, suggest a pan-European presence of smallpox from the late 6th century. The ancient viruses are part of a previously unknown, now-extinct virus clade and were following a genotypic evolutionary path that differs from modern VARV. The reduction in gene content shows that multiple combinations of active genes have led to variola viruses capable of circulating widely within the human population.
Barbara Mühlemann, et al., "Diverse variola virus (smallpox) strains were widespread in northern Europe in the Viking Age." 369 (6502) Science eaaw8977 (July 24, 2020).

The companion commentary and its abstract are as follows:
Smallpox—caused by variola virus (VARV), a poxvirus—was one of the most virulent diseases known to humans, killing up to 30% of infected individuals and 300 million to 500 million people in the 20th century. The year 2020 commemorates the 40th anniversary of smallpox eradication, the first human disease eradicated after a global vaccination campaign led by the World Health Organization (WHO). The last samples of VARV are kept in two high-security laboratories pending destruction, and fears about reemergence or deliberate release of VARV have not subsided (1). Smallpox eradication is one of the most successful stories of public health, but the origin of the deadly virus remains an enigma. On page 391 of this issue, Mühlemann et al. (2) report the identification of VARV in archaeological remains from the Viking Age (600 to 1050 CE) that reveals new information about the origin of VARV and its evolution in human populations.

Antonio Alcamí, "Was smallpox a widespread mild disease?" 369 (6502) Science 376-377 (July 24, 2020).

Is the Hydra An Allegory For Summer?

"[T] the mythical theme of a hero (god) slaying 7 headed dragon keeps popping up again and again in different cultures in Eurasia...

For instance the Ugaritic monster Lotan (meaning "coiled"), also called "the mighty one with seven heads", was a serpent of the sea god Yam. Or Yam himself as he was also called "the serpent". This monster was defeated by the storm god Hadad-Baʿal in the Ugaritic Baal Cycle...

Hadad defeating Lotan, Yahweh defeating Leviathan, Marduk defeating Tiamat, Zeus slaying Typhon, Heracles slaying Hidra, Perun killing Veles, Thor fighting Jörmungandr...Different versions of the same myth which originated most likely in the Fertile Crescent among the Neolithic farmers[.]"

The myth of a great warrior slaying a seven headed beast is an ancient and widespread one. Where did it come from? 

The author of the Old European culture blog makes the case that it is an allegory for the climate trends of the seasons, with the seven heads representing the seven months of the Fertile Crescent's summer.

Thursday, July 23, 2020

About Massive Gravity

There is a lot of ongoing publication of academic papers on massive gravity theories. There have been sixty-four pre-prints on the topic at arXiv in the last 12 months alone. This research is driven by the desire to develop a theory of quantum gravity, the absence of which is one of the most obvious defects of "core theory" (i.e. the Standard Model plus General Relativity) that remains one of the most important unsolved problems in physics.

It is one part of a larger project in quantum gravity and general relativity research: to better understand gravity by exploring modifications of canonical classical general relativity and of its most naive quantum generalizations, to see what they imply, and to better understand the mathematics of gravity and general relativity more generally.

By crude analogy, abstract algebra, parts of which are vital tools in physics, is basically a working out of what happens if you remove or add rules, like the commutative and associative properties of ordinary algebra, to see what the result looks like and whether you can gain insights or tools from that exercise.

We, of course, don't have the instrumental capacity to detect individual gravitons the way that we can detect individual photons, because gravity's coupling to individual fundamental particles is so much weaker than that of the three Standard Model forces. So, we can't simply directly measure the mass of a graviton, and even if we could, we couldn't do so with perfect precision (this isn't just a practical difficulty; it is theoretically impossible to do so). Therefore, we can't rule out the possibility of a massive graviton with a mass of less than the uncertainty of our most precise measurement by direct measurement alone.

So, the only way to distinguish between the two possibilities is to work out theoretically the observable implications of each possibility, to learn how they differ, and with this knowledge to try to see if we can use indirect evidence to distinguish between the possibilities.

Indeed, one reason to explore it is that it provides an alternative to the massless spin-2 graviton approach to quantum gravity (the overwhelmingly conventional wisdom), which allows you to quantify the extent to which experimental data requires the conventional wisdom, rather than the alternatives, to be true.

It is one thing to say that experimental data is not inconsistent with the hypothesis of a massive graviton. It is another to say that experimental evidence constrains any massive graviton theory to have a graviton with a mass of not more than 6*10^-32 eV at the two sigma confidence level, pursuant to a study of weak gravitational lensing data done in 2004, and that thirteen other papers analyzing different astronomy observations in an effort to constrain this parameter experimentally have imposed no bounds that are more strict.
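
To give a feel for how small that bound is, here is an illustrative back-of-the-envelope conversion (not a calculation from the cited study): the Compton wavelength corresponding to a graviton mass of 6*10^-32 eV is comparable to cosmological distance scales, which is why such bounds come from large-scale observations like weak lensing in the first place.

```python
# Back-of-the-envelope illustration: Compton wavelength of a graviton whose
# mass saturates the ~6e-32 eV bound quoted above.
hc_eV_m = 1.23984e-6      # Planck constant times the speed of light, in eV*m
parsec_m = 3.0857e16      # one parsec in metres

m_graviton_eV = 6e-32
compton_wavelength_m = hc_eV_m / m_graviton_eV
print(f"Compton wavelength ~ {compton_wavelength_m:.1e} m "
      f"(~ {compton_wavelength_m / (1e6 * parsec_m):.0f} Mpc)")
# ~ 2.1e+25 m, i.e. roughly 700 Mpc
```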

Quantifying the allowed parameter space of massive gravity theories allows us to quickly rule out BSM theories which aren't within the limits of those parameters. So, this is an active area of experimental as well as theoretical investigation, although there haven't been a lot of really big breakthroughs on the experimental side recently.

More generally, comparing the math involved in quantum gravity with a massive graviton to the math involved in quantum gravity with a massless graviton helps you understand what is going on in each case better.

As a practical matter, one of the main barriers to a theory of quantum gravity is that the naive mathematical description of a massless spin-2 graviton that couples in proportion to the mass-energy of whatever it interacts with is non-renormalizable (and is also a non-Abelian gauge theory, i.e., its math doesn't obey the commutative law, and is highly non-linear), and no one has figured out how to do the non-perturbative math (or use not yet discovered #mathtricks) that is necessary to get meaningful answers out of this formulation in the general case, as opposed to some very specific, highly idealized and symmetric cases. So, looking at the closely related cases of massive graviton theories may help you to gain insight into why you can't solve the massless graviton case.
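
A hedged sketch of why the power counting goes wrong: expanding the metric around flat space-time, the graviton couples through a constant built from Newton's constant, which has negative mass dimension, and that is the textbook signal of a non-renormalizable interaction.

```latex
% Schematic expansion of the metric around flat space-time. The coupling
% kappa ~ sqrt(G) has negative mass dimension (in hbar = c = 1 units), the
% standard power-counting indicator of non-renormalizability.
g_{\mu\nu} = \eta_{\mu\nu} + \kappa\, h_{\mu\nu},
\qquad \kappa = \sqrt{32 \pi G},
\qquad [\kappa] = (\text{mass})^{-1},
```

so each additional graviton vertex brings another power of the dimensionful coupling, and loop corrections demand an ever-growing tower of counterterms.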

To take one concrete example of that, the graviton-matter coupling in massive gravity is arguably easier to formulate than in the massless graviton case. Similarly, it is arguably easier to describe the interaction of quantum electrodynamics with gravity in a massive graviton formulation, and some of the insights that result from that analysis may generalize to both the massive and massless graviton cases.

Also, it would be hubris to claim to know for sure that the massless graviton case is really true, when we can't even realize it mathematically in any way that we can use practically. This isn't Platonic knowledge that we are born with. While the massless graviton case is more attractive for many reasons, we can't rule out the possibility that the massless graviton case is not just hard, but impossible to solve and non-physical. If so, perhaps the seemingly unlikely massive graviton case is actually correct, so we may as well pursue both possibilities theoretically.

On the other hand, if we pursue massive graviton theory to the point where we can definitively rule it out as a possibility due to some theoretical inconsistency that exists for all of the available parameter space, then we could indirectly establish that quantum gravity must arise from a massless graviton, even though we can't directly measure that fact.

Another reason to explore it is that even in the conventional massless spin-2 graviton approach to quantum gravity, gravitons still emit and absorb gravitons, because gravitons couple to mass-energy rather than mass alone, and gravitons carry energy even though they lack rest mass in the conventional analysis. Gravitational fields have self-interactions in General Relativity too, but the way that this occurs is not very transparent or illuminating when GR is formulated in terms of Einstein's equations; it is much more transparent and obvious when these self-interactions are developed in a massive gravity quantum gravity context. A massive graviton approach can thus be used as a way to understand these self-interactions by viewing massless gravity as a limiting case of the massive graviton theory, which is mathematically less vexed in some respects, because any time you try to do math with zeros everywhere you will sooner or later end up with infinities that are hard to work with (because division by zero is undefined and approaches "infinity" in the limit).

The problem with using a massive gravity theory to explore the limiting case of the massless graviton, however, is that there are qualitative differences between the way that bosons with even tiny masses behave compared to the way that a truly massless boson behaves.

For example, in layman's terms, massless bosons basically don't experience time and always move at exactly the speed of light regardless of how much energy they carry. But a massive boson does experience time, must move at less than the speed of light (or more than the speed of light, if it is a tachyon), and, as a consequence of special relativity, it takes increasingly more energy to increase its speed by the same amount as it approaches the relativistic regime near the speed of light.
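
In equations (standard special relativity, nothing specific to gravitons), for a particle of rest mass m and momentum p:

```latex
% Standard special-relativistic relations:
E^2 = p^2 c^2 + m^2 c^4, \qquad \frac{v}{c} = \frac{p c}{E},
```

so a massless particle has v = c at any energy, while a massive particle always has v < c, with the energy diverging as v approaches c.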

Lensing effects are also not continuous between the massive graviton case and the massless graviton case: the lensing produced by massive gravitons in the limit as the graviton mass goes to zero from above is not equal to the lensing produced by a truly massless graviton.
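
This is the van Dam-Veltman-Zakharov (vDVZ) discontinuity. Schematically (a linearized-theory sketch, with the momentum-dependent terms dropped because they vanish against conserved sources), the tensor structure of the graviton propagator differs between the two cases:

```latex
% Schematic spin-2 propagator numerators between conserved sources:
\text{massless:}\quad
\tfrac{1}{2}\!\left(\eta_{\mu\alpha}\eta_{\nu\beta} + \eta_{\mu\beta}\eta_{\nu\alpha}\right)
- \tfrac{1}{2}\,\eta_{\mu\nu}\eta_{\alpha\beta},
\qquad
\text{massive:}\quad
\tfrac{1}{2}\!\left(\eta_{\mu\alpha}\eta_{\nu\beta} + \eta_{\mu\beta}\eta_{\nu\alpha}\right)
- \tfrac{1}{3}\,\eta_{\mu\nu}\eta_{\alpha\beta}.
```

The 1/2 versus 1/3 coefficient on the trace term does not go away as the graviton mass is taken to zero, which is why naive light-bending predictions differ discontinuously between the two theories.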

In a massless graviton theory, tachyonic gravitons (i.e. those traveling at more than the speed of light) can be ruled out almost automatically by assumption. In a massive graviton theory, this isn't a foregone conclusion, and if it is true, you have to work a lot harder to reach that conclusion.

In general, it is quite challenging to formulate a massive gravity theory that has desirable properties such as being "ghost free." The discovery in 2010 that it appeared to be possible to devise a ghost free massive gravity theory rebooted interest in the subject, which had gone dormant not long after it was determined in 1972 that a large class of massive gravity theories produce mathematical "ghosts" that cannot be removed from that class of theories. While this wasn't a true "no go" theorem, and the conclusion had loopholes that were later successfully exploited, interest in massive gravity theories waned to a trickle for a generation as a result of this discovery.
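
For reference, here is a schematic of the tuning involved at the linearized level (normalization conventions vary between authors): the ghost problem is about the relative coefficient of the two possible quadratic mass terms for a spin-2 field.

```latex
% Fierz-Pauli mass term (schematic; overall normalization convention-dependent).
% Only the relative coefficient of exactly -1 between the two terms avoids an
% extra ghost-like sixth degree of freedom at linear order.
\mathcal{L}_{\text{mass}} \propto -\, m^2 \left( h_{\mu\nu} h^{\mu\nu} - h^2 \right),
\qquad h \equiv \eta^{\mu\nu} h_{\mu\nu}.
```

At the fully non-linear level even this tuning generically reintroduces a ghost (the problem identified in the early 1970s referred to above), and the 2010 development was the construction of non-linear potentials that avoid it.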

The fact that general relativity is modified at large distances in massive gravity provides a possible explanation for the accelerated expansion of the Universe that does not require any dark energy. Massive gravity and its extensions, such as bimetric gravity,[12] can yield cosmological solutions which do in fact display late-time acceleration in agreement with observations.[13][14][15]
This is attractive because, while it is trivial to insert a cosmological constant into Einstein's equations in classical GR as an integration constant, it is highly non-trivial to produce dark energy in a graviton based quantum gravity theory in which all global phenomena must arise from the local properties of a graviton (it is easier, at least in principle, to reproduce dark energy in quantum gravity theories, like loop quantum gravity, that quantize space-time rather than merely inserting a graviton into a smooth and continuous space-time).

Theorists are also looking at massive gravity theories to address other problems in cosmology and black hole and neutron star physics that have gone unsolved in the massless graviton/massless gravitational field paradigm (all illustrated by the list of pre-prints linked above).

Wednesday, July 22, 2020

Paleo-Americans?

It is no secret that I have long been on the skeptical side of the debate over whether modern humans were present in the Americas, outside the Beringian land bridge between Siberia and Alaska, before the tail end of the ice age that produced the Last Glacial Maximum about 26,000 years ago. The pre-Clovis arrival of members of the founding population of the Americas to North and South America via a Pacific route became part of the paradigm long ago. 

But evidence of a hominin presence in the Americas during or before the Last Glacial Maximum ice age has relied on very small numbers of objects that are only arguably stone tools at various sites, with equivocal attempts at dating them, and no human remains. 

Two new articles in the leading scientific journal Nature may tip the balance, although I am still not unequivocally convinced, in part because the studies fail to address why this pilot wave of a founding population would have had so slight an archaeological and ecological impact.

An account at NBC News explains in an article that is mostly correct and makes only subtle errors where it is not quite right:

Pieces of limestone from a cave in Mexico may be the oldest human tools ever found in the Americas, and suggest people first entered the continent up to 33,000 years ago – much earlier than previously thought. 
The findings, published Wednesday in two papers in the journal Nature, which include the discovery of the stone tools, challenge the idea that people first entered North America on a land bridge between Siberia and Alaska and an ice-free corridor to the interior of the continent.

Precise archaeological dating of early human sites throughout North America, including the cave in Mexico, suggests instead that they may have entered along the Pacific coast, according to the research. . . . 
The commonly accepted time for the arrival of the first people in North America is about 16,000 years ago, and recent studies estimate it happened up to 18,000 years ago. But the latest discoveries push the date back by more than 10,000 years. 
The NBC article linked above says this about the first article, discussing an excavation deep into a cave high on a mountain in Mexico: 
Ciprian Ardelean, an archaeologist with the Autonomous University of Zacatecas in Mexico, the lead author of one of the papers, said the finds were the result of years of careful digging at the Chiquihuite Cave in north-central Mexico. 
The steeply-inclined cave is high on a mountainside and filled with crumbling layers of gravel: “The deeper you go, the higher the risk for the walls to collapse,” he said. 
[Photo caption: A shaped limestone point, one of the stone tools found at the Chiquihuite Cave in central Mexico that archaeologists think dates from around 30,000 years ago, before the last Ice Age.]
The excavations paid off with the discovery of three deliberately-shaped pieces of limestone — a pointed stone and two cutting flakes — that may be the oldest human tools yet found in the Americas. 
They date from a time when the continent seems to have been occupied by only a few groups of early humans – perhaps “lost migrations” that left little trace on the landscape and in the genetic record, Ardelean said. 
The tools were found in the deepest layer of sediment they excavated, which dates from up to 33,000 years ago – long before the last Ice Age, which occurred between 26,000 and 19,000 years ago. . . . 
“You have to live there and cook there, because it takes you a whole day to go back and forth from the town, and it’s a five-hour climb,” he said. “It is a logistical nightmare.”
More tools were found in sediments laid down during and after the Ice Age, and indicate the cave was occupied for short periods over thousands of years, maybe by nomadic people who knew of it from ancestral legends. 
The Chiquihuite Cave is high on a mountain, at an altitude of above 8,800 feet, and the interior is very steep. . . . 
“I think it was a refuge used occasionally and periodically,” Ardelean said. “Even if you never saw the site before, your grandparents had told you about it and there were indications when you got there.” 
The presence of stone tools from the Ice Age – known to archaeologists as the Last Glacial Maximum, or LGM – suggested people occupied the cave even before that. 
Much of North America was then covered with thick ice sheets that would have made migrations impossible, he said: “If you have people during the LGM, it is because they entered the continent before the LGM.” 
The first article and its abstract are:
The initial colonization of the Americas remains a highly debated topic, and the exact timing of the first arrivals is unknown. The earliest archaeological record of Mexico—which holds a key geographical position in the Americas—is poorly known and understudied. Historically, the region has remained on the periphery of research focused on the first American populations. 
However, recent investigations provide reliable evidence of a human presence in the northwest region of Mexico, the Chiapas Highland, Central Mexico and the Caribbean coast, during the Late Pleistocene and Early Holocene epochs. Here we present results of recent excavations at Chiquihuite Cave—a high-altitude site in central-northern Mexico—that corroborate previous findings in the Americas of cultural evidence that dates to the Last Glacial Maximum (26,500–19,000 years ago), and which push back dates for human dispersal to the region possibly as early as 33,000–31,000 years ago. 
The site yielded about 1,900 stone artefacts within a 3-m-deep stratified sequence, revealing a previously unknown lithic industry that underwent only minor changes over millennia. More than 50 radiocarbon and luminescence dates provide chronological control, and genetic, palaeoenvironmental and chemical data document the changing environments in which the occupants lived. 
Our results provide new evidence for the antiquity of humans in the Americas, illustrate the cultural diversity of the earliest dispersal groups (which predate those of the Clovis culture) and open new directions of research.
Ardelean, C.F., Becerra-Valdivia, L., Pedersen, M.W. et al. "Evidence of human occupation in Mexico around the Last Glacial Maximum." Nature (July 22, 2020). https://doi.org/10.1038/s41586-020-2509-0 (citations in abstract omitted).

The NBC article linked above says this about the second article:  
Lorena Becerra-Valdivia, an archaeological scientist at the University of Oxford and the University of New South Wales, and Thomas Higham, a radiocarbon dating specialist at the University of Oxford, compared the dates from the cave sediments with other archaeological sites in North America. 
Their research indicates very small numbers of humans probably lived in parts of North America before, during and immediately after the last Ice Age, but the human population grew much larger after a period of abrupt global warming that began about 14,700 years ago. 
The study also suggested some people had entered the Americas before 29,000 years ago, possibly along the Pacific coast, when the land bridge between Siberia and Alaska was completely or partially submerged, Becerra-Valdivia said. . . . 
Anthropologist Matthew Des Lauriers of California State University, San Bernardino, who was not involved in the studies, said they “pushed the boundaries” of knowledge about the earliest human arrival in the Americas. 
But he questioned how ancient people who had been in the Americas for more than 25,000 years could have remained “archaeologically invisible” for over 10,000 years.
He said that archaeologists in Australia and Japan, for example, had no difficulty finding evidence of human occupation from that time. 
The second article and its abstract are: 
The peopling of the Americas marks a major expansion of humans across the planet. However, questions regarding the timing and mechanisms of this dispersal remain, and the previously accepted model (termed ‘Clovis-first’)—suggesting that the first inhabitants of the Americas were linked with the Clovis tradition, a complex marked by distinctive fluted lithic points—has been effectively refuted. 
Here we analyse chronometric data from 42 North American and Beringian archaeological sites using a Bayesian age modelling approach, and use the resulting chronological framework to elucidate spatiotemporal patterns of human dispersal. We then integrate these patterns with the available genetic and climatic evidence. 
The data obtained show that humans were probably present before, during and immediately after the Last Glacial Maximum (about 26.5–19 thousand years ago) but that more widespread occupation began during a period of abrupt warming, Greenland Interstadial 1 (about 14.7–12.9 thousand years before AD 2000). 
We also identify the near-synchronous commencement of Beringian, Clovis and Western Stemmed cultural traditions, and an overlap of each with the last dates for the appearance of now-extinct faunal genera. Our analysis suggests that the widespread expansion of humans through North America was a key factor in the extinction of large terrestrial mammals.
Lorena Becerra-Valdivia, Thomas Higham, "The timing and effect of the earliest human arrivals in North America" Nature (July 22, 2020) DOI: https://doi.org/10.1038/s41586-020-2491-6 (citations in abstract omitted).

Wednesday, July 15, 2020

Relativistic MOND Theory Used To Reproduce CMB

A relativistic generalization of Milgrom's MOND theory that explains dark matter phenomena as a gravity modification has been fit to the cosmic microwave background (CMB) power spectrum.

Conventional wisdom has long argued that only dark matter particle theories could explain the CMB power spectrum, although I had long suspected that if MOND could reproduce dark matter phenomena at galactic scales, a generalization of it could do so at cosmological scales.

This is a huge boost to the gravity modification side in the "dark matter v. MOND" wars within astrophysics and cosmology.
Constantinos Skordis and Tom Złosnik. . . . have shown a version of a relativistic MOND theory (which they call RelMOND) . . . does fit the CMB power spectrum. Here is the plot from their paper:

[Figure: the RelMOND fit to the CMB power spectrum, from Skordis and Złosnik (2020).]

The paper and its abstract are:
We propose a relativistic gravitational theory leading to Modified Newtonian Dynamics, a paradigm that explains the observed universal acceleration and associated phenomenology in galaxies. We discuss phenomenological requirements leading to its construction and demonstrate its agreement with the observed Cosmic Microwave Background and matter power spectra on linear cosmological scales. We show that its action expanded to 2nd order is free of ghost instabilities and discuss its possible embedding in a more fundamental theory.
Constantinos Skordis, Tom Złosnik, "A new relativistic theory for Modified Newtonian Dynamics" arXiv (June 30, 2020).

Theories Of Everything

I agree with basically every single word of Sabine Hossenfelder's recent post at her blog "Backreaction" entitled "Do we need a theory of everything?" (with the possible exception of her stylistic choices in her associated podcast video which I didn't watch).

TOEs in a nutshell

She argues, and I agree, that we know we need a theory of quantum gravity, but we do not need a "Grand Unified Theory" (GUT) that finds a common symmetry group for the entire Standard Model, nor do we need a theory in which quantum gravity and the Standard Model forces are unified in a single symmetry group constituting a "Theory of Everything" (TOE) sensu stricto.

I agree that all GUTs and TOEs proposed to date that are capable of being falsified have been falsified, and that there is no positive unexplained experimental evidence that can only be explained by a GUT or TOE. 

Early GUTs looked promising, but the most naive and obvious efforts to construct them predicted things like proton decay, flavor changing neutral currents at the tree level, neutrinoless double beta decay, sterile neutrinos, and electroweak scale supersymmetric particles, none of which have been observed at a statistically significant level in replicated results at this time.

I agree that trying to find a GUT or TOE by building an expensive new collider employing thousands of scientists does not, by itself, justify that effort relative to other ways that scarce funds available for scientific research could be spent, although there are other, less compelling, scientific justifications for these projects (they keep thousands of HEP physicists employed testing the limits of science, bring greater precision to our measurement of Standard Model parameters, and promote the development of new technologies and calculation methods devised for the purpose of conducting the experiments that have value of their own).

I agree that "beauty" and "naturalness" have a pretty poor track record of motivating break throughs in physics, and in particular, have not been useful guides for high energy physics theorists in the last four decades. But, I am not quite as bearish as Dr. Hoffenfelder about the potential for these kinds of intuitions to provide at least some useful guidance in hypothesis generation in fundamental physics.

I would add a few additional observations, however, because I wouldn't have a science blog if that wasn't what I did.

No part of Core Theory has been falsified, despite decades of attempts to do so.

General relativity with a cosmological constant is a century old theory. The lion's share of the Standard Model of Particle Physics is forty years old, and the modifications to it since then have been relatively minor. These combined constitute "Core Theory."

No experimental evidence has ever disproved any part of the current version of the Standard Model of Particle Physics, mostly put in place in the late 1970s, as modified later to include three generations of fundamental fermions, and either two or three massive, active, Standard Model neutrinos with three weak force flavors and three mass eigenstates that oscillate pursuant to the PMNS matrix.

Likewise, no experimental or observational evidence has ever disproved General Relativity with a cosmological constant as currently operationalized.

I use the phrase "as currently operationalized" because some credible physicists think that the axioms of General Relativity are conceptually sound and correct descriptions of reality but that there are flaws in how we apply or "operationalize" General Relativity to real world phenomena that are incorrect in some manner and that if operationalized correctly, we would have a more accurate description of reality. By using this limitation to what I am talking about when I talk about General Relativity, I am treating proposals to change how we operationalize General Relativity that lead to different predictions than General Relativity as currently operationalized as a form of "BSM" physics.

There are a number of currently unresolved experimental tensions with Core Theory that are not yet significant enough to necessitate "new physics."

This isn't to say that there aren't tensions in the observational and experimental data that could cause this reality to cease to be true. But, none of those tensions have met the gold standard of five standard deviation departures from Standard Model predictions that have been replicated by credible independent experimental groups.

Some of the most notable of these tensions, although certainly not anywhere near a complete list, are: 

(1) the discrepancy between the experimentally measured value of the anomalous magnetic moment of the muon and the Standard Model prediction, 

(2) suggestions of violation of charged lepton universality (i.e. that electrons, muons and tau leptons behave identically except for their different masses), and 

(3) tensions in measurements of the Hubble constant and other astronomy observations favoring explanations of dark energy phenomena other than a simple cosmological constant.

There are at least half a dozen to a dozen other statistically significant experimental tensions with the Standard Model that fall in the range of two to five standard deviations from the predicted values.
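
For context on what these thresholds mean, under idealized Gaussian assumptions (ignoring look-elsewhere effects and underestimated systematic errors) a given number of standard deviations corresponds to a two-sided probability of arising by chance. This is just the textbook conversion, sketched in Python, and is not specific to any of the measurements mentioned above:

```python
from scipy.stats import norm

def two_sided_p_value(n_sigma):
    """Chance probability of a deviation at least n_sigma standard
    deviations from the prediction, in either direction, assuming
    purely Gaussian statistics."""
    return 2.0 * norm.sf(n_sigma)

for sigma in (2, 3, 5):
    print(f"{sigma} sigma -> p = {two_sided_p_value(sigma):.1e}")
# roughly 4.6e-02, 2.7e-03, and 5.7e-07, respectively
```

The steepness of that conversion is one reason two and three sigma tensions so often evaporate while the five sigma standard has held up well as a discovery threshold.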

Some previously significant tensions, like the Pioneer anomaly and the muonic hydrogen radius problem, have disappeared as better measurements and better analysis have eliminated them without resorting to "new physics." Conventional wisdom is that the same fate awaits most of the existing experimental tensions with Core Theory.

There are other experimental claims that are nominally more statistically significant than the five sigma discovery threshold that have not been replicated. 

For example, a Moscow experiment claims to have seen neutrinoless double beta decay, which other experiments claim to have ruled out. Other experiments claim to have directly detected dark matter particles or sterile neutrinos that further experiments claim to have ruled out. And, another experiment claims to have detected a 17 MeV particle known as X17 that other experiments purport to have ruled out. An experiment in Italy claimed to have seen superluminal neutrinos, which would be contrary to the Standard Model and special relativity, until a flaw in its experimental equipment was discovered.

Conventional wisdom is that each of these apparent discoveries is either actually a case of experimental or theoretical error of an undetermined nature on the part of the scientists making the claims, or else will be confirmed only when another group of scientists gets around to trying to replicate the results (which can take years and millions or even billions of dollars of funding to do).

Core Theory doesn't answer every question we'd ideally like it to be able to answer.

This also isn't to say that the "Core Theory" of the Standard Model and General Relativity with a cosmological constant, as currently operationalized, is a "complete" theory that explains everything that we would like it to explain. Most notably, Core Theory:

* does not explain the reason that its roughly two dozen experimentally measured parameters take the values that they do;

* does not explain the mechanism by which Standard Model neutrinos acquire mass;

* does not explain the process by which matter in the form of baryons and leptons came into existence in a manner in which almost all baryons and charged leptons are made of matter rather than antimatter; and

* does not answer other questions about how the conditions of the universe at times prior to Big Bang Nucleosynthesis came to be.

Of course, there are also myriad other scientific questions that Core Theory doesn't answer, because they deal with complex phenomena which should, in principle, be capable of being understood with Core Theory working from first principles, but which, in practice, are too complex to be derived from the bottom up in that manner, even though Core Theory can qualitatively inform our understanding of them.

For example, you can't directly apply Core Theory to learn how many species of dolphins there are in the world, or how to build a better battery, or how to cure cancer, or what the best way is to predict major earthquakes as far in advance as possible. Core Theory ideally gives us the fundamental laws of Nature, but applying them gives rise to emergent phenomena that can't be easily predicted from knowledge of those laws of Nature alone.

We know that we need BSM physics to explain reality and haven't found it yet.

In addition to things left unexplained by Core Theory, we also know that some parts of Core Theory must be wrong because the Standard Model and General Relativity have theoretical inconsistencies and fail to explain, at a minimum, dark matter phenomena.

The most profound and glaring unsolved problem in fundamental physics is the observation of "dark matter phenomena" which, at far more than a five sigma "discovery" threshold of statistical significance, cannot be explained with the Standard Model and General Relativity with a cosmological constant as currently operationalized. Physics beyond the Standard Model and beyond "Core Theory" (i.e. "BSM physics" or "new physics") is needed to explain these phenomena.

No complete dark matter particle theory, no gravity modification or quantum gravity theory, and no combination of these theories, has solved the dark matter problem in a manner that has gained wide acceptance among physicists.

All of the most popular explanations of dark matter phenomena fail, in some respect or other, to explain some dark matter phenomena in a manner consistent with observational evidence. Many of the less popular explanations of dark matter phenomena have simply not been vetted well enough to know whether or not there is evidence that these theories cannot explain, and thus are not ripe to receive wide acceptance in the scientific community.

This fact alone means that even if there is a TOE sensu stricto out there to be discovered, it would not simply be the Standard Model attached to a theory of quantum gravity that exactly replicates General Relativity with a cosmological constant except in technical respects that can't currently be observed. A TOE would have to cover everything explained by Core Theory and also some additional BSM physics.

There is good reason to think that a correct theory of quantum gravity, should we devise one, will shed light on a number of outstanding unsolved problems in physics, in addition to resolving the merely technical and logical inconsistencies between classically formulated General Relativity with a cosmological constant and the quantum mechanics of the Standard Model.

A theory of quantum gravity might explain dark matter phenomena, might provide an alternative explanation of dark energy phenomena, and might also resolve one or more other unsolved problems in physics, particularly in the subfield of cosmology.

In particular, Deur's work on quantum gravity purports to formulate an alternative that largely reduces to general relativity without a cosmological constant in the classical strong field limit, and to explain dark matter and dark energy phenomena as second order quantum gravity effects visible only in very weak gravitational fields, as well as resolving a number of other less pressing unsolved problems in astrophysics and cosmology. It is one of the only outstanding theories of which I am aware that purports to be able to do so in all circumstances and that has not been falsified in any situation. But because it hasn't been sufficiently vetted yet by other scientists, the lack of falsification could be due to lack of scientific attention, rather than to the soundness of the theory.

We have a Core Theory explanation of dark energy, but it is hard to reconcile with many plausible solutions to the quantum gravity problem.

It is also worth observing that while "dark energy" is commonly viewed as an unsolved problem in physics, General Relativity plus a cosmological constant is one explanation of "dark energy" that is consistent with all observed dark energy phenomena to within experimental uncertainties (although there are some developing tensions that could grow stronger, and change this status quo, as our astronomy observations grow more precise).
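
For reference, the cosmological constant enters the classical theory as a single constant term in the Einstein field equations, and when reinterpreted as vacuum energy it corresponds to a constant density filling all of space. These are just the textbook expressions, included only as a reminder of where Λ sits in the theory:

```latex
% Einstein field equations with a cosmological constant \Lambda
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}

% Equivalent vacuum (dark energy) mass density associated with \Lambda
\rho_{\Lambda} = \frac{\Lambda c^{2}}{8\pi G}
```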

But it is much easier to formulate a theory of quantum gravity that replicates, in the classical limit, General Relativity as currently operationalized without a cosmological constant, than it is to do so with a cosmological constant. This is because most quantum theories are described locally, while the cosmological constant is a global rather than a local aspect of the classical theory that is General Relativity. 

It isn't impossible to add a cosmological constant to a quantum gravity theory. One way to do so is to add a particle, in addition to the graviton, to describe the cosmological constant. There is also a class of quantum gravity theories in which it is somewhat less problematic to include a cosmological constant type term, because they utilize quanta of the space-time background (one way to describe the cosmological constant is as the innate curvature of the space-time vacuum or brane), rather than, or in addition to, a carrier boson of the gravitational force known as the graviton, which is analogous to a photon (in the case of the electromagnetic force in the Standard Model) or a gluon (in the case of the strong force in the Standard Model).

For this reason, active investigation of alternative ways to explain dark energy phenomena goes hand in hand with research into a theory of quantum gravity. It seems quite plausible that we may need to undo some of the grand and simple "General Relativity with a cosmological constant" solution to the question of gravity in order to get to a theoretically consistent theory of quantum gravity.