
Monday, June 29, 2020

Ancient Sub-Saharan African DNA

Not so long ago there was doubt that we'd ever get ancient DNA from Sub-Saharan Africa. A new open access paper with a score of new samples (many quite old) discusses almost all of the African ancient DNA data currently available (85 samples), which, while not abundant, is enough to clarify (and add mystery to) the population genetic history of Africa. The abstract of the paper in Science Advances is as follows:
Africa hosts the greatest human genetic diversity globally, but legacies of ancient population interactions and dispersals across the continent remain understudied. Here, we report genome-wide data from 20 ancient sub-Saharan African individuals, including the first reported ancient DNA from the DRC, Uganda, and Botswana. These data demonstrate the contraction of diverse, once contiguous hunter-gatherer populations, and suggest the resistance to interaction with incoming pastoralists of delayed-return foragers in aquatic environments. We refine models for the spread of food producers into eastern and southern Africa, demonstrating more complex trajectories of admixture than previously suggested. In Botswana, we show that Bantu ancestry post-dates admixture between pastoralists and foragers, suggesting an earlier spread of pastoralism than farming to southern Africa. Our findings demonstrate how processes of migration and admixture have markedly reshaped the genetic map of sub-Saharan Africa in the past few millennia and highlight the utility of combined archaeological and archaeogenetic approaches.

For reference purposes, the Bantu expansion is generally considered an "Iron Age" event.

Bernard's Blog discusses the new sub-Saharan African ancient DNA paper (translation via Google), and I quote from Bernard below. It will take considerably more analysis to fit this into the existing paradigm, to see where it supports that paradigm and where it suggests different narratives. 
Ke Wang and his colleagues have just published a paper entitled: Ancient genomes reveal complex patterns of population movement, interaction, and replacement in sub-Saharan Africa . They sequenced the genomes of twenty skeletons from Kenya (10 in red below), Congo (5 in blue), Uganda (1 in orange) and Botswana (4 in green) dated between 3900 and 150 years old:


The Principal Component Analysis:


Admixture data:

The figure above shows, in particular, that while Levantine ancestry (in red) is fairly constant, between 30 and 40%, among pastoralists in East Africa, the two other components, hunter-gatherer (in light blue) and Dinka (in dark blue), vary greatly from one individual to another. In addition, estimates of admixture dates suggest that admixture between pastoralists and hunter-gatherers occurred repeatedly in Kenya. It is therefore likely that communities of hunter-gatherers and pastoralists lived side by side in the region for a long time. Furthermore, while there is indeed gene flow from hunter-gatherer populations into pastoralists, the reverse is not observed.

Two individuals from the Iron Age Kakapel archaeological site in Kenya, dated to 900 and 300 years ago, show strong Dinka ancestry in the figure above and strong Bantu affinity on the PCA. In addition, one of the two individuals has a small proportion of Levantine ancestry, suggesting the complexity of the interactions that took place in the Iron Age in Africa. An individual from the Congo dated to 750 years ago has a very high proportion of hunter-gatherer ancestry and adds to the complexity of the different interactions. Finally, the 500-year-old Ugandan individual shows a strong affinity with Bantu farmers. Several groups with different modes of subsistence entered East Africa during this period.

The four genomes from Botswana make it possible to investigate the arrival of agriculture in southern Africa. Their position on the PCA highlights their high proportion of Bantu ancestry. These results are confirmed by the figure indicating the ancestry proportions above. However, the Botswana individuals also have between 10 and 40% hunter-gatherer ancestry from South Africa, and the best-fitting model shows that the South African source is not the same for all of the Botswana individuals. For some individuals, the source corresponds to a hunter-gatherer population of about 2,000 years ago; for others, to one of about 1,200 years ago. The latter ancestry is actually the result of admixture between the hunter-gatherers of southern Africa and the pastoralists of East Africa. This suggests an initial spread of pastoralists into southern Africa, followed by the spread of Bantu farmers. These results are also reflected in the archaeological evidence in southern Africa, which shows great diversity: different, region-specific interactions between populations took place.

Friday, June 26, 2020

Is It The Smallest Ever Observed Black Hole Or The Biggest Ever Observed Neutron Star?

Gravitational wave detectors have observed what appears to be an intermediate-sized black hole (23 times the mass of the Sun) collide with a "compact object" with a mass of 2.6 times the mass of the Sun (2.5 to 2.64 at a 90% confidence interval).

Beyond a certain cutoff mass at a given radius, compact objects like neutron stars collapse and form black holes. But calculating the cutoff isn't a clean and simple exercise, because you have to model how tightly protons and neutrons can be squeezed together by gravity as nuclear forces push back against the squeezing, and those are complex systems involving vast numbers of protons and neutrons that can't be modeled exactly.

The most dense, large compact objects in the universe that are not black holes are neutron stars. Neutron stars are just massive, extremely dense, ordinary stars on the continuum of ordinary star behavior. 

But black holes are qualitatively different. In classical General Relativity they are mathematical singularities. In theories of quantum gravity, black holes are "almost" singularities (from which nothing can escape) but leak slightly in a theoretically described phenomenon called "Hawking Radiation," which is too slight to be observed over the noise of cosmic background radiation with current means. 

This compact object is potentially more massive than any previously observed neutron star (it has a higher minimum mass within experimental uncertainties than any previously observed neutron star), but it is lighter than the lightest known black hole (see here), subject to outliers near the boundary with large error margins in their mass estimates. 

The least massive black hole ever observed has a mass of 2.72-2.82 solar masses (in a 95% confidence range). The most massive previously observed neutron stars have masses of 2.32-3.15, 1.9-3.00, 2.15-2.70 and 2.16-2.64 solar masses (in a 95% confidence range). So, the cutoff has to be somewhere in the range of 2.32 solar masses to 2.82 solar masses. This object is squarely in the middle of that range.
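The interval logic is simple enough to spell out in a few lines of Python (a toy restatement of the reasoning above using the confidence ranges just quoted, not code from any of the papers):

```python
# The NS/BH cutoff must sit above every observed neutron star's lower
# mass bound and below the lightest black hole's upper mass bound.
ns_ranges = [(2.32, 3.15), (1.9, 3.00), (2.15, 2.70), (2.16, 2.64)]
lightest_bh = (2.72, 2.82)

cutoff_low = max(lo for lo, _ in ns_ranges)  # heaviest NS lower bound: 2.32
cutoff_high = lightest_bh[1]                 # lightest BH upper bound: 2.82
print(f"cutoff somewhere in [{cutoff_low}, {cutoff_high}] solar masses")
print(cutoff_low < 2.6 < cutoff_high)        # GW190814's secondary is inside
```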

The paper is attracting wide attention because the object falls into this mass gap, where we have no data to confirm whether it is a neutron star or a black hole. This result could help us make that determination, shedding light on the accuracy of our predictions about the transition mass between neutron stars and black holes. Even more exciting, but less likely, would be the possibility that the smaller object is a "primordial black hole" (discussed below).

In theory, a neutron star would emit light, which a black hole (by definition) does not, but neutron stars are dim at the best of times and this object is so far away that it would be hard to see that light even if it were being emitted. It is 241 +41 −45 megaparsecs from Earth (roughly 800 million light years away). No electromagnetic counterpart to the gravitational waves has been confirmed to date. This nominally favors a black hole interpretation, but not strongly, because light from a neutron star that far away is so hard to see, even with state of the art telescopes.
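As a quick unit check on that distance figure (1 megaparsec is about 3.26 million light years):

```python
# Unit check on the distance quoted above.
MPC_TO_MILLION_LY = 3.262  # 1 Mpc is about 3.26 million light years
print(241 * MPC_TO_MILLION_LY, "million light years")  # ~786
```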

Knowing whether this is a black hole or a neutron star would help pin down the exact cutoff mass between black holes and neutron stars. This would in turn tell us a lot about the properties of neutrons in the ultra high pressure environment of a neutron star. As an aside, note that stars can get much more massive than 3 solar masses, but only if they are much less dense than a neutron star.

It is also possible that there are some black holes out there (called "primordial black holes") which were formed not by the collapse of a star, but by the collapse of smaller amounts of matter than would be necessary to form a black hole today. This could have happened in the early days of the Universe, shortly after the Big Bang, when everything was much more densely packed, providing an alternative means of creating black holes.

If primordial black holes do exist, the smaller ones would have ceased to exist eons ago, because the rate at which a black hole loses mass to Hawking Radiation rises steeply as its mass falls. Tiny black holes should evaporate via Hawking Radiation almost immediately, while primordial black holes with masses comparable to those of asteroids or larger could still be in existence 14 billion years or so after the Big Bang.
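To see why mass matters so much here, a back-of-envelope Python calculation using the standard Hawking evaporation-time formula, t ≈ 5120 π G² M³ / (ħ c⁴) (my own sketch with textbook constants, not taken from any cited source):

```python
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34           # reduced Planck constant, J s
C = 2.998e8                # speed of light, m/s
AGE_OF_UNIVERSE = 4.35e17  # ~13.8 billion years, in seconds

def evaporation_time(mass_kg):
    """Hawking evaporation time in seconds; scales as the cube of the mass."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

print(evaporation_time(1.0))  # ~8e-17 s: a 1 kg black hole is gone instantly

# Smallest initial mass that survives to the present day: ~2e11 kg,
# roughly the mass of a modest asteroid.
m_crit = (AGE_OF_UNIVERSE * HBAR * C**4 / (5120 * math.pi * G**2)) ** (1 / 3)
print(f"{m_crit:.1e} kg")
```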

One of the main papers describing this event is LIGO Scientific Collaboration and Virgo Collaboration, "GW190814: Gravitational Waves from the Coalescence of a 23 M☉ Black Hole with a 2.6 M☉ Compact Object" arXiv (June 24, 2020).

UPDATE July 9, 2020: A follow up paper by different authors favors a black hole interpretation:

Although gravitational-wave signals from exceptional low-mass compact binary coalescences, like GW170817, may carry matter signatures that differentiate the source from a binary black hole system, only one out of every eight events detected by the current Advanced LIGO and Virgo observatories are likely to have signal-to-noise ratios large enough to measure matter effects, even if they are present. Nonetheless, the systems' component masses will generally be constrained precisely. Constructing an explicit mixture model for the total rate density of merging compact objects, we develop a hierarchical Bayesian analysis to classify gravitational-wave sources according to the posterior odds that their component masses are drawn from different subpopulations. Accounting for current uncertainty in the maximum neutron star mass, and adopting different reasonable models for the total rate density, we examine two recent events from the LIGO-Virgo Collaboration's third observing run, GW190425 and GW190814. For population models with no overlap between the neutron star and black hole mass distributions, we typically find that there is a ≳70% chance that GW190425 was a binary neutron star merger rather than a neutron star-black hole merger. On the other hand, we find that there is a ≲6% chance that GW190814 involved a slowly spinning neutron star, regardless of our assumed population model.
So does another paper:
Is the secondary component of GW190814 the lightest black hole or the heaviest neutron star ever discovered in a double compact-object system [R. Abbott et al., ApJ Lett., 896, L44 (2020)]? This is the central question animating this letter. Covariant density functional theory provides a unique framework to investigate both the properties of finite nuclei and neutron stars, while enforcing causality at all densities. By tuning existing energy density functionals we were able to: (a) account for a 2.6 Msun neutron star, (b) satisfy the original constraint on the tidal deformability of a 1.4 Msun neutron star, and (c) reproduce ground-state properties of finite nuclei. Yet, for the class of models explored in this work, we find that the stiffening of the equation of state required to support super-massive neutron stars is inconsistent with either constraints obtained from energetic heavy-ion collisions or from the low deformability of medium-mass stars. Thus, we speculate that the maximum neutron star mass can not be significantly higher than the existing observational limit and that the 2.6 Msun compact object is likely to be the lightest black hole ever discovered.

A First Take On A NYT Story About Collapse Theory In Quantum Mechanics

A story in yesterday's New York Times talked about someone who is developing an approach to quantum mechanics called "objective collapse models" and who is portrayed as a radical rebel fighting scientific orthodoxy.

There is a lot of local color and personal profiling in the story, but the scientific core of it is less radical or rebellious than the atmospherics in the story would suggest.

In quantum mechanics there are lots of phenomena in which something behaves like a spread-out wave until it is observed, after which it behaves like a point-like object with more specific properties. Quantum mechanics is rather vague about exactly what it is about a measurement that triggers this "collapse of the wave function."

Angelo Bassi's "objective collapse models" are an attempt to put more definition on what triggers the collapse of the wave function. He does not try to dispute the parts of quantum mechanics that are well established experimentally, as so many crackpot theorists (mostly not professional physicists), who just don't like the realities that quantum mechanics reveals, do.

Wikipedia has this to say in its introduction to its Objective Collapse Theory page:
Objective-collapse theories, also known as models of spontaneous wave function collapse or dynamical reduction models, were formulated as a response to the measurement problem in quantum mechanics, to explain why and how quantum measurements always give definite outcomes, not a superposition of them as predicted by the Schrödinger equation, and more generally how the classical world emerges from quantum theory. The fundamental idea is that the unitary evolution of the wave function describing the state of a quantum system is approximate. It works well for microscopic systems, but progressively loses its validity when the mass / complexity of the system increases. 
In collapse theories, the Schrödinger equation is supplemented with additional nonlinear and stochastic terms (spontaneous collapses) which localize the wave function in space. The resulting dynamics is such that for microscopic isolated systems the new terms have a negligible effect; therefore, the usual quantum properties are recovered, apart from very tiny deviations. Such deviations can potentially be detected in dedicated experiments, and efforts are increasing worldwide towards testing them. 
An inbuilt amplification mechanism makes sure that for macroscopic systems consisting of many particles, the collapse becomes stronger than the quantum dynamics. Then their wave function is always well localized in space, so well localized that it behaves, for all practical purposes, like a point moving in space according to Newton’s laws.
In this sense, collapse models provide a unified description of microscopic and macroscopic systems, avoiding the conceptual problems associated to measurements in quantum theory.
 
The most well-known examples of such theories are: 
* Ghirardi–Rimini–Weber (GRW) model 
* Continuous Spontaneous Localization (CSL) model 
* Diósi–Penrose (DP) model 
Collapse theories stand in opposition to many-worlds interpretation theories, in that they hold that a process of wave function collapse curtails the branching of the wave function and removes unobserved behaviour.
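To make the quoted "amplification mechanism" concrete, here is a minimal toy sketch in Python (my own illustration of the GRW flavor of these models, not code from any cited source): each of N particles is hit at the tiny rate λ ≈ 10⁻¹⁶ per second, but a single hit anywhere localizes a superposition of macroscopically distinct states, so the effective collapse rate is Nλ.

```python
import numpy as np

LAMBDA_GRW = 1e-16  # GRW localization rate per particle, in 1/s

def expected_collapse_time(n_particles):
    """Mean waiting time before the first localization hit among N particles."""
    return 1.0 / (LAMBDA_GRW * n_particles)

def simulate_collapse(amplitudes, n_particles, rng=None):
    """Draw a collapse time (exponential with rate N*lambda) and pick a
    branch with Born-rule probability |c_i|^2, which is what a GRW hit
    effectively does when the branches are far apart compared to the
    localization width."""
    rng = rng or np.random.default_rng()
    probs = np.abs(np.asarray(amplitudes)) ** 2
    probs = probs / probs.sum()
    t = rng.exponential(1.0 / (LAMBDA_GRW * n_particles))
    branch = rng.choice(len(probs), p=probs)
    return t, branch

# A single particle stays in superposition for ~300 million years on average...
print(expected_collapse_time(1) / 3.15e7, "years")
# ...while a dust grain of ~1e18 nucleons collapses in ~10 milliseconds.
print(expected_collapse_time(1e18) * 1e3, "milliseconds")
```

This is the sense in which the extra collapse dynamics are negligible for microscopic systems but dominant for macroscopic ones.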
The Stanford Encyclopedia of Philosophy has a nice entry on Collapse Theories that puts Bassi's work in context. This entry explains that experimental tests of his proposals have been proposed, but the differences in the theoretically expected experimental outcomes between these models and the mainstream alternatives are so subtle that it is hard to do an experiment that definitively distinguishes between them. We can, however, do experiments that are close to being good enough to see these distinctions, and improvements in instrumental technologies should make it possible to confirm or rule out this class of theories in the medium-term future.

Bassi is also part of the LHCb collaboration jointly publishing papers such as this one experimentally measuring one of the Standard Model's weak force related constants, which governs the probability of a b quark transitioning to a charm quark in a W boson mediated interaction.

Bassi's most recent preprint on this subject is this one (which was eventually published):
Collapse models implement a progressive loss of quantum coherence when the mass and the complexity of quantum systems increase. We will review such models and the current attempts to test their predicted loss of quantum coherence.
Matteo Carlesso, Angelo Bassi, "Current tests of collapse models: How far can we push the limits of quantum mechanics?" arXiv (January 27, 2020) published in Quantum Information and Measurement (QIM) V: Quantum Technologies; OSA Technical Digest (Optical Society of America, 2019), paper S1C.3

Another paper exploring the line of research mentioned in the New York Times article is this September 25, 2019 preprint examining the question of whether there is a "minimum measurement time" for quantum phenomena. This paper references the "Continuous Spontaneous Localization" (CSL) model, which is one of the objective collapse models that Bassi works with in his research.

Wednesday, June 24, 2020

Calibrating Mutation Rates

Understanding the rate and pattern of germline mutations is of fundamental importance for understanding evolutionary processes. Here we analyzed 19 parent-offspring trios of rhesus macaques (Macaca mulatta) at high sequencing coverage of ca. 76X per individual, and estimated an average rate of 0.73 × 10^-8 de novo mutations per site per generation (95% CI: 0.65 × 10^-8 - 0.81 × 10^-8). By phasing 50% of the mutations to parental origins, we found that the mutation rate is positively correlated with the paternal age. The paternal lineage contributed an average of 80% of the de novo mutations, with a trend of an increasing male contribution for older fathers. About 1.9% of de novo mutations were shared between siblings, with no parental bias, suggesting that they arose from early development (postzygotic) stages. Finally, the divergence times between closely related primates calculated based on the yearly mutation rate of rhesus macaque generally reconcile with divergence estimated with molecular clock methods, except for the Cercopithecidae/Hominoidea molecular divergence dated at 54 Mya using our new estimate of the yearly mutation rate.
Lucie A. Bergeron, et al., "The germline mutational process in rhesus macaque and its implications for phylogenetic dating" bioRxiv (June 23, 2020). doi: https://doi.org/10.1101/2020.06.22.164178
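As a back-of-envelope illustration of how such a per-generation rate feeds into molecular-clock dating (the mutation rate is from the abstract; the generation time and human-macaque divergence below are round values I am assuming for illustration, not numbers taken from the paper):

```python
MU_PER_GENERATION = 0.73e-8  # de novo mutations per site per generation (paper)
GENERATION_TIME = 11.0       # years; assumed average parental age
DIVERGENCE = 0.065           # assumed human-macaque per-site sequence divergence

mu_per_year = MU_PER_GENERATION / GENERATION_TIME
# Divergence accumulates along both lineages, hence the factor of 2.
split_time_years = DIVERGENCE / (2 * mu_per_year)
print(f"~{split_time_years / 1e6:.0f} Mya")  # ~49 Mya with these inputs
```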

Volcano In Alaska Impacted Roman Empire and Egypt In 43 BCE

Once again a volcano has been linked to a historically influential climate event. 
Around the time of Julius Caesar's death in 44 BCE, written sources describe a period of unusually cold climate, crop failures, famine, disease, and unrest in the Mediterranean Region - impacts that ultimately contributed to the downfall of the Roman Republic and Ptolemaic Kingdom of Egypt. Historians have long suspected a volcano to be the cause, but have been unable to pinpoint where or when such an eruption had occurred, or how severe it was. 
In a new study published this week in Proceedings of the National Academy of Sciences (PNAS), a research team led by Joe McConnell, Ph.D. of the Desert Research Institute in Reno, Nev. uses an analysis of tephra (volcanic ash) found in Arctic ice cores to link the period of unexplained extreme climate in the Mediterranean with the caldera-forming eruption of Alaska's Okmok volcano in 43 BCE.
From here

The paper and its abstract are as follows:
Significance 
The first century BCE fall of the Roman Republic and Ptolemaic Kingdom and subsequent rise of the Roman Empire were among the most important political transitions in the history of Western civilization. Volcanic fallout in well-dated Arctic ice core records, climate proxies, and Earth system modeling show that this transition occurred during an extreme cold period resulting from a massive eruption of Alaska’s Okmok volcano early in 43 BCE. Written sources describe unusual climate, crop failures, famine, disease, and unrest in the Mediterranean immediately following the eruption—suggesting significant vulnerability to hydroclimatic shocks in otherwise sophisticated and powerful ancient states. Such shocks must be seen as having played a role in the historical developments for which the period is famed. 
Abstract 
The assassination of Julius Caesar in 44 BCE triggered a power struggle that ultimately ended the Roman Republic and, eventually, the Ptolemaic Kingdom, leading to the rise of the Roman Empire. Climate proxies and written documents indicate that this struggle occurred during a period of unusually inclement weather, famine, and disease in the Mediterranean region; historians have previously speculated that a large volcanic eruption of unknown origin was the most likely cause. Here we show using well-dated volcanic fallout records in six Arctic ice cores that one of the largest volcanic eruptions of the past 2,500 y occurred in early 43 BCE, with distinct geochemistry of tephra deposited during the event identifying the Okmok volcano in Alaska as the source. Climate proxy records show that 43 and 42 BCE were among the coldest years of recent millennia in the Northern Hemisphere at the start of one of the coldest decades. Earth system modeling suggests that radiative forcing from this massive, high-latitude eruption led to pronounced changes in hydroclimate, including seasonal temperatures in specific Mediterranean regions as much as 7 °C below normal during the 2 y period following the eruption and unusually wet conditions. While it is difficult to establish direct causal linkages to thinly documented historical events, the wet and very cold conditions from this massive eruption on the opposite side of Earth probably resulted in crop failures, famine, and disease, exacerbating social unrest and contributing to political realignments throughout the Mediterranean region at this critical juncture of Western civilization.

Tuesday, June 23, 2020

New Discoveries At Stonehenge

There is more to Stonehenge than was previously known. "The discovery makes up for the cancellation of this year's summer solstice celebrations at Stonehenge – on 20 June – due to the ban on mass gatherings prompted by Covid-19." The newly discovered shafts were probably built by the same Neolithic/Megalithic (pre-Bell Beaker) people who built Stonehenge itself. 

The mystery near and around Stonehenge keeps growing. 
The latest revelation is the discovery of a ring of at least 20 prehistoric shafts about 2 miles from the famous Neolithic site of immense upright stones, according to an announcement from the University of Bradford. 
Archaeologists say the "astonishing" shafts in Durrington Walls date back to 2500 B.C. and form a circle more than 2 kilometers (1.2 miles) in diameter. Each one measures up to 10 meters (33 feet) in diameter and 5 meters (16 feet) deep. 
Researchers say there may have been more than 30 of the shafts at one time. . . . 
The exact purpose of the shafts is unclear, but one prominent theory is that they may have acted as a boundary to a sacred area connected to Stonehenge. 
Via National Public Radio (press release here).

Monday, June 22, 2020

Standard Model Neutrino Properties Recapped

There are seven experimentally determined parameters related to neutrinos in the Standard Model: four parameters of the PMNS matrix, and three neutrino mass eigenvalues (for background see here). A new preprint sets out the latest state of the measurements of the PMNS matrix parameters and the neutrino mass differences as of 2020.

The results for the parameters whose values aren't expressly recited in the abstract are set forth in Table III of the paper:

Taking the square roots and referring to the normal ordering that is strongly preferred by the data, the m(21) mass difference is 8.66 meV (one sigma error about 2.8%). The m(31) mass difference is 50.60 meV (one sigma error about 1.4%).

For example, the best fit values for a 1 meV first neutrino mass eigenvalue would be:

1 meV
9.66 meV
51.60 meV
Sum of neutrino masses:  62.26 meV.

Cosmology bounds the sum of the three neutrino masses to about 130 meV, which implies a mass of about 0 to 23.58 meV for the lightest neutrino mass eigenstate.
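A short Python check of this arithmetic (my own, not from the paper; note that the exact eigenvalues add the squared mass differences in quadrature, so the simple linear addition above slightly overstates the sum when the lightest mass is nonzero):

```python
import math

DM21 = 8.66   # sqrt(Delta m^2_21), in meV
DM31 = 50.60  # sqrt(Delta m^2_31), in meV (normal ordering)

def masses(m1):
    """Exact normal-ordering mass eigenvalues for a lightest mass m1 (meV)."""
    m2 = math.sqrt(m1**2 + DM21**2)
    m3 = math.sqrt(m1**2 + DM31**2)
    return m1, m2, m3

m1, m2, m3 = masses(1.0)
print(m2, m3, m1 + m2 + m3)  # ~8.72, ~50.61; sum ~60.3 meV vs. 62.26 above
```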

The preference for normal v. inverted mass ordering of the absolute neutrino masses is shown in Table II (in which OSC means oscillation data, and Cosmo refers to cosmology data).

In my view, the balance of the evidence also strongly disfavors the sterile neutrino hypothesis (not considered in this paper) and disfavors less strongly, but still disfavors, the existence of neutrinoless double beta decay.

As of last year, the constants were as follows (per the Particle Data Group):
The data below are in the form in which the actual values are directly measured (sine squared values of real valued mixing angles and squared values of mass differences) rather than the underlying parameter values which are easily derived from them with a scientific calculator.
[PDG chart of neutrino oscillation parameters not reproduced.]

The full data for the parameters shown as ". . . " in the chart above are as follows:
sin^2(theta23): theta23 could be on either side of a 45 degree angle based upon existing measurements, assuming a "normal" mass hierarchy for the neutrino masses. Existing experiments are capable of determining that theta23 is not exactly 45 degrees, but can't determine whether it is greater or smaller than that, which is why there are both octant I and octant II values.

sin^2(theta23) = 0.512 (+0.019, -0.022) OUR FIT, normal ordering, octant I
sin^2(theta23) = 0.542 (+0.019, -0.022) OUR FIT, normal ordering, octant II
sin^2(theta23) = 0.536 (+0.023, -0.028) OUR FIT, inverted ordering

Delta m32^2 with normal ordering is (2.444 ± 0.034) × 10^-3 eV^2 (the number shown in the chart is for the inverted mass hierarchy).

Sum of neutrino masses Σmν < 0.12 eV (95%, CMB + BAO); ≥ 0.06 eV (mixing).

Directly measured neutrino mass limits: [PDG chart not reproduced.]

Neutrino Flavors

The number of neutrino flavors in the Standard Model is a theoretically determined, rather than experimentally measured value, but the experimental measurements are consistent at the two sigma level with the Standard Model value of 3:

Effective number of neutrino flavors Neff = 2.99 ± 0.17 (cosmology measurements) (the expected value of this measured physical constant with exactly three types of neutrinos is 3.045, rather than exactly 3, for technical reasons related to the way that radiation impacts the relevant observables). This measurement includes all light neutrinos (up to the order of roughly 1-10 eV in mass) that oscillate with each other, and is independent of whether or not they interact via the weak force.

Number of light (i.e. less than 45 GeV) neutrino flavors from Z boson decays Nν = 2.984 ± 0.008. The Standard Model theoretical value is 3.
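For a sense of where that 2.984 figure comes from, here is a rough sketch of the extraction from the Z boson's invisible width, using approximate PDG partial widths in MeV (the real LEP analysis uses more careful ratios, so treat this as illustrative only):

```python
GAMMA_Z_TOTAL = 2495.2  # total Z width (MeV)
GAMMA_HADRONS = 1744.4  # hadronic partial width (MeV)
GAMMA_LEPTON = 83.98    # partial width per charged-lepton flavor (MeV)
GAMMA_NU_SM = 167.2     # Standard Model width per neutrino flavor (MeV)

# Whatever width is not accounted for by visible decays is "invisible",
# and dividing by the per-flavor neutrino width counts the flavors.
gamma_invisible = GAMMA_Z_TOTAL - GAMMA_HADRONS - 3 * GAMMA_LEPTON
print(f"N_nu ~ {gamma_invisible / GAMMA_NU_SM:.3f}")  # ~2.98
```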
The paper and its abstract are as follows:

[Submitted on 19 Jun 2020]

2020 Global reassessment of the neutrino oscillation picture

We present an updated global fit of neutrino oscillation data in the simplest three-neutrino framework. In the present study we include up-to-date analyses from a number of experiments. Namely, we have included all T2K measurements as of December 2019, the most recent NOνA antineutrino statistics, and data collected by the Daya Bay and RENO reactor experiments. Concerning the atmospheric and solar sectors, we have also updated our analyses of DeepCore and SNO data, respectively. 
All in all, these new analyses result in more accurate measurements of θ13, θ12, Δm^2_21 and |Δm^2_31|. The best fit value for the atmospheric angle θ23 lies in the second octant, but first octant solutions remain allowed at 2σ. Regarding CP violation measurements, the preferred value of δ we obtain is 1.20π (1.54π) for normal (inverted) neutrino mass ordering. 
These new results should be regarded as extremely robust due to the excellent agreement found between our Bayesian and frequentist approaches. 
Taking into account only oscillation data, there is a preference for the normal neutrino mass ordering at the 2.7σ level. While adding neutrinoless double beta decay from the latest Gerda, CUORE and KamLAND-Zen results barely modifies this picture, cosmological measurements raise the significance to 3.1σ within a conservative approach. A more aggressive data set combination of cosmological observations leads to a stronger preference for normal with respect to inverted mass ordering, at the 3.3σ level.

This cosmological data set provides 2σ upper limits on the total neutrino mass corresponding to Σmν < 0.13 (0.15) eV in the normal (inverted) neutrino mass ordering scenario.

These bounds are among the most complete ones in the literature, as they include all currently available neutrino physics inputs.
Comments: 34 pages, 15 figures, 3 tables
Subjects: High Energy Physics - Phenomenology (hep-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Experiment (hep-ex)
Cite as: arXiv:2006.11237 [hep-ph]