Monday, February 6, 2017

Fat Tails Are The Norm

Scientific papers of all stripes routinely assume that the likelihood of low frequency events is "Gaussian", which is to say that the probability distribution is approximately a "normal" distribution, which can be parameterized fully by a mean value and a standard deviation.

But, a recent study shows that, in real life, in complex fact patterns, probability distributions routinely have "fat tails" (a.k.a. "long tails"), i.e. the likelihood of extreme events is much greater than a normal distribution would suggest.
Published recently in the journal Royal Society Open Science, the study suggests that research in some of the more complex scientific disciplines, such as medicine or particle physics, often doesn't eliminate uncertainties to the extent we might expect. 
"This is due to a tendency to under-estimate the chance of significant abnormalities in results." said study author David Bailey, a professor in U of T's Department of Physics. 
Looking at 41,000 measurements of 3,200 quantities -- from the mass of an electron to the carbon dating of a sample -- Bailey found that anomalous observations happened up to 100,000 times more often than expected. 
"The chance of large differences does not fall off exponentially as you'd expect in a normal bell curve," said Bailey.
The paper and its abstract are as follows:
Judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties, but it is often unclear how well these have been evaluated and what they imply. Reported scientific uncertainties were studied by analysing 41 000 measurements of 3200 quantities from medicine, nuclear and particle physics, and interlaboratory comparisons ranging from chemistry to toxicology. Outliers are common, with 5σ disagreements up to five orders of magnitude more frequent than naively expected. Uncertainty-normalized differences between multiple measurements of the same quantity are consistent with heavy-tailed Student’s t-distributions that are often almost Cauchy, far from a Gaussian Normal bell curve. Medical research uncertainties are generally as well evaluated as those in physics, but physics uncertainty improves more rapidly, making feasible simple significance criteria such as the 5σ discovery convention in particle physics. Contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable. Such errors appear to have power-law distributions consistent with how designed complex systems fail, and how unknown systematic errors are constrained by researchers. This better understanding may help improve analysis and meta-analysis of data, and help scientists and the public have more realistic expectations of what scientific results imply.
David C. Bailey, "Not Normal: the uncertainties of scientific measurements," 4(1) Royal Society Open Science 160600 (2017).
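To get a feel for the size of the effect the abstract describes, here is a minimal sketch (my own illustration, not taken from the paper) comparing the chance of a greater-than-5-sigma deviation under a Gaussian with the chance under a heavy-tailed Student's t distribution; the choice of two degrees of freedom and the 5 sigma threshold are arbitrary illustrative assumptions.

```python
# Minimal sketch: how much more often |deviation| > 5 sigma occurs under a
# heavy-tailed Student's t distribution than under a Gaussian.
from scipy import stats

threshold = 5.0
p_gauss = 2 * stats.norm.sf(threshold)        # two-sided Gaussian tail, ~5.7e-7
p_student = 2 * stats.t.sf(threshold, df=2)   # Student's t with 2 degrees of freedom, ~3.8e-2

print(f"Gaussian:  P(|z| > 5) ~ {p_gauss:.1e}")
print(f"Student-t: P(|z| > 5) ~ {p_student:.1e}")
print(f"Ratio: ~{p_student / p_gauss:,.0f}x more frequent")
```

With these illustrative choices, 5 sigma deviations come out tens of thousands of times more likely than the Gaussian calculation suggests, which is in the same ballpark as the "up to five orders of magnitude" figure quoted in the abstract.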

It isn't that the statistical law sometimes called the "law of averages," which makes a Gaussian distribution seem reasonable, is inaccurate. But the assumptions of that law often do not hold as tightly as our intuition suggests.

This could be because the events are not independent of each other, because systematic error is underestimated, because the measurements aren't properly weighted, because the thing being measured does not have sufficiently quantitatively comparable units, because look-elsewhere effects aren't properly considered, or because the underlying distributions of individual events that add up to form the overall result are not simple binomial probabilities.
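As a toy illustration of just one of these mechanisms (underestimated systematic error), the following simulation, which is entirely my own sketch and not drawn from the study, generates pairs of measurements whose quoted uncertainties include only the statistical part while an unreported systematic of comparable size is also present:

```python
# Toy simulation: two measurements of the same quantity each quote only their
# statistical error, but each also carries an unreported systematic error of
# the same size. Disagreements, normalized by the *quoted* errors, then exceed
# 5 sigma far more often than a naive Gaussian calculation predicts.
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
stat, syst = 1.0, 1.0   # quoted statistical error and hidden systematic error

x1 = rng.normal(0, stat, n) + rng.normal(0, syst, n)
x2 = rng.normal(0, stat, n) + rng.normal(0, syst, n)
z = (x1 - x2) / np.sqrt(2 * stat**2)   # normalized using only the quoted errors

print(f"observed P(|z| > 5) ~ {np.mean(np.abs(z) > 5):.1e} "
      f"(Gaussian expectation ~ 5.7e-7)")
```

Even this mild scenario inflates the rate of nominal 5 sigma disagreements by a factor of several hundred.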

In particle physics, this is handled by setting the threshold for nominal significance (computed assuming a Gaussian distribution) far higher than what ought to be necessary to constitute a discovery (i.e. 5 sigma).

Lubos Motl also has a recent post on a similar subject, which I won't attempt to summarize here, that distinguishes between probabilities with "fat tails" (when extreme events are actually more likely than in a normal distribution) and the application of the "precautionary principle" (which is used to justify assuming that unlikely bad events have relatively high probabilities and should be regulated when their exact probability can't be determined) to justify regulations in a cost-benefit analysis.

Thursday, February 2, 2017

Thermal Relic Warm Dark Matter Can't Have A Mass Of Less Than 2 keV

The mass parameter space of one of the leading dark matter models remains in the same range where estimates from other methods have put it. 
We study the substructure content of the strong gravitational lens RXJ1131-1231 through a forward modelling approach that relies on generating an extensive suite of realistic simulations. The statistics of the substructure population of halos depends on the properties of dark matter. We use a merger tree prescription that allows us to stochastically generate substructure populations whose properties depend on the dark matter particle mass. These synthetic halos are then used as lenses to produce realistic mock images that have the same features, e.g. luminous arcs, quasar positions, instrumental noise and PSF, as the data. By analysing the data and the simulations in the same way, we are able to constrain models of dark matter statistically using Approximate Bayesian Computing (ABC) techniques. This method relies on constructing summary statistics and distance measures that are sensitive to the signal being targeted. We find that using the HST data for RXJ1131-1231 we are able to rule out a warm dark matter thermal relict mass below 2 keV at the 2 sigma confidence level.
Simon Birrer, Adam Amara, and Alexandre Refregier, "Lensing substructure quantification in RXJ1131-1231: A 2 keV lower bound on dark matter thermal relict mass" (January 31, 2017).
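For readers unfamiliar with the ABC technique mentioned in the abstract, here is a deliberately generic rejection-sampling sketch. It is my own toy example (the simulator, summary statistic, prior and tolerance are all invented for illustration) and has nothing to do with the paper's actual lensing pipeline:

```python
# Generic rejection-ABC sketch: keep parameter draws whose simulated summary
# statistic lands within a tolerance of the observed summary statistic.
import numpy as np

rng = np.random.default_rng(1)

def simulate_summary(theta, size=1000):
    # Stand-in simulator: the "summary statistic" is just the mean of mock data.
    return rng.normal(theta, 1.0, size).mean()

observed_summary = 0.3                      # hypothetical observation
tolerance = 0.05                            # distance threshold
prior_draws = rng.uniform(-2, 2, 20_000)    # flat prior on the parameter

accepted = [t for t in prior_draws
            if abs(simulate_summary(t) - observed_summary) < tolerance]
print(f"accepted {len(accepted)} draws, posterior mean ~ {np.mean(accepted):.2f}")
```

In the paper's version, the toy simulator is replaced by full mock lensed images and the toy summary statistic by statistics constructed to be sensitive to the substructure signal.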

The bounds of an axion dark matter candidate's mass have also recently been significantly constrained.

As previously mentioned, primordial black holes (or, more generally MACHOS) are also strongly disfavored by the evidence.

Also, self-interacting dark matter has been pretty much ruled out, and the parameter space for a dark photon mass in SIDM theories had already been significantly constrained.

WIMPS have been ruled out in the mass range from about 1 GeV to 10 TeV. There are also strict limits on the rates at which dark matter particles of 10 GeV mass (or more) can annihilate.
Examination of the cosmic rays produced by a dwarf galaxy with an apparent high proportion of dark matter places strict limits on the dark matter annihilation cross-section and mean dark matter lifetime for dark matter candidates with 10 GeV or more of mass. 
The age of the universe is about 4.35*10^17 seconds (13.8 billion years). The minimum mean lifetime of dark matter, under various assumptions given the observations made in this study, is 10^25 to 10^27 seconds. Thus, more than 99.99999% of the dark matter that was ever in existence during the lifetime of the universe, if it exists and is made of particles of 10 GeV or heavier, must still exist.
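As a quick arithmetic check of that survival figure (a sketch using only the numbers quoted above and simple exponential decay):

```python
# Surviving fraction of dark matter after the age of the universe, assuming
# simple exponential decay with the minimum mean lifetimes quoted above.
import math

t_universe = 4.35e17                 # age of the universe in seconds
for tau in (1e25, 1e27):             # lower bounds on the mean lifetime, in seconds
    surviving = math.exp(-t_universe / tau)
    print(f"tau = {tau:.0e} s -> surviving fraction ~ {surviving:.10f}")
# tau = 1e25 s gives ~0.9999999565, i.e. more than 99.99999% survives;
# tau = 1e27 s gives ~0.9999999996.
```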
Planck data likewise confirm that there are strict limits on the rate at which dark matter can annihilate. High velocity galactic cluster impacts like the Bullet Cluster and El Gordo also disfavor Cold Dark Matter models. And, experimental observations have likewise ruled out proton decay with high confidence.

While they were never much of a dark matter candidate, per se, the data increasingly rule out a light sterile neutrino that oscillates with ordinary active neutrinos. 

Wednesday, February 1, 2017

Genetic Continuity From The Mesolithic Near The Border Of Russia and Korea

Genetic continuity from the first wave farmers of the Neolithic era to the present is uncommon in Europe (Sardinia comes closest), and continuity from Mesolithic hunter-gatherer DNA to modern populations is almost unheard of in Europe. But, remarkably, two Mesolithic ancient DNA samples gathered near the border of Russia and North Korea are genetically similar to modern populations in the area.

I use the Western convention of calling hunter-gatherers on the eve of the food production era Mesolithic, rather than Neolithic, a term which in the Eastern convention is based upon pottery and basket use as a key litmus test. The paper notes that:
Early Neolithic societies in the Russian Far East, Japan, and Korea started to manufacture and use pottery and basketry 10.5 to 15 ka, but domesticated crops and livestock arrived several millennia later.  . . .
This site dates back to 9.4 to 7.2 ka, with the human remains dating to ~7.7 ka, and it includes some of the world’s earliest evidence of ancient textiles. The people inhabiting Devil’s Gate were hunter-fisher-gatherers with no evidence of farming; the fibers of wild plants were the main raw material for textile production. We focus our analysis on the two samples with the highest sequencing coverage, DevilsGate1 and DevilsGate2, both of which were female. The mitochondrial genome of the individual with higher coverage (DevilsGate1) could be assigned to haplogroup D4; this haplogroup is found in present-day populations in East Asia and has also been found in Jomon skeletons in northern Japan. For the other individual (DevilsGate2), only membership to the M branch (to which D4 belongs) could be established. . . .
We compared the individuals from Devil’s Gate to a large panel of modern-day Eurasians and to published ancient genomes. On the basis of PCA and an unsupervised clustering approach, ADMIXTURE, both individuals fall within the range of modern variability found in populations from the Amur Basin, the geographic region where Devil’s Gate is located, and which is today inhabited by speakers from a single language family (Tungusic). This result contrasts with observations in western Eurasia, where, because of a number of major intervening migration waves, hunter-gatherers of a similar age fall outside modern genetic variation.
We further confirmed the affinity between Devil’s Gate and modern-day Amur Basin populations by using outgroup f3 statistics in the form f3(African; DevilsGate, X), which measures the amount of shared genetic drift between a Devil’s Gate individual and X, a modern or ancient population, since they diverged from an African outgroup. Modern populations that live in the same geographic region as Devil’s Gate have the highest genetic affinity to our ancient genomes, with a progressive decline in affinity with increasing geographic distance (r2 = 0.756, F1,96 = 301, P < 0.001), in agreement with neutral drift leading to a simple isolation-by-distance pattern. 
The Ulchi, traditionally fishermen who live geographically very close to Devil’s Gate and are the only Tungusic-speaking population from the Amur Basin sampled in Russia (all other Tungusic speakers in our panel are from China), are genetically the most similar population in our panel. Other populations that show high affinity to Devil’s Gate are the Oroqen and the Hezhen—both of whom, like the Ulchi, are Tungusic speakers from the Amur Basin—as well as modern Koreans and Japanese. Given their geographic distance from Devil’s Gate, Amerindian populations are unusually genetically close to samples from this site, in agreement with their previously reported relationship to Siberian and other north Asian populations. . . . 
By analyzing genome-wide data from two early Neolithic East Asians from Devil’s Gate, in the Russian Far East, we could demonstrate a high level of genetic continuity in the region over at least the last 7700 years. The cold climatic conditions in this area, where modern populations still rely on a number of hunter-gatherer-fisher practices, likely provide an explanation for the apparent continuity and lack of major genetic turnover by exogenous farming populations, as has been documented in the case of southeast and central Europe. Thus, it seems plausible that the local hunter-gatherers progressively added food-producing practices to their original lifestyle. However, it is interesting to note that in Europe, even at very high latitudes, where similar subsistence practices were still important until very recent times, the Neolithic expansion left a significant genetic signature, albeit attenuated in modern populations, compared to the southern part of the continent. Our ancient genomes thus provide evidence for a qualitatively different population history during the Neolithic transition in East Asia compared to western Eurasia, suggesting stronger genetic continuity in the former region. These results encourage further study of the East Asian Neolithic, which would greatly benefit from genetic data from early agriculturalists (ideally, from areas near the origin of wet rice cultivation in southern East Asia), as well as higher-coverage hunter-gatherer samples from different regions to quantify population structure before intensive agriculture. 
The paper also conducts some interesting analysis of Korean and Japanese genetics, informed by these new ancient genomes, that largely supports existing paradigms and suggests genetic connections between the Devil's Gate Mesolithic population and the Jomon of Japan.
The close genetic affinity between Devil’s Gate and modern Japanese and Koreans, who live further south, is also of interest. It has been argued, based on both archaeological and genetic analyses, that modern Japanese have a dual origin, descending from an admixture event between hunter-gatherers of the Jomon culture (16 to 3 ka) and migrants of the Yayoi culture (3 to 1.7 ka), who brought wet rice agriculture from the Yangtze estuary in southern China through Korea. The few ancient mtDNA samples available from Jomon sites on the northern Hokkaido island show an enrichment of particular haplotypes (N9b and M7a, with D1, D4, and G1 also detected) present in modern Japanese populations, particularly the Ainu and Ryukyuans, as well as southern Siberians (for example, Udegey and Ulchi). The mtDNA haplogroups of our samples from Devil’s Gate (D4 and M) are also present in Jomon samples, although they are not the most common ones (N9b and M7a). Recently, nuclear genetic data from two Jomon samples also confirmed the dual origin hypothesis and implied that the Jomon diverged before the diversification of present-day East Asians. 
We investigated whether it was possible to recover the Northern and Southern genetic components by modeling modern Japanese as a mixture of all possible pairs of sources, including both modern Asian populations and Devil’s Gate, using admixture f3 statistics. The clearest signal was given by a combination of Devil’s Gate and modern-day populations from Taiwan, southern China, and Vietnam, which could represent hunter-gatherer and agriculturalist components, respectively. However, it is important to note that these scores were just barely significant (−3 < z < −2) and that some modern pairs also gave negative scores, even if not reaching our significance threshold (z scores as low as −1.9). 
The origin of Koreans has received less attention. Also, because of their location on the mainland, Koreans have likely experienced a greater degree of contact with neighboring populations throughout history. However, their genomes show similar characteristics to those of the Japanese on genome-wide SNP data and have also been shown to harbor both northern and southern Asian mtDNA and Y chromosomal haplogroups. Unfortunately, our low coverage and small sample size from Devil’s Gate prevented a reliable estimate of admixture coefficients or use of linkage disequilibrium–based methods to investigate whether the components originated from secondary contact (admixture) or continuous differentiation and to date any admixture event that did occur. 
The abstract and citation to the paper are as follows: 
Ancient genomes have revolutionized our understanding of Holocene prehistory and, particularly, the Neolithic transition in western Eurasia. In contrast, East Asia has so far received little attention, despite representing a core region at which the Neolithic transition took place independently ~3 millennia after its onset in the Near East. 
We report genome-wide data from two hunter-gatherers from Devil’s Gate, an early Neolithic cave site (dated to ~7.7 thousand years ago) located in East Asia, on the border between Russia and Korea. Both of these individuals are genetically most similar to geographically close modern populations from the Amur Basin, all speaking Tungusic languages, and, in particular, to the Ulchi. The similarity to nearby modern populations and the low levels of additional genetic material in the Ulchi imply a high level of genetic continuity in this region during the Holocene, a pattern that markedly contrasts with that reported for Europe.
Siska et al., Genome-wide data from two early Neolithic East Asian individuals dating to 7700 years ago, 3(2) Science Advances e1601877 (February 1, 2017) (open access).

Hat Tip to Eurogenes.

Slavic Origins

I've noted before that Slavic expansion in the historic era dramatically transformed the linguistic landscape of Eastern Europe. As the comments to that post note, this expansion, in addition to spreading the Slavic languages, had a demic component that left a significant genetic trace across Slavic Europe. Indeed, the Slavic expansion was the last linguistic transformation in Europe of this geographic scale, unless you count the transformation of vernacular Latin into the Romance languages, which was taking place contemporaneously. Yet, this story is largely unknown, at least in the West, even to people well educated in history.


Places where the modern Slavic languages are spoken, per Wikipedia. Romania, Hungary and Moldova are notable gaps between the northern and southern regions of this range. Romanian is a Romance language. Hungarian is a Uralic language adopted following a Magyar conquest ca. 1000 CE that was largely confined to aristocratic elites and did not result in demic replacement.

Where did this begin?

The best available evidence suggests that it originated in the Kiev culture, although the origin of that culture in turn is disputed. The contemporaneous Chernyakhov culture to the South was a mixed Slavic and Gothic culture.

When did this happen?

An expansion from the Kiev culture began in the last few centuries before the collapse of the Western Roman Empire in the 5th century. The proximate cause of the Western Roman empire's fall was an invasion of linguistically Eastern Germanic tribes including the Goths, although there is a huge amount of historical debate regarding the deeper causes of its fall. The Slavic expansion had largely run its course sometime in the Middle Ages.


Wielbark culture, migration of Goths (orange arrow), the 2nd-3rd centuries, Chernyakhov culture (orange line), the 3rd century and (red line), the 4th century, Kiev culture (yellow line), the 3rd-4th centuries. Image and caption per Wikipedia link above.

Wikipedia states regarding the Kiev culture:
The Kiev culture is an archaeological culture dating from about the 3rd to 5th centuries, named after Kiev, the capital of Ukraine. It is widely considered to be the first identifiable Slavic archaeological culture. It was contemporaneous to (and located mostly just to the north of) the Chernyakhov culture.
Settlements are found mostly along river banks, frequently either on high cliffs or right by the edge of rivers. The dwellings are overwhelmingly of the semi-subterranean type (common among earlier Celtic and Germanic and later among Slavic cultures), often square (about four by four meters), with an open hearth in a corner. Most villages consist of just a handful of dwellings. There is very little evidence of the division of labor, although in one case a village belonging to the Kiev culture was preparing thin strips of antlers to be further reworked into the well-known Gothic antler combs, in a nearby Chernyakhov culture village. 
The descendants of the Kiev culture — the Prague-Korchak, Penkovka and Kolochin cultures — established themselves in the 5th century in Eastern Europe. There is, however, a substantial disagreement in the scientific community over the identity of the Kiev culture's predecessors, with some historians and archaeologists tracing it directly from the Milograd culture, others, from the Chernoles culture (the Scythian farmers of Herodotus) through the Zarubintsy culture, still others through both the Przeworsk culture and the Zarubintsy culture.
Hat tip: The Old European culture blog.

Tuesday, January 31, 2017

Astronomy Data Strongly Disfavors Inverted Neutrino Mass Hierarchy

One of the major outstanding questions in neutrino physics is whether the neutrino masses have a "normal hierarchy", or an "inverted hierarchy." We are close to having an answer to that question.

The sum of the neutrino masses in an inverted hierarchy, we know from neutrino oscillation data, cannot be less than 98.6 meV. In a normal hierarchy, the sum of the neutrino masses must be at least a bit more than 65.34 meV. A global look at various kinds of astronomy data suggests that there is a 95% chance that the sum of the three neutrino masses is, in fact, 90 meV or less. This strongly favors a "normal hierarchy" of neutrino masses.

The state of the art measurement of the difference between the first and second neutrino mass eigenstates is roughly 8.66 +/- 0.12 meV, and of the difference between the second and third neutrino mass eigenstates it is roughly 49.5 +/- 0.5 meV, which implies that the sum of the three neutrino mass eigenstates cannot be less than about 65.34 meV with 95% confidence.

So, this also gives us absolute neutrino mass estimates that have relative precision comparable to that of the experimental values of the lighter quark masses, and precision in absolute terms that is truly remarkable. It narrows the range of each of the neutrino masses to a nearly perfectly correlated window only a bit larger than +/- 4 meV. This is a roughly 33% improvement over the previous state of the art precision with which the absolute neutrino masses could be estimated.

The bottom line is that the range of the three neutrino masses that would be consistent with experimental data is approximately as follows (with the location of each mass within the range being highly correlated with the other two and the sum):

Mv1  0-7.6 meV
Mv2  8.42-16.1 meV
Mv3  56.92-66.2 meV

Sum of all three neutrino masses should be in the range: 65.34-90 meV
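As a sanity check on the arithmetic (a sketch using only the figures quoted above), the low ends and high ends of the three per-eigenstate ranges add up to the end points of the quoted window for the sum:

```python
# Check that the per-eigenstate ranges quoted above sum to the quoted window
# for the total of the three neutrino masses (all values in meV).
low_ends = [0.0, 8.42, 56.92]
high_ends = [7.6, 16.1, 66.2]
print(round(sum(low_ends), 2), round(sum(high_ends), 2))   # 65.34 and 89.9 (~90)
```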

Realistically, I think that most people familiar with the question would subjectively favor results at the low end of these ranges, not least because the best fit value for the sum of the three neutrino masses based on astronomy data is significantly lower than the 95% confidence interval upper bound on the sum of the three neutrino masses.

In a Majorana neutrino mass type model, the neutrino mass is very tightly related to the rate of neutrinoless double beta decay that occurs, something that has not yet been observed with credible experimental data. The narrow window for the absolute neutrino masses also results in a narrow window for the expected rate of neutrinoless double beta decay that is not terribly far from the current experimental upper bound on the rate of that phenomenon from neutrinoless double beta decay experiments. So, we may be very close to resolving the Majorana v. Dirac neutrino mass question.
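For reference, in the standard treatment (a textbook relation, not spelled out in this post), the decay rate is controlled by the effective Majorana mass, which is built from the same mass eigenvalues and mixing matrix elements discussed above:

```latex
% Effective Majorana mass probed by neutrinoless double beta decay
m_{\beta\beta} \;=\; \Bigl|\, \sum_{i=1}^{3} U_{ei}^{2}\, m_{i} \,\Bigr|,
\qquad \Gamma_{0\nu\beta\beta} \;\propto\; |m_{\beta\beta}|^{2}
```

This is why a narrow window on the individual masses translates into a narrow window for the expected decay rate (up to nuclear matrix element and phase space factors).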

If neutrinos do have a "normal hierarchy", as every other class of Standard Model fermions does, this also suggests that whatever mechanism gives rise to the relative fermion masses in the Standard Model (including the neutrinos) inherently leads to a normal hierarchy, rather than being arbitrary.

The paper and its abstract are as follows:
Using some of the latest cosmological datasets publicly available, we derive the strongest bounds in the literature on the sum of the three active neutrino masses, Mν. In the most conservative scheme, combining Planck cosmic microwave background (CMB) temperature anisotropies and baryon acoustic oscillations (BAO) data, as well as the up-to-date constraint on the optical depth to reionization (τ), the tightest 95% confidence level (C.L.) upper bound we find is Mν<0.151~eV. The addition of Planck high-ℓ polarization tightens the bound to Mν<0.118~eV. Further improvements are possible when a less conservative prior on the Hubble parameter is added, bringing down the 95%~C.L. upper limit to ∼0.09~eV. The three aforementioned combinations exclude values of Mν larger than the minimal value allowed in the inverted hierarchical (IH) mass ordering, 0.0986~eV, at ∼82%~C.L., ∼91%~C.L., and ∼96%~C.L. respectively. A proper model comparison treatment shows that the same combinations exclude the IH at ∼64%~C.L., ∼71%~C.L., and ∼77%~C.L. respectively. We test the stability of the bounds against the distribution of the total mass Mν among the three mass eigenstates, finding that these are relatively stable against the choice of distributing the total mass among three (the usual approximation) or one mass eigenstate. Finally, we compare the constraining power of measurements of the full-shape galaxy power spectrum versus the BAO signature, from the BOSS survey. Even though the latest BOSS full shape measurements cover a larger volume and benefit from smaller error bars compared to previous similar measurements, the analysis method commonly adopted results in their constraining power still being less powerful than that of the extracted BAO signal. (abridged)
Sunny Vagnozzi, et al., "Unveiling ν secrets with cosmological data: neutrino masses and mass hierarchy" (January 27, 2017).

Sunday, January 29, 2017

Year Of The Chicken

Now that we are in the Chinese Year of the Chicken, it is appropriate to note that the chicken was domesticated ca. 5400 BCE, and that the oldest evidence of a domesticated chicken is in Taiwan.

Friday, January 27, 2017

Undeniable Evidence Of Austronesian-South American Pre-Columbian Contact

I have probably blogged this before, but if I have, it still bears repeating. The data come from a 2014 article in Current Biology that was recently blogged about by Bernard.

The autosomal genetics of the indigenous people of Polynesian Easter Island show three contributions: the largest is typical of other Polynesians. The best available evidence is that Easter Island was colonized by 30 to 100 people around 1200 CE. 

Recent studies have also cast considerable light on the mechanism by which the Polynesian blend came to have a mix of about 25% Papuan and about 75% indigenous Taiwanese Austronesian genetics. This probably involved a conquest of a previously purely Austronesian Lapita population in Tonga and Vanuatu by Papuan men, probably around 500 CE, after which the resulting mixed Polynesian population expanded further into Oceania.

Two other components of Easter Islander DNA are largely absent from Polynesians, one South American, and one European.

A very homogeneous 6% of the autosomal genes of indigenous Easter Islanders are Native American. This component results from an admixture dated to ca. 1310 CE to 1420 CE based upon Native American DNA segment lengths and the homogeneity of the contribution. This is clearly pre-Columbian and prior to Easter Island's first contact with Europeans. 

Two other strong pieces of physical evidence corroborate this genetic evidence of pre-European contact between Easter Island Polynesians and South America. One is the kumara (a sweet potato native to South America), which showed up in the Polynesian diet at about the right time with a name linguistically similar to its name in South American languages. In the other direction:
[C]hicken bones were also found in Chile in pre-Columbian sites before the arrival of Europeans. Mitochondrial DNA of these Chilean remains belong to the same lines as the remains of the Oceania Lapita culture, which seems to show that the Polynesians reached America before the Europeans.
A European genetic contribution is also present in proportions that vary greatly from person to person and is made up of longer segments, averaging about 16% of autosomal Easter Islander DNA. The estimated age of this contribution is 1850 CE to 1870 CE, based upon the same methods used to determine the age of the Native American contribution. The first European contact with Easter Island was on Easter Sunday in 1722 (at which point there was a population of about 4,000), and there was major Peruvian contact in the 1860s. European diseases and other factors reduced the indigenous population of Easter Island to 110 people.

Notably, this still can't explain the Paleo-Asian genetic contributions found in low percentages in a few recently contacted central South American jungle tribes, which differ from the Polynesian genetic component and also appear to be more ancient, if this is really a signal at all and not an unnoticed experimental or analysis error.

The Easter Island-South American connection seems to have had only a minimal impact on the South Americans in the long run, although it did have a discernible genetic impact on the Easter Islanders and a discernible dietary impact on all of Polynesia.

Thursday, January 26, 2017

Direct Proof Of Early Modern Human Presence In Arabia?

Researchers claim to have found a 90,000 year old human bone in Saudi Arabia, which would be consistent with previous findings of tools thought to be associated with modern humans in Arabia and with evidence of human remains in the Levant from around that era. This would be the oldest example of actual human remains ever found in Arabia and within about 10,000-20,000 years of the oldest example of human remains ever found outside of Africa.

The linked source does not provide a journal article reference, however, and at least unless an ancient DNA sample can be extracted (a hit or miss possibility at best), there is no way to tell if this individual is from a population that provides most of the ancestry of non-African modern humans, or if this sample is from an earlier wave of modern human migrants which some have argued represents an "Out of Africa that failed," or at least almost failed apart from a roughly 2% admixture of first wave modern humans with the ancestors of modern Papuans and negrito populations of the Philippines. 

Wednesday, January 25, 2017

N=8 SUGRA Adds Only RH Neutrinos To The Fermion Content Of The Standard Model

Higher N versions of supersymmetry (SUSY) and supergravity (SUGRA) may be useful as a foundation for "within the Standard Model" theory that demonstrates deeper relationships between the components of the Standard Model, particularly because these theories are exceptions to an important "no-go" theorem in theoretical physics called the Coleman-Mandula no-go theorem.

These goals are more worthwhile than exploring SUSY's best known, crude N=1 form, which is contrary to experiment, is baroque, and has been explored so heavily out of mathematical laziness, in order to explain the hierarchy non-problem, and in the interest of "naturalness."

Background: What is the Coleman-Mandula no-go theorem?

Basically, the Coleman-Mandula no-go theorem says that any theory that attempts to describe nature in a manner:

(1) consistent with the foundational principles of quantum mechanics, and 
(2) also consistent with special relativity, 
(3) that has massive fundamental particles which are consistent with those observed in real life at low energies: 

(4) must have particle interactions that can be described in terms of a Lie Group, and 
(5) can't have laws governing particle interactions that depend upon the laws of special relativity in a manner different from the way that they do in the Standard Model.

Since almost any realistic beyond the Standard Model theory must meet all three of the conditions for the no-go theorem to apply in order to meet rigorously tested experimental constraints, the conclusion of the theorem requires all such theories to have a single kind of core structure. This largely turns the process of inventing beyond the Standard Model theories of physics from an open ended inquiry into an elaborate multiple-choice question. For example, while the theorem does not prescribe the conservation laws that are allowed in such a theory, all of its conservation laws must follow a very particular mathematical form.

More technically, this no-go theorem can be summed up as follows:
Every quantum field theory satisfying the assumptions, 
1. Below any mass M, there are only a finite number of particle types.
2. Any two-particle state undergoes some reaction at almost all energies.
3. The amplitudes for elastic two-body scattering are analytic functions of the scattering angle at almost all energies. 
and that has non-trivial interactions can only have a Lie group symmetry which is always a direct product of the Poincaré group and an internal group if there is a mass gap: no mixing between these two is possible. As the authors say in the introduction to the 1967 publication, "We prove a new theorem on the impossibility of combining space-time and internal symmetries in any but a trivial way. . . . 
Since "realistic" theories contain a mass gap, the only conserved quantities, apart from the generators of the Poincaré group, must be Lorentz scalars.
The Poincaré group is a mathematical structure that defines the geometry of Minkowski space, which is the most basic space in which physical theories consistent with Einstein's theory of special relativity can be formulated.

A Lorentz scalar is "is a scalar which is invariant under a Lorentz transformation. A Lorentz scalar may be generated from multiplication of vectors or tensors. While the components of vectors and tensors are in general altered by Lorentz transformations, scalars remain unchanged. A Lorentz scalar is not necessarily a scalar in the strict sense of being a (0,0)-tensor, that is, invariant under any base transformation. For example, the determinant of the matrix of base vectors is a number that is invariant under Lorentz transformations, but it is not invariant under any base transformation."

Notable Lorentz scalars include the "length" of a position vector, the "length" of a velocity vector, the inner product of acceleration and velocity, and quantities built from a particle's 4-momentum, such as its rest mass and its energy, 3-momentum and 3-speed as measured by a given observer.
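As a concrete textbook example (my own addition for clarity), the invariant "length" of the 4-momentum is the rest mass, which is the same in every inertial frame (written here with the (+,-,-,-) metric signature):

```latex
% The invariant norm of the four-momentum is a Lorentz scalar: the rest mass.
p^{\mu} p_{\mu} \;=\; \frac{E^{2}}{c^{2}} - |\vec{p}\,|^{2} \;=\; m^{2} c^{2}
```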

But, SUSY and SUGRA are important exceptions to the Coleman-Mandula no-go theorem. (There are also a few other exceptions to this no-go theorem which are beyond the scope of this post which have quite different applications.) 

So, there are a variety of interesting ideas that one might want to try to implement in a beyond the Standard Model theory that it has been proved can only be implemented within the context of a SUSY or SUGRA model.

The Post
The latest CERN Courier has a long article by Hermann Nicolai, mostly about quantum gravity. Nicolai makes the following interesting comments about supersymmetry and unification:
To the great disappointment of many, experimental searches at the LHC so far have found no evidence for the superpartners predicted by N = 1 supersymmetry. However, there is no reason to give up on the idea of supersymmetry as such, since the refutation of low-energy supersymmetry would only mean that the most simple-minded way of implementing this idea does not work. Indeed, the initial excitement about supersymmetry in the 1970s had nothing to do with the hierarchy problem, but rather because it offered a way to circumvent the so-called Coleman–Mandula no-go theorem – a beautiful possibility that is precisely not realised by the models currently being tested at the LHC.
In fact, the reduplication of internal quantum numbers predicted by N = 1 supersymmetry is avoided in theories with extended (N > 1) supersymmetry. Among all supersymmetric theories, maximal N = 8 supergravity stands out as the most symmetric. Its status with regard to perturbative finiteness is still unclear, although recent work has revealed amazing and unexpected cancellations. However, there is one very strange agreement between this theory and observation, first emphasised by Gell-Mann: the number of spin-1/2 fermions remaining after complete breaking of supersymmetry is 48 = 3 × 16, equal to the number of quarks and leptons (including right-handed neutrinos) in three generations (see “The many lives of supergravity”). To go beyond the partial matching of quantum numbers achieved so far will, however, require some completely new insights, especially concerning the emergence of chiral gauge interactions.
I think this is an interesting perspective on the main problem with supersymmetry, which I’d summarize as follows. In N=1 SUSY you can get a chiral theory like the SM, but if you get the SM this way, you predict for every SM particle a new particle with the exact same charges (behavior under internal symmetry transformation), but spin differing by 1/2. This is in radical disagreement with experiment. What you’d really like is to use SUSY to say something about internal symmetry, and this is what you can do in principle with higher values of N. The problem is that you don’t really know how to get a chiral theory this way. That may be a much more fruitful problem to focus on than the supposed hierarchy problem.
 From Not Even Wrong (italics in original, boldface emphasis mine). 

Monday, January 23, 2017

Was There An Almost Failed First Modern Human Out Of Africa Wave?

Pagani (2016) makes the case, based upon the most recent common ancestry date determined by comparing parts of Papuan autosomal DNA to the TMRCA of that DNA in other modern humans, that the ancestors of the Papuans admixed with humans from an earlier wave of modern human migrants to Asia ca. 100,000 years ago. This is from a population which is also one of the few to show signs of admixture with Denisovans, a form of archaic hominin that diverged from modern humans before the oldest genetic or archaeological evidence of the existence of anatomically modern humans.

This new data point could be the solution to a potentially vexing paradox. 

There has long been archaeological evidence of a modern human presence in places like the Levant from 100,000 to 75,000 years ago. But, more recently, archaeological evidence of a modern human presence has been found in the Arabian interior from 100,000 to 125,000 years ago, in South Asia from more than 75,000 years ago, and arguably even China from 100,000 to 125,000 years ago. 

But, analysis of modern human DNA, and efforts to date Neanderthal admixture with modern humans including efforts based on non-mutation rate methods using ancient DNA, put a common ancestor of all non-Africans at more like 50,000 to 65,000 years ago, which corresponds to archaeological evidence of the first modern human presence in Australia and Papua New Guinea (which were a single land mass at the time).

There is then a gap in the archaeological record in the Levant from around 75,000 years before present to about 50,000 years before present. So, until very recently at least, the mainstream explanation for the earlier archaeological evidence in the Levant was that the early Levantine archaeological remains were an "Out of Africa that failed" and that all modern non-Africans descend from a second Out of Africa wave that prospered. 

But, the increasingly widespread archaeological evidence for a modern human presence in this 25,000 year gap period, genetic evidence in Altai Neanderthal ancient DNA indicating an admixture with modern humans ca. 100,000 years ago, and now this new data point in Pagani (2016), suggest that the simple version of the "Out of Africa that failed" theory is wrong.

Pagani (2016) instead suggests that there was a first wave of pre-Upper Paleolithic humans that spread across parts of Eurasia, who admixed with Neanderthals and didn't make much of an ecological difference (although arguably, they could have led to the extinction of Homo Erectus in Asia). This first wave of modern humans outside Africa provided only a small part of the ancestry of a small subset of modern humans. Tens of thousands of years later, the remnants of these first wave modern humans did admix with the second wave of more successful Upper Paleolithic modern humans who ended up in Papua New Guinea, a wave whose expansion into Eurasia was permanent and successful, even though the first wave was mostly replaced by these second wave modern humans.
High-coverage whole-genome sequence studies have so far focused on a limited number of geographically restricted populations, or been targeted at specific diseases, such as cancer. Nevertheless, the availability of high-resolution genomic data has led to the development of new methodologies for inferring population history and refuelled the debate on the mutation rate in humans. Here we present the Estonian Biocentre Human Genome Diversity Panel (EGDP), a dataset of 483 high-coverage human genomes from 148 populations worldwide, including 379 new genomes from 125 populations, which we group into diversity and selection sets. We analyse this dataset to refine estimates of continent-wide patterns of heterozygosity, long- and short-distance gene flow, archaic admixture, and changes in effective population size through time as well as for signals of positive or balancing selection. We find a genetic signature in present-day Papuans that suggests that at least 2% of their genome originates from an early and largely extinct expansion of anatomically modern humans (AMHs) out of Africa. Together with evidence from the western Asian fossil record, and admixture between AMHs and Neanderthals predating the main Eurasian expansion, our results contribute to the mounting evidence for the presence of AMHs out of Africa earlier than 75,000 years ago. 
Pagani, et al., "Genomic analyses inform on migration events during the peopling of Eurasia", Nature (Published online 21 September 2016). Hat tip: Marnie at Linear Population Model.

The paper is not open access, but Marnie at Linear Population Model provides quotes from some key passages of the paper and follows up with full citations included abstracts of some of the key sources cited therein.

For example, Marnie provides the citation and abstract for the Altai Neanderthal paper (which I am reprinting in a reformatted manner with emphasis added):
It has been shown that Neanderthals contributed genetically to modern humans outside Africa 47,000–65,000 years ago. Here we analyse the genomes of a Neanderthal and a Denisovan from the Altai Mountains in Siberia together with the sequences of chromosome 21 of two Neanderthals from Spain and Croatia. We find that a population that diverged early from other modern humans in Africa contributed genetically to the ancestors of Neanderthals from the Altai Mountains roughly 100,000 years ago. By contrast, we do not detect such a genetic contribution in the Denisovan or the two European Neanderthals. We conclude that in addition to later interbreeding events, the ancestors of Neanderthals from the Altai Mountains and early modern humans met and interbred, possibly in the Near East, many thousands of years earlier than previously thought.
Martin Kuhlwilm, et al., "Ancient gene flow from early modern humans into Eastern Neanderthals", Nature, Volume 530, Pages 429-433 (25 February 2016).  

Marnie also provides some of the language clarifying the key original insights of Pagani (2016) (citations and internal cross references omitted without editorial indication, emphasis mine):
Using fineSTRUCTURE, we find in the genomes of Papuans and Philippine Negritos more short haplotypes assigned as African than seen in genomes for individuals from other non-African populations. This pattern remains after correcting for potential confounders such as phasing errors and sampling bias. These shorter shared haplotypes would be consistent with an older population split. Indeed, the Papuan–Yoruban median genetic split time (using multiple sequential Markovian coalescent (MSMC)) of 90 kya predates the split of all mainland Eurasian populations from Yorubans at ~75 kya. This result is robust to phasing artefacts. Furthermore, the Papuan–Eurasian MSMC split time of ~40 kya is only slightly older than splits between west Eurasian and East Asian populations dated at ~30 kya. The Papuan split times from Yoruba and Eurasia are therefore incompatible with a simple bifurcating population tree model. 
At least two main models could explain our estimates of older divergence dates for Sahul populations from Africa than mainland Eurasians in our sample: 1) admixture in Sahul with a potentially un-sampled archaic human population that split from modern humans either before or at the same time as did Denisova and Neanderthal; or 2) admixture in Sahul with a modern human population (extinct OoA line; xOoA) that left Africa after the split between modern humans.

We consider support for these two non-mutually exclusive scenarios. Because the introgressing lineage has not been observed with aDNA, standard methods are limited in their ability to distinguish between these hypotheses. Furthermore, we show that single-site statistics, such as Patterson’s D, and sharing of non-African Alleles (nAAs), are inherently affected by confounding effects owing to archaic introgression in non-African populations. Our approach therefore relies on multiple lines of evidence using haplotype-based MSMC and fineSTRUCTURE comparisons (which we show should have power at this timescale). 
We located and masked putatively introgressed Denisova haplotypes from the genomes of Papuans, and evaluated phasing errors by symmetrically phasing Papuans and Eurasians genomes. Neither modification changed the estimated split time (based on MSMC) between Africans and Papuans. MSMC dates behave approximately linearly under admixture, implying that the hypothesized lineage may have split from most Africans around 120 kya. 
We compared the effect on the MSMC split times of an xOoA or a Denisova lineage in Papuans by extensive coalescent simulations. We could not simulate the large Papuan–African and Papuan–Eurasian split times inferred from the data, unless assuming an implausibly large contribution from a Denisova-like population. Furthermore, while the observed shift in the African–Papuan MSMC split curve can be qualitatively reproduced when including a 4% genomic component that diverged 120 kya from the main human lineage within Papuans, a similar quantity of Denisova admixture does not produce any significant effect. This favours a small presence of xOoA lineages rather than Denisova admixture alone as the likely cause of the observed deep African–Papuan split. We also show that such a scenario is compatible with the observed mitochondrial DNA and Y chromosome lineages in Oceania, as also previously argued. 
We further tested our hypothesized xOoA model by analysing haplotypes in the genomes of Papuans that show African ancestry not found in other Eurasian populations. We re-ran fineSTRUCTURE adding the Denisova, Altai Neanderthal and the Human Ancestral Genome sequences to a subset of the diversity set. FineSTRUCTURE infers haplotypes that have a most recent common ancestor (MRCA) with another individual. Papuan haplotypes assigned as African had, regardless, an elevated level of non-African derived alleles (that is, nAAs fixed ancestral in Africans) compared to such haplotypes in Eurasians. They therefore have an older mean coalescence time with our African samples.
I find no fault in the analysis in Pagani (2016), which is thoughtfully done by a method that should be reliable.

Sunday, January 22, 2017

A Bronze Age Flood In The British Isles

There was a catastrophic weather event which hit Ireland and Wales between 2354 BC and 2345 BC. Among other things, this event permanently flooded a village and forest in Cardigan Bay in Wales.

There is a Gaelic legend that explains it, which has very close parallels across Ireland and Wales in other locations as well.
The Laigin people from Ireland at one stage controlled the Llyn peninsula. I wonder if this is how we find an almost identical "left the tap running" explanation for the legend in both Ireland and Wales?

"...practically every lake in Wales has some story or other connected with it. The story about the lake Glasfryn is very interesting. The story says that in the olden times there was a well where the lake is now, and this well, kept by a maiden named "Grassi," was called "Grace's Well." Over the well was a door, presumablv a trapdoor, which Grassi used to open when people wanted water, and shut immediately afterwards. One day Grassi forgot to shut the door, and the water overflowed and formed a lake. For her carelessness Grassi was turned into a swan, and her ghost is still said to haunt Glasfryn House and Cal-Ladi. This little lake is now the home and breeding-place of countless swans..." . . .
[I]n "Historical and descriptive notices of the city of Cork and its vicinity" first published in 1839 by John Windele. On Pages 42-43 we can read this:
A short distance to the south west, from the City, is Lough na famog, (probably the Lough Ere of the Hajiology,) now called the Lough of Cork, a considerable sheet of water supplied by streams from the adjoining hills; the high road runs along its eastern shore, and the other sides are skirted by grounds, unhappily without tree or shrub, to add a feature of beauty or interest to the picture. It is the scene of one of CROKER'S charming Fairy Legends, detailing the bursting forth of the lake, through the negligence of the princess Fioruisge (Irish: Fior-uisge - spring water), daughter of King Corc. In taking water from the charmed fountain, she forgot to close the mouth of the well, and the court, the gardens, the King, and his people, were buried beneath the flowing waters.

The incident is common to almost every lake in Ireland.

Six centuries ago, Cambrensis had a similar legend concerning Lough Neagh, which Hollinshed has repeated in a less diffusive style. "There was," he says, "in old time, where the pool now standeth, vicious and beastlie inhabitants. At which time was there an old saw in everie man his mouth, that as soon as a well there springing, (which for the superstitious reverence they bare it, was continuallie covered and signed,) were left open and unsigned, so soone would so much water gush out of that well, as would forthwith overwhelme the whole territorie. It happened at length, that an old trot came thither to fetch water, and hearing her childe whine, she ran with might and maine to dandle her babe, forgetting the observance of the superstitious order tofore used: But as she was returning backe, to have covered the spring, the land was so farre overflown, as that it past hir helpe; and shortly after, she, hir suckling, and all those that were within the whole territorie, were drowned; and this seemeth to carie more likelihood with it, because the fishers in a cleare sunnie daie, see the steeples and other piles plainlie and distinctlie in the water."
The legend that there was an inundated settlement in Cardigan Bay was corroborated a few years ago when a winter storm cleared away sands in the bay that had concealed it. Tree rings dated the event and confirmed that it happened at the same time as parallel events in Ireland.

It isn't implausible, however, that the modern neglected well legends derive not from a direct memory of the actual event, but from a similar winter storm that revealed the inundated settlement and demanded an explanation, much like the one a few years ago that led to the modern archaeological discovery.

This also raises the question of whether there was a global sea level rise in the Atlantic Ocean at this time, perhaps due to some glacial dam finally breaking and flooding the ocean, that might have a connection to Plato's Atlantis myth, or even to the Biblical flood myth.

Friday, January 20, 2017

Physical v. Mathematical Constants

Some of the most memorable constants in mathematics like pi and e are transcendental numbers.

Is this true of some or all of the physical constants?

How would you know?

Even the most precisely measured of the physical constants is only known to a dozen or two digits - too few to directly determine whether it is transcendental or rational, or even to make a reasonable guess.
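To make that concrete, here is a small sketch (my own illustration) showing that any finite-precision measured value is matched closely by some rational number, so no finite number of digits can settle the question. The numerical value used is an approximate CODATA-style value of the fine-structure constant.

```python
# Any finite string of measured digits is reproduced very closely by a rational
# number, so rationality vs. transcendence can't be decided empirically.
# Value below: approximate fine-structure constant (dimensionless).
from fractions import Fraction

alpha_measured = 0.0072973525664
rational = Fraction(alpha_measured).limit_denominator(10**12)
print(rational, float(rational), abs(float(rational) - alpha_measured))
# The printed difference shows how closely a rational can track the measured digits.
```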

But, if you could come up with a formula from which a physical constant could be determined that had plausible reasons to be correct, perhaps you could know from the form of the formula, even if it wasn't actually possible to calculate the formula numerically to much more precision than the experimental measurement.

Of course, any physical constant equal to pi or e times a nonzero rational (or, more generally, algebraic) number would be transcendental. But that inference fails if the remaining factor is itself transcendental in a way that cancels out (for example, if it contains the inverse of pi or e, as the case may be).

There should be a term for a number that is still transcendental, even after factoring out well defined, purely mathematical constants. Physically transcendental perhaps?

Thursday, January 19, 2017

Amateurs Can Make A Difference Too

While rare, there are examples of individuals without credentials making important contributions to science. J.B.S. Haldane, one of the biggest B-list names in genetics, is one such individual.
John Burdon Sanderson (JBS) Haldane (1892-1964) was a leading science popularizer of the twentieth century. Sir Arthur C. Clarke described him as the most brilliant scientific popularizer of his generation. Haldane was a great scientist and polymath who contributed significantly to several sciences although he did not possess an academic degree in any branch of science. He was also a daring experimenter who was his own guinea pig in painful physiological experiments in diving physiology and in testing the effects of inhaling poisonous gases.

Lattice QCD Makes Decent Approximations Of Experimental Data And QCD Does Axions

A new preprint compares a wide variety of recent lattice QCD predictions (and post-dictions) of the properties of various mesons and baryons.

The results show a solid array of accurate results. Neither the predictions nor, in many cases, the experimental results, are terribly precise, but the lattice QCD results do consistently make lots of accurate predictions. This tends to disfavor the possibility that there is beyond the Standard Model physics at work in QCD, and to support the hypothesis that, while doing the math to determine what the equations of QCD imply in the real world is hard, the underlying theory is basically sound.

Supercomputers applying Lattice QCD have also made progress in establishing the mass of a hypothetical particle called the axion under a beyond the Standard Model modification introduced to explain the fact that CP violation is non-existent or negligible in strong force interactions. The final result is that the axion mass should be between 50 and 1,500 * 10^-6 eV/c^2. Hat tip to Backreaction.

This is on the same order of magnitude as the expected mass of the lightest neutrino in a normal mass hierarchy, or perhaps up to about 20-40 times lighter. By comparison, the second lightest neutrino mass is not less than about 8,000 * 10^-6 eV/c^2, and the heaviest neutrino mass is not less than about 52,000 * 10^-6 eV/c^2.

These limits are considerably more narrow than the state of the art as of 2014 as I explained in a blog post at that time:

A new pre-print by Blum, et al., examines observational limits on the axion mass and axion decay constant due to Big Bang Nucleosynthesis, because the role that the axion plays in strong force interactions would impact the proportions of light atoms of different types created in the early universe.
The study concludes that (1) the product of the axion mass and axion decay constant must be approximately 1.8*10^-9 GeV^2, and (2) that in order to solve the strong CP problem and be consistent with astronomy observations, that axion mass must be between 10^-16 eV and 1 eV in mass (with a 10^-12 eV limitation likely due to the hypothesis that the decay constant is less than the Planck mass). The future CASPEr2 experiment could place a lower bound on axion mass of 10^-12 eV to 10^-10 eV and would leave the 1 eV upper bound unchanged. 
Other studies argue that the axion decay constant must be less than 10^9 GeV (due to constraints from observations of supernovae SN1987A) and propose an axion mass on the order of 6 meV (quite close to the muon neutrino mass if one assumes a normal hierarchy and a small electron neutrino mass relative to the muon neutrino-electron neutrino mass difference) or less. Estimates of the axion mass in the case of non-thermal production of axions, which are favored if it is a dark matter particle, are on the order of 10^-4 to 10^-5 eV. There are also order of magnitude estimates of the slight predicted coupling of axions to photons. 
Other studies placing observational limitations on massive bosons as dark matter candidates apply only to bosons much heavier than the axion.
A narrower theoretically possible target, in turn, makes experimental confirmation or rejection of the axion hypothesis much easier. 

Sunday, January 15, 2017

Population Density In Modern Africa


A declassified CIA map showing African population density as of 2009 CE

Africa is currently in an arid phase and has been for most of the latter half of the Holocene era. So, while Africa's population has grown greatly in the last several thousand years, the relative population density of Africa's regions has probably remained pretty similar. I made a previous post along the same lines in 2012, but this is a better map.

Several aspects of this are notable.

1. Large parts of Africa (the Sahara, the Kalahari, and parts of the Congo) are virtually uninhabited.

2. The biogeographic divide between North Africa and Subsaharan Africa is real. The population of North Africa tightly hugs the Mediterranean coast. Even the Atlantic and Mediterranean coasts are basically barren and uninhabitable for long stretches.

3. Jungles are mostly not completely uninhabitable, even though they have low population densities in places.

4. I had not realized that the eastern part of South Africa was so much more habitable than the western part.

5. Many of the densely populated parts of Africa track fresh water sources - the Niger, the Nile and the Great Rift Valley. But, central Africa is still surprisingly thinly populated despite having two major lakes.

6. While my mental image of Ethiopia is of a place that is rural and barren, it is actually one of the most densely populated parts of Africa.

Friday, January 13, 2017

A Recap Of The State Of Muon g-2 Physics

The Current Experimental Result

The most definitive measurement of the anomalous magnetic moment of the muon (g-2) was conducted by the Brookhaven National Laboratory E821 experiment, which announced its results on January 8, 2004.

This measurement differs somewhat from a precisely calculated theoretical value.

In units of 10^-11 and combining the errors in quadrature, the experimental result was:

E821        116 592 091 ± 63

Dividing the error by the total value gives a relative uncertainty of about 540 parts per billion.
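
That figure is just the quoted uncertainty divided by the measured central value; a one-line check:

```python
# E821 result and uncertainty, both in units of 10^-11.
value, error = 116_592_091, 63
print(f"{error / value * 1e9:.0f} parts per billion")   # ~540 ppb
```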

The Current Theoretical Prediction

The current state of the art theoretical prediction from the Standard Model of particle physics regarding the value of muon g-2, in units of 10^-11, is:

QED   116 584 718.95 ± 0.08
HVP                6 850.6 ± 43
HLbL                    105 ± 26
EW                     153.6 ± 1.0
Total SM 116 591 828 ± 49
The main contribution comes by far from QED, which is known to five loops (tenth order) and has a small, well-understood uncertainty. Sensitivity at the level of the electroweak (EW) contribution was reached by the E821 experiment. 
The hadronic contribution dominates the uncertainty (0.43 ppm compared to 0.01 ppm for QED and EW grouped together). This contribution splits into two categories, hadronic vacuum polarization (HVP) and hadronic light-by-light (HLbL). 
The HVP contribution dominates the correction, and can be calculated from the e+e− → hadrons cross-section using dispersion relations. 
The HLbL contribution derives from model-dependent calculations. 
Lattice QCD predictions of these two hadronic contributions are becoming competitive, and will be crucial in providing robust uncertainty estimates free from uncontrolled modeling assumptions. Lattice QCD predictions have well-understood, quantifiable uncertainty estimates. Model-based estimates lack controlled uncertainty estimates, and will always allow a loophole in comparisons with the SM.
In other words, unsurprisingly, almost all of the uncertainty in the theoretical prediction comes from the QCD part of the calculations. The HVP part of that calculation has a precision of ± 0.6% (roughly the precision with which the strong force coupling constant is known), while the HLbL part has a precision of only ± 25%.
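
The total in the table above is simply the sum of the four contributions, with their uncertainties combined in quadrature; a minimal sketch reproducing those numbers and the relative precisions just quoted:

```python
from math import sqrt

# Standard Model contributions to muon g-2 in units of 10^-11,
# as tabulated above: name -> (central value, uncertainty).
contributions = {
    "QED":  (116_584_718.95, 0.08),
    "HVP":  (6_850.6, 43.0),
    "HLbL": (105.0, 26.0),
    "EW":   (153.6, 1.0),
}

total = sum(value for value, _ in contributions.values())
uncertainty = sqrt(sum(err ** 2 for _, err in contributions.values()))
print(f"Total SM: {total:,.0f} ± {uncertainty:.0f}")   # ~116,591,828 ± 50 (the table quotes ± 49)

# Relative precision of the two hadronic pieces.
for name in ("HVP", "HLbL"):
    value, err = contributions[name]
    print(f"{name}: ± {err / value:.1%}")   # HVP ~0.6%, HLbL ~25%
```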

A 2011 paper suggested that most of the discrepancy between theory and experiment was probably due to errors in the theoretical calculation.

The Discrepancy

In the same units, the experimental result from 2004 exceeds the theoretical prediction by:

Discrepancy          263

How Significant Is This Discrepancy?

This is a 3.3 sigma discrepancy, which is notable in physics, but it is not considered definitive proof of beyond the Standard Model physics either.
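
The 3.3 sigma figure comes from dividing the discrepancy by the experimental and theoretical uncertainties combined in quadrature; a quick check:

```python
from math import sqrt

# Experimental and theoretical values in units of 10^-11.
experiment, exp_err = 116_592_091, 63
theory, th_err = 116_591_828, 49

discrepancy = experiment - theory                  # 263
combined_err = sqrt(exp_err ** 2 + th_err ** 2)    # ~80
print(f"{discrepancy / combined_err:.1f} sigma")   # ~3.3 sigma
```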

The discrepancy is small enough that it could easily be due to some combination of a statistical fluke in the measurements and underestimated systematic and theoretical calculation errors. But this discrepancy has been viewed by physicists as one of the most notable anomalies in all of physics for the last thirteen years.

The QED prediction [for the electron g-2] agrees with the experimentally measured value to more than 10 significant figures, making the magnetic moment of the electron the most accurately verified prediction in the history of physics.
This is an accuracy on the order of one part per billion and the theoretical and experimental results for the electron g-2 are consistent at slightly less than the two sigma level. The five loop precision calculations of the electron g-2 have a theoretical uncertainty roughly three times as great as the current experimental uncertainty in that measurement.

So, physicists naturally expect the muon g-2 to show a similarly stunning correspondence between the Standard Model theoretical prediction and experiment.

On the other hand, this discrepancy shouldn't be overstated either. The discrepancy between the theoretically predicted value and the experimentally measured value is still only 2.3 parts per million. As I noted in a 2013 post at this blog:
The discrepancy is simultaneously (1) one of the stronger data points pointing towards potential beyond the Standard Model physics (with the muon magnetic moment approximately 43,000 times more sensitive to GeV particle impacts on the measurement than the electron magnetic moment) and (2) a severe constraint on beyond the Standard Model physics, because the absolute difference and relative differences are so modest that any BSM effect must be very subtle.
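
The roughly 43,000-fold sensitivity advantage quoted above reflects the conventional expectation that heavy new-physics contributions to a lepton's anomalous magnetic moment scale with the square of the lepton mass; a quick check of that ratio using standard lepton masses:

```python
# Lepton masses in MeV/c^2 (standard values).
m_muon, m_electron = 105.658, 0.511

# Heavy new-physics contributions to g-2 are generically expected to
# scale as (lepton mass)^2, hence the muon's enhanced sensitivity.
print(f"(m_mu / m_e)^2 ~ {(m_muon / m_electron) ** 2:,.0f}")   # ~43,000
```
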
In particular, my 2013 post made the following observations with regard to the impact of this discrepancy on SUSY theories:
The muon g-2 limitations on supersymmetry are particularly notable because unlike limitations from collider experiments, the muon g-2 limitations tend to cap the mass of the lightest supersymmetric particle, or at least to strongly favor lighter sparticle masses in global fits to experimental data of SUSY parameters. As a paper earlier this year noted: 
"There is more than 3 sigma deviation between the experimental and theoretical results of the muon g-2. This suggests that some of the SUSY particles have a mass of order 100 GeV. We study searches for those particles at the LHC with particular attention to the muon g-2. In particular, the recent results on the searches for the non-colored SUSY particles are investigated in the parameter region where the muon g-2 is explained. The analysis is independent of details of the SUSY models."
The LHC, of course, has largely ruled out SUSY particles with masses on the order of 100 GeV. Another fairly thoughtful reconciliation of the muon g-2 limitations with Higgs boson mass and other LHC discovery constraints can be found in a February 28, 2013 paper which in addition to offering its own light sleptons, heavy squark solution also catalogs other approaches that could work.
Regrettably, I have not located any papers examining experimental boundaries on SUSY parameter space that also include limitations from the absence of discovery of proton decay of less than a certain length of time, and the current thresholds of non-discovery of neutrinoless double beta decay. The latter, like muon g-2 limitations, generically tends to disfavor heavy sparticles, although one can design a SUSY model that addresses this reality. 
Some studies do incorporate the lack of positive detections of GeV scale WIMPS in direct dark matter searches by XENON 100 that have been made more definitive by the recent LUX experiment results. Barring "blind spots" in Tevatron and LHC and LEP experiments at low masses, a sub-TeV mass plain vanilla SUSY dark matter candidate is effectively excluded by current experimental results.
The failure of collider experiments at the LHC to discover any new particles other than the Higgs boson since 2004 is one of the factors that suggests that the discrepancy is probably due to theoretical and experimental errors, rather than to new physics. The discrepancy is sufficiently small that if it were due to new physics, that new physics should have been apparent at energies we have already probed by now.

What Now?

This year, the Fermilab E989 experiment will begin the process of replicating that measurement with greater precision. As the abstract to the paper describing the new experiment explains:
The upcoming Fermilab E989 experiment will measure the muon anomalous magnetic moment aµ. This measurement is motivated by the previous measurement performed in 2001 by the BNL E821 experiment that reported a 3-4 standard deviation discrepancy between the measured value and the Standard Model prediction. The new measurement at Fermilab aims to improve the precision by a factor of four reducing the total uncertainty from 540 parts per billion (BNL E821) to 140 parts per billion (Fermilab E989). This paper gives the status of the experiment.
Put another way, in units of 10^-11, the target is to reduce the experimental error to ± 16.3.

Meanwhile, the body of the paper notes that:
The uncertainties in the theory calculation are expected to improve by a factor of two on the timescale of the E989 experiment. This improvement will be achieved taking advantage of new data to improve both the HVP (BESIII [7], VEPP2000 [8] and B-factories data) and HLBL (KLOE-2 [9] and BESIII data), the latter gaining from the modeling improvements made possible with the new data. On the lattice QCD side, new ways of computing aµ from first principles and an increase in computing capability will provide the expected gains. 
Put another way, in units of 10^-11, the expectation is to reduce the theoretical uncertainty to ± 24.5.

Thus, in units of 10^-11, the combined one sigma uncertainty on the difference between the theoretical and experimental results will be ± 29.4 at Fermilab E989. As a result, the body of the paper notes that:
Given the anticipated improvements in both experimental and theoretical precision, if the central values remain the same there is a potential 7 standard deviation between theory and measurement (5 standard deviation with only experimental improvement).
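
For reference, the ± 16.3, ± 24.5 and ± 29.4 figures above follow directly from the quoted targets: the 140 parts per billion experimental goal, a factor-of-two improvement on the current ± 49 theory uncertainty, and the two combined in quadrature. A minimal sketch (all in units of 10^-11):

```python
from math import sqrt

# All values in units of 10^-11.
measured_value = 116_592_091

exp_err_target = 140e-9 * measured_value   # ~16.3 (140 ppb E989 goal)
th_err_target = 49 / 2                     # ~24.5 (factor-of-two theory improvement)
combined = sqrt(exp_err_target ** 2 + th_err_target ** 2)

print(f"experimental target: ± {exp_err_target:.1f}")   # ± 16.3
print(f"theory target:       ± {th_err_target:.1f}")    # ± 24.5
print(f"combined one sigma:  ± {combined:.1f}")          # ± 29.4
```
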
Potential Implications Of New Experimental Results

If there is in fact no new physics and all of the previous discrepancy between the theoretical value of muon g-2 and the experimentally measured value was due to theoretical calculation uncertainty, statistical errors, and systematic errors, then the newly measured experimental value of muon g-2 should fall, and improved theoretical calculations may nudge up the theoretically expected result a bit.

If that does happen, it will put a nail in the coffin of a huge swath of beyond the Standard Model theories. On the other hand, if the discrepancy grows in statistical significance, which in principle the experiment has the power to show, it will be a strong indicator that there is at least some BSM physics out there to be found that has not yet been observed at the LHC or anywhere else.

Wednesday, January 11, 2017

The Founding Americans Hung Out In Beringia Before Moving South

In another result confirming the New World pre-history paradigm, archaeology has confirmed that humans were present in Beringia for roughly 10,000 years, beginning before the Last Glacial Maximum (LGM), before migrating to North and South America when melting glaciers finally made that possible, roughly six thousand years after the LGM.


The timing of the first entry of humans into North America is still hotly debated within the scientific community. Excavations conducted at Bluefish Caves (Yukon Territory) from 1977 to 1987 yielded a series of radiocarbon dates that led archaeologists to propose that the initial dispersal of human groups into Eastern Beringia (Alaska and the Yukon Territory) occurred during the Last Glacial Maximum (LGM). This hypothesis proved highly controversial in the absence of other sites of similar age and concerns about the stratigraphy and anthropogenic signature of the bone assemblages that yielded the dates. The weight of the available archaeological evidence suggests that the first peopling of North America occurred ca. 14,000 cal BP (calibrated years Before Present), i.e., well after the LGM. Here, we report new AMS radiocarbon dates obtained on cut-marked bone samples identified during a comprehensive taphonomic analysis of the Bluefish Caves fauna. Our results demonstrate that humans occupied the site as early as 24,000 cal BP (19,650 ± 130 14C BP). In addition to proving that Bluefish Caves is the oldest known archaeological site in North America, the results offer archaeological support for the “Beringian standstill hypothesis”, which proposes that a genetically isolated human population persisted in Beringia during the LGM and dispersed from there to North and South America during the post-LGM period.

Interestingly, the artifacts found in the caves can be associated with a particular Northeast Siberian archaeological culture, called the Dyuktai culture (named after its type site, Dyuktai Cave, on the Aldan River in Siberia), whose links to the New World founding population have been seriously considered at least since the publication of Seonbok Yi, et al., "The 'Dyuktai Culture' and New World Origins", 26(1) Current Anthropology (1985), and with less specificity as early as 1937. Professional opinion is divided, however, over whether the similarities in the artifacts really support the conclusion that this is the source culture for the New World founding population. A map at page 27 of the (badly scanned) linked monograph illustrates the location of this site and related sites of the early phase of this culture.

The materials uncovered also indicate recurring, if intermittent, human occupation of the caves during the period when Beringia is believed to have been inhabited by the predecessors of the founding population of the Americas.
Small artefact series were excavated from the loess in Cave I (MgVo-1) and Cave II (MgVo-2) and rich faunal assemblages were recovered from all three caves [23–27]. The lithic assemblages (which number about one hundred specimens) include microblades, microblade cores, burins and burin spalls as well as small flakes and other lithic debris [23–26]. Most of the artefacts were recovered from the loess of Cave II at a depth comprised between about 30 to 155 cm. The deepest diagnostic pieces–a microblade core (B3.3.17), a burin (B3.6.1) and a core tablet (B4.16.4) found inside Cave II, as well as a microblade (E3.3.2) found near the cave entrance–derive from the basal loess at a depth of about 110 to 154 cm below datum, according to the CMH archives [28]. While the artefacts cannot be dated with precision [24, 25, 29], they are typologically similar to the Dyuktai culture which appears in Eastern Siberia about 16–15,000 cal BP, or possibly earlier, ca. 22–20,000 cal BP [30]. 
There are no reported hearth features [24]. Palaeoenvironmental evidence, including evidence of herbaceous tundra vegetation [31, 32] and vertebrate fauna typical of Pleistocene deposits found elsewhere in Eastern Beringia [27, 33, 34], is consistent with previously obtained radiocarbon dates which suggest that the loess layer was deposited between 10,000 and 25,000 14C BP (radiocarbon years Before Present), i.e., between 11,000 and 30,000 cal BP [23–27, 35] (Table 1).

The conclusion to the current article (a part of which is quoted below with citations omitted) notes that:
[T]he Bluefish Caves, like other Beringian cave sites, were probably only used occasionally as short-term hunting sites. Thus, they differ from the open-air sites of the Tanana valley in interior Alaska and the Little John site in the Yukon Territory, where hearth features, large lithic collections, bone tools and animal butchery have been identified, reflecting different cultural activities and a relatively longer-term, seasonal occupation. 
In conclusion, while the Yana River sites indicate a human presence in Western Beringia ca. 32,000 cal BP, the Bluefish Caves site proves that people were in Eastern Beringia during the LGM, by at least 24,000 cal BP, thus providing long-awaited archaeological support for the “Beringian standstill hypothesis”. According to this hypothesis, a human population genetically isolated existed in Beringia from about 15,000 to 23,000 cal BP, or possibly earlier, before dispersing into North and eventually South America after the LGM. 
Central Beringia may have sustained human populations during the LGM since it offered relatively humid, warmer conditions and the presence of woody shrubs and occasional trees that could be used for fuel. However, this putative core region was submerged at the end of the Pleistocene by rising sea levels making data collection difficult. Bluefish Caves, situated in Eastern Beringia, may have been located at the easternmost extent of the standstill population’s geographical range. The seasonal movements of human hunters from a core range, hypothetically located in Central Beringia, into adjoining, more steppic regions such as Eastern Beringia would explain the sporadic nature of the occupations at Bluefish Caves. 
The scarcity of archaeological evidence for LGM occupations in both Western and Eastern Beringia suggests that the standstill population was very small. This is consistent with the genetic data, which suggest that the effective female population was only about 1000–2000 individuals and that the standstill population didn’t exceed a few tens of thousands of people in Beringia. The size of the standstill population is thought to have increased after the LGM, leading to renewed dispersals into the Americas. Our results indicate that human hunters continued to use Bluefish Caves as the climate improved. While some prey species became extinct by ca. 14,000 cal BP (e.g. horse), human hunters could continue to rely on different species such as caribou, bison and wapiti.
By around 15–14,000 cal BP an ice-free corridor formed between the Laurentide and Cordilleran ice sheets potentially allowing humans to disperse from Beringia to continental North America; arguably, this corridor wouldn’t have been biologically viable for human migration before ca. 13–12,500 cal BP, however. It is now more widely recognized that the first inhabitants of Beringia probably dispersed along a Pacific coastal route, possibly as early as ca. 16,000 cal BP, and settled south of the ice sheets before the ice-free corridor became a viable route.

Our results, therefore, confirm that Bluefish Caves is the oldest known archaeological site in North America and furthermore, lend support to the standstill hypothesis. 
The bottom line is that we increasingly understand the settlement of the Americas by humans in considerable geographical and chronological detail, backed up by multiple consistent lines of solid evidence.