Tuesday, January 31, 2017

Astronomy Data Strongly Disfavors Inverted Neutrino Mass Hierarchy

One of the major outstanding questions in neutrino physics is whether the neutrino masses have a "normal hierarchy", or an "inverted hierarchy." We are close to having an answer to that question.

The sum of the neutrino masses in an inverted hierarchy, we know from neutrino oscillation data, cannot be less than about 98.6 meV. In a normal hierarchy, the sum of the neutrino masses must be at least a bit more than 65.34 meV. A global look at various kinds of astronomy data suggests, at the 95% confidence level, that the sum of the three neutrino masses is in fact 90 meV or less. This strongly favors a "normal hierarchy" of neutrino masses.

The state of the art measurements put the difference between the first and second neutrino mass eigenstates at roughly 8.66 +/- 0.12 meV, and the difference between the second and third neutrino mass eigenstates at roughly 49.5 +/- 0.5 meV, which implies that the sum of the three neutrino mass eigenstates cannot be less than about 65.34 meV at 95% confidence.

So, this also gives us absolute neutrino mass estimates with relative precision comparable to that of the experimental values of the lighter quark masses, and with absolute precision that is truly remarkable. It narrows each of the neutrino masses to a nearly perfectly correlated window only a bit larger than +/- 4 meV, a roughly 33% improvement over the previous state of the art precision with which the absolute neutrino masses could be estimated.

The bottom line is that the ranges of the three neutrino masses consistent with experimental data are approximately as follows (with the location of each mass within its range being highly correlated with the other two and with the sum); a sketch of the arithmetic follows the list:

Mv1  0-7.6 meV
Mv2  8.42-16.1 meV
Mv3  56.92-66.2 meV

Sum of all three neutrino masses should be in the range: 65.34-90 meV
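For readers who want to see where these endpoints come from, here is a minimal sketch of the arithmetic, assuming the mass splittings quoted above, the 90 meV cosmological bound, and a simplistic two-sigma treatment of the oscillation uncertainties (the variable names and the error handling are my own simplifications, so the endpoints land close to, but not exactly on, the figures above):

```python
# Sketch of the arithmetic behind the quoted neutrino mass windows (normal hierarchy).
# Inputs are the splittings and cosmological bound quoted in this post; the
# two-sigma treatment of the uncertainties is a simplification.

d21, d21_err = 8.66, 0.12   # m2 - m1 in meV, from oscillation data
d32, d32_err = 49.5, 0.5    # m3 - m2 in meV, from oscillation data
sum_max = 90.0              # 95% CL cosmological upper bound on m1 + m2 + m3, in meV

def spectrum(m1, d21, d32):
    """Return (m1, m2, m3) for a normal hierarchy given the lightest mass."""
    return m1, m1 + d21, m1 + d21 + d32

# Smallest possible sum: massless lightest neutrino, splittings at their 2-sigma low end.
low = spectrum(0.0, d21 - 2 * d21_err, d32 - 2 * d32_err)
print(round(sum(low), 2))            # 65.34 meV, the quoted lower bound on the sum

# Largest allowed masses: push the sum up to the cosmological bound.
m1_max = (sum_max - 2 * d21 - d32) / 3
high = spectrum(m1_max, d21, d32)
print([round(m, 1) for m in high])   # roughly [7.7, 16.4, 65.9] meV, summing to 90 meV
```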

Realistically, I think that most people familiar with the question would subjectively favor results at the low end of these ranges, not least because the best fit value for the sum of the three neutrino masses from the astronomy data is significantly lower than the 95% confidence interval upper bound on that sum.

In a Majorana neutrino mass model, the neutrino masses are very tightly related to the rate of neutrinoless double beta decay, something that has not yet been observed with credible experimental data. The narrow window for the absolute neutrino masses also results in a narrow window for the expected rate of neutrinoless double beta decay, which is not terribly far from the current experimental upper bound on the rate of that phenomenon from neutrinoless double beta decay experiments. So, we may be very close to resolving the Majorana v. Dirac neutrino mass question.

If neutrinos do have a "normal hierarchy", as every other class of Standard Model fermions does, this also suggests that whatever mechanism gives rise to the relative fermion masses in the Standard Model (including the neutrinos) inherently leads to a normal hierarchy, rather than being arbitrary.

The paper and its abstract are as follows:
Using some of the latest cosmological datasets publicly available, we derive the strongest bounds in the literature on the sum of the three active neutrino masses, Mν. In the most conservative scheme, combining Planck cosmic microwave background (CMB) temperature anisotropies and baryon acoustic oscillations (BAO) data, as well as the up-to-date constraint on the optical depth to reionization (τ), the tightest 95% confidence level (C.L.) upper bound we find is Mν<0.151~eV. The addition of Planck high-ℓ polarization tightens the bound to Mν<0.118~eV. Further improvements are possible when a less conservative prior on the Hubble parameter is added, bringing down the 95%~C.L. upper limit to ∼0.09~eV. The three aforementioned combinations exclude values of Mν larger than the minimal value allowed in the inverted hierarchical (IH) mass ordering, 0.0986~eV, at ∼82%~C.L., ∼91%~C.L., and ∼96%~C.L. respectively. A proper model comparison treatment shows that the same combinations exclude the IH at ∼64%~C.L., ∼71%~C.L., and ∼77%~C.L. respectively. We test the stability of the bounds against the distribution of the total mass Mν among the three mass eigenstates, finding that these are relatively stable against the choice of distributing the total mass among three (the usual approximation) or one mass eigenstate. Finally, we compare the constraining power of measurements of the full-shape galaxy power spectrum versus the BAO signature, from the BOSS survey. Even though the latest BOSS full shape measurements cover a larger volume and benefit from smaller error bars compared to previous similar measurements, the analysis method commonly adopted results in their constraining power still being less powerful than that of the extracted BAO signal. (abridged)
Sunny Vagnozzi, et al., "Unveiling ν secrets with cosmological data: neutrino masses and mass hierarchy" (January 27, 2017).

Sunday, January 29, 2017

Year Of The Chicken

Now that we are in the Chinese Year of the Chicken, it is appropriate to note that the chicken was domesticated ca. 5400 BCE, and that the oldest evidence of a domesticated chicken is in Taiwan.

Friday, January 27, 2017

Undeniable Evidence Of Austronesian-South American Pre-Columbian Contact

I have probably blogged this before, but if I have, it still bears repeating. The data come from a 2014 article in Current Biology that was recently blogged about by Bernard.

The autosomal genetics of the indigenous people of Polynesian Easter Island show three contributions: the largest is typical of other Polynesians. The best available evidence is that Easter Island was colonized by 30 to 100 people around 1200 CE. 

Recent studies have also cast considerable light on the mechanism by which the Polynesian blend came to have a mix of about 25% Papuan and about 75% indigenous Taiwanese Austronesian genetics. This probably involved a conquest of a previously purely Austronesian Lapita population in Tonga and Vanuatu by Papuan men, perhaps around 500 CE, after which the resulting mixed Polynesian population expanded further into Oceania.

Two other components of Easter Islander DNA are largely absent from Polynesians, one South American, and one European.

A very homogeneous 6% of the autosomal genome of indigenous Easter Islanders is Native American. Based upon the lengths of the Native American DNA segments and the homogeneity of the contribution, this component reflects admixture dating to ca. 1310 CE to 1420 CE, which is clearly both pre-Columbian and prior to Easter Island's first contact with Europeans.

Two other strong pieces of physical evidence corroborate this genetic evidence of pre-European contact between Easter Island Polynesians and South America. In one direction, the kumara (a sweet potato native to South America) showed up in the Polynesian diet at about the right time, with a name linguistically similar to its name in South American languages. In the other direction:
[C]hicken bones were also found in Chile in pre-Columbian sites before the arrival of Europeans. Mitochondrial DNA of these Chilean remains belong to the same lines as the remains of the Oceania Lapita culture, which seems to show that the Polynesians reached America before the Europeans.
A European genetic contribution is also present, in proportions that vary greatly from person to person and made up of longer segments, averaging about 16% of autosomal Easter Islander DNA. The estimated age of this contribution is 1850 CE to 1870 CE, based upon the same methods used to date the Native American contribution. The first European contact with Easter Island came on Easter Sunday in 1722 (at which point the island had a population of about 4,000), and there was major Peruvian contact in the 1860s. European diseases and other factors eventually reduced the indigenous population of Easter Island to 110 people.
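The dates for both admixture events come from the lengths of the introgressed segments: recombination breaks foreign tracts into shorter pieces every generation, so older admixture leaves shorter segments. The sketch below is not the method actually used in the study; it is a back-of-the-envelope illustration of the principle under a simple pulse-admixture model, with hypothetical mean tract lengths and an assumed 29-year generation time.

```python
# Back-of-the-envelope admixture dating from introgressed segment lengths.
# Under a simple single-pulse model, tract lengths are roughly exponential with
# mean ~1/t Morgans (i.e., ~100/t centimorgans) t generations after admixture.
# The tract lengths below are hypothetical, not taken from the cited study.

GEN_YEARS = 29  # assumed years per generation

def admixture_year(mean_tract_cm, sample_year=2014):
    """Estimate the calendar year of admixture from the mean tract length in cM."""
    generations = 100.0 / mean_tract_cm
    return sample_year - generations * GEN_YEARS

# Shorter Native American tracts imply an older event than the longer European tracts.
print(round(admixture_year(5.0)))    # ~1434 CE for a hypothetical 5 cM mean tract length
print(round(admixture_year(20.0)))   # ~1869 CE for a hypothetical 20 cM mean tract length
```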

Notably, this still can't explain the Paleo-Asian genetic contributions found in low percentages in a few recently contacted central South American jungle tribes, which differ from the Polynesian genetic component and also appear to be more ancient, if this is really a signal at all and not an unnoticed experimental or analysis error.

The Easter Island-South American connection seems to have had only a minimal impact on the South Americans in the long run, although it did have a discernible genetic impact on the Easter Islanders and a discernible dietary impact on all of Polynesia.

Thursday, January 26, 2017

Direct Proof Of Early Modern Human Presence In Arabia?

Researchers claim to have found a 90,000 year old human bone in Saudi Arabia, which would be consistent with previous findings of tools thought to be associated with modern humans in Arabia and with evidence of human remains in the Levant from around that era. This would be the oldest example of actual human remains ever found in Arabia and within about 10,000-20,000 years of the oldest example of human remains ever found outside of Africa.

The linked source does not provide a journal article reference, however, and unless an ancient DNA sample can be extracted (a hit or miss possibility at best), there is no way to tell whether this individual is from the population that provides most of the ancestry of non-African modern humans, or from an earlier wave of modern human migrants which some have argued represents an "Out of Africa that failed," or at least almost failed, apart from a roughly 2% contribution from first wave modern humans to the ancestors of modern Papuans and the negrito populations of the Philippines.

Wednesday, January 25, 2017

N=8 SUGRA Adds Only RH Neutrinos To The Fermion Content Of The Standard Model

Higher N versions of supersymmetry (SUSY) and supergravity (SUGRA) may be useful as a foundation for "within the Standard Model" theory that demonstrates deeper relationships between the components of the Standard Model, particularly because these theories are exceptions to an important "no-go" theorem in theoretical physics called the Coleman-Mandula no go theorem.

These goals are more worthwhile than exploring SUSY's best known, crude N=1 form, which is contrary to experiment, is baroque, and has been explored so heavily largely out of mathematical laziness, in order to explain the hierarchy non-problem, and in the interest of "naturalness."

Background: What is the Coleman-Mandula no-go theorem?

Basically, the Coleman-Mandula no-go theorem says that any theory that attempts to describe nature in a manner:

(1) consistent with the foundational principles of quantum mechanics, and 
(2) also consistent with special relativity, 
(3) that has massive fundamental particles which are consistent with those observed in real life at low energies: 

(4) must have particle interactions that can be described in terms of a Lie Group, and 
(5) can't have laws governing particle interactions that depend upon the laws of special relativity in a manner different from the way that they do in the Standard Model.

Since almost any realistic beyond the Standard Model theory must meet all three of the conditions for the no-go theorem to apply in order to satisfy rigorously tested experimental constraints, the conclusion of the theorem requires all such theories to have a single kind of core structure. This largely turns the process of inventing beyond the Standard Model theories of physics from an open-ended inquiry into an elaborate multiple-choice question. For example, while the theorem does not prescribe the conservation laws that are allowed in such a theory, all of its conservation laws must follow a very particular mathematical form.

More technically, this no-go theorem can be summed up as follows:
Every quantum field theory satisfying the assumptions, 
1. Below any mass M, there are only finite number of particle types
2. Any two-particle state undergoes some reaction at almost all energies
3. The amplitude for elastic two body scattering are analytic functions of scattering angle at almost all energies. 
and that has non-trivial interactions can only have a Lie group symmetry which is always a direct product of the Poincaré group and an internal group if there is a mass gap: no mixing between these two is possible. As the authors say in the introduction to the 1967 publication, "We prove a new theorem on the impossibility of combining space-time and internal symmetries in any but a trivial way. . . . 
Since "realistic" theories contain a mass gap, the only conserved quantities, apart from the generators of the Poincaré group, must be Lorentz scalars.
The Poincaré group is a mathematical structure that defines the geometry of Minkowski space, which is the most basic spacetime in which physical theories consistent with Einstein's theory of special relativity can be formulated.

A Lorentz scalar "is a scalar which is invariant under a Lorentz transformation. A Lorentz scalar may be generated from multiplication of vectors or tensors. While the components of vectors and tensors are in general altered by Lorentz transformations, scalars remain unchanged. A Lorentz scalar is not necessarily a scalar in the strict sense of being a (0,0)-tensor, that is, invariant under any base transformation. For example, the determinant of the matrix of base vectors is a number that is invariant under Lorentz transformations, but it is not invariant under any base transformation."

Notable Lorentz scalars include the invariant "length" of the position four-vector, the "length" of the four-velocity vector, the inner product of the four-acceleration and the four-velocity, and the rest mass of a particle (the invariant "length" of its four-momentum). Quantities like a particle's energy, 3-momentum and 3-speed, in contrast, are not invariant on their own: they can be written as contractions of four-vectors with a particular observer's four-velocity, so they take different values for different observers.
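As a concrete example of the last invariant in that list, with the (+,-,-,-) metric convention the rest mass arises as the invariant norm of the four-momentum, which is why it is frame independent even though energy and momentum separately are not:

```latex
% The contraction of the four-momentum with itself is the same in every
% inertial frame, even though E and \vec{p} individually change between frames:
p^\mu p_\mu = \frac{E^2}{c^2} - |\vec{p}\,|^2 = m^2 c^2
```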

But, SUSY and SUGRA are important exceptions to the Coleman-Mandula no-go theorem. (There are also a few other exceptions to this no-go theorem which are beyond the scope of this post which have quite different applications.) 

So, there are a variety of interesting ideas that one might want to try to implement in a beyond the Standard Model theory that it has been proved can only be implemented within the context of a SUSY or SUGRA model.

The Post
The latest CERN Courier has a long article by Hermann Nicolai, mostly about quantum gravity. Nicolai makes the following interesting comments about supersymmetry and unification:
To the great disappointment of many, experimental searches at the LHC so far have found no evidence for the superpartners predicted by N = 1 supersymmetry. However, there is no reason to give up on the idea of supersymmetry as such, since the refutation of low-energy supersymmetry would only mean that the most simple-minded way of implementing this idea does not work. Indeed, the initial excitement about supersymmetry in the 1970s had nothing to do with the hierarchy problem, but rather because it offered a way to circumvent the so-called Coleman–Mandula no-go theorem – a beautiful possibility that is precisely not realised by the models currently being tested at the LHC.
In fact, the reduplication of internal quantum numbers predicted by N = 1 supersymmetry is avoided in theories with extended (N > 1) supersymmetry. Among all supersymmetric theories, maximal N = 8 supergravity stands out as the most symmetric. Its status with regard to perturbative finiteness is still unclear, although recent work has revealed amazing and unexpected cancellations. However, there is one very strange agreement between this theory and observation, first emphasised by Gell-Mann: the number of spin-1/2 fermions remaining after complete breaking of supersymmetry is 48 = 3 × 16, equal to the number of quarks and leptons (including right-handed neutrinos) in three generations (see “The many lives of supergravity”). To go beyond the partial matching of quantum numbers achieved so far will, however, require some completely new insights, especially concerning the emergence of chiral gauge interactions.
I think this is an interesting perspective on the main problem with supersymmetry, which I’d summarize as follows. In N=1 SUSY you can get a chiral theory like the SM, but if you get the SM this way, you predict for every SM particle a new particle with the exact same charges (behavior under internal symmetry transformation), but spin differing by 1/2. This is in radical disagreement with experiment. What you’d really like is to use SUSY to say something about internal symmetry, and this is what you can do in principle with higher values of N. The problem is that you don’t really know how to get a chiral theory this way. That may be a much more fruitful problem to focus on than the supposed hierarchy problem.
 From Not Even Wrong (italics in original, boldface emphasis mine). 

Monday, January 23, 2017

Was There An Almost Failed First Modern Human Out Of Africa Wave?

Pagani (2016) makes the case, based upon comparing the most recent common ancestry dates of parts of Papuan autosomal DNA with the TMRCA of that DNA in other modern humans, that the ancestors of the Papuans admixed with humans from an earlier wave of modern human migrants to Asia ca. 100,000 years ago. The Papuans are also one of the few populations to show signs of admixture with Denisovans, a form of archaic hominin that diverged from modern humans before the oldest genetic or archaeological evidence of the existence of anatomically modern humans.

This new data point could be the solution to a potentially vexing paradox. 

There has long been archaeological evidence of a modern human presence in places like the Levant from 100,000 to 75,000 years ago. But, more recently, archaeological evidence of a modern human presence has been found in the Arabian interior from 100,000 to 125,000 years ago, in South Asia from more than 75,000 years ago, and arguably even in China from 100,000 to 125,000 years ago.

But, analysis of modern human DNA, and efforts to date Neanderthal admixture with modern humans including efforts based on non-mutation rate methods using ancient DNA, put a common ancestor of all non-Africans at more like 50,000 to 65,000 years ago, which corresponds to archaeological evidence of the first modern human presence in Australia and Papua New Guinea (which were a single land mass at the time).

There is then a gap in the archaeological record in the Levant from around 75,000 years before present to about 50,000 years before present. So, until very recently at least, the mainstream explanation for the earlier archaeological evidence in the Levant was that the early Levantine archaeological remains represented an "Out of Africa that failed," and that all modern non-Africans descend from a second, Out of Africa wave that prospered.

But the increasingly widespread archaeological evidence for a modern human presence in this 25,000 year gap period, the genetic evidence in Altai Neanderthal ancient DNA indicating admixture with modern humans ca. 100,000 years ago, and now this new data point in Pagani (2016), all suggest that the simple version of the "Out of Africa that failed" theory is wrong.

Pagani (2016) instead suggests that there was a first wave of pre-Upper Paleolithic modern humans who spread across parts of Eurasia, admixed with Neanderthals, and didn't make much of an ecological difference (although, arguably, they could have contributed to the extinction of Homo erectus in Asia). This first wave of modern humans outside Africa provided only a small part of the ancestry of a small subset of modern humans, and was mostly replaced by the second, more successful wave of Upper Paleolithic modern humans, whose expansion into Eurasia was permanent. But, tens of thousands of years later, remnants of the first wave did admix with the second wave migrants who ended up in Papua New Guinea.
High-coverage whole-genome sequence studies have so far focused on a limited number of geographically restricted populations, or been targeted at specific diseases, such as cancer. Nevertheless, the availability of high-resolution genomic data has led to the development of new methodologies for inferring population history and refuelled the debate on the mutation rate in humans. Here we present the Estonian Biocentre Human Genome Diversity Panel (EGDP), a dataset of 483 high-coverage human genomes from 148 populations worldwide, including 379 new genomes from 125 populations, which we group into diversity and selection sets. We analyse this dataset to refine estimates of continent-wide patterns of heterozygosity, long- and short-distance gene flow, archaic admixture, and changes in effective population size through time as well as for signals of positive or balancing selection. We find a genetic signature in present-day Papuans that suggests that at least 2% of their genome originates from an early and largely extinct expansion of anatomically modern humans (AMHs) out of Africa. Together with evidence from the western Asian fossil record, and admixture between AMHs and Neanderthals predating the main Eurasian expansion, our results contribute to the mounting evidence for the presence of AMHs out of Africa earlier than 75,000 years ago.
Pagani, et al., "Genomic analyses inform on migration events during the peopling of Eurasia", Nature (Published online 21 September 2016). Hat tip: Marnie at Linear Population Model.

The paper is not open access, but Marnie at Linear Population Model provides quotes from some key passages of the paper and follows up with full citations, including abstracts, of some of the key sources cited therein.

For example, Marnie provides the citation and abstract for the Altai Neanderthal paper (which I am reprinting in a reformatted manner with emphasis added):
It has been shown that Neanderthals contributed genetically to modern humans outside Africa 47,000–65,000 years ago. Here we analyse the genomes of a Neanderthal and a Denisovan from the Altai Mountains in Siberia together with the sequences of chromosome 21 of two Neanderthals from Spain and Croatia. We find that a population that diverged early from other modern humans in Africa contributed genetically to the ancestors of Neanderthals from the Altai Mountains roughly 100,000 years ago. By contrast, we do not detect such a genetic contribution in the Denisovan or the two European Neanderthals. We conclude that in addition to later interbreeding events, the ancestors of Neanderthals from the Altai Mountains and early modern humans met and interbred, possibly in the Near East, many thousands of years earlier than previously thought.
Martin Kuhlwilm, et al., "Ancient gene flow from early modern humans into Eastern Neanderthals", Nature, Volume 530, Pages 429-433 (25 February 2016).  

Marnie also provides some of the language clarifying the key original insights of Pagani (2016) (citations and internal cross references omitted without editorial indication, emphasis mine):
Using fineSTRUCTURE, we find in the genomes of Papuans and Philippine Negritos more short haplotypes assigned as African than seen in genomes for individuals from other non-African populations. This pattern remains after correcting for potential confounders such as phasing errors and sampling bias. These shorter shared haplotypes would be consistent with an older population split. Indeed, the Papuan–Yoruban median genetic split time (using multiple sequential Markovian coalescent (MSMC)) of 90 kya predates the split of all mainland Eurasian populations from Yorubans at ~75 kya. This result is robust to phasing artefacts. Furthermore, the Papuan–Eurasian MSMC split time of ~40 kya is only slightly older than splits between west Eurasian and East Asian populations dated at ~30 kya. The Papuan split times from Yoruba and Eurasia are therefore incompatible with a simple bifurcating population tree model. 
At least two main models could explain our estimates of older divergence dates for Sahul populations from Africa than mainland Eurasians in our sample: 1) admixture in Sahul with a potentially un-sampled archaic human population that split from modern humans either before or at the same time as did Denisova and Neanderthal; or 2) admixture in Sahul with a modern human population (extinct OoA line; xOoA) that left Africa earlier than the main OoA expansion.

We consider support for these two non-mutually exclusive scenarios. Because the introgressing lineage has not been observed with aDNA, standard methods are limited in their ability to distinguish between these hypotheses. Furthermore, we show that single-site statistics, such as Patterson’s D, and sharing of non-African Alleles (nAAs), are inherently affected by confounding effects owing to archaic introgression in non-African populations. Our approach therefore relies on multiple lines of evidence using haplotype-based MSMC and fineSTRUCTURE comparisons (which we show should have power at this timescale). 
We located and masked putatively introgressed Denisova haplotypes from the genomes of Papuans, and evaluated phasing errors by symmetrically phasing Papuans and Eurasians genomes. Neither modification changed the estimated split time (based on MSMC) between Africans and Papuans. MSMC dates behave approximately linearly under admixture, implying that the hypothesized lineage may have split from most Africans around 120 kya. 
We compared the effect on the MSMC split times of an xOoA or a Denisova lineage in Papuans by extensive coalescent simulations. We could not simulate the large Papuan–African and Papuan–Eurasian split times inferred from the data, unless assuming an implausibly large contribution from a Denisova-like population. Furthermore, while the observed shift in the African–Papuan MSMC split curve can be qualitatively reproduced when including a 4% genomic component that diverged 120 kya from the main human lineage within Papuans, a similar quantity of Denisova admixture does not produce any significant effect. This favours a small presence of xOoA lineages rather than Denisova admixture alone as the likely cause of the observed deep African–Papuan split. We also show that such a scenario is compatible with the observed mitochondrial DNA and Y chromosome lineages in Oceania, as also previously argued. 
We further tested our hypothesized xOoA model by analysing haplotypes in the genomes of Papuans that show African ancestry not found in other Eurasian populations. We re-ran fineSTRUCTURE adding the Denisova, Altai Neanderthal and the Human Ancestral Genome sequences to a subset of the diversity set. FineSTRUCTURE infers haplotypes that have a most recent common ancestor (MRCA) with another individual. Papuan haplotypes assigned as African had, regardless, an elevated level of non-African derived alleles (that is, nAAs fixed ancestral in Africans) compared to such haplotypes in Eurasians. They therefore have an older mean coalescence time with our African samples.
I find no fault in the analysis in Pagani (2016), which is thoughtfully done using a method that should be reliable.

Sunday, January 22, 2017

A Bronze Age Flood In The British Isles

There was a catastrophic weather event which hit Ireland and Wales at some point between 2354 BC and 2345 BC. Among other things, this event permanently flooded a village and forest in Cardigan Bay in Wales.

There is a Gaelic legend that explains it, which has very close parallels across Ireland and Wales in other locations as well.
The Laigin people from Ireland at one stage controlled the Llyn peninsula. I wonder if this is how we find an almost identical "left the tap running" explanation for the legend in both Ireland and Wales?

"...practically every lake in Wales has some story or other connected with it. The story about the lake Glasfryn is very interesting. The story says that in the olden times there was a well where the lake is now, and this well, kept by a maiden named "Grassi," was called "Grace's Well." Over the well was a door, presumablv a trapdoor, which Grassi used to open when people wanted water, and shut immediately afterwards. One day Grassi forgot to shut the door, and the water overflowed and formed a lake. For her carelessness Grassi was turned into a swan, and her ghost is still said to haunt Glasfryn House and Cal-Ladi. This little lake is now the home and breeding-place of countless swans..." . . .
[I]n "Historical and descriptive notices of the city of Cork and its vicinity" first published in 1839 by John Windele. On Pages 42-43 we can read this:
A short distance to the south west, from the City, is Lough na famog, (probably the Lough Ere of the Hajiology,) now called the Lough of Cork, a considerable sheet of water supplied by streams from the adjoining hills; the high road runs along its eastern shore, and the other sides are skirted by grounds, unhappily without tree or shrub, to add a feature of beauty or interest to the picture. It is the scene of one of CROKER'S charming Fairy Legends, detailing the bursting forth of the lake, through the negligence of the princess Fioruisge (Irish: Fior-uisge - spring water), daughter of King Corc. In taking water from the charmed fountain, she forgot to close the mouth of the well, and the court, the gardens, the King, and his people, were buried beneath the flowing waters.

The incident is common to almost every lake in Ireland.

Six centuries ago, Cambrensis had a similar legend concerning Lough Neagh, which Hollinshed has repeated in a less diffusive style. "There was," he says, "in old time, where the pool now standeth, vicious and beastlie inhabitants. At which time was there an old saw in everie man his mouth, that as soon as a well there springing, (which for the superstitious reverence they bare it, was continuallie covered and signed,) were left open and unsigned, so soone would so much water gush out of that well, as would forthwith overwhelme the whole territorie. It happened at length, that an old trot came thither to fetch water, and hearing her childe whine, she ran with might and maine to dandle her babe, forgetting the observance of the superstitious order tofore used: But as she was returning backe, to have covered the spring, the land was so farre overflown, as that it past hir helpe; and shortly after, she, hir suckling, and all those that were within the whole territorie, were drowned; and this seemeth to carie more likelihood with it, because the fishers in a cleare sunnie daie, see the steeples and other piles plainlie and distinctlie in the water."
The legend that there was an inundated settlement in Cardigan Bay was corroborated a few years ago when a winter storm cleared away sands in the bay that had concealed it. Tree rings dated the event and confirmed that it happened at the same time as parallel events in Ireland.

It isn't implausible, however, that the modern neglected well legends derive not from a direct memory of the actual event, but from a similar winter storm that revealed the inundated settlement and demanded an explanation, much like the one a few years ago that led to the modern archaeological discovery.

This also raises the question of whether there was a broader sea level rise in the Atlantic Ocean at this time, perhaps due to some glacial dam finally breaking and flooding the ocean, that might have a connection to Plato's Atlantis myth, or even to the Biblical flood myth.

Friday, January 20, 2017

Physical v. Mathematical Constants

Some of the most memorable constants in mathematics like pi and e are transcendental numbers.

Is this true of some or all of the physical constants?

How would you know?

Even the most precisely measured of the physical constants is known to only a dozen or so significant digits, and no finite string of measured digits can, by itself, determine whether a constant is rational, algebraic, or transcendental, or even support a reasonable guess.
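Here is a small illustration of that point, using the inverse fine-structure constant (the central value and uncertainty below are approximate, and the denominator cutoff is arbitrary): a rational number can always be found comfortably inside the experimental error bar, so the measured digits alone can never rule rationality out.

```python
# A finite-precision measurement cannot distinguish rational from transcendental:
# some rational number always sits within the experimental uncertainty.
from fractions import Fraction

alpha_inv = 137.035999084    # approximate CODATA value of the inverse fine-structure constant
sigma = 0.000000021          # approximate one-sigma experimental uncertainty

candidate = Fraction(alpha_inv).limit_denominator(10**9)  # a nearby rational number
print(candidate)
print(abs(float(candidate) - alpha_inv) < sigma)          # True: the rational fits the data
```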

But, if you could come up with a formula from which a physical constant could be determined that had plausible reasons to be correct, perhaps you could know from the form of the formula, even if it wasn't actually possible to calculate the formula numerically to much more precision than the experimental measurement.

Of course, any physical constant with a factor of pi or e in it would be transcendental so long as the remaining factor is a non-zero algebraic number; if the remaining factor is itself transcendental (for example, if it contains an offsetting factor of 1/pi or 1/e, as the case may be), the product need not be transcendental.

There should be a term for a number that is still transcendental, even after factoring out well defined, purely mathematical constants. Physically transcendental perhaps?

Thursday, January 19, 2017

Amateurs Can Make A Difference Too

While rare, there are examples of individuals without formal scientific credentials making important contributions to science. J.B.S. Haldane, one of the biggest B-list names in genetics, is one such individual.
John Burdon Sanderson (JBS) Haldane (1892-1964) was a leading science popularizer of the twentieth century. Sir Arthur C. Clarke described him as the most brilliant scientific popularizer of his generation. Haldane was a great scientist and polymath who contributed significantly to several sciences although he did not possess an academic degree in any branch of science. He was also a daring experimenter who was his own guinea pig in painful physiological experiments in diving physiology and in testing the effects of inhaling poisonous gases.

Lattice QCD Makes Decent Approximations Of Experimental Data And QCD Does Axions

A new preprint compares a wide variety of recent lattice QCD predictions (and post-dictions) of the properties of various mesons and baryons.

The results show a solid array of accurate predictions. Neither the predictions nor, in many cases, the experimental results are terribly precise, but the lattice QCD calculations consistently reproduce the measurements. This tends to disfavor the possibility that there is beyond the Standard Model physics at work in QCD, and to support the hypothesis that, while doing the math to determine what the equations of QCD imply in the real world is hard, the underlying theory is basically sound.

Supercomputers applying lattice QCD have also made progress in establishing the mass of a hypothetical particle called the axion, part of a beyond the Standard Model modification introduced to explain the fact that CP violation is non-existent or negligible in strong force interactions. The final result is that the axion mass should be between 50 and 1,500 * 10^-6 eV/c^2. Hat tip to Backreaction.

This is on the same order of magnitude as the expected mass of the lightest neutrino in a normal mass hierarchy, or perhaps up to about 20-40 times lighter. By comparison, the second lightest neutrino mass is not less than about 8,000 * 10^-6 eV/c^2, and the heaviest neutrino mass is not less than about 52,000 * 10^-6 eV/c^2.
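A quick arithmetic check of that comparison, using only the bounds quoted above (the variable names are mine and the ratios are approximate):

```python
# Quick check of the mass comparison above (all values in eV).
axion_lo, axion_hi = 50e-6, 1500e-6    # lattice QCD axion mass window quoted above
nu2_min, nu3_min = 8000e-6, 52000e-6   # approximate neutrino mass lower bounds quoted above

print(round(nu2_min / axion_hi, 1))    # ~5.3: second lightest neutrino vs. heaviest allowed axion
print(round(nu2_min / axion_lo))       # ~160: second lightest neutrino vs. lightest allowed axion
print(round(nu3_min / axion_hi))       # ~35: heaviest neutrino vs. heaviest allowed axion
```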

These limits are considerably narrower than the state of the art as of 2014, as I explained in a blog post at that time:

A new pre-print by Blum, et al., examines observational limits on the axion mass and axion decay constant due to Big Bang Nucleosynthesis, because the role that the axion plays in strong force interactions would impact the proportions of light atoms of different types created in the early universe.
The study concludes that (1) the product of the axion mass and axion decay constant must be approximately 1.8*10^-9 GeV^2, and (2) that in order to solve the strong CP problem and be consistent with astronomy observations, that axion mass must be between 10^-16 eV and 1 eV in mass (with a 10^-12 eV limitation likely due to the hypothesis that the decay constant is less than the Planck mass). The future CASPEr2 experiment could place a lower bound on axion mass of 10^-12 eV to 10^-10 eV and would leave the 1 eV upper bound unchanged. 
Other studies argue that the axion decay constant must be less than 10^9 GeV (due to constraints from observations of supernovae SN1987A) and propose an axion mass on the order of 6 meV (quite close to the muon neutrino mass if one assumes a normal hierarchy and a small electron neutrino mass relative to the muon neutrino-electron neutrino mass difference) or less. Estimates of the axion mass in the case of non-thermal production of axions, which are favored if it is a dark matter particle, are on the order of 10^-4 to 10^-5 eV. There are also order of magnitude estimates of the slight predicted coupling of axions to photons. 
Other studies placing observational limitations on massive bosons as dark matter candidates apply only to bosons much heavier than the axion.
A narrower theoretically possible target, in turn, makes experimental confirmation or rejection of the axion hypothesis much easier. 

Sunday, January 15, 2017

Population Density In Modern Africa


A declassified CIA map showing African population density as of 2009 CE

Africa is currently in an arid phase and has been for most of the latter half of the Holocene era. So, while Africa's population has grown greatly in the last several thousand years, the relative population density of Africa's regions has probably remained pretty similar. I made a previous post along the same lines in 2012, but this is a better map.

Several aspects of this are notable.

1. Large parts of Africa (the Sahara, the Kalahari, and parts of the Congo) are virtually uninhabited.

2. The biogeographic divide between North Africa and Subsaharan Africa is real. The population of North Africa tightly hugs the Mediterranean coast. Even the Atlantic and Mediterranean coasts are basically barren and uninhabitable for long stretches.

3. The jungles are mostly not completely uninhabitable, even though they have low population densities in places.

4. I had not realized that the eastern part of South Africa was so much more habitable than the western part.

5. Many of the densely populated parts of Africa track fresh water sources - the Niger, the Nile and the Great Rift Valley. But, central Africa is still surprisingly thinly populated despite having two major lakes.

6. While my mental image of Ethiopia is of a place that is rural and barren, it is actually one of the most densely populated parts of Africa.

Friday, January 13, 2017

A Recap Of The State Of Muon g-2 Physics

The Current Experimental Result

The most definitive measurement of the anomalous magnetic moment of the muon (g-2) was conducted by the Brookhaven National Laboratory E821 experiment which announced its results on January 8, 2004.

This measurement, set forth below, differs somewhat from a precisely calculated theoretical value.

In units of 10^-11 and combining the errors in quadrature, the experimental result was:

E821        116 592 091 ± 63

Dividing the error by the total value gives a relative precision of about 540 parts per billion.

The Current Theoretical Prediction

The current state of the art theoretical prediction from the Standard Model of particle physics regarding the value of muon g-2, in units of 10^-11, is:

QED   116 584 718.95 ± 0.08
HVP                6 850.6 ± 43
HLbL                    105 ± 26
EW                     153.6 ± 1.0
Total SM 116 591 828 ± 49
The main contribution comes by far from QED, which is known to five loops (tenth order) and has a small, well-understood uncertainty. Sensitivity at the level of the electroweak (EW) contribution was reached by the E821 experiment. 
The hadronic contribution dominates the uncertainty (0.43 ppm compared to 0.01 ppm for QED and EW grouped together). This contribution splits into two categories, hadronic vacuum polarization (HVP) and hadronic light-by-light (HLbL). 
The HVP contribution dominates the correction, and can be calculated from the e+e− → hadrons cross-section using dispersion relations. 
The HLbL contribution derives from model-dependent calculations. 
Lattice QCD predictions of these two hadronic contributions are becoming competitive, and will be crucial in providing robust uncertainty estimates free from uncontrolled modeling assumptions. Lattice QCD predictions have well-understood, quantifiable uncertainty estimates. Model-based estimates lack controlled uncertainty estimates, and will always allow a loophole in comparisons with the SM.
In other words, unsurprisingly, almost all of the uncertainty in the theoretical prediction comes from the QCD part of the calculation. The HVP part has a relative precision of about ± 0.6% (roughly the precision with which the strong force coupling constant is known), while the HLbL part has a relative precision of only about ± 25%.

A 2011 paper suggested that most of the discrepancy between theory and experiment was probably due to errors in the theoretical calculation.

The Discrepancy

In the same units, the experimental result from 2004 exceeds the theoretical prediction by:

Discrepancy          263

How Significant Is This Discrepancy?

This is a 3.3 sigma discrepancy, which is notable in physics, but which is not considered definitive proof of beyond the Standard Model physics either.
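For anyone who wants to check the arithmetic, here is a minimal calculation using only the numbers quoted in this post (the last line reproduces the relative uncertainties of the HVP and HLbL contributions mentioned above); the combination in quadrature is the standard, simplest treatment and ignores any correlations:

```python
# Sanity check of the muon g-2 numbers quoted above (g-2 values in units of 1e-11).
from math import hypot

exp_val, exp_err = 116_592_091, 63   # BNL E821 measurement
sm_val, sm_err = 116_591_828, 49     # Standard Model prediction (total)

delta = exp_val - sm_val                           # 263
print(round(1e9 * exp_err / exp_val))              # ~540 parts per billion relative precision
print(round(delta / hypot(exp_err, sm_err), 1))    # ~3.3 sigma discrepancy
print(round(1e6 * delta / sm_val, 1))              # ~2.3 parts per million gap
print(round(100 * 43 / 6850.6, 1), round(100 * 26 / 105))  # HVP ~0.6%, HLbL ~25%
```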

The discrepancy is small enough that it could easily be due to some combination of a statistical fluke in the measurements and underestimated systematic and theoretical calculation errors. But this discrepancy has been viewed by physicists as one of the most notable in all of physics for the last thirteen years.

The QED prediction [for the electron g-2] agrees with the experimentally measured value to more than 10 significant figures, making the magnetic moment of the electron the most accurately verified prediction in the history of physics.
This is an accuracy on the order of one part per billion and the theoretical and experimental results for the electron g-2 are consistent at slightly less than the two sigma level. The five loop precision calculations of the electron g-2 have a theoretical uncertainty roughly three times as great as the current experimental uncertainty in that measurement.

So, physicists naturally expect the muon g-2 to also reflect stunning correspondence between the Standard Model theoretical prediction and experiment.

On the other hand, this discrepancy shouldn't be overstated either. The discrepancy between the theoretically predicted value and the experimentally measured value is still only 2.3 parts per million. As I noted in a 2013 post at this blog:
The discrepancy is simultaneously (1) one of the stronger data points pointing towards potential beyond the Standard Model physics (with the muon magnetic moment approximately 43,000 times more sensitive to GeV particle impacts on the measurement than the electron magnetic moment) and (2) a severe constraint on beyond the Standard Model physics, because the absolute difference and relative differences are so modest that any BSM effect must be very subtle.
In particular, my 2013 post made the following observations with regard to the impact of this discrepancy on SUSY theories:
The muon g-2 limitations on supersymmetry are particularly notable because unlike limitations from collider experiments, the muon g-2 limitations tend to cap the mass of the lightest supersymmetric particle, or at least to strongly favor lighter sparticle masses in global fits to experimental data of SUSY parameters. As a paper earlier this year noted: 
"There is more than 3 sigma deviation between the experimental and theoretical results of the muon g-2. This suggests that some of the SUSY particles have a mass of order 100 GeV. We study searches for those particles at the LHC with particular attention to the muon g-2. In particular, the recent results on the searches for the non-colored SUSY particles are investigated in the parameter region where the muon g-2 is explained. The analysis is independent of details of the SUSY models."
The LHC, of course, has largely ruled out SUSY particles with masses on the order of 100 GeV. Another fairly thoughtful reconciliation of the muon g-2 limitations with Higgs boson mass and other LHC discovery constraints can be found in a February 28, 2013 paper which in addition to offering its own light sleptons, heavy squark solution also catalogs other approaches that could work.
Regrettably, I have not located any papers examining experimental boundaries on SUSY parameter space that also include limitations from the absence of discovery of proton decay of less than a certain length of time, and the current thresholds of non-discovery of neutrinoless double beta decay. The latter, like muon g-2 limitations, generically tends to disfavor heavy sparticles, although one can design a SUSY model that addresses this reality. 
Some studies do incorporate the lack of positive detections of GeV scale WIMPS in direct dark matter searches by XENON 100 that have been made more definitive by the recent LUX experiment results. Barring "blind spots" in Tevatron and LHC and LEP experiments at low masses, a sub-TeV mass plain vanilla SUSY dark matter candidate is effectively excluded by current experimental results.
The failure of the collider experiments at the LHC to discover any new particles other than the Higgs boson since 2004 is one of the factors that suggests that the discrepancy is probably due to theoretical and experimental errors, rather than to new physics. The discrepancy is sufficiently small that, if it were due to new physics, that new physics should have been apparent at energies we have already probed by now.

What Now?

This year, the Fermilab E989 experiment will begin the process of replicating that measurement with greater precision. As the abstract to the paper describing the new experiment explains:
The upcoming Fermilab E989 experiment will measure the muon anomalous magnetic moment aµ. This measurement is motivated by the previous measurement performed in 2001 by the BNL E821 experiment that reported a 3-4 standard deviation discrepancy between the measured value and the Standard Model prediction. The new measurement at Fermilab aims to improve the precision by a factor of four reducing the total uncertainty from 540 parts per billion (BNL E821) to 140 parts per billion (Fermilab E989). This paper gives the status of the experiment.
Put another way, in units of 10^-11, the target is to reduce the experimental error to ± 16.3.

Meanwhile, the body of the paper notes that:
The uncertainties in the theory calculation are expected to improve by a factor of two on the timescale of the E989 experiment. This improvement will be achieved taking advantage of new data to improve both the HVP (BESIII [7], VEPP2000 [8] and B-factories data) and HLBL (KLOE-2 [9] and BESIII data), the latter gaining from the modeling improvements made possible with the new data. On the lattice QCD side, new ways of computing aµ from first principles and an increase in computing capability will provide the expected gains. 
Put another way, in units of 10^-11, the expectation is to reduce the theoretical uncertainty to ± 24.5.

Thus, in units of 10^-11, a one sigma discrepancy between the theoretical result and the experimental result will be ± 29.4 at Fermilab E989. As a result, the body of the paper notes that:
Given the anticipated improvements in both experimental and theoretical precision, if the central values remain the same there is a potential 7 standard deviation between theory and measurement (5 standard deviation with only experimental improvement).
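Under the same assumption as the quote (central values staying put), a rough projection of the E989-era numbers follows, again in units of 10^-11; because it uses this post's inputs and a plain quadrature combination, the significance comes out somewhat higher than the figures quoted in the paper:

```python
# Rough projection of the Fermilab E989 era (units of 1e-11), assuming the
# central values do not move: the experimental error shrinks to ~140 ppb of the
# measured value, and the theory error is expected to roughly halve.
from math import hypot

delta = 263                              # current experiment minus theory gap
exp_err_new = 140e-9 * 116_592_091       # ~16.3
th_err_new = 49 / 2                      # ~24.5
combined = hypot(exp_err_new, th_err_new)

print(round(exp_err_new, 1), round(combined, 1))   # ~16.3 and ~29.4
print(round(delta / combined, 1))                  # ~8.9 sigma with both improvements
print(round(delta / hypot(exp_err_new, 49), 1))    # ~5.1 sigma with experimental improvement only
# The paper quotes roughly 7 and 5 sigma respectively, presumably using slightly different inputs.
```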
Potential Implications Of New Experimental Results

If there is in fact no new physics, and all of the previous discrepancy between the theoretical value of muon g-2 and the experimentally measured value was due to theoretical calculation uncertainty, statistical errors, and systematic errors, then the newly measured experimental value of muon g-2 should fall, and improved theoretical calculations may nudge the theoretically expected result up a bit.

If that does happen, it will put a nail in the coffin of a huge swath of beyond the Standard Model theories. On the other hand, if the discrepancy grows in statistical significance, which in principle it has the experimental power to do, it will be a strong indicator that there is at least some BSM physics out there to be found that has not yet been observed at the LHC or anywhere else.

Wednesday, January 11, 2017

The Founding Americans Hung Out In Beringia Before Moving South

In another result confirming the prevailing paradigm of New World pre-history, archaeology has confirmed that humans were present in Beringia for 10,000 years, beginning before the Last Glacial Maximum (LGM), before migrating to North and South America when melting glaciers finally made that possible, roughly six thousand years after the LGM.


The timing of the first entry of humans into North America is still hotly debated within the scientific community. Excavations conducted at Bluefish Caves (Yukon Territory) from 1977 to 1987 yielded a series of radiocarbon dates that led archaeologists to propose that the initial dispersal of human groups into Eastern Beringia (Alaska and the Yukon Territory) occurred during the Last Glacial Maximum (LGM). This hypothesis proved highly controversial in the absence of other sites of similar age and concerns about the stratigraphy and anthropogenic signature of the bone assemblages that yielded the dates. The weight of the available archaeological evidence suggests that the first peopling of North America occurred ca. 14,000 cal BP (calibrated years Before Present), i.e., well after the LGM. Here, we report new AMS radiocarbon dates obtained on cut-marked bone samples identified during a comprehensive taphonomic analysis of the Bluefish Caves fauna. Our results demonstrate that humans occupied the site as early as 24,000 cal BP (19,650 ± 130 14C BP). In addition to proving that Bluefish Caves is the oldest known archaeological site in North America, the results offer archaeological support for the “Beringian standstill hypothesis”, which proposes that a genetically isolated human population persisted in Beringia during the LGM and dispersed from there to North and South America during the post-LGM period.

Interestingly, the artifacts found in the caves can be associated with a particular Northeast Siberian archaeological culture, called the Dyuktai culture (named after the type artifacts found at Dyuktai cave on the Aldan River in Siberia), whose links to the New World founding population have been seriously considered, at least since the publication of Seonbok Yi, et al., "The 'Dyuktai Culture' and New World Origins", 26(1) Current Anthropology (1985), and with less specificity as early as 1937. Professional opinion is divided over whether the similarities in the artifacts really support the conclusion that this is the source culture for the New World Founding population, however. A map at page 27 of the (badly scanned) linked monograph illustrates the location of this site and related sites of the early phase of this culture.

The materials uncovered also indicate a persistent, though intermittent, human occupation of the caves throughout the period when Beringia is believed to have been inhabited by the predecessors of the founding population of the Americas.
Small artefact series were excavated from the loess in Cave I (MgVo-1) and Cave II (MgVo-2) and rich faunal assemblages were recovered from all three caves [23–27]. The lithic assemblages (which number about one hundred specimens) include microblades, microblade cores, burins and burin spalls as well as small flakes and other lithic debris [23–26]. Most of the artefacts were recovered from the loess of Cave II at a depth comprised between about 30 to 155 cm. The deepest diagnostic pieces–a microblade core (B3.3.17), a burin (B3.6.1) and a core tablet (B4.16.4) found inside Cave II, as well as a microblade (E3.3.2) found near the cave entrance–derive from the basal loess at a depth of about 110 to 154 cm below datum, according to the CMH archives [28]. While the artefacts cannot be dated with precision [24, 25, 29], they are typologically similar to the Dyuktai culture which appears in Eastern Siberia about 16–15,000 cal BP, or possibly earlier, ca. 22–20,000 cal BP [30]. 
There are no reported hearth features [24]. Palaeoenvironmental evidence, including evidence of herbaceous tundra vegetation [31, 32] and vertebrate fauna typical of Pleistocene deposits found elsewhere in Eastern Beringia [27, 33, 34], is consistent with previously obtained radiocarbon dates which suggest that the loess layer was deposited between 10,000 and 25,000 14C BP (radiocarbon years Before Present), i.e., between 11,000 and 30,000 cal BP [23–27, 35] (Table 1).

The conclusion to the current article (a part of which is quoted below with citations omitted) notes that:
[T]he Bluefish Caves, like other Beringian cave sites, were probably only used occasionally as short-term hunting sites. Thus, they differ from the open-air sites of the Tanana valley in interior Alaska and the Little John site in the Yukon Territory, where hearth features, large lithic collections, bone tools and animal butchery have been identified, reflecting different cultural activities and a relatively longer-term, seasonal occupation. 
In conclusion, while the Yana River sites indicate a human presence in Western Beringia ca. 32,000 cal BP, the Bluefish Caves site proves that people were in Eastern Beringia during the LGM, by at least 24,000 cal BP, thus providing long-awaited archaeological support for the “Beringian standstill hypothesis”. According to this hypothesis, a human population genetically isolated existed in Beringia from about 15,000 to 23,000 cal BP, or possibly earlier, before dispersing into North and eventually South America after the LGM. 
Central Beringia may have sustained human populations during the LGM since it offered relatively humid, warmer conditions and the presence of woody shrubs and occasional trees that could be used for fuel. However, this putative core region was submerged at the end of the Pleistocene by rising sea levels making data collection difficult. Bluefish Caves, situated in Eastern Beringia, may have been located at the easternmost extent of the standstill population’s geographical range. The seasonal movements of human hunters from a core range, hypothetically located in Central Beringia, into adjoining, more steppic regions such as Eastern Beringia would explain the sporadic nature of the occupations at Bluefish Caves. 
The scarcity of archaeological evidence for LGM occupations in both Western and Eastern Beringia suggests that the standstill population was very small. This is consistent with the genetic data, which suggest that the effective female population was only about 1000–2000 individuals and that the standstill population didn’t exceed a few tens of thousands of people in Beringia. The size of the standstill population is thought to have increased after the LGM, leading to renewed dispersals into the Americas. Our results indicate that human hunters continued to use Bluefish Caves as the climate improved. While some prey species became extinct by ca. 14,000 cal BP (e.g. horse), human hunters could continue to rely on different species such as caribou, bison and wapiti.
By around 15–14,000 cal BP an ice-free corridor formed between the Laurentide and Cordilleran ice sheets potentially allowing humans to disperse from Beringia to continental North America; arguably, this corridor wouldn’t have been biologically viable for human migration before ca. 13–12,500 cal BP, however. It is now more widely recognized that the first inhabitants of Beringia probably dispersed along a Pacific coastal route, possibly as early as ca. 16,000 cal BP, and settled south of the ice sheets before the ice-free corridor became a viable route.

Our results, therefore, confirm that Bluefish Caves is the oldest known archaeological site in North America and furthermore, lend support to the standstill hypothesis. 
The bottom line is that we increasingly understand the settlement of the Americas by humans in considerable geographical and chronological detail backed up by multiple consistent lines of solid evidence.

Tuesday, January 10, 2017

De-Domesticating The Cow

Ten thousand years ago, people in the Fertile Crescent domesticated the wild aurochs, and the result was the domesticated cow (taurine cattle, in the Fertile Crescent); within a couple of thousand years, zebu cattle were also domesticated from wild aurochs in India. Based upon the mtDNA of domesticated taurine cattle, it appears that all domesticated taurine cattle originated from about 80 wild female aurochs in the Near East.

About 7,200 years ago, domesticated cows reached Europe's Pontic-Caspian steppe, and people who had previously hunted wild aurochs, wild horses (three different species), and wild pigs turned to domesticated cows, sheep and pigs.

The wild aurochs species that gave rise to domesticated cows went extinct when the last member of the last remaining herd died in Poland in 1627.

Now, in an enterprise that is part old-school selective breeding and part ecological experiment, Europeans are trying to de-domesticate the cow (a process also called "re-wilding") and return a wild, aurochs-like animal to Europe's forests. The process mirrors efforts to restore bison herds and wolf packs to the American West.
Conservationists now believe the loss of the keystone herbivore was tragic for biodiversity in Europe, arguing that the aurochs' huge appetite for grazing provided a natural "gardening service" that maintained landscapes and created the conditions for other species to thrive. . . . 
Ecologist Ronald Goderie launched the Tauros programme in 2008, seeking to address failing ecosystems. The most powerful herbivore in European history seemed to offer a solution. 
"We thought we needed a grazer that is fully self-sufficient in case of big predators...and could do the job of grazing big wild areas," says Goderie. "We reasoned that this animal would have to resemble an auroch." 
Rather than attempt the type of gene editing or high-tech de-extinction approaches being employed for species from woolly mammoths to passenger pigeons, Goderie chose a method known as back-breeding to create a substitute bovine he named "Tauros." 
Auroch genes remain present in various breeds of cattle around the continent, and the team identified descendants in Spain, Portugal, Italy and the Balkans. Geneticists advised breeding certain species together to produce offspring closer to the qualities of an auroch, and then breed the offspring. 
The animals get closer with each generation, and the team have the advantage of being able to test the offspring's DNA against the complete genome of an auroch, which was successfully sequenced at University College Dublin. 
"You could see from the first generation that apart from the horn size, there was enough wild in the breed to produce animals far closer to the auroch than we would have expected," says Goderie. 
The ecologist had predicted that seven generations would be necessary for the desired outcome, which might be achieved by 2025. The program is now in its fourth generation, and pilot schemes across Europe are offering encouragement. 
The Tauros programme connected early on with Rewilding Europe, a group that supports the restoration of natural processes through projects that range from rebuilding rivers to introducing apex predators. . . . "We see progress not only in looks and behavior but also in de-domestication of the animals," he says. "This is a challenging process as they have to adapt to the presence of large packs of wolves." 
Herds of herbivores are habitually decimated by local wolves at the Croatian site, says Goderie. By contrast, the Tauros learned to defend themselves and suffered few losses. 
. . . . 
"Bovines can shape habitats and facilitate other species because of their behavior, and the more primitive and close to the wild the better because it means that eventually they can become part of the natural system." 
Many European landscapes are in dire need of grazing animals; without them, such landscapes can become uninhabitable for other species. 
"Without grazing everything becomes forest, or barren land when there is agriculture," says [Frans] Schepers [managing director of Rewilding Europe]. "The gradients in between are so important for biodiversity, from open soil to grassland and 'mosaic landscapes. The Rhodope Mountains of Bulgaria are among the richest areas for reptiles such as tortoises, snakes and lizards. But they need open spaces or they lose their habitat." . . .
"Questions abound whether primarily wetland forests like the aurochs used to inhabit still exist, whether it could negatively impact wild or domestic plants or animals, and if it might endanger people." 
The successful introduction of bison in the US shows that such initiatives can have a positive impact, says Dr. Eric Dinerstein of conservation group Resolve, but he adds that one intervention can lead to another. 
"If an ecosystem evolved with large herbivores. . . there is not an alternative and you need something in its functional role," he says. "But to introduce aurochs, you may need predators as well."
Perhaps humans, one of the main predators of aurochs in the Mesolithic era, will return to that role.

Rewilding is one area where North America has an edge over Europe. It has more open spaces, and it is far less removed in time from an era in which humans had a much milder impact on wild species.

Restoring the bison meant rolling back the clock a century, while restoring the aurochs depends upon gleaning genes that entered domesticated cattle many thousands of years ago and upon restoring habitats that have changed far more dramatically in the meantime.

Sunday, January 8, 2017

The Bullet Cluster Supports Modified Gravity Theories

The initial reaction to the Bullet Cluster was that it resolved the modified gravity v. particle dark matter debate unequivocally in favor of particle dark matter. But, in fact, the opposite is true. Modified gravity theories can explain the Bullet Cluster, while the Bullet Cluster is deeply problematic for particle dark matter theories.
[C]urious scientists had to tell apart their two hypotheses: Modified gravity or particle dark matter? They needed an observation able to rule out one of these ideas, a smoking gun signal – the Bullet Cluster. 
The theory of particle dark matter had become known as the “concordance model” (also: ΛCDM). It heavily relied on computer simulations which were optimized so as to match the observed structures in the universe. From these simulations, the scientists could tell the frequency by which galaxy clusters should collide and the typical relative speed at which that should happen.

From the X-ray observations, the scientists inferred that the collision speed of the galaxies in the Bullet Cluster must have been approximately 3000 km/s. But such high collision speeds almost never occurred in the computer simulations based on particle dark matter. The scientists estimated the probability for a Bullet-Cluster-like collision to be about one in ten billion, and concluded: that we see such a collision is incompatible with the concordance model. And that’s how the Bullet Cluster became strong evidence in favor of modified gravity. 
However, a few years later some inventive humanoids had optimized the dark-matter based computer simulations and arrived at a more optimistic estimate of a probability of 4.6×10⁻⁴ for seeing something like the Bullet Cluster. Briefly later they revised the probability again to 6.4×10⁻⁶.
Either way, the Bullet Cluster remained a stunningly unlikely event to happen in the theory of particle dark matter. It was, in contrast, easy to accommodate in theories of modified gravity, in which collisions with high relative velocity occur much more frequently.
It might sound like a story from a parallel universe – but it’s true. The Bullet Cluster isn’t the incontrovertible evidence for particle dark matter that you have been told it is. It’s possible to explain the Bullet Cluster with models of modified gravity. And it’s difficult to explain it with particle dark matter.

How come we so rarely read about the difficulties the Bullet Cluster poses for particle dark matter? It’s because the pop sci media doesn’t like anything better than a simple explanation that comes with an image that has “scientific consensus” written all over it. Isn’t it obvious the visible stuff is separated from the center of the gravitational pull?

But modifying gravity works by introducing additional fields that are coupled to gravity. There’s no reason that, in a dynamical system, these fields have to be focused at the same place where the normal matter is. Indeed, one would expect that modified gravity too should have a path dependence that leads to such a delocalization as is observed in this, and other, cluster collisions. And never mind that when they pointed at the image of the Bullet Cluster nobody told you how rarely such an event occurs in models with particle dark matter. 
No, the real challenge for modified gravity isn’t the Bullet Cluster. The real challenge is to get the early universe right, to explain the particle abundances and the temperature fluctuations in the cosmic microwave background. The Bullet Cluster is merely a red-blue herring that circulates on social media as a shut-up argument. It’s a simple explanation. But simple explanations are almost always wrong.
From Sabine Hossenfelder at Backreaction.

The second reference in the quoted material is to the following paper:
To quantify how rare the bullet-cluster-like high-velocity merging systems are in the standard LCDM cosmology, we use a large-volume 27 (Gpc/h)^3 MICE simulation to calculate the distribution of infall velocities of subclusters around massive main clusters. The infall-velocity distribution is given at (1-3)R_{200} of the main cluster (where R_{200} is similar to the virial radius), and thus it gives the distribution of realistic initial velocities of subclusters just before collision. These velocities can be compared with the initial velocities used by the non-cosmological hydrodynamical simulations of 1E0657-56 in the literature. The latest parameter search carried out recently by Mastropietro and Burkert showed that the initial velocity of 3000 km/s at about 2R_{200} is required to explain the observed shock velocity, X-ray brightness ratio of the main and subcluster, and displacement of the X-ray peaks from the mass peaks. We show that such a high infall velocity at 2R_{200} is incompatible with the prediction of a LCDM model: the probability of finding 3000 km/s in (2-3)R_{200} is between 3.3×10^{-11} and 3.6×10^{-9}. It is concluded that the existence of 1E0657-56 is incompatible with the prediction of a LCDM model, unless a lower infall velocity solution for 1E0657-56 with < 1800 km/s at 2R_{200} is found.
Jounghun Lee, Eiichiro Komatsu, "Bullet Cluster: A Challenge to LCDM Cosmology" (May 22, 2010). Later published in Astrophysical Journal 718 (2010) 60-65.
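To make concrete how this sort of number is extracted, here is a minimal Python sketch of the tail-fraction calculation: given a catalogue of simulated subcluster infall velocities, the quoted probability is simply the fraction of the catalogue at or above the Bullet-like velocity. The lognormal distribution and every parameter below are invented for illustration; the actual analysis measures the infall-velocity distribution directly from the MICE N-body halo catalogue rather than assuming a functional form.

```python
# Illustrative only: estimate P(v_infall >= v) as a tail fraction of a mock
# infall-velocity catalogue. The distribution and its parameters are made up;
# a real analysis would use velocities measured in an N-body simulation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical infall velocities (km/s) of subclusters near ~2 R_200.
mock_infall_velocities = rng.lognormal(mean=np.log(1100.0), sigma=0.25, size=5_000_000)

def tail_probability(velocities, v_threshold):
    """Fraction of catalogue entries with infall velocity >= v_threshold."""
    return np.count_nonzero(velocities >= v_threshold) / velocities.size

for v in (1800.0, 3000.0):
    print(f"P(v_infall >= {v:.0f} km/s) ~ {tail_probability(mock_infall_velocities, v):.1e}")
```

The takeaway is purely methodological: the headline probabilities are sensitive to the shape of the high-velocity tail, which is exactly where finite simulation volume and the choice of halo catalogue matter most.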

The next referenced paper is:
The Bullet Cluster has provided some of the best evidence for the Λ cold dark matter (ΛCDM) model via direct empirical proof of the existence of collisionless dark matter, while posing a serious challenge owing to the unusually high inferred pairwise velocities of its progenitor clusters. Here, we investigate the probability of finding such a high-velocity pair in large-volume N-body simulations, particularly focusing on differences between halo-finding algorithms. We find that algorithms that do not account for the kinematics of infalling groups yield vastly different statistics and probabilities. When employing the ROCKSTAR halo finder that considers particle velocities, we find numerous Bullet-like pair candidates that closely match not only the high pairwise velocity, but also the mass, mass ratio, separation distance, and collision angle of the initial conditions that have been shown to produce the Bullet Cluster in non-cosmological hydrodynamic simulations. The probability of finding a high pairwise velocity pair among haloes with M_halo ≥ 10¹⁴ M_⊙ is 4.6 × 10⁻⁴ using ROCKSTAR, while it is ≈34× lower using a friends-of-friends (FoF)-based approach as in previous studies. This is because the typical spatial extent of Bullet progenitors is such that FoF tends to group them into a single halo despite clearly distinct kinematics. Further requiring an appropriately high average mass among the two progenitors, we find the comoving number density of potential Bullet-like candidates to be of the order of ≈10⁻¹⁰ Mpc⁻³. Our findings suggest that ΛCDM straightforwardly produces massive, high relative velocity halo pairs analogous to Bullet Cluster progenitors, and hence the Bullet Cluster does not present a challenge to the ΛCDM model.
Robert Thompson, et al., "The rise and fall of a challenger: the Bullet Cluster in Λ Cold Dark Matter simulations" (June 29, 2015) and published at MNRAS.
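The ≈34× gap between halo finders is easier to appreciate with a toy example. The sketch below is not the ROCKSTAR or FoF pipeline used in the paper; it just builds two mock, overlapping particle clumps with a ~3000 km/s relative velocity (all particle counts, scales, and linking lengths are invented) and groups the particles two ways: by positions alone, and by positions plus rescaled velocities. The position-only linking merges the pair into a single object, while the phase-space linking keeps the two kinematically distinct progenitors separate, which is the qualitative effect the abstract describes.

```python
# Toy sketch: two overlapping mock "halos" with a ~3000 km/s relative velocity,
# grouped by (a) position-only friends-of-friends-style linking and (b) the same
# linking in a rescaled position+velocity space. All parameters are invented.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(42)

def make_clump(center, bulk_velocity, n=500, r_scale=1.0, v_scale=300.0):
    """Mock halo: Gaussian positions (Mpc/h) and velocities (km/s)."""
    pos = rng.normal(center, r_scale, size=(n, 3))
    vel = rng.normal(bulk_velocity, v_scale, size=(n, 3))
    return pos, vel

pos1, vel1 = make_clump(center=[0.0, 0.0, 0.0], bulk_velocity=[0.0, 0.0, 0.0])
pos2, vel2 = make_clump(center=[1.5, 0.0, 0.0], bulk_velocity=[3000.0, 0.0, 0.0])
pos = np.vstack([pos1, pos2])
vel = np.vstack([vel1, vel2])

def count_halos(coords, linking_length=0.5, min_members=20):
    """Link points closer than linking_length; count groups above a size cut."""
    pairs = np.array(list(cKDTree(coords).query_pairs(linking_length)))
    n = len(coords)
    graph = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(graph, directed=False)
    return int(np.sum(np.bincount(labels) >= min_members))

# (a) Position-only linking: the overlapping clumps merge into one group.
print("position-only halos:", count_halos(pos))

# (b) Phase-space linking: treat 2000 km/s as comparable to 1 Mpc/h (an
# arbitrary rescaling) so the distinct bulk motions keep the clumps apart.
print("phase-space halos:  ", count_halos(np.hstack([pos, vel / 2000.0])))
```

The particular velocity rescaling is arbitrary; the design point is simply that a finder which ignores velocities cannot distinguish two kinematically distinct progenitors once they overlap spatially, so it undercounts exactly the fast, close pairs that matter for the Bullet Cluster statistics.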

The paper in the final reference is as follows:
We consider the orbit of the bullet cluster 1E 0657-56 in both CDM and MOND using accurate mass models appropriate to each case in order to ascertain the maximum plausible collision velocity. Impact velocities consistent with the shock velocity (~4700 km/s) occur naturally in MOND. CDM can generate collision velocities of at most ~3800 km/s, and is only consistent with the data provided that the shock velocity has been substantially enhanced by hydrodynamical effects.
Garry W. Angus and Stacy S. McGaugh, "The collision velocity of the bullet cluster in conventional and modified dynamics" (September 2, 2007) and also published at MNRAS.

In tangentially related news, even Lubos Motl is on record as being at least somewhat agnostic on the question of whether dark matter phenomena are caused by dark matter particles or by modifications of gravity (in the context of a debate over who, if anyone, should receive a Nobel prize related to dark matter):
[T]hese Zwicky-vs-Rubin disputes aren't too relevant for one reason: We are not terribly certain that dark matter is the right explanation of the anomalies. Given the not quite negligible "risk" that the right explanation is completely different, something like MOND, it could be very strange to give the Nobel prize for "it". What "it" even means? Look at the list of the Nobel prize winners. No one has ever received the Nobel prize for the discovery of "something" that no one knew what it actually was – a new particle? Black holes everywhere? A new term in Newton's gravitational law? The normal contribution rewarded by Nobel prizes is a clearcut theory that was experimentally proven, or the experimental proof of a clear theory. Even though most cosmologists and particle physicists etc. tend to assume dark matter, dark matter-suggesting observations aren't really belonging to this class yet. 
And I think that this is the actual main reason why Vera Rubin hasn't gotten the prize for dark matter – and no one else has received it, either.