Tuesday, June 14, 2016

Service Interruption

I am in the midst of representing clients in two long, back-to-back trials, as the lawyer who was to handle what has become my role in one of them is on his honeymoon.  One of them, a jury trial, is now four days past its scheduled conclusion. I'll come up for air when they're finished, which may be another week or so.

Tuesday, May 31, 2016

John Hawks On Neanderthal Stone Circles

John Hawks muses at length on Neanderthal stone circles recently discovered deep in a cave. His main point is that given the much smaller average population of Neanderthals at any one time, and their greater remoteness in history, it isn't surprising that the archaeological record that might shed light on their culture is thin.

More Quick Hits

* Celtic parts of the U.K. (presumably Scotland, Wales, and Northern Ireland) have more steppe ancestry than Southern and Eastern England proper, presumably because Norman invaders ca. 1066 CE had less steppe ancestry than the pre-existing residents of the U.K.  The residents of England proper also have less steppe ancestry than Anglo-Saxon ancient DNA.  Keep in mind, however, that this is a subtle difference that is discernible only because of a huge sample size (N=113,851) in a generally very homogeneous population.

* 48 ancient genomes from Iron Age to 18th century Finland will be available before year end.

Thursday, May 26, 2016

Ancient Phoenician mtDNA Looks European

The mito-genome of a Carthaginian Phoenician has been sequenced...The Tunisia man had the maternal haplotype U5b2c1, which is fairly limited to Europe. It is also found in low frequency in the Phoenician heartland of Lebanon, which was either native there as well or it was cross-pollinated through its colonies in Western and Southern Iberia. Additional sequencing shows some affinity to a person from Portugal.
From Bell Beaker Blogger.

Wednesday, May 25, 2016

Trying To Mix Physics and Religion Can Get Embarrassing

Letting money from religiously motivated people drive discussions of physics can lead to absurd crazy-talk.
Some people at Rutgers have decided to show what can go wrong when you have the Templeton Foundation funding “philosophy of physics”. They’ve scheduled a two-day Rutgers Mini-Conference on Multiverse, Theodicy, and Fine-Tuning, during which the speakers will consider the following two topics:
  • Everettian Quantum Mechanics and Evil
    The problem of evil has been around for a long time: How can an all-powerful and all-good God allow evil of the sorts we see in the world? If the Everettian interpretation of quantum mechanics is correct, though, then there is a lot more evil in the world than what we see. This suggest a second problem of evil: If Everettianism is true, how can an all-powerful and all-good God allow evil of the sort we don’t see?
  • A Probability Problem in the Fine-Tuning Argument
    According to the fine-tuning argument: (i) the probability of a life-permitting universe, conditional on the non-existence of God, is low; and (ii) the probability of a life-permitting universe, conditional on the existence of God, is high. I demonstrate that these two claims cannot be simultaneously justified.
From Not Even Wrong.

For those of you who aren't familiar with it, the Everettian interpretation of quantum mechanics is another name for the "Many Worlds Interpretation".

The "fine-tuning" argument argues that physical constants in our current version of the laws of the universe which are derived from a variety of other physical constants, must have precise and absurdly "unlikely" values to cancel out and produce the measured values.

Every now and then I consider absurd questions too, like whether unicorn meat would be kosher. But, I don't try to hold legitimate academic mini-conferences discussing the issue.

Quick Hits

Busy with work and fighting an eight week old spring cough, so I'll be brief:

* Ny's arya blog has an interesting post about the linguistic evidence that the proto-Indo-European homeland had lots of tall mountains, contra the stereotype of proto-Indo-Europeans as purely steppe people.  Likewise, there is lots of sedentary agricultural vocabulary in proto-Indo-European.

This blog also pointed me to a nice meaty article (pdf) on the latest thinking about the Tocharian languages (the easternmost and now extinct branch of the Indo-European languages) by J.P. Mallory, a leading expert in the field.

* There is a new linguistics paper on Dene-Yeniseic by the main proponent of the case that the two language families (one Old World and one New World) are linguistically related.

* Someone found a beer recipe from China ca. 3000 BCE.

* Corded Ware women were more mobile than men per Strontium analysis of remains.

* British Bell Beaker people were mobile within Britain but not so much beyond it. Many more findings from the same paper are discussed here.  And, Beaker people were all over the British Isles:



* In the course of a discussion at Razib's blog over whether Muhammed could have been a Christian or Christian influenced, I did some research on and learned a lot about the Parthian Empire, which is relevant to a lot of Iron Age history.  The discussion by Razib, me and multiple others is worth reading; it brings out stark differences of opinion on the facts, subtle differences of opinion on the importance and characterization of those facts, and plain old lots of historical data points from all sides that you probably didn't hear about, or don't recall, from Western Civ.  My usual focus is pre-Iron Age, but I'm intrigued enough to look more closely into this period.

* I've been largely convinced that you can't meaningfully estimate how many genes influence a continuous genetically determined trait by looking at the trait's variance and assuming that, by the law of averages, a trait determined by a large number of genes will have low variance in a population.  This is true, at the most basic level, because the law of averages applies only to repeated trials with the same probability and effect, while the loci contributing to additive genetic variance do not meet this condition (a minimal simulation illustrating the point appears at the end of these quick hits).

* Analysis of whole genomes from all over the world by software that allows for admixture tends to show that South Asians and East Eurasians mostly descend from West Africans with material East African admixture.  But, it's hard to know what to make of that, as the software is being pushed beyond the range of applicability for which it was designed and can't necessarily consider all of the hypotheses that would make sense of the data.

* Humans were hunting mastodons in Florida ca. 14,550 years ago (pre-Clovis).

* Fifteen Indus River Valley civilization remains have been sent to labs for ancient DNA analysis and someday we'll learn what they can tell us about that civilization, whose population genetic connection with West Eurasia and South Asia is highly disputed.  But, it could take a while.

* Brown bears and modern humans migrated to similar places over the last 40,000 years (pre-agriculture) based upon brown bear genetics. Mammoth herd migration and genetics are similarly informative regarding Paleolithic era human migrations. See also data points on elk.

* Donkeys were late arrivals to Europe, but donkey remains were found from 2500 BCE in Iberia.

* Somebody has unearthed a Dutch Bell Beaker boardwalk along a river.

* Bronze Age instrument construction involved very long range trade networks and the instruments themselves were widely dispersed.

* The geology of the Andaman Islands explained.
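
As promised in the quantitative genetics bullet above, here is a minimal, hypothetical simulation (my own toy example, not from any of the linked papers) showing why trait variance alone can't tell you how many genes are involved: architectures with very different numbers of loci produce essentially the same population variance once the effect sizes scale accordingly.

import numpy as np

rng = np.random.default_rng(0)

def simulate_trait_variance(n_loci, n_people=50_000, freq=0.5, target_var=1.0):
    """Toy additive model: each person carries 0, 1 or 2 copies of the '+' allele
    at each locus; per-locus effect sizes are scaled so that the expected trait
    variance is the same no matter how many loci contribute."""
    # Per-locus genetic variance is 2*freq*(1-freq)*a^2, so choose a to hit target_var in total.
    a = np.sqrt(target_var / (n_loci * 2 * freq * (1 - freq)))
    genotypes = rng.binomial(2, freq, size=(n_people, n_loci))
    trait = genotypes @ np.full(n_loci, a)
    return trait.var()

for n in (10, 100, 1000):
    print(n, "loci -> trait variance ~", round(simulate_trait_variance(n), 3))
# All three architectures give a variance near 1.0, so the observed variance by
# itself does not pin down the number of contributing genes.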

Tuesday, May 17, 2016

mtDNA R0 is Native to Arabia

A new paper makes a very solid case that mtDNA R0, the sister clade to mtDNA HV (the dominant West Eurasian and North African Berber mtDNA clade), has a source in Arabia.  It has maximum diversity there, and most of its clades are most common there.  R0 is mostly found in Arabia and, to a lesser extent, West Asia, but mtDNA R0a1 is also found at low frequencies across Europe; both it and mtDNA R0a2'3 dispersed in the Mesolithic, a.k.a. the Epipaleolithic, roughly 17kya to 13kya.

The full story is more complicated but generally fits a Mesolithic dispersal from Arabia which would have served as a refuge during the Last Glacial Maximum ca. 20kya.

The subclade mtDNA R0a2b2 is strongly associated by location, proximity, and age with the Ethio-Semitic migration from Arabia to East Africa. R0a2b1, which strongly overlaps with it geographically, could be part of the same migration, but might instead reflect an earlier wave of Arabian to East African migration, as that clade (along with a couple of other rare mtDNA R0 clades of the same age) is two and a half times older than R0a2b2.

A rare clade mtDNA R0a6 is found mostly in Pakistan, mostly in the Kalash, but single individuals with this clade have also been noted in Iran, Palestine and Italy.

Wednesday, May 11, 2016

Einstein Published Theory Of General Relativity 100 Years Ago Today

The details are here. The theory was presented in lectures to the Prussian Academy of Sciences in November of 1915, but the paper was submitted in March of 1916 and was published on May 11, 1916. Astonishingly little has changed in the theory of gravity since then (there have been refinements and applications of it, but not many).

In other news, the LHC is up and running again after a weasel induced shutdown. Grad students have been put on weasel patrol to prevent a repeat of the shutdown.
For the nearest future, the plan is to have a few inverse femtobarns on tape by mid-July, which would roughly double the current 13 TeV dataset. The first analyses of this chunk of data should be presented around the time of the ICHEP conference in early August. At that point we will know whether the 750 GeV particle is real. Celebrations will begin if the significance of the diphoton peak increases after adding the new data, even if the statistics is not enough to officially announce a discovery. In the best of all worlds, we may also get a hint of a matching 750 GeV peak in another decay channel (ZZ, Z-photon, dilepton, t-tbar,...) which would help focus our model building. On the other hand, if the significance of the diphoton peak drops in August, there will be a massive hangover...

Tuesday, May 10, 2016

A Moment Of Silence For Marcus


His Online Icon (via Mad Magazine)

I never met Marcus in person, but for about twelve years I discussed physics with him on countless lengthy occasions at the Physics Forums bulletin board, where he presided over the Beyond the Standard Model forum with a special focus on loop quantum gravity (as well as other areas of physics where we shared interests). He was always a perfect gentleman while also having great insights into physics.  He was a great mentor and friend.  He joined the board about a year before I did and in that time made about 25,000 comments and started 757 discussion threads.

It is with a heavy heart that I report that he has died of cancer (Friday or thereabouts).  According to his son, "it was esophagus cancer -- we found out about it in September, but by that point it was advanced to a level where not much could be done. We tried anyway -- chemo, radiation, etc. But, well..."

It is a sad day worthy of a moment of silence to reflect on what we shared.  The world is less wonderful without him.

Higher Primates Mostly Went Extinct In Asia When India Crashed Into Asia

Primates (per Wikipedia) can be broken down into a number of clades, one of which is called Simiiformes (aka "anthropoids"), which excludes tarsiers, lemurs and lorises (the so-called "lower primates"), but includes humans, chimpanzees, gorillas, orangutans, gibbons, Old World Monkeys and New World Monkeys.

A new study, cited below, out of China provides insights into why "higher primates" survived and evolved into various clades including humans in Africa, while they went extinct leaving only "lower primates" in Asia.

The key event took place 34 million years ago, when the continental plate that is now the Indian subcontinent collided with the rest of Asia, resulting in climate changes that destroyed jungles that had been home to primates until then. "Lower primates" managed to survive in the less lush forests that remained, while "higher primates" could not survive without a thriving tropical jungle.
A sharply cooler and drier climate at that time, combined with upheavals of landmasses that forged the Himalayas and the Tibetan Plateau, destroyed many tropical forests in Asia. That sent surviving primates scurrying south, say paleontologist Xijun Ni of the Chinese Academy of Sciences in Beijing and his colleagues. New Chinese finds provide the first fossil evidence that the forerunners of monkeys, apes and humans, also known as anthropoids, were then largely replaced in Asia by creatures related to modern lemurs, lorises and tarsiers. . . . 
“The focal point of anthropoid evolution shifted at some point from Asia to Africa, but we didn’t understand when and why the shift occurred until now,” says paleontologist and study coauthor K. Christopher Beard of the University of Kansas in Lawrence.
But the scarcity of Asian primate fossils from that time relative to those from Africa leaves the matter unsettled. Egyptian sites in particular have yielded numerous primate fossils dating from around 37 million to 30 million years ago. 
Excavations from 2008 to 2014 in southern China produced 48 teeth, some still held in jaw fragments, from six new fossil primate species, Beard says. These primates were tree dwellers and had assembled in a region located far enough south to retain forested areas. The new finds provide a rare glimpse of Asian primates that managed to weather the climate shift. Fossil teeth of one ancient species look much like those of modern tarsiers. These tiny, bug-eyed primates now live on Southeast Asian islands. “Tarsiers are ‘living fossils’ that can trace their evolutionary history back tens of millions of years in Asia,” Beard says. 
Only one Chinese fossil primate comes from an anthropoid, Ni’s group concludes. The researchers classify that animal as part of a line of Asian anthropoids previously identified from roughly 40-million-year-old tooth and jaw fragments found in Myanmar, just across China’s southwestern border (SN: 10/16/99, p. 244). 
Only one other Asian site, in Pakistan, has yielded anthropoid fossils of comparable age to the Chinese finds. The Pakistan fossils consist solely of teeth. Asian anthropoids died out a few million years after the continent’s tropical forests began to shrink, Beard suspects. 
Investigators already knew that primates’ forest homes in Africa survived the ancient cool down better than those in Asia. 
The Chinese team also argues, with less definitive support, for an Asian origin of higher primates around 55 million years ago. But, "Lemur and loris ancestors must have lived in equatorial Africa and Madagascar by 34 million years ago, as lemurs and loris relatives do today. . . . The oldest known primate fossils, from 56 million to 55 million years ago, come from Asia, Europe, Morocco and North America." So, it is much harder to determine where primates, in general, first evolved.

The new paper is:

X. Ni et al. "Oligocene primates from China reveal divergence between African and Asian primate evolution." 352 Science 673 (May 6, 2016).

Is The 750 GeV Resonance Real? I Don't Think So

Hundreds of papers have been written about an apparent 750 GeV diphoton resonance, which Lubos likes to call the Cernette, seen at some moderate significance by both of the LHC experiments.

These bare facts would suggest a boson rather than a fermion (since a two-photon final state carries integer total angular momentum), probably of spin-0 or spin-2 (because a spin-1 boson can't decay to two photons, by the Landau-Yang theorem), and an electrically neutral particle that couples to photons through intermediate electrically charged states.

A second round of experimental data earlier this year confirmed initial indications of a "bump" that was too mild to be sure evidence of a new particle, but could conceivably be one, with a possible secondary Z boson-photon bump at 375 GeV.

The data are inconclusive at this point on the question of whether this resonance is narrow or wide (resonances are graphed as bell-shaped curves with a peak at the mass of the particle; the width of the curve at half of its maximum height is inversely proportional to the mean lifetime of the particle).
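
In more conventional notation (a standard textbook relation, not anything specific to the papers discussed here), the resonance line shape and the width-lifetime connection look like this:

% Relativistic Breit-Wigner line shape for a resonance of mass M and total width \Gamma:
\frac{d\sigma}{dm} \;\propto\; \frac{1}{\left(m^2 - M^2\right)^2 + M^2\Gamma^2} ,
% and the full width at half maximum \Gamma is inversely related to the mean lifetime \tau:
\Gamma\,\tau \;=\; \hbar .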

This paper from January, which was updated last week, makes the case for a resonance with a narrow width and spin-0, at 4 sigma local significance (before considering look elsewhere effects), that also couples to gluons and hence must involve strong force color charge somewhere, whether in the particle itself or in a loop of colored particles through which it is produced.  It notes that:
1. The required cross section to fit the anomaly reported by ATLAS is in tension with the 8 TeV results, as well as the required cross section to fit the CMS anomaly. 

2. Combining all data sets yields a local significance of ∼ 4.0σ for a 750 GeV spin-0 resonance produced through couplings to gluons or heavy quarks. While quoted statistical significances must be taken with a grain of salt, as they are obtained using binned data without inclusion of systematic errors, I find the combination yields a net increase in the statistical significance as compared to the ATLAS data alone.

3. The spin-2 interpretation is mildly disfavored compared to a spin-0 mediator. This is due to correlations in the photon momenta which results in a relative decrease in the ATLAS acceptance compared to CMS.

4. The combination of ATLAS and CMS 13 TeV data has a slight statistical preference for a spin-0 mediator with a natural width much smaller than the experimental resolution, as compared to the Γ = 45 GeV preferred by ATLAS alone. When the 8 TeV data is added, there is a slight statistical preference for a wide resonance over the narrow option, as it is easier to hide a wide resonance in the 8 TeV background. . . .

When considering the 750 GeV diphoton excess, the theoretical community must balance its natural exuberance with the recognition that the statistical size of the anomalies are very small. As a result, any further slicing of data will yield at best modest statistical preferences for the phenomenological questions that we in the community want answers to. That said, given that this excess is the most significant seen at the LHC since the discovery of the 125 GeV Higgs, and the resulting avalanche of theoretical papers which shows no sign of slowing, it is still a useful exercise to carefully analyze the available data and determine what we do – and do not – know at this stage. While there is some useful information to be gleaned from this exercise, we are fortunate that the continuation of Run-II will be upon us shortly.

From the existing data, we can conclude the following:

1. Explaining the anomaly through a spin-0 resonance is preferred over a spin-2 mediator, though this preference is less than 1σ in most cases.

2. Combining the 8 and 13 TeV data from ATLAS and CMS sets yields a ∼ 4.0σ statistical preference for a signal of ∼ 4(10) fb, assuming a narrow (wide) spin-0 resonance. This ignores the look-elsewhere effect, as discussed. Given that the significance of my fits to individual ATLAS and CMS data-sets are underestimates when compared to the full experimental results, it is possible that the actual statistical preferences are larger than these quoted values. However this would require a combined analysis performed by the ATLAS and CMS Collaborations.

3. The cross sections needed for the Atlas13, Cms13, and Cms13/0T data sets are incompatible at the two sigma level, though they agree in mass. The most straightforward reading of this (while maintaining a new physics explanation for the anomalies) is that the larger Atlas13 cross section constitutes a modest upward fluctuation from the “true” cross section, which is more in line with the Cms13 value. The reverse is also possible of course, but would bring the diphoton excess in the 13 TeV data in greater tension with the 8 TeV null results.

4. When considering only the 13 TeV data, the Cms13 data does not share the Atlas13 preference for a 45 GeV width. I find that the “wide” interpretation of the resonance has a statistical significance in the combo13 data set which is approximately 0.5σ less likely than the “narrow” interpretation. The corresponding likelihood ratio shows no preference for either width. Thus, while the theoretical challenge of a wide resonance may be appealing, the data in no way requires any new physics explanation to have the unusually large width of Γ ∼ 45 GeV.

5. Combining the 13 TeV data with the 8 TeV, I find that gluon-initiated mediators are preferred, due to having the largest ratio of relevant p.d.f.s. In particular, the combination of all six data sets for a gluon-initiated narrow resonance has the same statistical preference for a signal as the Combo13 data alone does, though the best-fit cross section decreases slightly when the 8 TeV data is added (∼ 4.0σ for a ∼ 4 fb signal). In the narrow width assumption, heavy quark-initiated mediators have slightly smaller statistical preference, and a light-quark coupling has a fairly significant decrease in statistical preference, indicating a more serious conflict between the 13 and 8 TeV data.

6. Combining the 13 and 8 TeV data sets under the Γ = 45 GeV spin-0 model increases the statistical preference for signal as compared to the Combo13 result, as the excess can be more easily absorbed by the background model here. Combining all the data sets in this way results in a ∼ 3.5σ preference for a ∼ 10 fb signal (a 0.5σ increase over the Combo13 wide-resonance fit), with a likelihood ratio of ∼ 20 rejecting the narrow interpretation. Again, these statistical preferences are relatively small, thus theorists are free to explore the options, but should keep in mind that the experimental results are inconclusive.

The conclusions of this paper are perhaps not a surprise. There is clear tension between the Atlas13 and Cms13 results, as well as with the non-observation in 8 TeV data. The question of the width is especially puzzling; but further slicing of the data, as I have demonstrated, leads to somewhat conflicting results which do not have a clear statistical preference towards any one solution. I note that if the ATLAS excess is indeed an upward fluctuation from a signal which is more in line with the Cms13 value, then perhaps this could also give a spurious signal of large width. However, the true answers will only come with more data, though I note that, if the signal is indeed real, but on the order of 4 fb, then we may need 10-20 fb−1 for a single experiment to have 5σ discovery.
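
To put those cross sections in perspective (a back-of-the-envelope calculation using the figures quoted above, not a number from the paper), the expected count of signal events is just the cross section times the integrated luminosity, before accounting for detector acceptance and efficiency:

N_{\text{signal}} \;=\; \sigma \times \int \mathcal{L}\,dt
\;\approx\; 4~\text{fb} \times 10~\text{fb}^{-1} \;=\; 40~\text{events} .

Given the diphoton background underneath the peak, a few tens of raw signal events is roughly what the quoted 10-20 fb^-1 estimate for a single-experiment 5σ discovery reflects.
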
Some theorists have also hypothesized that this is really a quad-photon resonance with pairs of photons so close to each other that they register as single photons in the detectors.

Despite the superficial similarities of a spin-0 750 GeV particle to a Higgs boson, the resonance is very un-Higgs-like.  For example, the primary decay of the Higgs boson is to b quark pairs, even though the diphoton channel is easier to see.  And, pairs of b quarks each carrying about 375 GeV of energy (nearly all of it momentum, since the b quark rest mass is only about 4 GeV) would be hard to miss.

Lubos who is a bit generous about fitting the data to numbers because he thinks this is probably real (he gives it a 50% chance of being real), also asks his readers to:
Recall that the ATLAS diphoton graph also shows an excess near 1.6TeV, close to two times 750GeV, and 375GeV is just a bit higher than twice the top quark mass, 173GeV.
I'm not as impressed with "almost" twice the mass, particularly when the 750 GeV v. 375 GeV pairing seems to be exact (although he also points to a possible 340 GeV mass for the lower bump). Twice the top quark mass is about 346 GeV, and twice that is about 692 GeV, which is pretty much impossible to reconcile with 750 GeV unless you have a composite particle with a lot of strong force binding energy involved in making toponium, and very little more needed to go further and make a toponium molecule or top tetraquark at 750 GeV that in turn almost instantly annihilates into photons.

But, of course, if you had a top quark-top antiquark composite particle, you ought to be seeing decays predominantly to pairs of b quarks, and not to photons and Z bosons, the way you do when pairs of top quarks that aren't bound to each other in hadrons are produced.  To get those decays, you'd need the toponium to undergo a matter-antimatter annihilation to photons before the constituent top quarks could undergo weak force decay.  And, you'd probably want a process that produces the toponium from extremely high energy gluon fusion or something like that.  (Glueball annihilation is also naively attractive, but the predicted glueball masses aren't anywhere near a 750 GeV resonance; they're much too light.)

The diphoton channel at that energy level doesn't have a lot of potential background noise, so even a pretty modest number of diphoton detections at that energy scale could be significant.

But, there are hundreds of papers, rather than just a few leading ones, because a resonance at this mass, with the characteristics it has and the lack of strong signals in other channels at the same energy level, is not very well motivated in any of the leading extensions of the Standard Model of particle physics.  Also, the different data sets used to infer its existence are in some tension with each other.

For the most part, models that can accommodate a particle that decays to a 750 GeV diphoton resonance while having few other decay modes, require the existence of whole classes of other new particles to go with it (even if it is composite, rather than fundamental).  This gives rise to a very baroque new beyond the Standard Model theory.

For that reason, despite the notability of the bump statistically, my money is on this turning out to be a statistical fluke or systematic experimental error, rather than an actual new particle.

We'll learn if this prediction is right or not within a year or two, depending on how many more weasels, baguettes, and other mishaps interfere with the LHC experimental schedule.

Did Human Female Pelvis Evolution Create Another Biological Clock?

The obstetrical dilemma hypothesis states that the human female pelvis represents a compromise between designs most suitable for childbirth and bipedal locomotion, respectively. This hypothesis has been challenged recently on biomechanical, metabolic, and biocultural grounds. Here we provide evidence for the pelvis’ developmental adaptation to the problem of birthing large-headed/large-bodied babies. 
We show that the female pelvis reaches its obstetrically most adequate morphology around the time of maximum fertility but later reverts to a mode of development similar to that of males, which significantly reduces the dimensions of the birth canal. These developmental changes are likely mediated by hormonal changes during puberty and menopause, indicating “on-demand” adjustment of pelvic shape to the needs of childbirth.
Via John Hawks.

Everyone knows that women can only have children from puberty to menopause, and most people who have lived in the social world of college students know that female fertility declines markedly with age well before menopause.  All of this, however, has generally been understood as directly hormonal and biochemical, not as mechanical and morphological change driven by hormonal cues.

The new paper suggests that shifts in women's hips over their life cycle provide an independent biological clock that influences fertility and creates a biological bias towards having children in your 20s, rather than your 30s or later.  Of course, medically safe C-sections can now bypass the limits of the birth canal on safe births later in life.  But, this study ought to give pause to women who think that vaginal births for mothers of advanced maternal age are desirable because they are more "natural."

The issue of large-headed/large-bodied babies is a big one because many evolutionary anthropologists see large headed babies as a critical piece of evolution that has facilitated higher IQ in modern humans than in other primates.  And, it is widely accepted that higher IQ is a key factor in modern human selective fitness.

Given the ongoing relevance of IQ and the existence of safe C-sections, is it conceivable that we could evolve to a state where safe vaginal births are no longer possible because selection for high IQ leads to selection for large headed babies and hip size is no longer a meaningful constraint on that tendency?

This paper is also a welcome reminder that genetically driven phenotypes are not necessarily an all or nothing thing.  People's bodies change over their lifespans and our genes are clever enough to adapt one way at one age and another at a later age for maximum selective fitness.

Tuesday, May 3, 2016

Planet Nine Lacks A Good Origin Story

While a variety of investigators have been hot on the trail of determining where Planet Nine might be and what characteristics it might have, all of the origin stories for it are low probability ones.

Dilution Or Selection?

A new paper from Nature focuses on 51 ancient genomes from the Upper Paleolithic.

One notable observation is that Neanderthal admixture falls from 3%-6% in the early Upper Paleolithic to current levels of about 2%. This is attributed to selection, although dilution with less admixed populations could produce the same top line result. (Oase1 is an outlier at 10% from 40kya).
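
To make the dilution alternative concrete, here is a toy calculation (my own illustrative numbers, not from the paper): if later admixture came from populations with essentially no Neanderthal ancestry, the surviving Neanderthal fraction simply scales with the replacement fraction.

def replacement_fraction_needed(f_old, f_new, f_incoming=0.0):
    """Toy two-population dilution model: what fraction m of ancestry must come
    from an incoming population with Neanderthal fraction f_incoming for the
    admixed population to drop from f_old to f_new?
    Solves f_new = (1 - m) * f_old + m * f_incoming."""
    return (f_old - f_new) / (f_old - f_incoming)

# Illustrative numbers only: going from ~4.5% (mid-range early Upper Paleolithic)
# down to ~2% (modern) would require roughly 56% replacement by a Neanderthal-free source.
print(round(replacement_fraction_needed(0.045, 0.02), 2))  # -> 0.56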

The paper's analysis also suggests that most Neanderthal admixture is quite old (long predating cohabitation in Europe) and decayed slowly and steadily through slight natural selection, presumably in West Asia or Southwest Asia rather than in Europe. Indeed, the model is consistent with zero admixture between modern humans and Neanderthals in Europe itself.

Eurogenes captures many observations from the comments.

Broad brush, modern populations are really only in continuity with historic populations in Europe back to the Epipaleolithic era ca. 14,000 years ago, when Europe's population was replaced following the Last Glacial Maximum and the Western Hunter-Gatherer (WHG) autosomal population started to gel.  Earlier individuals loosely cluster around MA1 from 24,000 years ago, with one individual from ca. 19,000 years ago looking like a transitional figure.

There is one Epipaleolithic individual from Northern Italy ca. 14,000 years ago with Y-DNA R1b.  There are stronger Asian affinities in European individuals than would be expected for most of the Upper Paleolithic.

The picture from uniparental ancient DNA until now has been one of a very narrow, homogeneous gene pool with a small effective population size.  To the extent that this is true, it is a post-LGM phenomenon, as the ancient autosomal DNA over the tens of thousands of years and thousands of miles spanned by the sample shows only fairly loose affinity.

Two out of 21 pre-LGM samples are mtDNA-M (now found almost exclusively in East Eurasia).  Of 13 pre-LGM Y-DNA samples, three are C (now found almost exclusively in East Eurasia) and four are haplogroups that precede the East-West divide in Y-DNA haplogroups.

Saturday, April 30, 2016

Frank Wilczek Teases Renormalization Breakthrough?

As I discussed recently, renormalization is a technique at the heart of modern Standard Model physics, and our inability to renormalize what we naively expect the equations of quantum gravity should look like is a major conundrum in theoretical physics.

Peter Woit, at his blog "Not Even Wrong," calls attention to a recent interview with Frank Wilczek, in which Wilczek suggests that he is right on the brink of making a major breakthrough.

This breakthrough sounds very much like he and his collaborators have finally managed to develop a mathematically rigorous understanding of renormalization, something that has eluded some of the brightest minds in physics for four decades. First and foremost among them was Richard Feynman, who was instrumental in inventing renormalization in the first place and who frequently and publicly expressed concerns about this foundational technique's lack of mathematical rigor.

This topic came up not so long ago in a discussion in the comments at the 4graviton's blog. The blog's author (a graduate student in theoretical physics) noted in that regard that "Regarding renormalization, the impression I had is that defining it rigorously is one of the main obstructions to the program of Constructive Quantum Field Theory." The linked survey article on the topic concludes as follows:
It is evident that the efforts of the constructive quantum field theorists have been crowned with many successes. They have constructed superrenormalizable models, renormalizable models and even nonrenormalizable models, as well as models which fall outside of that classification scheme since they apparently do not correspond to some classical Lagrangian. And they have found means to extract rigorously from these models physically and mathematically crucial information. In many of these models the HAK and the Wightman axioms have been verified. In the models constructed to this point, the intuitions/hopes of the quantum field theorists have been largely confirmed. 
However, local gauge theories such as quantum electrodynamics, quantum chromodynamics and the Standard Model — precisely the theories whose approximations of various kinds are used in a central manner by elementary particle theorists and cosmologists — remain to be constructed. These models present significant mathematical and conceptual challenges to all those who are not satisfied with ad hoc and essentially instrumentalist computation techniques.

Why haven’t these models of greatest physical interest been constructed yet (in any mathematically rigorous sense which preserves the basic principles constantly evoked in heuristic QFT and does not satisfy itself with an uncontrolled approximation)? Certainly, one can point to the practical fact that only a few dozen people have worked in CQFT. This should be compared with the many hundreds working in string theory and the thousands who have worked in elementary particle physics. Progress is necessarily slow if only a few are working on extremely difficult problems. It may well be that patiently proceeding along the lines indicated above and steadily improving the technical tools employed will ultimately yield the desired rigorous constructions.

It may also be the case that a completely new approach is required, though remaining within the CQFT program as described in Section 1, something whose essential novelty is analogous to the differences between the approaches in Section 2, 3, 5 and 6. It may even be the case that, as Gurau, Magnen and Rivasseau have written, “perhaps axiomatization of QFT might have been premature”; in other words, perhaps the Wightman and HAK axioms do not provide the proper mathematical framework for QED, QCD, SM, even though, as the constructive quantum field theorists have so convincingly demonstrated, that framework is quite suitable for so many models of such varying types and, as the algebraic quantum field theorists have just as convincingly demonstrated, that framework is flexible and powerful when dealing with the conceptual and mathematical problems in QFT which go beyond mathematical existence. 
But it is possible that the mathematically and conceptually essential core of a rigorous formulation of QFT that can include the missing models lies somewhere else. Certainly, there are presently many attempts to understand aspects of QFT from the perspective of mathematical ideas which are quite unexpected when seen from the vantage point of current QFT and even from the vantage point of quantum theory itself, as rigorously formulated by von Neumann and many others. These speculations, as suggestive as some may be, are currently beyond the scope of this article.
The 4gravitons author may also have been referring to papers like this one (suggesting a way to rearrange an important infinite series widely believed to be divergent into a set of several convergent infinite series).

Wilczek says in the interview (conducted the day after his most recent preprint was posted):
What I’ve been thinking about today specifically is something of a potential breakthrough in understanding our fundamental theories of physics. We have something called a standard model, but its foundations are kind of scandalous. We have not known how to define an important part of it mathematically rigorously, but I think I have figured out how to do that, and it’s very pretty. I’m in the middle of calculations to check it out....
It’s a funny situation where the theory of electroweak or weak interactions has been successful when you calculate up to a certain approximation, but if you try to push it too far, it falls apart. Some people have thought that would require fundamental changes in the theory, and have tried to modify the theory so as to remove the apparent difficulty. 
What I’ve shown is that the difficulty is only a surface difficulty. If you do the mathematics properly, organize it in a clever way, the problem goes away. 
It falsifies speculative theories that have been trying to cure a problem that doesn’t exist. It’s things like certain kinds of brane-world models, in which people set up parallel universes where that parallel universe's reason for being was to cancel off difficulties in our universe—we don’t need it. It's those kinds of speculations about how the foundations might be rotten, so you have to do something very radical. It’s still of course legitimate to consider radical improvements, but not to cure this particular problem. You want to do something that directs attention in other places.
There are other concepts in the standard model whose foundations aren't terribly solid.  But, I'd be hard pressed to identify a better fit to his comments than renormalization. Also, his publications linking particle physics to condensed matter physics make this a plausible target, because the modern understanding of renormalization was honed by particle physicists in close analogy with condensed matter physics. 

His first new preprint in nearly two years came out earlier this month and addresses this subject, so I took a look to see if I could unearth any more hints lurking there. Here's what he says about renormalization in his new paper:
- the existence of sharp phase transitions, accompanying changes in symmetry. 
Understanding the singular behavior which accompanies phase transitions came from bringing in, and sharpening, sophisticated ideas from quantum field theory (the renormalization group). The revamped renormalization group fed back in to quantum field theory, leading to asymptotic freedom, our modern theory of the strong interaction, and to promising ideas about the unification of forces. The idea that changes in symmetry take place through phase transitions, with the possibility of supercooling, is a central part of the inflationary universe scenario.
But, there was nothing in the paper that obviously points in the direction I'm thinking, so perhaps I am barking up the wrong tree about this breakthrough.

Friday, April 29, 2016

Martyr Weasel Shuts Down LHC, Japanese Satellite Kills Itself

* In a maneuver bearing an uncanny resemblance to Luke Skywalker's single-handed defeat of the Death Star with a tiny rebel fighter, a seemingly ordinary weasel without any tools whatsoever has managed to single-handedly shut down the entire Large Hadron Collider, the largest machine ever built by man in the history of the world.

Really!

It will take days, if not weeks, to repair the damage.

Alas, the heroic weasel paid with his life for his anti-science crusade.

* In other bad news for science, the Japanese Hitomi space satellite, which was on a ten-year, $286 million mission, catastrophically failed due to an unforced error, just five weeks after it was launched and after just three days on the job making observations.
Confused about how it was oriented in space and trying to stop itself from spinning, Hitomi's control system apparently commanded a thruster jet to fire in the wrong direction — accelerating, rather than slowing, the craft's rotation.

On 28 April, the Japan Aerospace Exploration Agency (JAXA) declared the satellite, on which it had spent ¥31 billion (US$286 million), lost. At least ten pieces — including both solar-array paddles that had provided electrical power — broke off the satellite’s main body.

Hitomi had been seen as the future of X-ray astronomy. “It’s a scientific tragedy,” says Richard Mushotzky, an astronomer at the University of Maryland in College Park.
One of the greatest challenges of "rocket science" is that you absolutely, positively have to get it right the first time because once something goes wrong in space, generally speaking, there is very little that you can do to fix the problem.

Wednesday, April 27, 2016

Bounds On Discreteness In Quantum Gravity From Lorentz Invariance

Sabine Hossenfelder has an excellent little blog post discussing the limitations that the empirical reality of Lorentz invariance, established to an extreme degree of precision, places upon quantum gravity theories with a minimum unit of length.  Go read it.

Key takeaway points:

1.  The post defines some closely related bits of terminology related to this discussion (emphasis added):
Lorentz-invariance is the symmetry of Special Relativity; it tells us how observables transform from one reference frame to another. Certain types of observables, called “scalars,” don’t change at all. In general, observables do change, but they do so under a well-defined procedure that is by the application of Lorentz-transformations. We call these “covariant.” Or at least we should. Most often invariance is conflated with covariance in the literature. 
(To be precise, Lorentz-covariance isn’t the full symmetry of Special Relativity because there are also translations in space and time that should maintain the laws of nature. If you add these, you get Poincaré-invariance. But the translations aren’t so relevant for our purposes.)
Lorentz-transformations acting on distances and times lead to the phenomena of Lorentz-contraction and time-dilatation. That means observers at relative velocities to each other measure different lengths and time-intervals. As long as there aren’t any interactions, this has no consequences. But once you have objects that can interact, relativistic contraction has measurable consequences.
2.  A minimum unit of length is attractive because it tames infinities in quantum field theories associated with extremely high momentum.  These infinities in the quantum field theory are particularly hard to manage mathematically if you naively try to create a quantum field theory that mimics Einstein's equations of general relativity.

In general, neither the lattice methods used to describe low energy QCD systems (which are used because the usual approach based upon "renormalization" requires too many calculations to produce an accurate result at low energies), nor the "renormalization" technique that is used to make calculations for all three of the Standard Model forces (although, in QCD, usually only for high energy systems), works for quantum gravity.

A naive lattice method applied to quantum gravity is attractive because it makes it possible to do calculations without having to rely on the "renormalized" infinite series approximation used for QED, the weak nuclear force, and high energy QCD (an approximation that is too hard to calculate with in the case of low energy QCD). But in quantum gravity, unlike the Standard Model QFTs, a naive lattice method doesn't work because it gives rise to significant Lorentz invariance violations that are not observed in nature.

Renormalization doesn't work for quantum gravity either, for reasons discussed in the footnote below.

3.  There are a couple of loopholes to the general rule that a minimum length scale implies a Lorentz invariance violation. One forms a basis for string theory and derives from some of the special features of the Planck length.  The other involves "Causal Sets," one of the research programs in the "Loop Quantum Gravity" family of approaches to quantum gravity.

Guess where the action is in quantum gravity research.

4.  A couple of experimental data points that strongly constrain any possible Lorentz invariance violations in the real world are provided:
A good example is vacuum Cherenkov radiation, that is the spontaneous emission of a photon by an electron. This effect is normally – i.e. when Lorentz-invariance is respected – forbidden due to energy-momentum conservation. It can only take place in a medium which has components that can recoil. But Lorentz-invariance violation would allow electrons to radiate off photons even in empty space. No such effect has been seen, and this leads to very strong bounds on Lorentz-invariance violation. 
And this isn’t the only bound. There are literally dozens of particle interactions that have been checked for Lorentz-invariance violating contributions with absolutely no evidence showing up. Hence, we know that Lorentz-invariance, if not exact, is respected by nature to extremely high precision. And this is very hard to achieve in a model that relies on a discretization.
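
For what it's worth, the kinematic argument behind the vacuum Cherenkov bound can be written in a couple of lines (standard special relativity, nothing specific to Hossenfelder's post):

% e(p) -> e(p') + gamma(k), with exact Lorentz invariance and all particles on shell:
p \;=\; p' + k
\;\;\Rightarrow\;\;
m_e^2 \;=\; m_e^2 + 2\,p'\!\cdot k
\;\;\Rightarrow\;\;
p'\!\cdot k \;=\; 0 .
% But for a massive electron and a photon with E_\gamma > 0,
p'\!\cdot k \;=\; E'_e E_\gamma - \vec p\,'\!\cdot\vec k
\;\geq\; E_\gamma\left(E'_e - |\vec p\,'|\right) \;>\; 0 ,
% a contradiction, so the decay cannot occur.  A Lorentz-violating (e.g. discretized)
% dispersion relation can remove this obstruction above some threshold energy, which is
% why the non-observation of vacuum Cherenkov radiation is such a strong constraint.
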
Footnote on Renormalization In Quantum Gravity

Gravity is also not renormalizable, for reasons that are a bit more arcane.  To understand this, you first have to understand why quantum field theories use renormalization in the first place.

Quantum Field Theory Calculations

Pretty much all calculations in quantum field theory involve adding up an infinite series of path integrals (each of which sums the values of a function related to the probability of a particle taking a particular path from point A to point B), representing all possible paths from point A to point B, with the simpler paths (with fewer "loops") generally making a larger contribution to the total than the more complicated paths (with more "loops").

In practice, you calculate the probability you're interested in using as many terms of the infinite series as you can reasonably manage to calculate and then make an estimate of the uncertainty in the final result that you calculate as a result of leaving out all of the rest of the terms in the infinite series.

It turns out that when you do these calculations for the electromagnetic force (quantum electrodynamics) and the weak nuclear force, calculating a pretty modest number of terms provides an extremely accurate answer because the terms in the series quickly get smaller.  As a result, these calculations can usually be done to arbitrary accuracy up to or beyond the current limits of experimental accuracy (beyond which we do not have sufficiently accurate measurements of the relevant fundamental constants of the Standard Model, making additional precision in the calculations spurious).

But, when you do comparable calculations involving the quarks and gluons of QCD, calculating a very large number of terms still leaves you with a wildly inaccurate calculation because the terms get smaller much more slowly.  There are a variety of reasons for this, but one of the main ones is that gluon-gluon interactions that don't exist in the analogous QED equations make the number of possible paths that contribute materially to the ultimate outcome much greater.
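
Schematically (this is the standard textbook picture, not anything specific to the papers discussed here), each observable is computed as a power series in the relevant coupling constant, and the size of that coupling controls how quickly the terms shrink:

\mathcal{O} \;\approx\; \sum_{n=0}^{N} c_n\,\alpha^{\,n} \;+\; \mathcal{O}\!\left(\alpha^{\,N+1}\right) .
% For QED, \alpha ~ 1/137, so each extra loop order is suppressed by roughly two orders
% of magnitude and a handful of terms gives exquisite precision.  For QCD at low energies,
% \alpha_s is of order one, so truncating the series after a manageable number of terms
% leaves a large, slowly shrinking error.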

(Of course, another possibility is that QCD calculations are hard not because they are inherently more difficult, but instead mostly because lots of terms in the infinite series cancel out in a manner that is not obvious given the way that we usually arrange the terms in the infinite series we are using to calculate our observables.  If that is true, then if we cleverly arranged those terms in some other order, we might discern a way to cancel many more of them out against each other subject to only very weak constraints. This is the basic approach behind the amplituhedron concept.)

While this isn't the reason that quantum gravity isn't renormalizable, even if it were renormalizable, renormalization still wouldn't, in all likelihood, be a practical way to do quantum gravity calculations, because gravitons in quantum gravity theories can interact with other gravitons, just as gluons can interact with other gluons in QCD, which means that the number of terms in the infinite series that must be included to get an accurate result is very, very large.

Renormalization

The trouble is that almost every single one of the path integrals you have to calculate produces an answer that is nonsense unless you employ a little mathematical trickery that works for Standard Model physics calculations, even though it isn't entirely clear under what circumstances the trick works in general, because it hasn't been proven in a mathematically rigorous way to work for all conceivable quantum field theories.
Naively, even the simplest quantum field theories (QFT) are useless because the answer to almost any calculation is infinite. . . .  The reason we got this meaningless answer was the insistence on integrating all the way to infinity in momentum space. This does not make sense physically because our quantum field theory description is bound to break at some point, if not sooner then definitely at the Planck scale (p ∼ M Planck). One way to make sense of the theory is to introduce a high energy cut off scale Λ where we think the theory stops being valid and allow only for momenta smaller than the cutoff to run in the loops. But having done that, we run into trouble with quantum mechanics, because usually, the regularized theory is no longer unitary (since we arbitrarily removed part of the phase space to which there was associated a non-zero amplitude.) We therefore want to imagine a process of removing the cutoff but leaving behind “something that makes sense.” A more formal way of describing the goal is the following. We are probing the system at some energy scale ΛR (namely, incoming momenta in Feynman graphs obey p ≤ ΛR) while keeping in our calculations a UV cutoff Λ (Λ ≫ ΛR because at the end of the day we want to send Λ → ∞.) If we can make all physical observables at ΛR independent of Λ then we can safely take Λ → ∞.
It turns out that there is a way to make all of the physical observables independent of the cutoff in the quantum field theories of the Standard Model (and in a considerably broader class of possible theories that are similar in form to the QFTs of the Standard Model).
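
A minimal example of the kind of divergence and cutoff described in the quoted passage (a standard one-loop textbook integral, not tied to any particular Standard Model process) looks like this:

% A typical one-loop contribution, written in Euclidean momentum space, diverges logarithmically:
\int^{\Lambda} \frac{d^4 k_E}{(2\pi)^4}\,\frac{1}{\left(k_E^2 + m^2\right)^2}
\;=\; \frac{1}{16\pi^2}\left[\ln\!\frac{\Lambda^2 + m^2}{m^2} \;-\; \frac{\Lambda^2}{\Lambda^2 + m^2}\right]
\;\xrightarrow{\;\Lambda \gg m\;}\; \frac{1}{16\pi^2}\,\ln\!\frac{\Lambda^2}{m^2} .
% Renormalization amounts to absorbing this \Lambda-dependence into the measured couplings
% and masses, so that physical predictions no longer depend on the arbitrary cutoff.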

A side effect of renormalization, which gives us confidence that this is a valid way to do QFT calculations, is that when you renormalize, key physical constants used in your calculations take on different values depending upon the momentum scales of the interacting particles.  In other words, these renormalized constants are not energy scale invariant.

Amazingly, this unexpected quirk, arising from a mathematical shortcut that we invented strictly to make an otherwise intractable calculation (one that we aren't even really sure is mathematically proper if you are being rigorous), and which we initially assumed was a bug, turns out to be a feature.  In real life, we actually do observe the basic physical constants of the laws of nature changing with the energy scale of the particles involved in an experiment.
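
The canonical example of this experimentally observed running (a standard leading-order textbook formula, not something from the papers discussed above) is the one-loop running of the strong coupling constant:

% Leading-order (one-loop) running of the QCD coupling with momentum scale Q, where n_f is
% the number of quark flavors light enough to run in the loops and \Lambda_{QCD} ~ 0.2 GeV
% sets the scale at which the coupling blows up:
\alpha_s\!\left(Q^2\right) \;=\; \frac{12\pi}{\left(33 - 2 n_f\right)\,\ln\!\left(Q^2/\Lambda_{\rm QCD}^2\right)} .
% The measured value really does fall as Q grows (asymptotic freedom), which is the
% experimentally observed "running" referred to above.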

Why Quantum Gravity Isn't Renormalizable

But, back to the task at hand: the unavailability of renormalization as a tool in quantum gravity theories.

Basically, the problem is that in the case of QED, the weak force, and QCD, there is nothing inherently unique about an extremely high energy particle, so long as it doesn't have infinite energy. It isn't different in kind from a similar high energy particle with just a little bit less energy. The impact on the overall result of the extremely high energy states that are ignored in these calculations is tiny, and its magnitude is easily determined, which sets the margin of error on the calculation. Those extremely high energy states simply correspond to possible paths of the particles whose behavior you are calculating that are so unlikely to occur during the lifetime of the universe so far that it is basically artificial to even worry about them; if you set the arbitrary energy cutoff high enough, they are so ephemeral that they basically never happen anyway.

In contrast, with a naive set of quantum gravity formulas based directly upon general relativity, the concentration of extremely high energies into very small spaces has a real physical effect which is very well understood in classical general relativity. 

If you pack too much energy into too small a space in a quantum gravity equation derived naively from the equations of general relativity, you get a black hole: a physical state that is different in kind, that generates infinities even in the non-quantum version of the theory, and that corresponds to a physical reality from which no light can escape (apart from "Hawking radiation").
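
A quick way to see the scale at which this becomes unavoidable (standard dimensional analysis, not specific to any one quantum gravity paper):

% Packing an energy E = Mc^2 into a region smaller than its Schwarzschild radius
r_s \;=\; \frac{2GM}{c^2}
% produces a black hole.  Setting r_s comparable to the quantum (Compton) size of the same
% object, \lambda = \hbar/(Mc), singles out the Planck mass and the Planck length:
M_{\rm Pl} \;\sim\; \sqrt{\frac{\hbar c}{G}} \;\approx\; 1.2\times 10^{19}~\text{GeV}/c^2 ,
\qquad
\ell_{\rm Pl} \;\sim\; \sqrt{\frac{\hbar G}{c^3}} \;\approx\; 1.6\times 10^{-35}~\text{m} .
% Above this scale, the very high momentum states in the loop integrals correspond to
% black hole configurations rather than to ordinary particle states.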

For reasons related to the way that the well known formula for the entropy of a black hole affects the mathematical properties of the terms in the calculation that correspond to very high energy states, it turns out that you can't simply ignore the contribution of high energy black hole states in quantum gravity calculations the way that you can ignore very high energy states in ordinary Standard Model quantum field theories and similar theories with the same mathematical properties (mostly other "Yang-Mills theories" beyond those found in the Standard Model).

If you could ignore these high energy states, you could then estimate the price in lost precision that you pay for ignoring them, based upon the energy scale cutoff you use to renormalize, the way that you can for calculations involving the other three Standard Model forces.  

But, it turns out that black hole states in quantum gravity are infinite in a physically real way that can't be ignored, rather than in a mathematically artificial way that doesn't correspond to physical reality, as is the case with the other three Standard Model forces. And, of course, black holes don't actually pop into existence each of the myriad times that two particles interact with each other via gravity.  

So, something is broken, and that something looks like it needs to use either the string theory loophole related to the Planck length, or the Causal Sets loophole, to solve the problem with a naive quantum gravity formulation based on Einstein's equations.  The former is the only known way to get a minimum length cutoff, and the latter is another way to use discrete methods.

One More Potential Quantum Gravity Loophole

Of course, if Einstein's equations are wrong in some respect, it is entirely possible that the naive quantum gravity generalization of the correct modification of Einstein's equations is renormalizable, and that the fact that the math doesn't work with Einstein's equations is a big hint from nature that we have written down the equations for gravity in a way that is not quite right, even though it is very close to the mark.

The trick to this loophole, of course, is to figure out how to modify Einstein's equations in a way that doesn't impair the accuracy of general relativity in the circumstances where it has been demonstrated to work experimentally, while tweaking the results in some situation where there are not tight experimental constraints, ideally in a respect that would explain dark matter, dark energy, inflation, or some other unexplained problems in physics as well.

My own personal hunch is that Einstein's equations are probably inaccurate as applied to high mass systems that lack spherical symmetry, because the way that these classical equations implicitly model graviton-graviton interactions and localized gravitational energy associated with gravitons is at odds with how nature really works.

It makes sense that Einstein's equations, which have as a fundamental feature the fact that the energy of the gravitational field can't be localized, can't be formulated in a mathematically rigorous way as a quantum field theory in which gravity arises from the exchange of gravitons, which are the very embodiment of localized gravitational energy.  And, it similarly makes sense that in a modification of Einstein's equations in which the energy of the gravitational field could be localized, that some of these problems might be resolved.

Furthermore, I suspect that this issue (1) produces virtually all of the phenomena that have been attributed to dark matter and (2) much (if not all) of the phenomena attributed to dark energy, and that correcting it (3) would make the quantum gravity equations possible to calculate with, (4) might add insights into cosmology and inflation, and (5) when implemented in a quantum gravity context, might even tweak the running of the three Standard Model gauge coupling constants with energy scale in a manner that leads to gauge unification somewhere around the SUSY GUT scale.  (I don't think that this is likely to provide much insight into the baryon asymmetry of the universe, baryogenesis, or leptogenesis.)

But, I acknowledge that I don't have the mathematical capacity to demonstrate that this is the case or to evaluate papers that have suggested this possibility with the rigor of a professional.

Monday, April 25, 2016

More Documentation Of Rapid Y-DNA Expansions

* Eurogenes reports on a new (closed access) paper in Nature Genetics documenting a number of punctuated expansions of Y-DNA lineages, including an expansion of R1a-Z93 ca. 2500 BCE to 2000 BCE in South Asia that is commonly associated with the Indo-Aryan expansion.  Ancient DNA argues for an origin of this Y-DNA lineage on the steppe, near a strong candidate for the Proto-Indo-European Urheimat, rather than in South Asia.

Holocene expansions of Y-DNA H1-M52 (particularly common among the Kalash people of Pakistan) and L-M11 in South Asia are also discussed (incidentally, L-M11 is also common among the Kalash, which makes them plausible candidates for carrying a Harappan substrate in this genetically unique and linguistically Indo-Aryan population).  See also a nice table from a 2003 article on South Asian genetics showing the frequencies of both H1-M52 and L-M11 in various populations.


Locus M-11 is one of several defining loci for haplogroup L-M20, whose geographic range is illustrated in the map above, taken from the L-M20 Wikipedia article linked above.

Naively, H1-M52 and L-M11 both look to me like pre-Indo-Aryan Harappan expansions.

Some people have argued that the linguistic and geographic distribution of Y-DNA subclade L-M76 points to an indigenous South Asian origin of the Dravidian languages, which would be consistent with the current linguistic status of the Dravidian languages as not belonging to any larger linguistic family (although it is inconsistent with the relative youth of the Dravidian language family, evident in the close linguistic similarities between its member languages, which has to be explained by some other means). However, given the possibility of language shift, it is hard to draw a definitive conclusion from genetics alone.

Also, honestly, the distribution of H1-M52 is probably a better fit to the Dravidian linguistic range than L-M76.

* Dienekes' Anthropology blog catches another controversial conclusion of the paper, that Y-DNA E originated outside Africa.
When the tree is calibrated with a mutation rate estimate of 0.76 × 10⁻⁹ mutations per base pair per year, the time to the most recent common ancestor (TMRCA) of the tree is ~190,000 years, but we consider the implications of alternative mutation rate estimates below. 
Of the clades resulting from the four deepest branching events, all but one are exclusive to Africa, and the TMRCA of all non-African lineages (that is, the TMRCA of haplogroups DE and CF) is ~76,000 years (Fig. 1, Supplementary Figs. 18 and 19, Supplementary Table 10, and Supplementary Note). 
We saw a notable increase in the number of lineages outside Africa ~50–55 kya, perhaps reflecting the geographical expansion and differentiation of Eurasian populations as they settled the vast expanse of these continents. Consistent with previous proposals, a parsimonious interpretation of the phylogeny is that the predominant African haplogroup, haplogroup E, arose outside the continent. This model of geographical segregation within the CT clade requires just one continental haplogroup exchange (E to Africa), rather than three (D, C, and F out of Africa). Furthermore, the timing of this putative return to Africa—between the emergence of haplogroup E and its differentiation within Africa by 58 kya—is consistent with proposals, based on non–Y chromosome data, of abundant gene flow between Africa and nearby regions of Asia 50–80 kya.
I can't say I'm strongly persuaded, although there might be some merit in the details of the TMRCA analysis.  
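For readers who want to see mechanically how a calibration like the one in the quoted passage works, here is a minimal back-of-the-envelope sketch in Python. The mutation rate of 0.76 × 10⁻⁹ per base pair per year comes from the quote above; the callable sequence length and the per-lineage mutation count are hypothetical placeholders, chosen only to show that inputs of roughly this size yield a TMRCA near 190,000 years.

# Back-of-the-envelope TMRCA estimate for a Y-chromosome phylogeny.
# The mutation rate is taken from the quoted paper; the other two numbers
# are hypothetical placeholders used purely for illustration.
MUTATION_RATE = 0.76e-9        # mutations per base pair per year (quoted above)
CALLABLE_BP = 10.0e6           # hypothetical callable Y-chromosome length, in base pairs
MUTATIONS_PER_LINEAGE = 1450   # hypothetical mean mutation count from the root to a sampled man

# Each lineage accumulates (rate x length) mutations per year on average,
# so the time back to the common ancestor is count / (rate x length).
tmrca_years = MUTATIONS_PER_LINEAGE / (MUTATION_RATE * CALLABLE_BP)
print(f"Estimated TMRCA: {tmrca_years:,.0f} years")   # roughly 190,000 years with these inputs

Run with a faster mutation rate, the same arithmetic compresses every date in the tree proportionally, which is presumably why the quoted authors flag alternative rate estimates as worth considering.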

The CT clade breaks into a DE clade and a CF clade.  So, you can have a CF clade (providing the predominant source of Eurasian Y-DNA) and a D clade (providing a minor source of Eurasian Y-DNA with a quirky distribution concentrated in the Andaman Islands, Tibet, and Japan, and to a lesser extent in Siberia and the region between the Andaman Islands and Tibet).  In my view, the quirky distribution of Y-DNA D is consistent with a separate wave of migration, probably thousands of years after the CF expansion, and not with a single CT expansion in which parsimony is served by one unified wave of expansion.

Basal Y-DNA DE is found in both West Africa and Tibet, not strongly favoring either side of the debate.  Y-DNA E in Europe shows a clear African source.

* Razib captures better the larger scope of the paper and attempts to put it in the context of cultural evolution. Apart from his association of the R1b expansion with Indo-Europeans, and his largely tongue-in-cheek suggestion of a Levantine origin for modern humans, I think he's basically on the right track.

Looking closely at the data after reading his piece, I am inclined to think that climate events ca. 2000 BCE and 1200 BCE caused the Y-DNA expansions in Europe and South Asia to be much more intense than those elsewhere in Asia, where the climate shocks may have been milder (in Africa, the E1b expansion may have arisen from the simultaneous arrival of farming, herding, and metalworking, rather than their phased appearance as elsewhere, which became possible only once these food production technologies could bridge the equatorial jungle areas of Africa).

* Meanwhile, and off topic, Bernard takes a good look at papers proposing a Northern route for the initial migration of mtDNA M and N. I've been aware of these papers for several weeks now, but have not had a chance to really digest these paradigm-shifting proposals well enough to discuss them, and I probably won't get a chance to for a while yet.

Sunday, April 24, 2016

Thinking Like A Physicist


From here.

This is a bit of an inside joke, but I don't know how to explain it in a way that would keep it funny.

Thursday, April 21, 2016

The Mayans Sacrificed Lots of Children

Sometimes there is simply no way to sugar coat how barbaric some ancient civilizations really were (and make no mistake, ancient Western civilizations like the Greeks and the cult of Baal were often equally barbaric).
Grim discoveries in Belize’s aptly named Midnight Terror Cave shed light on a long tradition of child sacrifices in ancient Maya society. 
A large portion of 9,566 human bones, bone fragments and teeth found on the cave floor from 2008 to 2010 belonged to individuals no older than 14 years. . . . Many of the human remains came from 4- to 10-year-olds. . . . [T]hese children were sacrificed to a rain, water and lightning god that the ancient Maya called Chaac.

Radiocarbon dating of the bones indicates that the Maya deposited one or a few bodies at a time in the cave over about a 1,500-year period, starting at the dawn of Maya civilization around 3,000 years ago. . . . At least 114 bodies were dropped in the deepest, darkest part of the cave, near an underground stream. Youngsters up to age 14 accounted for a minimum of 60 of those bodies. Ancient Maya considered inner cave areas with water sources to be sacred spaces, suggesting bodies were placed there intentionally as offerings to Chaac. The researchers found no evidence that individuals in the cave had died of natural causes or had been buried.

Until now, an underground cave at Chichén Itzá in southern Mexico contained the only instance of large-scale child sacrifices by the ancient Maya. . . . Other researchers have estimated that 51 of at least 101 individuals whose bones lay scattered in Chichén Itzá’s “sacred well” were children or teens. Researchers have often emphasized that human sacrifices in ancient Central American and Mexican civilizations targeted adults. “Taken together, however, finds at Chichén Itzá and Midnight Terror Cave suggest that about half of all Maya sacrificial victims were children[.]”
M.G. Prout. Subadult human sacrifices in Midnight Terror Cave, Belize. Annual meeting of the American Association of Physical Anthropologists, Atlanta, April 15, 2016 via Science News.

Another recent study, comparing Oceanian societies with and without human sacrifice, concludes that human sacrifice played a critical role in the formation and maintenance of organized chiefdoms with social class stratification.