
Saturday, April 30, 2016

Frank Wilczek Teases Renormalization Breakthrough?

As I discussed recently, renormalization is a technique at the heart of modern Standard Model physics, and our inability to renormalize what we naively expect the equations of quantum gravity to look like is a major conundrum in theoretical physics.

Peter Woit, at his blog "Not Even Wrong", calls attention to a recent interview with Frank Wilczek, in which Wilczek suggests that he is right on the brink of making a major breakthrough.

This breakthrough sounds very much like he and his collaborators have finally managed to develop a mathematically rigorous understanding of renormalization that has eluded some of the brightest minds in physics for four decades. First and foremost among them was Richard Feynman, who was instrumental in inventing renormalization in the first place and who frequently and publicly expressed concerns about this foundational technique's lack of mathematical rigor.

This topic came up not so long ago in a discussion in the comments at the 4gravitons blog. The blog's author (a graduate student in theoretical physics) noted in that regard that "Regarding renormalization, the impression I had is that defining it rigorously is one of the main obstructions to the program of Constructive Quantum Field Theory." The linked survey article on the topic concludes as follows:
It is evident that the efforts of the constructive quantum field theorists have been crowned with many successes. They have constructed superrenormalizable models, renormalizable models and even nonrenormalizable models, as well as models which fall outside of that classification scheme since they apparently do not correspond to some classical Lagrangian. And they have found means to extract rigorously from these models physically and mathematically crucial information. In many of these models the HAK and the Wightman axioms have been verified. In the models constructed to this point, the intuitions/hopes of the quantum field theorists have been largely confirmed. 
However, local gauge theories such as quantum electrodynamics, quantum chromodynamics and the Standard Model — precisely the theories whose approximations of various kinds are used in a central manner by elementary particle theorists and cosmologists — remain to be constructed. These models present significant mathematical and conceptual challenges to all those who are not satisfied with ad hoc and essentially instrumentalist computation techniques.

Why haven’t these models of greatest physical interest been constructed yet (in any mathematically rigorous sense which preserves the basic principles constantly evoked in heuristic QFT and does not satisfy itself with an uncontrolled approximation)? Certainly, one can point to the practical fact that only a few dozen people have worked in CQFT. This should be compared with the many hundreds working in string theory and the thousands who have worked in elementary particle physics. Progress is necessarily slow if only a few are working on extremely difficult problems. It may well be that patiently proceeding along the lines indicated above and steadily improving the technical tools employed will ultimately yield the desired rigorous constructions.

It may also be the case that a completely new approach is required, though remaining within the CQFT program as described in Section 1, something whose essential novelty is analogous to the differences between the approaches in Section 2, 3, 5 and 6. It may even be the case that, as Gurau, Magnen and Rivasseau have written, “perhaps axiomatization of QFT might have been premature”; in other words, perhaps the Wightman and HAK axioms do not provide the proper mathematical framework for QED, QCD, SM, even though, as the constructive quantum field theorists have so convincingly demonstrated, that framework is quite suitable for so many models of such varying types and, as the algebraic quantum field theorists have just as convincingly demonstrated, that framework is flexible and powerful when dealing with the conceptual and mathematical problems in QFT which go beyond mathematical existence. 
But it is possible that the mathematically and conceptually essential core of a rigorous formulation of QFT that can include the missing models lies somewhere else. Certainly, there are presently many attempts to understand aspects of QFT from the perspective of mathematical ideas which are quite unexpected when seen from the vantage point of current QFT and even from the vantage point of quantum theory itself, as rigorously formulated by von Neumann and many others. These speculations, as suggestive as some may be, are currently beyond the scope of this article.
The 4gravitons author may also have been referring to papers like this one (suggesting a way to rearrange an important infinite series widely believed to be divergent into a set of several convergent infinite series).

Wilczek says in the interview (conducted the day after his most recent preprint was posted):
What I’ve been thinking about today specifically is something of a potential breakthrough in understanding our fundamental theories of physics. We have something called a standard model, but its foundations are kind of scandalous. We have not known how to define an important part of it mathematically rigorously, but I think I have figured out how to do that, and it’s very pretty. I’m in the middle of calculations to check it out....
It’s a funny situation where the theory of electroweak or weak interactions has been successful when you calculate up to a certain approximation, but if you try to push it too far, it falls apart. Some people have thought that would require fundamental changes in the theory, and have tried to modify the theory so as to remove the apparent difficulty. 
What I’ve shown is that the difficulty is only a surface difficulty. If you do the mathematics properly, organize it in a clever way, the problem goes away. 
It falsifies speculative theories that have been trying to cure a problem that doesn’t exist. It’s things like certain kinds of brane-world models, in which people set up parallel universes where that parallel universe's reason for being was to cancel off difficulties in our universe—we don’t need it. It's those kinds of speculations about how the foundations might be rotten, so you have to do something very radical. It’s still of course legitimate to consider radical improvements, but not to cure this particular problem. You want to do something that directs attention in other places.
There are other concepts in the standard model whose foundations aren't terribly solid. But I'd be hard pressed to identify a better fit to his comments than renormalization. His publications linking particle physics to condensed matter physics also make this a plausible target, because renormalization is a technique that particle physicists borrowed by analogy from condensed matter physics.

His first new preprint in nearly two years came out earlier this month and addresses this subject, so I took a look to see if I could unearth any more hints lurking there. Here's what he says about renormalization in his new paper:
- the existence of sharp phase transitions, accompanying changes in symmetry. 
Understanding the singular behavior which accompanies phase transitions came from bringing in, and sharpening, sophisticated ideas from quantum field theory (the renormalization group). The revamped renormalization group fed back in to quantum field theory, leading to asymptotic freedom, our modern theory of the strong interaction, and to promising ideas about the unification of forces. The idea that changes in symmetry take place through phase transitions, with the possibility of supercooling, is a central part of the inflationary universe scenario.
But, there was nothing in the paper that obviously points in the direction I'm thinking, so perhaps I am barking up the wrong tree about this breakthrough.

Friday, April 29, 2016

Martyr Weasel Shuts Down LHC, Japanese Satellite Kills Itself

* In a maneuver bearing an uncanny resemblance to Luke Skywalker's single-handed defeat of the Death Star with a tiny rebel fighter, a seemingly ordinary weasel without any tools whatsoever has managed to single-handedly shut down the entire Large Hadron Collider, the largest machine ever built by man in the history of the world.

Really!

It will take days, if not weeks, to repair the damage.

Alas, the heroic weasel paid with his life for his anti-science crusade.

* In other bad news for science, the Japanese Hitomi space satellite, which was on a ten-year, $286 million mission, catastrophically failed in an unforced error, just five weeks after it was launched and after just three days on the job making observations.
Confused about how it was oriented in space and trying to stop itself from spinning, Hitomi's control system apparently commanded a thruster jet to fire in the wrong direction — accelerating, rather than slowing, the craft's rotation.

On 28 April, the Japan Aerospace Exploration Agency (JAXA) declared the satellite, on which it had spent ¥31 billion (US$286 million), lost. At least ten pieces — including both solar-array paddles that had provided electrical power — broke off the satellite’s main body.

Hitomi had been seen as the future of X-ray astronomy. “It’s a scientific tragedy,” says Richard Mushotzky, an astronomer at the University of Maryland in College Park.
One of the greatest challenges of "rocket science" is that you absolutely, positively have to get it right the first time because once something goes wrong in space, generally speaking, there is very little that you can do to fix the problem.

Wednesday, April 27, 2016

Bounds On Discreteness In Quantum Gravity From Lorentz Invariance

Sabine Hossenfelder has an excellent little blog post discussing the limitations that the empirical reality of Lorentz invariance, confirmed to an extreme degree of precision, places upon quantum gravity theories with a minimum unit of length.  Go read it.

Key takeaway points:

1.  The post defines some closely related bits of terminology related to this discussion (emphasis added):
Lorentz-invariance is the symmetry of Special Relativity; it tells us how observables transform from one reference frame to another. Certain types of observables, called “scalars,” don’t change at all. In general, observables do change, but they do so under a well-defined procedure that is by the application of Lorentz-transformations. We call these “covariant.” Or at least we should. Most often invariance is conflated with covariance in the literature. 
(To be precise, Lorentz-covariance isn’t the full symmetry of Special Relativity because there are also translations in space and time that should maintain the laws of nature. If you add these, you get Poincaré-invariance. But the translations aren’t so relevant for our purposes.)
Lorentz-transformations acting on distances and times lead to the phenomena of Lorentz-contraction and time-dilatation. That means observers at relative velocities to each other measure different lengths and time-intervals. As long as there aren’t any interactions, this has no consequences. But once you have objects that can interact, relativistic contraction has measurable consequences.
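For reference, the standard quantitative form of these effects (textbook special relativity, not something specific to the quoted post) for an object moving at relative velocity v is:

\[
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad L = \frac{L_0}{\gamma}, \qquad \Delta t = \gamma\,\Delta t_0,
\]

where L_0 and Δt_0 are the length and the time interval measured in the object's rest frame.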
2.  A minimum unit of length is attractive because it tames infinities in quantum field theories associated with extremely high momentum.  These infinities in the quantum field theory are particularly hard to manage mathematically if you naively try to create a quantum field theory that mimics Einstein's equations of general relativity.

In general, neither of the two standard approaches works for quantum gravity: not the lattice methods used to describe low energy QCD systems (which are used because the usual approach based upon "renormalization" requires too many calculations to get an accurate result to be practical for low energy QCD), and not the "renormalization" technique that is used to make calculations for all three of the Standard Model forces (although, in QCD, usually only for high energy systems).

A naive lattice method applied to quantum gravity is attractive because it makes it possible to do calculations without relying on the "renormalized" infinite series approximation used for QED, the weak nuclear force and high energy QCD, an approximation which is too hard to calculate in the case of low energy QCD. But, in the case of quantum gravity, unlike the Standard Model QFTs, a naive lattice method doesn't work because it gives rise to significant Lorentz invariance violations that are not observed in nature.

Renormalization doesn't work for quantum gravity either, for reasons discussed in the footnote below.

3.  There are a couple of loopholes to the general rule that a minimum length scale implies a Lorentz invariance violation. One forms a basis for string theory and is derivative of some of the special features of the Planck length.  The other involves "Causal Sets," which is one of the subfields of "Loop Quantum Gravity" approaches to quantum gravity.

Guess where the action is in quantum gravity research.

4.  A couple of experimental data points that strongly constrain any possible Lorentz invariance violations in the real world are provided:
A good example is vacuum Cherenkov radiation, that is the spontaneous emission of a photon by an electron. This effect is normally – i.e. when Lorentz-invariance is respected – forbidden due to energy-momentum conservation. It can only take place in a medium which has components that can recoil. But Lorentz-invariance violation would allow electrons to radiate off photons even in empty space. No such effect has been seen, and this leads to very strong bounds on Lorentz-invariance violation. 
And this isn’t the only bound. There are literally dozens of particle interactions that have been checked for Lorentz-invariance violating contributions with absolutely no evidence showing up. Hence, we know that Lorentz-invariance, if not exact, is respected by nature to extremely high precision. And this is very hard to achieve in a model that relies on a discretization.
Footnote on Renormalization In Quantum Gravity

Gravity is also not renormalizable, for reasons that are a bit more arcane.  To understand this, you first have to understand why quantum field theories use renormalization in the first place.

Quantum Field Theory Calculations

Pretty much all calculations in quantum mechanics involve adding up an infinite series of path integrals (each of which sums up the values of a function related to the probability of a particle taking a particular path from point A to point B). The terms in the series represent all possible paths from point A to point B, with the simpler paths (with fewer "loops") generally making a larger contribution to the total than the more complicated paths (with more "loops").

In practice, you calculate the probability you're interested in using as many terms of the infinite series as you can reasonably manage to calculate, and then you estimate the uncertainty in the final result that comes from leaving out all of the rest of the terms in the infinite series.

It turns out that when you do these calculations for the electromagnetic force (quantum electrodynamics) and the weak nuclear force, calculating a pretty modest number of terms provides an extremely accurate answer, because the terms in the series quickly get smaller.  As a result, these calculations can usually be done to arbitrary accuracy, up to or beyond the current limits of experimental accuracy (beyond which we do not have accurate measurements of the relevant fundamental constants of the Standard Model, making additional precision in the calculations spurious).

But, when you do comparable calculations involving the quarks and gluons of QCD, calculating a very large number of terms still leaves you with a wildly inaccurate calculation, because the terms get smaller much more slowly.  There are a variety of reasons for this, but one of the main ones is that gluon-gluon interactions, which don't exist in the analogous QED equations, make the number of possible paths that contribute materially to the ultimate outcome much greater.
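A toy numerical illustration of the difference (in Python; this is a stand-in geometric series with coefficients of one and representative coupling values, not an actual Feynman diagram calculation) shows why a handful of terms suffices when the coupling is QED-sized but not when it is QCD-sized:

    # Toy illustration only: partial sums of a perturbative-style series
    # sum_n alpha^n, to show how fast the truncation error shrinks for a
    # small coupling (QED-like) versus a larger one (QCD-like at moderate
    # energies). Real QFT series have nontrivial coefficients.
    def partial_sum(alpha, n_terms):
        return sum(alpha**n for n in range(1, n_terms + 1))

    def truncation_error(alpha, n_terms):
        # For a geometric series the exact sum is alpha / (1 - alpha), so
        # the error from keeping only n_terms terms is known exactly.
        exact = alpha / (1.0 - alpha)
        return exact - partial_sum(alpha, n_terms)

    for label, alpha in [("QED-like, alpha ~ 1/137", 1.0 / 137.0),
                         ("QCD-like, alpha_s ~ 0.3", 0.3)]:
        for n in (2, 5, 10):
            err = truncation_error(alpha, n)
            print(f"{label}: {n:2d} terms, truncation error ~ {err:.2e}")

With the small QED-like coupling the error is already negligible after a couple of terms, while with the QCD-like coupling it shrinks painfully slowly, which is the qualitative point being made above.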

(Of course, another possibility is that QCD calculations are hard not because they are inherently more difficult, but mostly because lots of terms in the infinite series cancel out in a manner that is not obvious given the way that we usually arrange the terms in the infinite series we are using to calculate our observables.  If that is true, then if we cleverly arranged those terms in some other order, we might discern a way to cancel many more of them out against each other subject to only very weak constraints. This is the basic approach behind the amplituhedron concept.)

While this isn't the reason that quantum gravity isn't renormalizable, even if it were renormalizable, renormalization still wouldn't, in all likelihood, be a practical way to do quantum gravity calculations. Gravitons in quantum gravity theories can interact with other gravitons, just as gluons can interact with other gluons in QCD, which means that the number of terms in the infinite series that must be included to get an accurate result is very, very large.

Renormalization

The trouble is that almost every single one of the path integrals you have to calculate produces an answer that is nonsense unless you employ a little mathematical trickery. The trick works for Standard Model physics calculations, even though it isn't entirely clear under what circumstances it works in general, because it hasn't been proven in a mathematically rigorous way to work for all conceivable quantum field theories.
Naively, even the simplest quantum field theories (QFT) are useless because the answer to almost any calculation is infinite. . . .  The reason we got this meaningless answer was the insistence on integrating all the way to infinity in momentum space. This does not make sense physically because our quantum field theory description is bound to break at some point, if not sooner then definitely at the Planck scale (p ∼ M Planck). One way to make sense of the theory is to introduce a high energy cut off scale Λ where we think the theory stops being valid and allow only for momenta smaller than the cutoff to run in the loops. But having done that, we run into trouble with quantum mechanics, because usually, the regularized theory is no longer unitary (since we arbitrarily removed part of the phase space to which there was associated a non-zero amplitude.) We therefore want to imagine a process of removing the cutoff but leaving behind “something that makes sense.” A more formal way of describing the goal is the following. We are probing the system at some energy scale ΛR (namely, incoming momenta in Feynman graphs obey p ≤ ΛR) while keeping in our calculations a UV cutoff Λ (Λ ≫ ΛR because at the end of the day we want to send Λ → ∞.) If we can make all physical observables at ΛR independent of Λ then we can safely take Λ → ∞.
It turns out that there is a way to make all of the physical observables independent of the cutoff in the quantum field theories of the Standard Model (and in a considerably broader generalization of possible theories that are similar in form to the QFTs of the Standard Model).
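To make the quoted discussion concrete, the simplest logarithmically divergent one-loop integral in a scalar toy theory behaves, with a momentum cutoff Λ (in Euclidean momentum space), roughly as:

\[
\int^{\Lambda} \frac{d^4k}{(2\pi)^4}\,\frac{1}{(k^2 + m^2)^2} \;=\; \frac{1}{16\pi^2}\left[\ln\frac{\Lambda^2}{m^2} + \mathcal{O}(1)\right],
\]

which blows up as Λ → ∞. Renormalization amounts to absorbing that Λ-dependence into the definitions of the couplings and masses, so that the physical observables stay finite and cutoff-independent.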

A side effect of renormalization, which gives us confidence that this is a valid way to do QFT calculations, is that when you renormalize, key physical constants used in your calculations take on different values depending upon the momentum scales of the interacting particles. The renormalized theory is not energy scale invariant.

Amazingly, this unexpected quirk, arising from a mathematical shortcut that we invented strictly for the purpose of making an otherwise intractable calculation, and that we aren't even really sure is mathematically proper if you are being rigorous, turns out to be a feature rather than the bug we initially assumed it was. Because, in real life, we actually do observe the basic physical constants of the laws of nature changing based upon the energy scale of the particles involved in an experiment.
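For example, here is a minimal sketch of how the strong coupling constant runs with energy scale, using the standard textbook one-loop formula (it ignores flavor thresholds and higher-loop corrections, so it is illustrative rather than precise):

    import math

    def alpha_s_one_loop(q_gev, alpha_s_mz=0.118, mz_gev=91.19, n_flavors=5):
        """One-loop running of the QCD coupling from the Z mass to scale q_gev.

        Uses the standard one-loop solution
            alpha_s(Q) = alpha_s(MZ) / (1 + b0 * alpha_s(MZ) * ln(Q^2 / MZ^2)),
        with b0 = (33 - 2 * n_flavors) / (12 * pi).
        """
        b0 = (33.0 - 2.0 * n_flavors) / (12.0 * math.pi)
        return alpha_s_mz / (1.0 + b0 * alpha_s_mz * math.log(q_gev**2 / mz_gev**2))

    # The coupling shrinks at high energies (asymptotic freedom) and grows
    # at low energies, which is the "running" referred to in the text.
    for q in (10.0, 91.19, 1000.0):
        print(f"alpha_s({q:7.2f} GeV) ~ {alpha_s_one_loop(q):.4f}")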

Why Quantum Gravity Isn't Renormalizable

But, back to the task at hand: the unavailability of renormalization as a tool in quantum gravity theories.

Basically, the problem is that in the case of QED, the weak force and QCD, there is nothing inherently unique about an extremely high energy particle, so long as it doesn't have infinite energy. It isn't different in kind from a similar high energy particle with just a little bit less energy. The impact of the extremely high energy states that are ignored in these calculations on the overall result is tiny, and its magnitude is easily determined, which sets a margin of error on the calculation. These extremely high energy states simply correspond to possible paths of the particles whose behavior you are calculating that are so extremely unlikely to occur during the lifetime of the universe so far that it is basically artificial to even worry about them, because they are so ephemeral and basically never happen anyway, if you set the energy cutoff to a high enough arbitrary number.

In contrast, with a naive set of quantum gravity formulas based directly upon general relativity, the concentration of extremely high energies into very small spaces has a real physical effect which is very well understood in classical general relativity.

If you pack too much energy into too small a space in a quantum gravity equation derived naively from the equations of general relativity, you get a black hole, which is a different-in-kind physical state that generates infinities even in the non-quantum version of the theory, and which corresponds to a physical reality from which no light can escape (apart from "Hawking radiation").

For reasons related to the way that the well known formula for the entropy of a black hole affects the mathematical properties of the terms in the calculations that correspond to very high energy states in quantum gravity, it turns out that you can't simply ignore the contribution of high energy black hole states in quantum gravity calculations in the way that you can in ordinary Standard Model quantum field theories and similar theories with the same mathematical properties (mostly various "Yang-Mills theories" in addition to the Yang-Mills theories found in the Standard Model).

If you could ignore these high energy states, you could then estimate the precision you lose by ignoring them, based upon the energy scale cutoff you use to renormalize, the way that you can for calculations of the other three Standard Model forces.

But, it turns out that black hole states in quantum gravity are infinite in a physically real way that can't be ignored, rather than in a mathematically artificial way that doesn't correspond to the physical reality as is the case with the other three Standard Model forces. And, of course, black holes don't actually pop into existence each of the myriad times that two particles interact with each other via gravity.  

So, something is broken, and that something looks like it needs to use either the string theory loophole related to the Planck length, or the Causal Sets loophole, to solve the problem with a naive quantum gravity formulation based on Einstein's equations.  The former is the only known way to get a minimum length cutoff, and the latter is another way to use discrete methods.

One More Potential Quantum Gravity Loophole

Of course, if Einstein's equations are wrong in some respect, it is entirely possible that the naive quantum gravity generalization of the correct modification of Einstein's equations is renormalizable, and that the fact that the math doesn't work with Einstein's equations is a big hint from nature that we have written down the equations for gravity in a way that is not quite right, even though it is very close to the mark.

The trick to this loophole, of course, is to figure out how to modify Einstein's equations in a way that doesn't impair the accuracy of general relativity in the circumstances where it has been demonstrated to work experimentally, while tweaking the results in some situation where there are not tight experimental constraints, ideally in a respect that would explain dark matter, dark energy, inflation, or some other unexplained problems in physics as well.

My own personal hunch is that Einstein's equations are probably inaccurate as applied to high mass systems that lack spherical symmetry, because the way that these classical equations implicitly model graviton-graviton interactions and localized gravitational energy associated with gravitons is at odds with how nature really works.

It makes sense that Einstein's equations, which have as a fundamental feature the fact that the energy of the gravitational field can't be localized, can't be formulated in a mathematically rigorous way as a quantum field theory in which gravity arises from the exchange of gravitons, which are the very embodiment of localized gravitational energy.  And, it similarly makes sense that in a modification of Einstein's equations in which the energy of the gravitational field could be localized, that some of these problems might be resolved.

Furthermore, I suspect that this issue (1) produces virtually all of the phenomena that have been attributed to dark matter, (2) produces much (if not all) of the phenomena attributed to dark energy, (3) makes the quantum gravity equations possible to calculate with, (4) might add insights into cosmology and inflation, and (5) when implemented in a quantum gravity context, may even tweak the running of the three Standard Model gauge coupling constants with energy scale in a manner that leads to gauge unification somewhere around the SUSY GUT scale.  (I don't think that this is likely to provide much insight into the baryon asymmetry in the universe, baryogenesis or leptogenesis.)

But, I acknowledge that I don't have the mathematical capacity to demonstrate that this is the case or to evaluate papers that have suggested this possibility with the rigor of a professional.

Monday, April 25, 2016

More Documentation Of Rapid Y-DNA Expansions

* Eurogenes reports on a new (closed access) paper in Nature Genetics documenting a number of punctuated expansions of Y-DNA lineages including R1a-Z93 taking place ca. 2500 BCE to 2000 BCE in South Asia, which is commonly associated with Indo-Aryan expansion.  Ancient DNA argues for a steppe rather than South Asian origin for this Y-DNA lineage near a strong candidate for the Proto-Indo-European Urheimat.

Holocene expansions of Y-DNA H1-M52 (particularly common among the Kalash people of Pakistan) and L-M11 in South Asia are also discussed (incidentally L-M11 is also common among the Kalash people, which makes them plausible candidates for a Harappan substrate in this genetically unique and linguistically Indo-Aryan population).  See also a nice table from a 2003 article on South Asian genetics showing frequencies of both H1-M52 and L-M11 in various populations.


Locus M-11 is one of several defining loci for L-M20, whose geographic range is illustrated in the map above, taken from the L-M20 Wikipedia article linked above.

Naively, H1-M52 and L-M11 both look to me like pre-Indo-Aryan Harappan expansions.

Some people have argued that the linguistic and geographic distribution of Y-DNA subclade L-M76 argues for an indigenous South Asian origin of the Dravidian languages, which would be consistent with the current linguistic status of the Dravidian languages as not belonging to any larger linguistic family (although it is inconsistent with the relative youth of the Dravidian language family as illustrated by the close linguistic similarities between its member languages, which has to be explained by some other means). However, given the possibility of language shift, it is hard to draw a definitive conclusion from genetics alone.

Also, honestly, the distribution of H1-M52 is probably a better fit to the Dravidian linguistic range than L-M76.

* Dienekes' Anthropology blog catches another controversial conclusion of the paper, that Y-DNA E originated outside Africa.
When the tree is calibrated with a mutation rate estimate of 0.76 × 10⁻⁹ mutations per base pair per year, the time to the most recent common ancestor (TMRCA) of the tree is ~190,000 years, but we consider the implications of alternative mutation rate estimates below. 
Of the clades resulting from the four deepest branching events, all but one are exclusive to Africa, and the TMRCA of all non-African lineages (that is, the TMRCA of haplogroups DE and CF) is ~76,000 years (Fig. 1, Supplementary Figs. 18 and 19, Supplementary Table 10, and Supplementary Note). 
We saw a notable increase in the number of lineages outside Africa ~50–55 kya, perhaps reflecting the geographical expansion and differentiation of Eurasian populations as they settled the vast expanse of these continents. Consistent with previous proposals, a parsimonious interpretation of the phylogeny is that the predominant African haplogroup, haplogroup E, arose outside the continent. This model of geographical segregation within the CT clade requires just one continental haplogroup exchange (E to Africa), rather than three (D, C, and F out of Africa). Furthermore, the timing of this putative return to Africa—between the emergence of haplogroup E and its differentiation within Africa by 58 kya—is consistent with proposals, based on non–Y chromosome data, of abundant gene flow between Africa and nearby regions of Asia 50–80 kya.
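The basic logic behind a TMRCA estimate like the ~190,000 year figure quoted above is easy to sketch: count the mutations separating two lineages and divide by twice the per-year mutation rate times the number of base pairs compared. The numbers below are round, made-up inputs chosen only to illustrate the arithmetic; they are not the paper's actual data.

    # Illustrative TMRCA calculation (made-up round inputs, not the paper's data).
    # If two Y-chromosome lineages differ at n_diff positions out of n_bp
    # callable base pairs, and mutations accumulate on both branches, then
    #   TMRCA ~ n_diff / (2 * mu * n_bp)
    mu = 0.76e-9          # mutations per base pair per year (rate quoted above)
    n_bp = 10_000_000     # callable base pairs compared (illustrative)
    n_diff = 2_900        # observed differences between the lineages (illustrative)

    tmrca_years = n_diff / (2 * mu * n_bp)
    print(f"TMRCA ~ {tmrca_years:,.0f} years")   # ~190,000 years with these inputs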
I can't say I'm strongly persuaded, although there might be some merit in the details of the TMRCA analysis.  

The CT clade breaks into a DE clade and a CF clade.  So, you can have a CF clade (providing the predominant source of Eurasian Y-DNA) and a D clade (providing a minor source of Eurasian Y-DNA with a quirky distribution with concentrations in the Andaman Islands, Tibet, Japan, and to a lesser extent Siberia and the region between the Andaman Islands and Tibet).  In my view, the quirky distribution of Y-DNA D is consistent with a separate wave of migration, probably thousands of years after the CF expansion, and not with a single CT expansion where parsimony is served by a single unified wave of expansion.

Basal Y-DNA DE is found in both West Africa and Tibet, not strongly favoring either side of the debate.  Y-DNA E in Europe shows a clear African source.

* Razib captures better the larger scope of the paper and attempts to put it in the context of cultural evolution. Apart from his association of the R1b expansion with Indo-Europeans, and his largely tongue in cheek suggestion of a Levantine origin for modern humans, I think he's basically on the right track.

Looking closely at the data after reading his piece, I am inclined to think that climate events ca. 2000 BCE and 1200 BCE caused Y-DNA expansions in Europe and South Asia to be much more intense than elsewhere in Asia where climate shocks may have been less intense (in Africa, E1b may have arisen from the simultaneous arrival of farming, herding and metalworking, rather than a phased appearance as elsewhere, made possible only once the food production technologies could bridge the equatorial jungle areas of Africa).

* Meanwhile and off topic, Bernard takes a good look at papers proposing a Northern route for the initial migration of mtDNA M and N. I've been aware of these papers for several weeks now, but have not had a chance to really digest these paradigm shifting proposals before discussing them, and I probably won't get a chance for a while yet to come.

Sunday, April 24, 2016

Thinking Like A Physicist


From here.

This is a bit of an inside joke, but I don't know how to explain it in a way that would keep it funny.

Thursday, April 21, 2016

The Mayans Sacrificed Lots of Children

Sometimes there is simply no way to sugarcoat how barbaric some ancient civilizations really were (and make no mistake, ancient Western civilizations like the Greeks and the cult of Baal were often equally barbaric).
Grim discoveries in Belize’s aptly named Midnight Terror Cave shed light on a long tradition of child sacrifices in ancient Maya society. 
A large portion of 9,566 human bones, bone fragments and teeth found on the cave floor from 2008 to 2010 belonged to individuals no older than 14 years. . . . Many of the human remains came from 4- to 10-year-olds. . . . [T]hese children were sacrificed to a rain, water and lightning god that the ancient Maya called Chaac.

Radiocarbon dating of the bones indicates that the Maya deposited one or a few bodies at a time in the cave over about a 1,500-year-period, starting at the dawn of Maya civilization around 3,000 years ago. . . . At least 114 bodies were dropped in the deepest, darkest part of the cave, near an underground stream. Youngsters up to age 14 accounted for a minimum of 60 of those bodies. Ancient Maya considered inner cave areas with water sources to be sacred spaces, suggesting bodies were placed there intentionally as offerings to Chaac. The researchers found no evidence that individuals in the cave had died of natural causes or had been buried.

Until now, an underground cave at Chichén Itzá in southern Mexico contained the only instance of large-scale child sacrifices by the ancient Maya. . . . Other researchers have estimated that 51 of at least 101 individuals whose bones lay scattered in Chichén Itzá’s “sacred well” were children or teens. Researchers have often emphasized that human sacrifices in ancient Central American and Mexican civilizations targeted adults. “Taken together, however, finds at Chichén Itzá and Midnight Terror Cave suggest that about half of all Maya sacrificial victims were children[.]".
M.G. Prout. Subadult human sacrifices in Midnight Terror Cave, Belize. Annual meeting of the American Association of Physical Anthropologists, Atlanta, April 15, 2016 via Science News.

Another recent study, comparing Oceanian societies with and without human sacrifices, concludes that human sacrifice played a critical role in the formation and maintenance of organized chiefdoms with social class stratification.

Monday, April 18, 2016

Recent Population Turnover In China

Razib notes in a recent blog post:
[T]he population of modern Sichuan has only weak demographic connections to classical Sichuan, as instability in the 17th century resulted in a population crash to around ~1 million. Subsequent to this over 10 million Han Chinese from the regions directly to the east, Hunan and Hubei, migrated into the region, replenishing its population. This obviously has cultural and genetic implications….(if this was common, as some have asserted, then the low between population differences between Han regions in terms of genetics makes a lot of sense).
Modern anthropologists and historians tend to have a hard time knowing what to make of extreme population turnovers that ancient DNA evidence makes clear have happened in multiple places at multiple times over the course of history. The event in Sichuan, in the historical era, could shed some valuable light on what this kind of population turnover event looks like in more human terms.

There are also some places in Europe where I understand that there has been great population turnover in the last thousand years or so: Hungary and some coastal and island areas around the Balkans immediately come to mind.

Saturday, April 16, 2016

A Viable Alternative To The Standard Model Of Cosmology?

The lambda CDM model, also known as the Standard Model of Cosmology, assumes a cosmological constant in the equations of general relativity and cold dark matter (more accurately, any dark matter that doesn't have relativistic velocities).

So far, it has been the reigning model in terms of fits to the data, but there are a variety of tensions with the data. As a new paper explains:
During the last two decades observational cosmology has entered an era of unprecedented precision. Cosmic microwave background (CMB) measurements, baryon acoustic oscillations (BAO) and observations of Type Ia Supernovae have shown very good agreement with the predictions of the standard cosmological model (ΛCDM) consisting of dark energy in the form of a cosmological constant Λ and cold dark matter (CDM). 
However this agreement is not perfect: the Planck CMB data are in tension with low redshift data such as cluster counts, redshift space distortions (RSD), weak lensing data and local measurements of the Hubble constant, H0. More specifically, the low redshift probes point towards a lower rate of structure growth (equivalently, a lower σ8) than the Planck results for the base ΛCDM would prefer. The most significant tensions are the ones coming from the cluster and weak lensing data.
Now, researchers have come up with a model that fits all the data that the lambda CDM model fits just as well, while significantly resolving the tensions that it has with the data.

They use a quintessence model in lieu of a cosmological constant (i.e. they treat dark energy as an energy field rather than as part of the formula for the law of gravity), and in the model, quintessence "couples" with dark matter, i.e. quintessence a.k.a. dark energy interacts with dark matter in a very particular way.  In this model, dark energy cannot transfer energy to dark matter, and vice versa, but momentum is exchanged between dark energy and dark matter in an energy neutral exchange.

This tweak to the Standard Model of Cosmology eases the tensions between the Standard Model of Cosmology and the data without detracting from any of its strengths.

It is probably too early to know if there is some hidden flaw in this new model, but it is one of the most promising developments in cosmology for quite a while.

Thursday, April 14, 2016

Dates From French Cave Confirm Paradigm For Cro-Magnon Arrival In Europe

[80 new] Radiocarbon dates for the ancient drawings in the Chauvet-Pont d’Arc Cave . . . [show] that there were two distinct periods of human activity in the cave, one from 37 to 33,500 y ago, and the other from 31 to 28,000 y ago. Cave bears also took refuge in the cave until 33,000 y ago.
Via Dienekes' Anthropology blog.

This is consistent with the overall timeframe established by re-evaluated Neanderthal and modern human dates in the Upper Paleolithic period in Europe, which have shortened the period of overlap between Neanderthals and modern humans in Europe and have generally favored older dates for both Neanderthal and Cro-Magnon sites.

Tuesday, April 12, 2016

New Neutrinoless Double Beta Decay Searches Designed

Some rare decays of radioactive isotopes, such as the decay of calcium-48 to titanium-48, are well understood and derive almost entirely from the well measured properties of the original isotope, the decay product, and the mass of the subatomic particles involved in intermediate steps of the decay (in the case of neutrinoless double beta decay, the absolute Majorana mass of the neutrino).

Proposed experiments with novel isotopes have the potential to greatly improve the threshold at which neutrinoless double beta decay rates can be measured or ruled out.

In the Standard Model with Dirac mass neutrinos, neutrinoless double beta decay is categorically forbidden because it violates lepton number conservation.  In the Standard Model with Majorana neutrinos, one can predict very precisely the expected rate of neutrinoless double beta decay as a function of the absolute Majorana neutrino mass, in a manner that, while not actually model independent, is still quite robust over a broad range of plausible models with Majorana neutrinos.
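Schematically, the standard relationship between the neutrinoless double beta decay half-life and the effective Majorana neutrino mass is:

\[
\left(T_{1/2}^{0\nu}\right)^{-1} = G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\left(\frac{\langle m_{\beta\beta}\rangle}{m_e}\right)^{2},
\]

where G⁰ᵛ is a phase-space factor, M⁰ᵛ is the nuclear matrix element of the isotope (the main source of theoretical uncertainty), ⟨m_ββ⟩ is the effective Majorana mass, and m_e is the electron mass. A smaller Majorana mass therefore means a longer half-life and a rarer decay.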

Current neutrinoless double beta decay detection experiments are not yet sufficiently precise to distinguish between Dirac neutrinos and Majorana neutrinos for masses consistent with the expectations associated with neutrino oscillation measurements and a normal hierarchy of neutrino masses.  

Small neutrino masses imply rare neutrinoless double beta decays, and while experiments can't yet distinguish between Dirac and Majorana neutrinos, we can say with increasing confidence that the sum of the three neutrino mass eigenvalues is probably less than 100 meV, which is tiny (about 5.11 million times or more lighter than a single electron, and about 9 billion times lighter than a proton, for one of each of the three kinds of neutrino mass combined; the lightest of the three neutrino mass eigenvalues is probably on the order of 1 meV or less).
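The arithmetic behind those comparisons, using the standard electron and proton masses of roughly 0.511 MeV and 938 MeV:

\[
\frac{m_e}{100\ \mathrm{meV}} = \frac{5.11\times10^{8}\ \mathrm{meV}}{100\ \mathrm{meV}} \approx 5.11\times10^{6},
\qquad
\frac{m_p}{100\ \mathrm{meV}} \approx \frac{9.38\times10^{11}\ \mathrm{meV}}{100\ \mathrm{meV}} \approx 9.4\times10^{9}.
\]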

If the mass of a proton is the U.S. national budget, the lightest neutrino rest mass might be a single dollar or maybe even just spare change.

Theorists love Majorana neutrinos, but my personal conjecture and belief, for a variety of reasons beyond the scope of this post, is that the likelihood that neutrinos have Majorana mass (i.e. the likelihood that neutrinos are their own anti-particles) is very low.  So, honestly, I'm not terribly excited by this development, because I expect a null result from the new experiments.

For those of you more interested in theory, a recent article has some interesting numerological discussions of the Standard Model mass and mixing angle constants, suggesting that "sum rules" may be at work and can be used to predict the neutrino masses and mixing angles.

More Evidence For A Levantine Out of Africa Route

There are two ways Out of Africa if you have only the ability to walk and use primitive boats.  One is to cross the Sinai Peninsula to the Levant.  The other is to cross the short and shallow bodies of water in the Gate of Tears at the other end of the Red Sea into Southern Arabia. (Arguably, you could also cross the Strait of Gibraltar, but this is a more difficult waterway to cross with a very primitive boat and there is really no evidence of this route being used by modern humans until ca. 40 kya.)

A number of recent archaeological finds have identified stone tools and structures in the interior of Arabia (which is basically enclosed and separated from the coast by a ring of low, old mountains) in the 120 kya to 100 kya range that are strikingly similar to contemporaneous Nubian stone tools and structures found in association with modern human remains in Africa in what is today called Sudan.

Now, these tools have also been found in what is now Israel. This supports a possible Levantine route out of Africa for first wave modern humans (alternately, a Gate of Tears population could have migrated from interior Arabia to the Levant, or both), something that 100 kya anatomically modern human remains in the Levant had already established, and it connects those Levantine remains to the Nubian material culture.  At a minimum, this adds to the body of evidence that the earliest Out of Africa migration by modern humans is much older than what contemporary and ancient DNA genetics based estimates suggest.

It also helps to establish at least one particular African archaeological culture of the Middle Paleolithic as the source, or at least one of the sources, for the Out of Africa population, as opposed to what has until recent years been a purely hypothetical pre-Out of Africa Northeast African population.

It isn't entirely clear if these early Arabian and Levantine modern humans are ancestral to modern humans found later in the same region (and possibly also in South Asia, where there are modern human tools pre-75 kya, and East Asia, where there is increasingly strong evidence of modern humans pre-100 kya, possibly arriving via a northern route*).

The TMRCA of non-Africans by genetic means is on the order of 60 kya +/- 10 kya or so, by multiple independent genetic methods.  This could imply (A) a sustained period with a small first wave Out of Africa population that experienced a bottleneck slightly before then that purged other lineages, or (B) replacement by another wave of Out of Africa modern humans (with the earlier data points being the "Out of Africa that failed," leaving only slight archaeological traces and some admixture with Neanderthals ancestral to the Altai Neanderthals - not necessarily anywhere near the Altai mountain region), or (C) a methodological flaw, probably in the genetics based dating, that is distorting the estimates.

It isn't clear what archaeological cultures in Northeast Africa and Southwest Asia exist in the early Upper Paleolithic that could be identified with a migration that is a better fit to the genetics based dates.  We also don't have any ancient DNA from Africa, Southwest Asia or East Asia for "first wave Out of Africa" populations.

* I am close to the tipping point where I feel that I can take this evidence seriously, despite its deviation from well established paradigms, based upon scholarly articles and pre-prints that I've read in the last couple of weeks.

Monday, April 11, 2016

Miscellaneous Data Points

* There is strong evidence of a second Viking site, around Y1K, in North America in addition to Leif Erikson's Vinland settlement.

* A Nordic grave from 1400 BCE contains a blue glass bead made in Egypt.

* Scientists have figured out which Alpine pass Hannibal and his elephants and horses and men took in 218 BCE during the Second Punic War, based upon strong evidence of a mass animal presence at precisely 218 BCE in the pass.

* An inscription on a 6th century BCE slab of stone from an Etruscan temple may help scientists decipher the Etruscan language and religion.

* Columbus was able to leverage a lunar eclipse prediction made on February 29, 1504 CE (leap day is not a particularly new invention), to convince the natives to provide supplies to his ships.

* Lead levels in ancient Roman water were 100 times natural levels.

* In Athens, around 400 BCE, somebody left five curses on lead tablets in a young woman's grave, aimed at a husband-wife bartending team in the city. History doesn't tell us if the curses worked.

* A 300 year old mummy from Northern Botswana was from an older individual and shows Sotho-Tswana or Khoesan genetic relatedness based on DNA tests from the mummy.

* A new paper using whole mtDNA sequences largely reaffirms the conventional wisdom about the peopling of the Americas. Its findings on mtDNA lineage loss in South America seem a bit too extraordinary and may point to methodological issues.

Thursday, April 7, 2016

Ancient Neanderthal Y-DNA

We finally have ancient Neanderthal Y-DNA (the previous five autosomal ancient DNA samples from Neanderthals were from women) and it is very much as we expected it to be.

The Neanderthal Y-DNA is much more ancient than the most basal known modern human Y-DNA haplogroup A00 (of which only a dozen or two modern bearers have been found), for which the TMRCA is about 280,000 years ago.  Y-DNA A00, in turn, is much more ancient than any other modern human Y-DNA haplogroups (the most basal of which are other subtypes of Y-DNA haplogroup A).*  Its mutation-based age estimate largely corroborates estimates of the split of the most recent common ancestor of modern humans and Neanderthals from autosomal DNA and mtDNA estimates.  The estimated divergence date of Neanderthal mtDNA from modern human mtDNA is 400,000 to 800,000 years ago, with a mean just 12,000 years different from the 588,000 years ago estimate based upon Y-DNA.**
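As a quick consistency check on these numbers (the factor of ~2.1 comes from the paper's abstract, quoted below, and the mtDNA "mean" is read as the midpoint of the 400,000 to 800,000 year range):

\[
\frac{588\ \mathrm{kya}}{2.1} \approx 280\ \mathrm{kya}, \qquad \frac{400 + 800}{2}\ \mathrm{kya} = 600\ \mathrm{kya}, \qquad 600\ \mathrm{kya} - 588\ \mathrm{kya} = 12\ \mathrm{kya}.
\]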

No modern human has any Neanderthal Y-DNA or any Neanderthal mtDNA.  Essentially all non-African modern humans have a low percentage of introgressed Neanderthal autosomal DNA in all other chromosomes.  The paper notes based upon its examination of protein coding differences that could have impacted male hybrid fertility that:
It is tempting to speculate that some of these mutations might have led to genetic incompatibilities between modern humans and Neandertals and to the consequent loss of Neandertal Y chromosomes in modern human populations. Indeed, reduced fertility or viability of hybrid offspring with Neandertal Y chromosomes is fully consistent with Haldane’s rule, which states that “when in the [first generation] offspring of two different animal races one sex is absent, rare, or sterile, that sex is the [heterogametic] sex.”
This also corroborates that Y-DNA A00 is unlikely to be an introgression from an archaic hominin species. Instead, it is simply a basal Y-DNA lineage that has survived at a very low frequency, even though all but one of the modern human Y-DNA lineages that emerged over the intervening ca. 100,000 years between it and the next most basal Y-DNA lineage have gone extinct, leaving no descendants.
Sequencing the genomes of extinct hominids has reshaped our understanding of modern human origins. 
Here, we analyze ∼120 kb of exome-captured Y-chromosome DNA from a Neandertal individual from El Sidrón, Spain. 
We investigate its divergence from orthologous chimpanzee and modern human sequences and find strong support for a model that places the Neandertal lineage as an outgroup to modern human Y chromosomes—including A00, the highly divergent basal haplogroup. 
We estimate that the time to the most recent common ancestor (TMRCA) of Neandertal and modern human Y chromosomes is ∼588 thousand years ago (kya) (95% confidence interval [CI]: 447–806 kya). This is ∼2.1 (95% CI: 1.7–2.9) times longer than the TMRCA of A00 and other extant modern human Y-chromosome lineages. 
This estimate suggests that the Y-chromosome divergence mirrors the population divergence of Neandertals and modern human ancestors, and it refutes alternative scenarios of a relatively recent or super-archaic origin of Neandertal Y chromosomes. 
The fact that the Neandertal Y we describe has never been observed in modern humans suggests that the lineage is most likely extinct. 
We identify protein-coding differences between Neandertal and modern human Y chromosomes, including potentially damaging changes to PCDH11Y, TMSB4Y, USP9Y, and KDM5D. Three of these changes are missense mutations in genes that produce male-specific minor histocompatibility (H-Y) antigens. Antigens derived from KDM5D, for example, are thought to elicit a maternal immune response during gestation. It is possible that incompatibilities at one or more of these genes played a role in the reproductive isolation of the two groups.
Fernando L. Mendez, G. David Poznik, Sergi Castellano, Carlos D. Bustamante, "The Divergence of Neandertal and Modern Human Y Chromosomes", AJHG Volume 98, Issue 4, p728–734 (April 7, 2016) (open access).

* Wikipedia summarizes the state of the research on the TMRCA date for the most basal Y-DNA other than Y-DNA A00 and the most basal mtDNA (citations omitted):
Current (as of 2015) estimates for the age for the Y-MRCA are roughly compatible with the estimate for the emergence of anatomically modern humans some 200,000 years ago (200 kya), although there are substantial uncertainties. 
Early estimates published during the 1990s ranged between roughly 200 and 300 kya, Such estimates were later substantially corrected downward, as in Thomson et al. 2000, which proposed an age of about 59,000. This date suggested that the Y-MRCA lived about 84,000 years after his female counterpart mt-MRCA (the matrilineal most recent common ancestor), who lived 150,000–200,000 years ago. This date also meant that Y-chromosomal Adam lived at a time very close to, and possibly after, the migration from Africa which is believed to have taken place 50,000–80,000 years ago. One explanation given for this discrepancy in the time depths of patrilineal vs. matrilineal lineages was that females have a better chance of reproducing than males due to the practice of polygyny. When a male individual has several wives, he has effectively prevented other males in the community from reproducing and passing on their Y chromosomes to subsequent generations. On the other hand, polygyny does not prevent most females in a community from passing on their mitochondrial DNA to subsequent generations. This differential reproductive success of males and females can lead to fewer male lineages relative to female lineages persisting into the future. These fewer male lineages are more sensitive to drift and would most likely coalesce on a more recent common ancestor. This would potentially explain the more recent dates associated with the Y-MRCA. 
The "hyper-recent" estimate of significantly below 100 kya was again corrected upward in studies of the early 2010s, which ranged at about 120 kya to 160 kya. This revision was due to the discovery of additional mutations and the rearrangement of the backbone of the Y-chromosome phylogeny following the resequencing of Haplogroup A lineages. In 2013, Francalacci et al. reported the sequencing of male-specific single-nucleotide Y-chromosome polymorphisms (MSY-SNPs) from 1204 Sardinian men, which indicated an estimate of 180,000 to 200,000 years for the common origin of all humans through paternal lineage. . . . Also in 2013, Poznik et al. reported the Y-MRCA to have lived between 120,000 and 156,000 years ago, based on genome sequencing of 69 men from 9 different populations. In addition, the same study estimated the age of Mitochondrial Eve to about 99,000 and 148,000 years. As these ranges overlap for a time-range of 28,000 years (148 to 120 kya), the results of this study have been cast in terms of the possibility that "Genetic Adam and Eve may have walked on Earth at the same time" in the popular press. 
The announcement of yet another discovery of a previously unknown lineage, haplogroup A00, in 2013, resulted in another shift in the estimate for the age of Y-chromosomal Adam. Karmin et al. (2015) dated it to between 192,000 and 307,000 years ago (95% CI). 
The same study reports that non-African populations converge to a cluster of Y-MRCAs in a window close to 50kya (out-of-Africa migration), and an additional bottleneck for non-African populations at about 10kya, interpreted as reflecting cultural changes increasing the variance in male reproductive success (i.e. increased social stratification) in the Neolithic.
** Per Wikipedia (not updated for the most recent data from this paper), the dates of the earliest Neanderthal archaeological record (with a total of about 400 sets of Neanderthal remains recovered to date) are as follows (citations omitted):
The first humans with proto-Neanderthal traits are believed to have existed in Eurasia as early as 350,000–600,000 years ago with the first "true Neanderthals" appearing between 200,000 and 250,000 years ago. . . .
Comparison of the DNA of Neanderthals and Homo sapiens suggests that they diverged from a common ancestor between 350,000 and 400,000 years ago. 
This ancestor was probably Homo heidelbergensis. Heidelbergensis originated between 800,000 and 1,300,000 years ago, and continued until about 200,000 years ago. It ranged over Eastern and South Africa, Europe and Western Asia. Between 350,000 and 400,000 years ago the African branch is thought to have started evolving towards modern humans and the Eurasian branch towards Neanderthals. Scientists do not agree when Neanderthals can first be recognised in the fossil record, with dates ranging between 200,000 and 300,000 years BP.
Ancient DNA from H. heidelbergensis has established that it was an ancestor of the Neanderthals, and that Denisovans, for whom we also have ancient DNA, are not members of the species H. heidelbergensis.

Post-Script on Neanderthal mtDNA

I've long advocated for Haldane's Law as the source of a lack of Neanderthal Y-DNA in modern humans and it is nice to see that hypothesis largely confirmed based upon the protein coding of actual Neanderthal Y-DNA.

So, why is there no Neanderthal mtDNA in modern humans?

My hypothesis, which is as solid as anything else out there at this point, is that, overwhelmingly, Neanderthal-modern human admixture in both directions involved sexual encounters of short duration (probably some combination of rapes and brief flings) that did not continue through the birth of a child. Further, in any instances where there was a prolonged nuclear family relationship, such relationships were matrilocal (I suspect that this was very rare, but if it did happen at any frequency, it wouldn't change the result).

In this scenario, mothers of Neanderthal-modern human hybrids would stay with the tribe of the mother, rather than the tribe of the father.  Thus, hybrid Neanderthal-modern human children (all girls or infertile boys) with modern human mothers, and hence modern human mtDNA, would be raised in modern human tribes, while those with Neanderthal mothers, and hence Neanderthal mtDNA, would be raised in Neanderthal tribes.

The hybrid children in modern human tribes with modern human mtDNA contributed to current populations.  The descendants of hybrid children in Neanderthal tribes with Neanderthal mtDNA went extinct with the rest of the Neanderthal species.
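To make the logic of this scenario concrete, here is a toy Python sketch. It is purely illustrative: the two assumptions baked into it are the Haldane's rule and matrilocality arguments above, not data from any study.

```python
# Minimal enumeration of the two admixture directions in the scenario above, under
# two illustrative assumptions from the text: (1) Haldane's rule means no fertile
# hybrid sons, so no Y-DNA crosses the species line, and (2) hybrid children are
# raised in the mother's tribe, and Neanderthal tribes ultimately went extinct.

crosses = [
    # (father, mother)
    ("Neanderthal", "modern human"),
    ("modern human", "Neanderthal"),
]

for father, mother in crosses:
    mtdna = mother                                             # mtDNA follows the maternal line
    y_dna_crosses_species_line = False                         # Haldane's rule (assumption 1)
    raised_in = mother                                         # matrilocality (assumption 2)
    descendants_reach_present = (raised_in == "modern human")  # Neanderthal tribes went extinct
    print(f"{father} father x {mother} mother -> mtDNA: {mtdna}, "
          f"Y-DNA crosses species line: {y_dna_crosses_species_line}, "
          f"descendants reach the present: {descendants_reach_present}")
```

Under these assumptions, the only hybrid lines that persist to the present carry modern human mtDNA and no Neanderthal Y-DNA, which is the pattern actually observed.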

It is possible that in some very rare instances a Neanderthal woman with hybrid children was incorporated into a modern human tribe, and that her mtDNA was then lost due to random drift (or, in part, because the daughters of Neanderthal women may have been at a selective disadvantage in a modern human tribe relative to other girls in the tribe).

But, while mtDNA lineages are lost much more often due to random drift than you would naively expect in stable populations, and are frequently lost in shrinking populations (e.g. those experiencing a population bottleneck), mtDNA lineages are much less likely to be lost due to random drift in expanding populations.  Yet, the modern human population outside Africa was probably expanding rapidly around the time of Neanderthal admixture.
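The drift point is easy to illustrate with a toy simulation. The sketch below is not calibrated to any real demographic estimates; the population sizes, growth rates, and generation counts are arbitrary illustrative assumptions. It tracks a single introduced matrilineal lineage under simple Wright-Fisher-style resampling and compares how often it survives in a constant-size population versus a growing one.

```python
import random

def lineage_survives(n_start, growth, n_generations, n_carriers=1, cap=5_000):
    """Track copies of one matrilineal lineage under Wright-Fisher-style drift.

    n_start: initial number of breeding females (illustrative, not an estimate);
    growth: per-generation population growth factor; cap: ceiling on population size.
    Returns True if the lineage is still present after n_generations.
    """
    pop, carriers = n_start, n_carriers
    for _ in range(n_generations):
        new_pop = min(int(round(pop * growth)), cap)
        p = carriers / pop                      # current frequency of the lineage
        # Each female in the next generation independently draws her mother at random.
        carriers = sum(1 for _ in range(new_pop) if random.random() < p)
        pop = new_pop
        if carriers == 0:
            return False                        # lineage lost to drift
    return True

def survival_rate(growth, trials=1_000):
    return sum(lineage_survives(200, growth, 100) for _ in range(trials)) / trials

random.seed(1)
print("constant-size population:", survival_rate(1.00))   # a single lineage is usually lost
print("growing population:      ", survival_rate(1.03))   # loss to drift is noticeably rarer
```

The exact numbers depend entirely on the made-up parameters, but the growing population should retain the introduced lineage several times more often than the constant-size one, which is the direction of the effect this argument relies on.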

Tuesday, April 5, 2016

When Did Modern Humans Arrive In East Asia?

Zhiren Cave in southern China is an important site for the study of the origin and the environmental background of early modern humans. 
The combination of Elephas kiangnanensis, Elephas maximus, and Megatapirus augustus, indicates an early representative of the typical Asian elephant fauna. 
Previous U-series dating of flowstone calcite has pinpointed an upper age limit for the fossils of about 100 ka. In order to achieve a better comprehension of the chronology of the modern human and contemporaneous faunal assemblage, paleomagnetic, stratigraphic, and optically stimulated luminescence (OSL) dating methods have been applied to the cave sediments. Paleomagnetic analyses reveal that there is a reversed polarity excursion below the fossiliferous layer. This excursion can be regarded as the Blake excursion event, given the U-series ages of the overlying flowstone calcite, the OSL measurements, the virtual geomagnetic pole (VGP) path of the excursion, the two reverse polarity zones within this excursion event, and the characteristic of the fauna assemblage. 
The human remains and mammalian fauna assemblage can be bracketed to 116–106 ka. Application of OSL dating leads to erroneous ages, largely due to the uncertainty associated with the estimation of the dose rates.
Yanjun Cai et al., "The age of human remains and associated fauna from Zhiren Cave in Guangxi, southern China" Quaternary International (March 24, 2016) (emphasis and paragraph breaks added).

This is an update of a find previously made in 2007:
The 2007 discovery of fragmentary human remains (two molars and an anterior mandible) at Zhirendong (Zhiren Cave) in South China provides insight in the processes involved in the establishment of modern humans in eastern Eurasia. The human remains are securely dated by U-series on overlying flowstones and a rich associated faunal sample to the initial Late Pleistocene, >100 kya. 
As such, they are the oldest modern human fossils in East Asia and predate by >60,000 y the oldest previously known modern human remains in the region. 
The Zhiren 3 mandible in particular presents derived modern human anterior symphyseal morphology, with a projecting tuber symphyseos, distinct mental fossae, modest lateral tubercles, and a vertical symphysis; it is separate from any known late archaic human mandible. However, it also exhibits a lingual symphyseal morphology and corpus robustness that place it close to later Pleistocene archaic humans. 
The age and morphology of the Zhiren Cave human remains support a modern human emergence scenario for East Asia involving dispersal with assimilation or populational continuity with gene flow. It also places the Late Pleistocene Asian emergence of modern humans in a pre-Upper Paleolithic context and raises issues concerning the long-term Late Pleistocene coexistence of late archaic and early modern humans across Eurasia.
Wu Liu et al., "Human remains from Zhirendong, South China, and modern human emergence in East Asia", PNAS (2010).

The basic issue here is that given everything else we know about modern human evolution and migration, from archaeology of hominin remains and tools, from genetics, from the fates of megafauna in places where humans appear, and so on, modern humans have no business leaving remains in East Asia between 116,000 and 106,000 years ago.

We have a comparatively uninterrupted period in which modern human remains are found in East Asia starting ca. 40,000 years ago. We have evidence of modern humans in Australia and Papua New Guinea ca. 50,000 years ago. We have human remains in Southeast Asia ca. 65,000 years ago. We have tool assemblies in India similar to those associated with modern humans in Arabia that both predate and post-date the Toba volcano explosion, ca. 75,000 years ago. We have modern humans just barely out of Africa in the Levant, ca. 100,000 years ago. We have tool assemblies and structures in the interior of Arabia similar to tool assemblies and structures found contemporaneously in Nubia in connection with modern human remains, consistent with a first modern human emergence from Africa ca. 125,000 years ago. We have modern human remains in Africa dating back to at least 150,000 years ago.

There are a few other claimed East Asian modern human remains of about the same time period, but their dating and species identification are not as reliable as those from Zhiren Cave, which is the most recent and methodologically sound of the finds. One way to solve the riddle would be for the date to be inaccurate, as it is routine for later investigation to result in the redating of remains found in a cave, but the 2016 result argues strongly against that.

The other possibility is that the very fragmentary evidence, two molars and part of a jaw bone, looks modern because these are archaic human remains that independently evolved derived traits in their teeth and jaws similar to those of modern humans over the roughly 1.7 million years since Homo Erectus became the first human ancestor to leave Africa, even though the species that left these remains was not actually anatomically modern human. (Alternately, these could be the remains not of a species derived from the original Homo Erectus expansion out of Africa, but of another archaic species, such as the Denisovans, who briefly replaced Homo Erectus in East Asia, only to be replaced in turn by modern humans not so long afterwards.)

A third possibility is that a very small number of modern humans migrated very rapidly from Africa to East Asia, but that they lacked critical mass and went extinct soon after, with modern humans achieving a permanent presence in East Asia only 60,000 years later. Archaic features in the samples could reflect some degree of archaic introgression into modern humans acquired en route to East Asia.

There are several problems with this third possibility.

* Why is there no other evidence of modern humans pre-100 kya between Arabia and South China? Surely, such a long-distance migration would have left some settlements in between?

* If these first wave modern humans could make the trip to South China, albeit to fail there, why is there a 60,000 year gap before the next group of modern humans appeared there?

* Modern humans have thrived and seen their populations grow exponentially in all other virgin territory they encountered, and the population density of archaic hominins during the Homo Erectus era must have been quite low (given their inferior technologies and already low hunter-gatherer population densities), so why didn't the modern humans who made a very early foray into East Asia thrive? The Zhiren Cave evidence seems to indicate that these hominins were quite successful elephant hunters, after all. Were these purported anatomically modern humans still behaviorally primitive? And, if so, what behavioral change so profoundly improved their selective fitness?

* Should early modern humans from ca. 116,000 to 106,000 years BP be almost identical to early modern humans in the Levant from ca. 100,000 years BP, such that no other correspondence would be possible? There wouldn't be much time for evolutionary differentiation of the two populations, although founder effects and introgression could lead to some rather rapid morphological changes.

One can imagine satisfactory answers to each of these questions. But we really have no solid evidence to support such answers for any of them.

Therefore, at this point, given the absence of ancient DNA from these unusually old, purportedly modern human remains in East Asia, the stark absence of corroborating evidence of a modern human presence in East Asia for roughly the next 60,000 years, and the sharp deviation this interpretation would require from a paradigm that explains all of the other data, the evidence is simply not strong enough to convince me that these remains are really modern human in origin.

A story that could fit this outlier into the rest of the data would have to be truly extraordinary, and I'm not convinced that this evidence is strong enough to support such an extraordinary claim.

I'm not dogmatically opposed to being convinced by future corroborating evidence that a modern human classification at the date determined is the correct interpretation.

One also has other weird data points in South China, like a purportedly archaic set of remains dated to just 14,000 years ago (also discussed at this blog here). It is less weird for a small relict archaic hominin population to survive long after all others of their kind went extinct, in a place that we know was inhabited by at least one, and most probably two, species of archaic hominins before modern humans appeared on the scene, particularly prior to the Neolithic revolution, when modern human hunter-gatherers may have had less of a decisive advantage over other archaic hominin species.

But, it is still part of an overall picture in East Asia in which the few pieces we have are all outliers that don't make any sense in isolation and don't even form a really coherent story when viewed together.  This is particularly odd because in the New World, in Europe and in Southeast Asia, new evidence seems to be strengthening the very paradigms that the East Asian finds seem to contradict.

But, the evidence so far just isn't enough, particularly given the lack of clarity over what happened to Homo Erectus in the time frame roughly after 200,000 years BP. Did Homo Erectus go extinct or suffer diminished numbers in the face of competition from another archaic hominin species such as the Denisovans? Did Homo Erectus thrive until modern humans arrived and then rapidly go extinct, leaving few traces due to insufficient archaeological exploration, low population densities, and tools too crude to distinguish from ordinary rocks?

(The latest evidence in Homo Erectus archaeology is the discovery of Homo Erectus remains in Vietnam dated to ca. 800,000 years ago, which is right when and where we would expect to find them under current paradigms.)

I don't think that we have good enough evidence, yet, to resolve these questions, although I'd like to hope that our knowledge base will improve enough during my lifetime to understand this period much more accurately and definitively.