Monday, October 31, 2016

Extinct Australian Megafauna Were Very Impressive

In honor of Halloween, it is a good time to do a post about monsters. Australia still has plenty of them, but before the megafauna extinction that took place when humans arrived there (and got worse when the humans were joined by an Australian's best friend, the dingo), there were a lot more very impressive monsters and giant animals in Australia.

One of those was Zygomaturus trilobus whom the linked article's authors call Ziggy for short.

Ziggy had a big tusk tooth.


Ziggy was also quite large and rather bear-like.

Remains of Ziggy's kind have been found in many places in Australia (the map also shows the distribution of other extinct megafauna).


Happy Halloween!

The Latest In The Dark Matter v. Modified Gravity Debate

Sabine Hossenfelder's latest post at her blog Backreaction reports on some of the latest volleys in the particle dark matter v. modified gravity debate that I have been following for many years at this blog. She does a good job at summing up the competing perspectives involved:
In its simplest form the concordance model has sources which are collectively described as homogeneous throughout the universe – an approximation known as the cosmological principle. In this form, the concordance model doesn’t predict how galaxies rotate – it merely describes the dynamics on supergalactic scales.

To get galaxies right, physicists have to also take into account astrophysical processes within the galaxies: how stars form, which stars form, where do they form, how do they interact with the gas, how long do they live, when and how they go supernova, what magnetic fields permeate the galaxies, how the fields affect the intergalactic medium, and so on. It’s a mess, and it requires intricate numerical simulations to figure out just exactly how galaxies come to look how they look. 
And so, physicists today are divided in two camps. In the larger camp are those who think that the observed galactic regularities will eventually be accounted for by the concordance model. It’s just that it’s a complicated question that needs to be answered with numerical simulations, and the current simulations aren’t good enough. In the smaller camp are those who think there’s no way these regularities will be accounted for by the concordance model, and modified gravity is the way to go.
The simulations of particle dark matter can now reproduce MOND-like relationships that exist in the actual data, but only with considerable fine tuning of the models and many degrees of freedom. MOND-like models, in contrast, only add one or two parameters to the equations of gravity in most cases (which may be derivable from existing physical constants like the Hubble constant and the speed of light). 
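For readers who have not seen it, here is a minimal sketch (my own illustration, not drawn from Hossenfelder's post) of how Milgrom's original MOND toy model gets flat rotation curves and the baryonic Tully-Fisher relation out of a single new parameter, the acceleration scale a0, whose rough numerical coincidence with cH0/(2π) is what makes it arguably derivable from the Hubble constant and the speed of light:

```latex
% Milgrom's modified dynamics, with one new parameter a_0:
\mu\!\left(\tfrac{a}{a_0}\right)\, a \;=\; a_N \;=\; \frac{G M_b}{r^2},
\qquad \mu(x)\to 1 \ \text{for}\ x \gg 1, \qquad \mu(x)\to x \ \text{for}\ x \ll 1.
% In the deep-MOND regime (a << a_0) this gives a = \sqrt{a_N a_0};
% for a circular orbit v^2/r = a, so
v^4 \;=\; G M_b\, a_0,
% independent of radius: a flat rotation curve and the baryonic
% Tully-Fisher relation, with
% a_0 \approx 1.2\times10^{-10}\,\mathrm{m\,s^{-2}} \approx c H_0/(2\pi).
```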

New LHC Data Coming This December; Inflation Theory Inventor Now Doubts Theory

Peter Woit's blog reports a couple of notable recent developments:

* First, we can expect new results from the LHC in mid-December reporting on the 2016 experimental data, which now include a large volume of collisions at 13 TeV energies:
The 2016 LHC proton-proton run is now over, with delivered (41.07 CMS/38.4 ATLAS) and recorded (37.82 CMS/35.5 ATLAS) luminosities (in inverse fb) far above the goal for this year of 25. Together with last year’s data, the experiments now have 41.63 (CMS) and 39.4 (ATLAS) inverse fb recorded at 13 TeV, close to the LHC design energy of 14 TeV. It is likely that preliminary results will be reported at an “end-of-year jamboree” in mid-December, with more to come at the winter conferences.
This experimental data will help physicists determine if the world continues to fit the Standard Model or if there is significant evidence of beyond the Standard Model physics.

Since the new data are from close to peak LHC energies and are substantial enough to draw preliminary conclusions, this is the last moment at which there is any real possibility that something dramatically new will appear in the data.

After that, the LHC's reports are likely to be more about reducing statistical error in ways that can confirm or disprove previous findings and make them more precise, and less about finding radically new phenomena, because anything observable at the LHC must have a characteristic energy scale below the peak energies that the LHC is capable of probing.

* Second, one of the inventors of the notion of cosmological inflation now thinks that this theory, in its current form, is a failure.
Paul Steinhardt gave a colloquium at Fermilab last month with the title Simply Wrong vs. Simple. In it he explained “why the big bang inflationary picture fails as a scientific theory” (it doesn’t work as promised, is not self-consistent and not falsifiable). This is a complicated topic, but Steinhardt is an expert and one of the originators of the theory, so if you want to understand the problems of some common arguments for inflation, watching this talk is highly recommended. Steinhardt’s talk was part of a Fermilab workshop, Simplicity II.
Of course, finding problems with cosmological inflation theory does not imply that a good alternative to explain the phenomena that the theory sought to explain is available. But, as a paper he co-authored late last year explains, he has identified a couple of viable alternatives to the "classic" inflationary paradigm that he helped to establish, which are called "bouncing theories."
The results from Planck2015, when combined with earlier observations from WMAP, ACT, SPT and other experiments, were the first observations to disfavor the "classic" inflationary paradigm. To satisfy the observational constraints, inflationary theorists have been forced to consider plateau-like inflaton potentials that introduce more parameters and more fine-tuning, problematic initial conditions, multiverse-unpredictability issues, and a new 'unlikeliness problem.' Some propose turning instead to a "postmodern" inflationary paradigm in which the cosmological properties in our observable universe are only locally valid and set randomly, with completely different properties (and perhaps even different physical laws) existing in most regions outside our horizon. By contrast, the new results are consistent with the simplest versions of ekpyrotic cyclic models in which the universe is smoothed and flattened during a period of slow contraction followed by a bounce, and another promising bouncing theory, anamorphic cosmology, has been proposed that can produce distinctive predictions.
Anna Ijjas, Paul J. Steinhardt "Implications of Planck2015 for inflationary, ekpyrotic and anamorphic bouncing cosmologies" (30 Dec 2015).

As the introduction to that article explains (emphasis added):
The Planck2015 [1] and Planck2013 [2] observations, combined with the results by the Wilkinson Microwave Anisotropy Probe (WMAP), Atacama Cosmology Telescope (ACT) and South Pole Telescope (SPT) teams have shown that the measured spatial curvature is small; the spectrum of primordial density fluctuations is nearly scale-invariant; there is a small spectral tilt, consistent with a simple dynamical mechanism that caused the smoothing and flattening; and the fluctuations are nearly gaussian. These features are all consistent with the simplest textbook inflationary models. At the same time, Planck2015 also confirmed that r, the ratio of the tensor perturbation amplitude to the scalar perturbation amplitude, is less than 0.1, a result that virtually eliminates all the simplest textbook inflationary models. The development is notable because, as emphasized by Ijjas et al. [3], it is the first time that the classic inflationary picture has been in conflict with observations. 
The results have led theorists to consider alternative plateau-like inflationary models whose parameters can be adjusted to reduce the expected value of r while retaining all the rest of the classic predictions. However, as we explain in this brief review, the remaining models raise new issues. They require more parameters, more tuning of parameters, more tuning of initial conditions, a worsened multiverse-unpredictability problem, and a new challenge that we call the inflationary ‘unlikeliness problem.’ 
One response to these problems has been that they should be ignored. The classic inflationary picture should be abandoned in favor of an alternative ‘postmodern’ inflationary picture that allows enough flexibility to fit any combination of observations. The classic and postmodern inflationary pictures are so different that they ought to be viewed as distinct paradigms to be judged separately. 
A more promising response to Planck2015 has been to develop “bouncing” cosmologies in which the largescale properties of the universe are set during a period of slow contraction and the big bang is replaced by a big bounce. For example, a new, especially simple version of ekpyrotic (cyclic) cosmology has been identified that fits all current observations, including nearly Gaussian fluctuations and small r [4–7]. Also, anamorphic bouncing cosmologies have been introduced that use yet different ways to smooth and flatten the universe during a contracting phase and generate a nearly scale-invariant spectrum of perturbations [8]. 
We will first review the problems that Planck2015 imposes on classic inflation, the version that most observers consider. We will briefly discuss the conceptual problems of initial conditions and multiverse that have been known and unresolved for decades. Then we will turn to the tightening constraints resulting from Planck2015 and other recent experiments. We will review and critique postmodern inflation that some theorists now advocate. Finally, we will turn to the promising new developments in bouncing cosmologies, both ekpyrotic and anamorphic approaches.
The review article follows with a nice brief introduction to "classic" inflation theory, before exploring other topics.

Oldest Pottery Now From China Rather Than Japan

Until recently, the oldest pottery ever discovered dated from 16,500 years ago, from the Jomon culture in Japan. But, it has now been outdone by older pottery from Southern China (citing Science News, citing the June 29, 2012 edition of the journal Science - this is not breaking news, just something that caught my attention recently).

In West Eurasia, farming and proto-farming predated the invention (or adoption as the case may be, there is strongly suggestive evidence that pottery is a technology that migrated to West Eurasia across the North Asian steppe) of pottery, so the earliest phases of the Fertile Crescent Neolithic revolution are called "Pre-Pottery Neolithic" or "Pre-Ceramic Neolithic." In the case of the Fertile Crescent Neolithic, carpentry was actually the handmaiden of the Neolithic, while pottery came later (query if wooden containers may have provided models for later ceramic ones).


Context for the illustration can be found at the Bell Beaker Blog.

In East Asia, hunter-gatherer populations had pottery before farming was invented. As noted previously at this blog:
Japan was first inhabited by hominins about 30,000 years ago, and about 16,000 years ago, an archaeological culture known as the Jomon arose, due either to new migration or to in situ cultural development of Japan's existing inhabitants. The timing is after the Last Glacial Maximum (LGM), ca. 20,000 years before present, at which Japan was at its most easily accessible in modern human times due to low sea levels, at around the time that the wild fluctuations in climate that followed the LGM started to stabilize somewhat.[10]
The Jomon were fishermen who also hunted and gathered food. The sedentary lifestyle associated with fishing-based subsistence allowed the Jomon to become the first culture to develop pottery. . . . There is even suggestive evidence that implies that all pottery in Eurasia is derived from the Jomon invention of that craft.[12]

According to [13]:
The upper Paleolithic populations, i.e. Jomon, reached Japan 30,000 years ago from somewhere in Asia when the present Japanese Islands were connected to the continent. The separation of Japanese archipelago from the continent led to a long period (∼13,000 – 2,300 years B.P) of isolation and independent evolution of Jomon. The patterns of intraregional craniofacial diversity in Japan suggest little effect on the genetic structure of the Jomon from long-term gene flow stemming from an outside source during the isolation. The isolation was ended by large-scale influxes of immigrants, known as Yayoi, carrying rice farming technology and metal tools via the Korean Peninsula. The immigration began around 2,300 years B.P. . . .
The timeline in Okinawa is potentially consistent in broad brush strokes with the rest of Japan (the oldest human remains are 32,000 years old), but the archaeological record is thinner (there is no archaeological record indicating a human presence of any kind from 18,000 to 6,000 years ago), rice farming arrives only many centuries after the Yayoi do, and earliest historical mention of Okinawa in surviving written documents is from 607 CE.
In the case of Japan's Jomon culture, that made a lot of sense. The Jomon culture subsisted on fishing, which, while not farming or herding since it involves the collection of wild animals for food, does allow for a relatively sedentary lifestyle in which spending time to manufacture something too fragile to travel well makes sense. This fits the same pattern as Finland and its vicinity, where a relatively sedentary fishing culture also developed pottery before agriculture. (Notably, flour predates agriculture all over the world, because flour made from wild plants, unlike ceramic pottery, travels well.)

The South Chinese example also pre-dated farming and herding.
Pottery making got off to an ancient, icy start in East Asia. Pieces of ceramic containers found in a Chinese cave date to between 19,000 and 20,000 years ago, making these finds from the peak of the last ice age the oldest known examples of pottery.


[Image caption from Science News: This pottery fragment and others found near it in a Chinese cave date to 20,000 years ago, making them the oldest known examples of pottery making. Credit: Science/AAAS]
This new discovery suggests that hunter-gatherers in East Asia used pottery for cooking at least 10,000 years before farming appeared in that part of the world, say archaeologist Xiaohong Wu of Peking University in Beijing, China, and her colleagues. Cooking would have increased energy obtained from starchy foods and meat, a big plus in frigid areas with limited food opportunities, the researchers report in the June 29 Science.
It isn't obvious what circumstances in the South Chinese case, absent among other pre-Neolithic peoples, made it possible for the hunter-gatherers there to stay in the same place for long enough periods of time to make manufacturing these fragile containers worthwhile.

Perhaps the conditions of the Last Glacial Maximum confined them to a geographically compact refuge that required them to rely on storing hunted and gathered food, rather than simply moving on during lean periods. It could also be that a shortage of wood in that area made resorting to a ceramic alternative, which may have been considered inferior to wood at the time, necessary.

We also don't know if the South Chinese pottery technology is in continuity with later use of pottery in Asia, or if the technology was lost at some point (perhaps in the Mesolithic era when migration became viable again when food supplies were short) only to be reinvented later. The date associated with the South Chinese pottery would not be inconsistent with it being a source for the earliest Jomon pottery in Japan, about 3,500 years later.

How Long Ago Did Homo Naledi Roam Southern Africa?

H. naledi is the most recently discovered branch of the genus Homo, with remains collected in a cave crypt in South Africa. But neither ancient DNA nor a reliable radiometric date has been obtained from the samples, so scientists have had to resort to old-fashioned guesstimating based upon observed features of the H. naledi remains to estimate their age.

In any case, any time estimate would only establish a point in time at which the species existed at that location, because there is only a single site that appears to involve individuals who died at roughly the same time, plus or minus a few centuries.

For example, while there is evidence that Khoi-san hunter-gatherers who now live nearby have archaic admixture of a type different than Neanderthals and Denisovans, from an event definitely in the last 0.1 million years and probably in the last 0.01 million years, the 2.5 million to 0.9 million year time depth estimated for H. naledi makes this species a pretty poor candidate for the source of that admixture, although the admixture could have involved a species descended from H. naledi. But, the crypt could have been created at any time in a range from shortly after H. naledi evolved as a new species to shortly before it became extinct.

There is at least one inaccuracy in the story below. Homo erectus dates to more like 2.0-2.1 million years ago in Africa, rather than 1.8 million years as stated. There are examples of H. erectus in Asia from 1.9 million years ago and the species almost certainly evolved first in Africa somewhat before that date.
Evolutionary trees of ancient hominids statistically reconstructed from skull and tooth measurements indicate that H. naledi lived around 912,000 years ago, say paleoanthropologist Mana Dembo of Simon Fraser University in Burnaby, Canada, and her colleagues. That’s a provisional estimate, since researchers have yet to date either H. naledi’s bones or the sediment in which some of its remains were excavated. 
The new statistical age estimate, described by Dembo’s group in the August Journal of Human Evolution, challenges proposals that H. naledi’s remains come from early in Homo evolution. Researchers who first studied H. naledi bones retrieved from an underground cave in South Africa noted similarities of the skull and several other body parts to early Homo species dating to between 2.5 million and 1.5 million years ago (SN: 10/3/15, p. 6). 
A comparison of H. naledi skull measurements to those of 10 other hominid species, conducted by paleoanthropologist J. Francis Thackeray of the University of the Witwatersrand in Johannesburg, reached the same conclusion. H. naledi lived roughly 2 million years ago, Thackeray proposed in the November/December 2015 South African Journal of Science. 
Dembo disagrees. Her team tested which of 60,000 possible evolutionary trees best fit skull and tooth measurements of H. naledi, 20 other hominid species, gorillas and chimpanzees. The new analysis keeps H. naledi in the genus Homo. But it’s still unclear which of several hominid species — including Homo sapiens, Homo floresiensis (or “hobbits”) and Australopithecus sediba (SN: 8/10/13, p. 26) — is most closely related to the South African species. 
Dembo’s team found no signs that bones assigned to H. naledi represent a variant of Homo erectus, as some scientists have argued. H. erectus originated about 1.8 million years ago in Africa and rapidly spread to West Asia. But Dembo’s statistical model assumes that H. erectus skulls and teeth vary in shape throughout Africa and Asia much less than they actually do, says paleoanthropologist Christoph Zollikofer of the University of Zurich. Bones assigned to H. naledi most likely represent a form of H. erectus, he argues. 
Further statistical comparisons that include measurements of limb and trunk bones may help to clarify H. naledi’s evolutionary relationships, Dembo says. 
Based on geological dates for all hominids except H. naledi, the researchers also calculated the rate at which each species’ skull and tooth features evolved over time. Those results enabled the researchers to estimate H. naledi’s age. 
“Homo naledi might be less than a million years old,” Dembo says. She considers that estimate “reasonably robust,” since ages calculated for other hominids in the analysis often fell close to dates gleaned from fossil and sediment studies. In a few cases, though, statistical and geological age estimates differed by 800,000 years or more.
From Science News.

John Hawks has some interesting recent H. naledi remarks, mostly in response to Thackeray's paper, in connection with the published response from his team cited below.

The Science News story cites the following papers:

* M. Dembo et al. The evolutionary relationships and age of Homo naledi: an assessment using dated Bayesian phylogenetic methods. Journal of Human Evolution. Vol. 97, August 2016, p. 17. doi: 10.1016/j.jhevol.2016.04.008.

* J. Hawks and L. Berger. The impact of a date for understanding the importance of Homo naledi. Transactions of the Royal Society of South Africa. Vol. 71, issue 2, 2016. doi: 10.1080/0035919X.2016.1178186.

* J.F. Thackeray. Estimating the age and affinities of Homo naledi. South African Journal of Science. Vol. 111, November/December 2015, p. 3. doi: 10.17159/sajs.2015/a0124.

Friday, October 28, 2016

A New Precision Charm Quark Mass Estimate

We determine the charm quark mass m̂_c from QCD sum rules of moments of the vector current correlator calculated in perturbative QCD at O(α̂_s³). Only experimental data for the charm resonances below the continuum threshold are needed in our approach, while the continuum contribution is determined by requiring self-consistency between various sum rules. Existing data from the continuum region can then be used to bound the theoretic uncertainty. Our result is m̂_c(m̂_c) = 1272 ± 8 MeV for α̂_s(M_Z) = 0.1182.
Jens Erler, Pere Masjuan, and Hubert Spiesberger, "Charm Quark Mass with Calibrated Uncertainty" (26 Oct 2016).

This is almost as precise as you can get (or at least as precise as it is worth trying to get) until greater precision in measurements of α̂_s (i.e., the strong force coupling constant) is available. The paper correctly cites the PDG value for that constant, but omits the margin of error in that measurement, which is α̂_s(M_Z) = 0.1182(12). Any greater precision in the charm quark mass would provide only spurious accuracy.

How does this compare to existing global averages?
The c-quark mass corresponds to the "running" mass m_c(μ = m_c) in the MS-bar scheme. We have converted masses in other schemes to the MS-bar scheme using two-loop QCD perturbation theory with α_s(μ = m_c) = 0.38 ± 0.03. The value 1.27 ± 0.03 GeV for the MS-bar mass corresponds to 1.67 ± 0.07 GeV for the pole mass.
Thus, the new estimate is about four times as precise as the global average, as the quick comparison below shows.
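To make that concrete, here is the arithmetic, using the uncertainty quoted in the new paper and the PDG world-average uncertainty quoted above:

```latex
\hat m_c(\hat m_c) = 1272 \pm 8~\mathrm{MeV}
\quad\text{vs.}\quad
m_c(m_c) = 1.27 \pm 0.03~\mathrm{GeV} = 1270 \pm 30~\mathrm{MeV},
\qquad
\frac{30~\mathrm{MeV}}{8~\mathrm{MeV}} \approx 3.8 \approx 4.
```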

Extinct Tasmanian Tiger's Brain Analyzed

This is like something out of a Steampunk novel or Jurassic Park. Cool!
The last known Tasmanian tiger (Thylacinus cynocephalus) - aka the thylacine - died in 1936. Because its natural behavior was never documented, we are left to infer aspects of its behavior from museum specimens and unreliable historical recollections. Recent advances in brain imaging have made it possible to scan postmortem specimens of a wide range of animals, even more than a decade old. Any thylacine brain, however, would be more than 100 years old. 
Here, we show that it is possible to reconstruct white matter tracts in two thylacine brains. For functional interpretation, we compare to the white matter reconstructions of the brains of two Tasmanian devils (Sarcophilus harrisii). We reconstructed the cortical projection zones of the basal ganglia and major thalamic nuclei. The basal ganglia reconstruction showed a more modularized pattern in the cortex of the thylacine, while the devil cortex was dominated by the putamen. Similarly, the thalamic projections had a more orderly topography in the thylacine than the devil. These results are consistent with theories of brain evolution suggesting that larger brains are more modularized. Functionally, the thylacine's brain may have had relatively more cortex devoted to planning and decision-making, which would be consistent with a predatory ecological niche versus the scavenging niche of the devil.
Gregory S. Berns, Ken W.S. Ashwell, "Reconstruction of the Cortical Maps of the Tasmanian Tiger and Comparison to the Tasmanian Devil" (October 26, 2016).
doi: http://dx.doi.org/10.1101/083592

Inuit Body Fat Gene Derived From Denisovans

A new pre-print finds that a gene related to body fat (which is protective in cold temperatures) that enhances selective fitness in Arctic Inuits has origins in the Denisovan genome. The only prior instance of likely fitness-enhancing Denisovan introgression (of which I am aware) was of high altitude adaptation genes in Tibetans. Among other things, this suggests that non-Melanesians experienced Denisovan introgression in the area where the Denisovan remains were found, even if this ancestry was subsequently highly diluted.
A recent study conducted the first genome-wide scan for selection in Inuit from Greenland using SNP chip data. Here, we report that selection in the region with the second most extreme signal of positive selection in Greenlandic Inuit favored a deeply divergent haplotype that is closely related to the sequence in the Denisovan genome, and was likely introgressed from an archaic population. The region contains two genes, WARS2 and TBX15, and has previously been associated with adipose tissue differentiation and body-fat distribution in humans. We show that the adaptively introgressed allele has been under selection in a much larger geographic region than just Greenland. Furthermore, it is associated with changes in expression of WARS2 and TBX15 in multiple tissues including the adrenal gland and subcutaneous adipose tissue, and with regional DNA methylation changes in TBX15.
Fernando Racimo, et al., "Archaic adaptive introgression in TBX15/WARS2" (October 27, 2016).
doi: http://dx.doi.org/10.1101/033928

(Note that earlier versions of the article were released quite a few months ago.)

Thursday, October 27, 2016

mtDNA U3a1

A blog post by Maju calls attention to the interesting mtDNA clade, U3a1, an example of which turned up in ancient Basque mtDNA from ca. 1300 BCE, as part of a quite small sample of ancient mtDNA from Basque Country.

A page on mtDNA U3 from Family Tree DNA reveals this about the clade (emphasis added):
mtDNA haplogroup U3 is present in low percentages throughout Europe and Western Asia. It is an ancient haplogroup arising over 30,000 year ago from the very old haplogroup U. It rises to its greatest frequencies in the Near East and Southern Caucasus, that is the mountainous area of Western Iran, Georgia, Armenia, Azerbaijan, Turkey, Syria, Jordan and Iraq, where the percentages vary between 4 and 8 percent. Currently U3 can be divided into three subclades, U3a, U3b and U3c. The latter is a subclade of the original U3ac and split off from U3a 1000's of years ago. All three subclades occur in the above mentioned areas with the exception of Turkey and Armenia, where U3c appears to be absent and U3a is very rare, those countries being dominated by U3b. 
In Europe U3 is still common in Bulgaria and the eastern most islands of the Mediterranean Sea, Cyprus, Rhodes, Crete, where percentages rise to 3 or 4 percent, but becomes rarer and rarer as one moves west with one exception. Again Bulgaria, the Greek mainland and Etruscan Italy are dominated by U3b, whereas the Mediterranean Islands and the rest of Italy have all three subclades. 
The one exception is one sub-branch of subclade U3a called U3a1 which appears to have originated in Europe. At least at this point no instance of this clade has been observed in the Mid-East. This sub-branch dominates U3 in Western Europe especially along the Atlantic coast making up over 60% of U3 in these areas with frequencies rising to as high as 1% of the total population in Scotland and Wales and as high as 3 or 4% in Iceland. Also well over half of this sub-subclade is made up of one version of U3a1 called U3a1c, with a change at 16356, and which accounts for most of the distribution along the coastline from Norway to Northern Portugal. 
U3 also occurs along the North African coast which borders the Mediterranean. The subclade U3b dominates the eastern countries including Egypt and Ethiopia, whereas both U3a (in its older form found in the Near East) and U3b occur in the Berber occupied areas from Libya through Morocco, where again the percentage rises to as high as 1% of the total population. 
There are isolated pockets in the Near East where U3 occurs at a very high percentage of the population; U3 makes up 16% of the Adegei in the Northern Caucasus, about 18% of Iraqi Jews around Baghdad, 39% of Jordaneans of the Dead Sea Valley, 11% of the Qashqai in Southwest Iran (note these people speak a dialect closely related to Azerbaijani of the Caucasus) 17% in one study of Luri in the Western Zagros, 12% on the Greek island of Rhodes and also among the Romani (Gypsies) of Poland, Lithuania and Spain where percentages vary from nearly 40% to as high as 55%. 
Most of these groups show little variance with one or two sub-haplotypes dominating and appear to be due to founder effects and genetic drift in a small population. This is especially true of the Adegei and the Romani. Again the one exception is the Western Iranian mountains (Zagros) where both the Luri and the Qashqai show not only high percentages but also a high degree of variance including both U3a and U3b with U3c occurring nearby. . . .  
Although the distinction is largely made in the coding region, the subclades U3a, U3b and U3c can usually be distinguished in the HVR1 Region. U3a can almost always be recognised by the presence of 16390A (or simply "390A"). And U3c can almost always be recognised by the presence of 16193T and 16249C. Whereas U3b will have neither of these patterns. The common European subclade U3a1 can only be determined through a full test including the coding region, although U3a1c along the Atlantic coast can usually be recognised by a change at 16356C along with 16390A. Sometimes U3a1c will still be erroneously labeled as U4 by FTDNA and in some early studies, as 16356C is the defining marker of U4, but U3a1c can be distinguished from U4 by the presence of 16343G along with 16356C. 
It is interesting that wherever U3 can be observed both U3a and U3b is also generally observed in the same area although as pointed out above the ratios vary considerably. U3 can be observed as far east as Western Siberia and Southeast into Pakistan, Afghanistan and Northern India but in very low frequencies. U3c, an older version of U3ac, covers quite a surprising wide area given its extreme rarity. It can be observed from South of the Caspian Sea all the way over through the Mediterranean countries to the West Coast of Europe including Scotland. Currently it is believed that U3 entered Europe for the first time during the Neolithic movement of peoples 8 or 9 thousand years ago probably mainly through the Eastern Mediterranean but also up the Danube Valley from the Black sea. It appears to have been embedded within the larger population located somewhere in the mountainous areas surrounding the Levant from before 30,000 ybp and when these populations began to increase and spread about 18 or 20 thousand years ago there was left from the constant appearance of new clades and the going extinct of those already existing only 2 versions of U3: U3b and U3ac, which had been diverging from each other for 1000's of years. 
The distribution of mtDNA U3a1 shows a great deal of similarity with the distribution of mtDNA V, both of which are found in Berbers, Iberians and Scandinavians and generally track close to the Atlantic Coast. This is suggestive of a possible Mesolithic (or Epi-Paleolithic, if you will) period of expansion after the Last Glacial Maximum and before the Neolithic Revolution, as a minor component of a mix within a population that also included mtDNA V.

This migration would have been separate from the main, perhaps Neolithic, stream that crossed the Balkans and spread further into Italy with the Etruscans, which had both U3a and U3b.

The alternative hypothesis (and, of course, both could have happened, they aren't necessarily inconsistent) is that mtDNA U3a1 was spread during the Bell Beaker era which is a good fit to its high frequency in Scotland and Wales and Iceland.

There seems to be an implication, however, that the mtDNA U3a in Berbers is not U3a1, let alone U3a1c, which would imply that U3a1 could perhaps have been an Iberian mutation spread by Bell Beakers, with the remaining U3 clades perhaps tracing to the earlier Mesolithic era.

Of course, my tendency to associate a shared Berber and European presence with the Mesolithic era, driven in part by the high frequency of mtDNA V in the Saami of Finland, may not actually be justified.

After all, based on the phylogeny of the leading Berber Y-DNA clade and linguistic evidence, the ethnogenesis of the Berber language and people may date only to 3700 BCE, making it one of the youngest Afro-Asiatic languages, although the mtDNA clades in the Berber people would likely have older origins than its Y-DNA clade, because male-dominated conquests (as the Berber expansion seems to have been) tend to have much more modest impacts on mtDNA clades in the conquered population. Berber ethnogenesis is not much older than Bell Beaker ethnogenesis, so it could be that Bell Beaker influences still penetrated the Berber founding population before it had completed its expansion. If Berbers originated on the eastern side of the Sahara around the time that it was becoming arid due to the climate change event that brought it to its current state, and then migrated west, the Berbers of far northwest Africa may have arrived quite a few centuries after Berber ethnogenesis, although obviously this would not be true if Berber migration was instead from west to east.

In the bigger picture, based upon haplotype diversity, the Western Zagros mountains look like a likely place of origin for mtDNA U3. The distribution of U3a also suggests that it made its way to Iberia and beyond to the Atlantic Coast via a maritime route that stopped on Mediterranean islands and in Southern Italy.

The fact that the subclade associated with the Atlantic Coast is predominantly the very specific mtDNA U3a1c also suggests that this clade has a much shallower time depth than mtDNA U3a's split thousands of years ago.

Monday, October 24, 2016

New ATLAS top quark mass measurements

ATLAS has released new top quark mass measurements using the dilepton and all hadronic channels.

The measurement in the all hadronic channel is:
mtop = 173.80 ± 0.55(stat.) ± 1.01(syst.) GeV is measured, where the systematic uncertainty is dominated by the hadronization modelling (0.64 GeV) and the jet energy scale (0.60 GeV).
The measurement in the dilepton channel is:
mtop = 172.99 ± 0.41(stat.) ± 0.74(syst.) GeV is obtained, where the systematic uncertainty is dominated by the jet energy scale (0.54 GeV). 
This result is combined with the ATLAS top-quark mass measurements in the single-lepton and dilepton channels performed at √ s = 7 TeV [4] using the Best Linear Unbiased Estimate method [5]. The combined measurement gives a combined top-quark mass value of: mtop = 172.84 ± 0.34(stat.) ± 0.61(syst.) GeV. 
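As an aside on the mechanics, the sketch below illustrates how measurements like these are combined. The actual ATLAS combination uses the Best Linear Unbiased Estimate (BLUE) method cited in the excerpt, which also handles correlated systematic uncertainties and folds in the earlier 7 TeV measurements; the snippet simply combines the two new channel results by inverse-variance weighting, so it illustrates the idea rather than reproducing the quoted 172.84 GeV value.

```python
# Simplified, illustrative combination of the two new ATLAS channel
# measurements by inverse-variance weighting (correlations between
# systematic uncertainties are ignored, unlike in the real BLUE combination).
import math

def weighted_average(measurements):
    """measurements: list of (value, total_uncertainty) tuples, in GeV."""
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    value = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
    uncertainty = math.sqrt(1.0 / sum(weights))
    return value, uncertainty

# Total uncertainty per channel: stat. and syst. added in quadrature.
all_hadronic = (173.80, math.hypot(0.55, 1.01))
dilepton = (172.99, math.hypot(0.41, 0.74))

m_top, sigma = weighted_average([all_hadronic, dilepton])
print(f"Combined m_top = {m_top:.2f} +/- {sigma:.2f} GeV")
```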
A new CMS measurement of the top quark mass in a new channel with low precision was released in August.  A new CMS measurement of the top quark mass in an old, more precise channel was released in March.

Sunday, October 23, 2016

About Time Scale In The Standard Model

The linked video is a powerful illustration of the notion of different orders of magnitude of scale from the human scale on up.

It doesn't go the other direction and only looks at distance, however.

Since I am often guilty of lumping all small time intervals into the tiny "ephemeral" category, I'll do penance by touching on the remarkable orders of magnitude differences in decay rates in particle physics.

There are actually huge disparities of scale between the mean lifetimes of various fundamental particles and hadrons (27 orders of magnitude from the shortest lived to the longest lived unstable particle, to be exact). It is hard to get your head around numbers like that. It is particularly hard to do when humans have no ability to consciously distinguish any but the two or three longest time periods involved.

To help you do so, let's look at all of the mean lifetimes for fundamental particles and hadrons that have been measured experimentally, from longest to shortest (all data via Wikipedia), on a scale where the mean lifetime of a W boson (the shortest lived particle, tied with the Z boson) is arbitrarily set at 1 "W second," which can make this easier to understand (below the break).
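To show the kind of conversion involved (the full table is below the break), here is a minimal sketch that rescales a handful of approximate, order-of-magnitude mean lifetimes into "W seconds"; the values in the snippet are my own illustrative round numbers, not the post's table.

```python
# Express mean lifetimes in "W seconds", i.e. in units of the W boson's
# mean lifetime. Lifetimes below are approximate, order-of-magnitude values
# included only for illustration.
MEAN_LIFETIME_SECONDS = {
    "W boson": 3.2e-25,
    "top quark": 5.0e-25,
    "Higgs boson": 1.6e-22,
    "tau lepton": 2.9e-13,
    "charged pion": 2.6e-8,
    "muon": 2.2e-6,
    "free neutron": 8.8e2,
}

W_LIFETIME = MEAN_LIFETIME_SECONDS["W boson"]

for name, tau in sorted(MEAN_LIFETIME_SECONDS.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>13}: {tau / W_LIFETIME:10.3g} W seconds")
```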


Saturday, October 22, 2016

New Data Weakens Case For Dark Energy

This new result is highly unexpected, but seems credible. And, even a modest tweak to existing data could reduce the estimated proportions of "dark energy" substantially.
Five years ago, the Nobel Prize in Physics was awarded to three astronomers for their discovery, in the late 1990s, that the universe is expanding at an accelerating pace. 
Their conclusions were based on analysis of Type Ia supernovae - the spectacular thermonuclear explosion of dying stars - picked up by the Hubble space telescope and large ground-based telescopes. It led to the widespread acceptance of the idea that the universe is dominated by a mysterious substance named 'dark energy' that drives this accelerating expansion. 
Now, a team of scientists led by Professor Subir Sarkar of Oxford University's Department of Physics has cast doubt on this standard cosmological concept. Making use of a vastly increased data set - a catalogue of 740 Type Ia supernovae, more than ten times the original sample size - the researchers have found that the evidence for acceleration may be flimsier than previously thought, with the data being consistent with a constant rate of expansion. . . . 
'However, there now exists a much bigger database of supernovae on which to perform rigorous and detailed statistical analyses. We analysed the latest catalogue of 740 Type Ia supernovae - over ten times bigger than the original samples on which the discovery claim was based - and found that the evidence for accelerated expansion is, at most, what physicists call "3 sigma". This is far short of the "5 sigma" standard required to claim a discovery of fundamental significance. 
There is other data available that appears to support the idea of an accelerating universe, such as information on the cosmic microwave background - the faint afterglow of the Big Bang - from the Planck satellite. However, Professor Sarkar said: 'All of these tests are indirect, carried out in the framework of an assumed model, and the cosmic microwave background is not directly affected by dark energy. Actually, there is indeed a subtle effect, the late-integrated Sachs-Wolfe effect, but this has not been convincingly detected. 
'So it is quite possible that we are being misled and that the apparent manifestation of dark energy is a consequence of analysing the data in an oversimplified theoretical model - one that was in fact constructed in the 1930s, long before there was any real data. A more sophisticated theoretical framework accounting for the observation that the universe is not exactly homogeneous and that its matter content may not behave as an ideal gas - two key assumptions of standard cosmology - may well be able to account for all observations without requiring dark energy. Indeed, vacuum energy is something of which we have absolutely no understanding in fundamental theory.'
From here.

More precision measurements that can test the data independently are on the way in the near future to confirm or disfavor the dark energy concept.
The ‘standard’ model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present — as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these ‘standardisable candles’ indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion.
J.T. Nielsen, A. Guffanti and S. Sarkar, "Marginal evidence for cosmic acceleration from Type Ia supernovae" 6 Scientific Reports 35596 (October 21, 2016) (open access).
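For readers wondering what the "3 sigma" versus "5 sigma" language in the excerpt amounts to, here is a minimal sketch (using the standard one-sided Gaussian convention; an illustration only, not part of the Nielsen et al. analysis) converting a significance quoted in sigma into a tail probability.

```python
# Translate a significance in "sigma" into a one-sided Gaussian tail
# probability, to show why 3 sigma falls far short of the 5 sigma standard.
import math

def one_sided_p_value(n_sigma: float) -> float:
    """One-sided tail probability of a standard normal beyond n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (3, 5):
    print(f"{n} sigma -> p ~ {one_sided_p_value(n):.2e}")
# 3 sigma -> p ~ 1.35e-03 (about 1 in 740)
# 5 sigma -> p ~ 2.87e-07 (about 1 in 3.5 million)
```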

Thursday, October 20, 2016

Academic Linguists Cannot Agree On Whether Adverbs Exist

Notwithstanding the universal recognition of the categories by grammar textbooks, there is a dispute within the linguistic academy over whether, empirically, adverbs are really a separate kind of word, or if they are part of the same category as adjectives.

Who knew?

New Kind Of Galaxy Called "Ultradiffuse" Discovered

Like dwarf galaxies, these galaxies are 99% dark matter (or an equivalent modified gravity effect). But, while a dwarf galaxy is 1% of the mass of the Milky Way, the new kind of galaxy, called "ultradiffuse," has a mass comparable to that of the Milky Way.

Their detection was made possible by a compound-eye telescope using a special filter designed to prevent the light scattering that makes ultradiffuse galaxies hard to see with conventional telescopes.

This kind of astronomy isn't cheap. Parts alone for the prototype "Dragonfly" telescope array, with 48 lenses combined in a compound-eye arrangement, cost more than $576,000, in addition to countless hours of design and assembly. But the price tag is still cheap by the standards of Big Science for a new kind of instrument that can see something never before seen with any other kind of telescope, let alone a ground-based one.

A preprint of a paper describing one such discovery is as follows:
We report the discovery of three large (R29 >~ 1 arcminute) extremely low surface brightness (mu_(V,0) ~ 27.0) galaxies identified using our deep, wide-field imaging of the Virgo Cluster from the Burrell Schmidt telescope. Complementary data from the Next Generation Virgo Cluster Survey do not resolve red giant branch stars in these objects down to i=24, yielding a lower distance limit of 2.5 Mpc. At the Virgo distance, these objects have half-light radii 3-10 kpc and luminosities L_V=2-9x10^7 Lsun. These galaxies are comparable in size but lower in surface brightness than the large ultradiffuse LSB galaxies recently identified in the Coma cluster, and are located well within Virgo's virial radius; two are projected directly on the cluster core. One object appears to be a nucleated LSB in the process of being tidally stripped to form a new Virgo ultracompact dwarf galaxy. The others show no sign of tidal disruption, despite the fact that such objects should be most vulnerable to tidal destruction in the cluster environment. The relative proximity of Virgo makes these objects amenable to detailed studies of their structural properties and stellar populations. They thus provide an important new window onto the connection between cluster environment and galaxy evolution at the extremes.
Chris Mihos, Patrick R. Durrell, Laura Ferrarese, John J. Feldmeier, Patrick Côté, Eric W. Peng, Paul Harding, Chengze Liu, Stephen Gwyn, and Jean-Charles Cuillandre "Galaxies at the extremes: Ultra-diffuse galaxies in the Virgo Cluster" (July 24, 2015) (to appear in ApJ Letters).

Tuesday, October 18, 2016

The Genetics Of India From ASHG 2016

Archaeological excavations revealed artefacts used by homo Erectus as long as 500-200ky. The moistening at the end of the last glacial period brought expanded subsistence; drying then spread agriculture from 8-5kya, marking some of the earliest migrations and expansions. Around 5ky, the Indus Valley civilization began with the much matured Harappan civilization, whose de-urbanization led to the initiation of the Vedic period. Following this, displacements followed as foreign rulers established dominance in the Indian subcontinent: from Greeks and Scythians, to the first seeds of Muslim invasions, followed by the Mughal Empire. In this phase, India had diverse rulers (including Afghans, Turks, and Mongols). 
The migrations led to widespread admixture of the Indian population, influencing language, culture, caste endogamy, metallurgical technologies, and more, resulting in a complex and differentiated structure. We set out to explore modern genetics correlating with migration routes into the subcontinent, and to study genomic variation in 48,570 SNPs genotyped in 1484 individuals, across 104 population groups. We propose, COGG (Correlation Optimization of Genetics and Geodemographics), a novel optimization method to model genetic relationships with social factors such as castes, languages, occupation, and maximize the correlation with geography. We calculated the shared ancestry between different caste groups in the subcontinent with other reference populations from Eurasia, using a novel approach. We tested different migration theories into the subcontinent using a Linear Discriminant Analysis of redescription clusters and study recombination events shaping the gene pool. 
Our results demonstrate that COGG gives us significantly higher correlations, with p-values lower than 10-8. Identification of significant components among caste, language and genetics simplifies the complex structure. We identify varnas (Brahmins and Kshatriyas) to be closely related to reference Eurasian populations, whereas tribal groups show no shared ancestry with them and conclude that they resided in India before migration from Eurasia happened. We identify probable migration routes from Mongolia through Central Asia, and another via Anatolia into the subcontinent. Tibeto-Burman speaking populations share some ancestry with populations from East Asia; on the other hand, Austro-Asiatic speakers did not share ancestry with other Mon-Khmer language speaking populations.
A. Bose; D.E. Platt; L. Parida; P. Paschou; P. Drineas, "Genetic variation reveals migrations into the Indian subcontinent and its influence on the Indian society." (October 2016).

Much of this clarifies what has already been strongly suspected, and nothing here could outweigh the impact of whatever ancient DNA results we expect to see from South Asia in the near future. But the tribal populations of South Asia, while always believed to be indigenous according to conventional wisdom, had shown early but not definitive genetic indicators of more recent Eurasian origins followed by regression to a less advanced subsistence strategy. So, the finding that they show no shared ancestry with Eurasian reference populations is notable.

Davidski at Eurogenes has a low opinion of the paper and in particular its Anatolian origin hypothesis, but I'm content to wait and see in this case.

Monday, October 17, 2016

HPV Strain 16A Was Acquired Via Sex With Archaic Hominins

Every human suffers through life a number of papillomaviruses (PVs) infections, most of them asymptomatic. A notable exception are persistent infections by Human Papillomavirus 16 (HPV16), the most oncogenic infectious agent for humans and responsible for most infection-driven anogenital cancers. Oncogenic potential is not homogeneous among HPV16 lineages, and genetic variation within HPV16 exhibits some geographic structure. However, an in-depth analysis of the HPV16 evolutionary history is still wanting. 
We have analysed extant HPV16 diversity and compared the evolutionary and phylogeographical patterns of humans and of HPV16. We show that codivergence with modern humans explains at most 30% of the present viral geographical distribution. The most explanatory scenario suggests that ancestral HPV16 already infected ancestral human populations, and that viral lineages co-diverged with the hosts in parallel with the split between archaic Neanderthal-Denisovans and ancestral modern human populations, generating the ancestral HPV16A and HPV16BCD viral lineages, respectively. 
We propose that after out-of-Africa migration of modern human ancestors, sexual transmission between human populations introduced HPV16A into modern human ancestor populations. We hypothesise that differential coevolution of HPV16 lineages with different but closely related ancestral human populations and subsequent host-switch events in parallel with introgression of archaic alleles into the genomes of modern human ancestors may be largely responsible for the present-day differential prevalence and association with cancers for HPV16 variants.
Ville N. Pimenoff, Cristina Mendes de Oliveira, Ignacio G. Bravo. Transmission Between Archaic and Modern Human Ancestors During the Evolution of the Oncogenic Human Papillomavirus 16. Molecular Biology and Evolution (2016).

This scenario finally explains unsolved questions: why human diversity is largest in Africa, while HPV16 diversity is largest in East-Asia, and why the HPV16A variant is virtually absent in Sub-Saharan Africa while it is by far the most common one in the rest of the world. . . .  Since HPVs do not infect bones, current Neanderthal and Denisovan genomes do not contain HPVs. As a next step, the authors hope to trace HPVs sequences in ancient human skin remains as a more direct test of their hypothesis.
Analysis

While diseases originating in other species are hardly new, a solid link between archaic admixture and a specific sexually transmitted disease that modern humans received from archaic hominins is unprecedented.

It is also in accord with existing archaic DNA evidence showing some level of admixture between Neanderthals and Denisovans, which would have allowed whichever of the species harbored HPV16A (if they did not share it from a common ancestor) to bring it into the other species.

Also, given that archaic admixture took place ca. 50,000 to 75,000 years ago, while HPV16A is still killing tens of thousands of people, if not more, each year, it demonstrates that immune response and natural selection are not all powerful, particularly in the case of relatively low lethality infections that often strike after someone has already reproduced.

This development also rekindles curiosity regarding disease exchange following first contact with archaic hominins in general. Did modern human diseases ravage archaic hominins in the way that European first contact with Native Americans did? Did the reverse happen, or were both species seriously impacted by the new diseases that they respectively encountered?

Usually, we assume that modern humans either outcompeted or killed off archaic hominins, or that climate and the like had already established an archaic hominin bottleneck, but new diseases could have similar effects, in most cases through intraspecies transmission of the new diseases even before first contact.

UPDATE October 19, 2016: Vox covers the story with more panache: "Neanderthals may have given us their genital warts. Gee, thanks. To be fair, we may have given them diseases that ultimately led to their extinction." by Brian Resnick.

UPDATE (October 31, 2016): John Hawks has an interesting follow up observation:
There is, by the way, the interesting question of whether Neandertal immune variants might influence susceptibility to the strain in question, which has made little inroad into sub-Saharan Africa.

How, When And Where Did Agriculture Arise In China?


[M]illet cultivation was an auxiliary subsistence strategy in Northern China from 10,000 to 7000 BP; while hunting-gathering was the primary subsistence strategy, the earliest millet-cultivation might have emerged in eastern Inner Mongolia post 7700 BP. 
Millet cultivation transited from a secondary strategy to become dominant in the Guanzhong area of north-central China during 7000-6000 BP, and probably facilitated the development of early Yangshao culture in the middle reaches of the Yellow River valley. 
Intensive millet-based agriculture emerged and widely expanded across the Yellow River valley in northern China during 6000 to 4000 BP. This promoted rapid population growth and cultural evolution in the late Neolithic period, and was key in the subsequent emergence of the ancient Chinese civilization.
From here, according to studies of archaeobotanical evidence.

Note that the dates are BP and not BCE. This represents a much more sustained period of proto-farming than is commonly assumed. It would be interesting to know whether climate, the evolution of millet, or something else was more decisive in the transition from proto-farming to farming.

Apparent Dark Matter Halo Shape Still Tied To Visible Matter Distribution

We constrain the average halo ellipticity of ~2 600 galaxy groups from the Galaxy And Mass Assembly (GAMA) survey, using the weak gravitational lensing signal measured from the overlapping Kilo Degree Survey (KiDS). 
To do so, we quantify the azimuthal dependence of the stacked lensing signal around seven different proxies for the orientation of the dark matter distribution, as it is a priori unknown which one traces the orientation best. On small scales, the major axis of the brightest group/cluster member (BCG) provides the best proxy, leading to a clear detection of an anisotropic signal. 
In order to relate that to a halo ellipticity, we have to adopt a model density profile. We derive new expressions for the quadrupole moments of the shear field given an elliptical model surface mass density profile. Modeling the signal with an elliptical Navarro-Frenk-White (NFW) profile on scales < 250 kpc, which roughly corresponds to half the virial radius, and assuming that the BCG is perfectly aligned with the dark matter, we find an average halo ellipticity of e_h=0.38 +/- 0.12. This agrees well with results from cold-dark-matter-only simulations, which typically report values of e_h ~ 0.3. 
On larger scales, the lensing signal around the BCGs does not trace the dark matter distribution well, and the distribution of group satellites provides a better proxy for the halo's orientation instead, leading to a 3--4 sigma detection of a non-zero halo ellipticity at scales between 250 kpc and 750 kpc. 
Our results suggest that the distribution of stars enclosed within a certain radius forms a good proxy for the orientation of the dark matter within that radius, which has also been observed in hydrodynamical simulations.
Edo van Uitert, et al., "Halo ellipticity of GAMA galaxy groups from KiDS weak lensing" (October 13, 2016).

We already knew that the amount of apparent dark matter in a particular type of galaxy is closely tied to the mass of the galaxy. A very substantial sample now establishes that dark matter halo shapes also track the distribution of ordinary matter in galaxy groups, a result that actually favors modified gravity theories more than a dark matter interpretation.

On the other hand, the original formulation of MOND is not a fit to the data.

Another interesting new paper looks at the impact of proximity to a filament in the "cosmic web" as a factor that impacts galaxy development. It focuses on the critical issue of feedback between dark and ordinary matter, as does another paper which makes predictions about feedback in dwarf galaxies.

On another front entirely, a paper looks at the apparent periodicity of mass extinctions caused by cosmic impacts with Earth, finds a genuine relationship, and posits a cause for it.

Friday, October 14, 2016

Are Modified Gravity Theories Credible?

Question

I'm a statistician with a little training in physics and would just like to know the general consensus on a few things.

I'm reading a book by John Moffat which basically tries to state how GR makes failed predictions in certain situations. I know GR is extremely well tested, but I imagine all physicists are aware it doesn't always hold up.

The book tries to put forth modified theories of gravity that make do without the need of dark matter and dark energy to make GR match real world observations. (ie speed of galaxy rotations etc)

Are modified theories of gravity credible?

Is dark energy/matter the 'ether' of the 20th/21st century? Is it likely scientists are looking for something that simply doesn't exist and there are unknown fundamental forces at work? What's the best evidence for its existence other than observations based on the 'bullet' cluster?

(This is another abridged cross-post from Physics Stack Exchange).

Answer

Yes. Modified gravity theories are credible. While dark matter particle theories are one way to explain the phenomena attributed to dark matter, and are generally the more popular way to resolve these issues, there are deep, potentially intractable problems with the dark matter particle approach as well. The weight of the evidence has shifted as astronomers, particle physicists and theorists have provided us with more relevant evidence and with more ideas about how to solve the problem, even in the last few years in this very active area of ongoing research.

This is as it should be, because dark matter phenomena constitute the most striking case in existence today where the combination of general relativity and the Standard Model of Particle Physics simply cannot explain the empirical evidence without new physics of some kind.

1. Any viable dark matter theory has to be able to explain why the distribution of luminous matter in a galaxy predicts observed dark matter phenomena so tightly and with so little scatter in multiple respects, such as rotation curves and bulge sizes. These relationships persist even in cases where, in a non-gravitational theory, they should not naturally hold. For example, planetary nebulae distantly orbiting elliptical galaxies show the same dynamics that stars at the fringe of spiral galaxies do. Similarly, these relationships persist in gas rich galaxies and dwarf galaxies (which, as predicted, have about 0.2% ordinary matter if GR is correct in a universe that is overall 17% dark matter), despite the fact that they are beyond the scope of the data used to formulate the theories.

One of the more successful recent efforts to reproduce the baryonic Tully-Fisher relation with CDM models is L.V. Sales, et al., "The low-mass end of the baryonic Tully-Fisher relation" (February 5, 2016). It explains:
[T]he literature is littered with failed attempts to reproduce the Tully-Fisher relation in a cold dark matter-dominated universe. Direct galaxy formation simulations, for example, have for many years consistently produced galaxies so massive and compact that their rotation curves were steeply declining and, generally, a poor match to observation. Even semi-analytic models, where galaxy masses and sizes can be adjusted to match observation, have had difficulty reproducing the Tully-Fisher relation, typically predicting velocities at given mass that are significantly higher than observed unless somewhat arbitrary adjustments are made to the response of the dark halo.
However, the paper manages to simulate the Tully-Fisher relation only with a model that has sixteen parameters carefully "calibrated to match the observed galaxy stellar mass function and the sizes of galaxies at z = 0" and "chosen to resemble the surroundings of the Local Group of Galaxies", and it still struggles to reproduce the one parameter fits of the MOND toy-model from three decades ago. Any data set can be described by almost any model so long as it has enough adjustable parameters.

Much of the improvement over prior models has come from efforts to incorporate feedback between baryonic and dark matter into the models, but this has generally been done in a manner that is more ad hoc than firmly rooted in rigorous theory or empirical observations of the feedback processes in action.

One of the more intractable problems with simulations based upon a dark matter particle model, pointed out, for example, in Alyson M. Brooks and Charlotte R. Christensen, "Bulge Formation via Mergers in Cosmological Simulations" (12 Nov 2015), is that their galaxy and mass assembly models dramatically understate the proportion of spiral galaxies in the real world that are bulgeless. This is an inherent difficulty with the process by which dark matter and baryonic matter proportions become correlated in dark matter particle models, and it is not a problem for modified gravity models. They note that:
[W]e also demonstrate that it is very difficult for current stellar feedback models to reproduce the small bulges observed in more massive disk galaxies like the Milky Way. We argue that feedback models need to be improved, or an additional source of feedback such as AGN is necessary to generate the required outflows.
General relativity doesn't naturally supply such a feedback mechanism.

2. The fact that it is possible to explain pretty much all galactic rotation curves with a single parameter implies that any dark matter theory also can't be too complex, because otherwise it would take more parameters to fit the data. The relationships that modified gravity theories identify are real, whether or not the proposed mechanism giving rise to them is. A dark matter theory shouldn't need more degrees of freedom than a toy model theory that can explain the same data, because the number of degrees of freedom it takes to describe a data set is insensitive to the underlying nature of the correct theory.
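To make the single-parameter point concrete, here is a minimal numerical sketch in Python (my own illustration, not drawn from any of the papers discussed) of how a MOND-style toy model turns a baryonic mass into a roughly flat rotation curve using only the acceleration scale a0 ≈ 1.2⋅10^-10 m/s^2. The "simple" interpolating function and the point-mass approximation are assumed choices made purely for illustration:

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10       # MOND acceleration scale in m/s^2 -- the single fitted parameter
M_SUN = 1.989e30   # solar mass in kg
KPC = 3.086e19     # one kiloparsec in meters

def newtonian_g(m_baryons, r):
    """Newtonian acceleration from the enclosed baryonic mass (point-mass approximation)."""
    return G * m_baryons / r**2

def mond_g(g_newton, a0=A0):
    """Observed acceleration for the 'simple' interpolating function mu(x) = x/(1+x),
    solved in closed form from mu(g/a0) * g = g_newton."""
    return 0.5 * (g_newton + np.sqrt(g_newton**2 + 4.0 * g_newton * a0))

# Toy galaxy: 5e10 solar masses of baryons, sampled out to 50 kpc.
m_baryons = 5e10 * M_SUN
r = np.linspace(2, 50, 25) * KPC
v_kms = np.sqrt(mond_g(newtonian_g(m_baryons, r)) * r) / 1e3

print(f"rotation speed at 50 kpc: {v_kms[-1]:.0f} km/s")
# Deep-MOND limit: a flat curve with v^4 = G * M_baryons * a0 (the baryonic Tully-Fisher relation).
print(f"Tully-Fisher asymptote:   {(G * m_baryons * A0) ** 0.25 / 1e3:.0f} km/s")
```

The point of the sketch is simply that one fixed acceleration scale reproduces both the flat outer curve and the Tully-Fisher scaling, which is the benchmark any many-parameter dark matter simulation has to match.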

Also, while I don't have references to them easily at hand at the moment, early dark matter simulations quickly revealed that models with one primary kind of dark matter fit the data much better than those with multiple kinds of dark matter that significantly contribute to these phenomena.

This simplicity requirement greatly narrows the class of dark matter candidates that need to be considered, and hence, the number of viable dark matter particle theories that a modified gravity theory must compete with in a credibility contest.

3. There are fairly tight constraints from astronomy observations on the parameter space of dark matter. Alyson Brooks, "Re-Examining Astrophysical Constraints on the Dark Matter Model" (July 28, 2014). These rule out pretty much all cold dark matter models except "warm dark matter" (WDM) (at a keV scale mass that is at the bottom of the range permitted by the lambdaCDM model) and "self-interacting dark matter" (SIDM) (which escapes the problems that otherwise plague cold dark matter models by positing a fifth force that acts only between dark matter particles, requiring at least one beyond-the-Standard-Model fermion and a beyond-the-Standard-Model force carried by a new massive boson with a mass on the order of 1-100 MeV).

4. Direct detection experiments (especially LUX) rule out any dark matter candidates that interact via any of the three Standard Model forces (including the weak force) at masses down to 1 GeV (also here).

5. Another blow is the non-detection of annihilation and decay signatures. Promising data from the Fermi satellite's observation of the galactic center have now been largely ruled out as dark matter signatures in Samuel K. Lee, Mariangela Lisanti, Benjamin R. Safdi, Tracy R. Slatyer, and Wei Xue. "Evidence for unresolved gamma-ray point sources in the Inner Galaxy." Phys. Rev. Lett. (February 3, 2016). And, signs of what looked like a signal of warm dark matter annihilation have likewise proved to be a false alarm.

6. The CMS experiment at the LHC rules out a significant class of low mass WIMP dark matter candidates, while other LHC results exclude essentially all possible supersymmetric candidates for dark matter. If SUSY particles exist, they would be too heavy to constitute warm dark matter (the LHC excludes almost all types of SUSY particles at masses up to about 40 GeV, far above the keV scale that WDM requires), and they would also lack the right kind of self-interaction force within a SUSY context to be a SIDM candidate. This has particularly broad implications because SUSY is the low energy effective theory of almost all popular GUT theories and viable string theory vacua.

7. While MOND requires dark matter in galactic clusters, including the particularly challenging case of the bullet cluster, this defect is not shared by all modified gravity theories (see, e.g., here and here). Many of the theories that can successfully explain the bullet cluster are able to do so mostly because the collision can be decomposed into gas and galaxy components that have independent effects from each other under the theories in question. The bullet cluster is also one of the main constraints on SIDM parameter space (a theory which itself effectively modifies gravity, but only in the dark sector, limiting those modifications to dark matter particles), and it is tough to square with many dark matter particle theories.

8. It is possible in a modified gravity theory, but very challenging in a dark matter particle theory, to explain why the mass-to-luminosity ratio of elliptical galaxies varies systematically, by a factor of four, based upon the degree to which they are spherical.

9. Many of the modified gravity proposals that are mature enough to have had their fits to cosmological data scrutinized can meet that test as well. See, e.g., here.

10. In short, while a dark matter hypothesis alone can explain the apparently missing matter in any given situation, to get a descriptive theory you need to be able to describe the highly specific manner in which dark matter is distributed in the universe relative to baryonic matter, ideally in a way that predicts new phenomena rather than merely post-dicting already observed results that went into the formulation of the model.

Modified gravity theories have repeatedly been predictive, while dark matter theories have still not figured out how to distribute it properly throughout the universe without "cheating" in how the models testing them are set up, and have failed to make any correct predictions of new phenomena below the cosmic microwave background radiation scale of cosmology.

Conclusion

To be clear, I am not asserting that modified gravity is indeed the correct explanation of all or any of the phenomena attributed to dark matter, nor am I asserting that any of the modified gravity theories currently in wide circulation are actually correct descriptions of Nature.

But, the examples of modified gravity theories that we do have are sufficient to make clear that some kind of modified gravity theory is a credible possible solution to the problem of dark matter phenomena.

It is also a more credible solution than it used to be because the case for the most popular dark matter particle theories has grown steadily less compelling as various kinds of dark matter candidates have been ruled out and as more data has narrowed the parameter space available for the dark matter candidates. The "WIMP miracle" that motivated a lot of early dark matter proposals is dead.

While this post doesn't comprehensively review all possible dark matter candidates and affirmatively rule them out (which is beyond the scope of the question), it does make clear that none of the easy solutions that had been widely expected to work out in the 20th century have survived the test of time into 2016. Over the past six years or so, only a few viable dark matter particle theories have survived, while myriad new modified gravity theories have been developed and not been ruled out.

Thursday, October 13, 2016

Why Aren't Atomic Nuclei Quark Gluon Plasmas?

Question:

The standard picture of the nucleus of an atom is that it consists of several distinct nucleons, which themselves are composed of quarks. However, it seems like a much simpler picture is that the nucleus is directly made out of quarks, without having nucleons as substructures. That is, that the nucleus is the ground state of a quark-gluon plasma. Why do atomic nuclei (other than hydrogen-1, which is just a single proton) have nucleons as substructures?

(This is an abridged and reformatted cross-post version of an answer I posted at the Physics Stack Exchange).

Answer:

This is a consequence of the part of the Standard Model of Particle Physics called quantum chromodynamics (QCD), which governs how quarks and gluons interact.

Confinement and its exceptions

One of the core principles of QCD is confinement, which means that the strong force between quarks, mediated by gluons, is so pervasive that none of the five kinds of quarks other than the top quark (top quarks decay so quickly that they don't have time to form composite structures, a process called hadronization) are ever observed in nature outside of a hadron (a composite particle made of quarks bound by gluons) - generally either three quarks (one of each of the three QCD color charges) in a baryon, or a quark and an antiquark bound in a meson. Baryons are fermions with half-integer spin (N + 1/2 for some integer N), which means that they behave like ordinary matter (i.e., to oversimplify, you can't have two of them in the same place at the same time), while mesons are bosons with integer spin N (i.e., to oversimplify, more than one can be in the same place at the same time).
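As a small aside on the spin bookkeeping behind that fermion/boson distinction (my own sketch, not part of the original answer), adding the spins of three spin-1/2 quarks can only produce half-integer totals, while a quark plus an antiquark can only produce integer totals:

```python
from fractions import Fraction
from itertools import product

def summed_spin_projections(n_constituents):
    """All possible magnitudes of the summed spin projections of n spin-1/2 constituents.
    For ground-state hadrons (no orbital angular momentum) these match the allowed total spins."""
    half = Fraction(1, 2)
    return sorted({abs(sum(combo)) for combo in product((half, -half), repeat=n_constituents)})

print("three quarks (baryon):    ", [str(s) for s in summed_spin_projections(3)])  # ['1/2', '3/2'] -> fermion
print("quark + antiquark (meson):", [str(s) for s in summed_spin_projections(2)])  # ['0', '1'] -> boson
```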

In principle, QCD allows four or more quark composite particles (tetraquarks, pentaquarks, etc.) and a handful have been observed, but in practice, the four or more quark composite particles are extremely unstable and hard to create in the first place, so even in the rarified environment of a particle collider you see almost entirely mesons and baryons.

A quark-gluon plasma can "fudge" the otherwise rather automatic and complete sorting of quarks into confined hadrons only at extremely high energies because the strong force powerfully constrains them to stay within a particular hadron. So, in the "infrared" environment that we encounter in daily life or even in quite high energy applications, the temperature isn't enough to overcome the strong force's tendency to bind quarks into hadrons.

How hot does it have to be?

The cross-over temperature is about 2⋅10^12 K, which corresponds to an energy density of a little less than 1 GeV/fm^3 (i.e. the temperature has to contribute kinetic energy density comparable in amount to the energy density of the gluons in a proton or neutron to overcome their clutches). This is more than one hundred thousand times as hot as the hottest it gets anywhere inside the Sun (the solar core is about 1.5⋅10^7 K).
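As a quick sanity check on those scales (my own arithmetic, not from the original answer), converting the crossover temperature into energy units with Boltzmann's constant lands right around the ~150-170 MeV figure usually quoted from lattice QCD, and shows just how far above stellar temperatures it sits:

```python
K_B_EV = 8.617e-5     # Boltzmann constant in eV per kelvin
T_CROSSOVER = 2e12    # K, approximate quark-gluon plasma crossover temperature
T_SUN_CORE = 1.5e7    # K, roughly the temperature at the center of the Sun

kt_mev = K_B_EV * T_CROSSOVER / 1e6
print(f"k_B * T at the crossover: ~{kt_mev:.0f} MeV")                          # ~170 MeV
print(f"crossover / solar core temperature: ~{T_CROSSOVER / T_SUN_CORE:.1e}")  # ~1e5
```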

How hard is it to get something that energetic?

The first time humans were able to artificially create those energy densities was in 2015 at the Large Hadron Collider (although non-definitive hints that we might have done it were seen at other colliders as early as 2005). Nothing in the solar system had ever been that hot previously at any time in the four or five billion years since it came into existence.

What does this imply?

Now, it also turns out that of the hundreds of possible baryons and mesons, no meson has a mean lifetime of more than about a ten millionth of a second, and only two kinds of baryons have mean lifetimes of more than about ten billionths of a second. The proton (which is made up of two up quarks and one down quark bound by gluons) does not decay, and the neutron (which is made up of one up quark and two down quarks) has a mean lifetime of about 15 minutes if free (and can potentially be stable in the right kind of atomic nucleus mostly due to conservation of mass-energy considerations in a bound nucleus).
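To put rough numbers behind that claim, here is a short sketch using approximate textbook lifetimes for a few familiar hadrons (round values of my own choosing, and far from an exhaustive list):

```python
# Approximate mean lifetimes in seconds for a few well-known hadrons (rounded values).
baryon_lifetimes = {
    "proton": float("inf"),      # no decay has ever been observed
    "free neutron": 880.0,       # roughly 15 minutes
    "Lambda": 2.6e-10,
    "Sigma+": 8.0e-11,
}
meson_lifetimes = {
    "charged pion": 2.6e-8,
    "long-lived neutral kaon": 5.1e-8,
    "neutral pion": 8.5e-17,
}

# Only the two nucleons last longer than about ten billionths of a second...
print([b for b, tau in baryon_lifetimes.items() if tau > 1e-8])   # ['proton', 'free neutron']
# ...and even the longest-lived meson falls short of a ten millionth of a second.
print([m for m, tau in meson_lifetimes.items() if tau > 1e-7])    # []
```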

So, the theoretical reason that atomic nuclei are not quark-gluon plasmas is that: (1)(a) at cool enough temperatures and (1)(b) given the tiny fraction of a second necessary for unstable hadrons to decay, (2) their constituent quarks are forced into hadrons and (3) the hadrons decay until they consist only of stable protons and neutrons (collectively, nucleons).

A residual effect of the strong force (which is mediated mostly by a kind of meson called a pion which is exchanged between nucleons) binds the nucleons together into an atomic nucleus, but far less tightly to each other than the quarks within the nucleons are bound by the strong force.

Now, that binding force itself is nothing to sniff at - it is the source of all of the energy released by nuclear fusion in an H bomb or a star like the Sun, and by nuclear fission in an A bomb or nuclear reactor. But the strong force binding quarks inside protons and neutrons is much stronger, which is why it takes such extreme conditions to overcome.
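A rough comparison (again, my own round numbers rather than anything from the original answer) makes that hierarchy concrete: nuclear binding amounts to a few MeV per nucleon, while nearly all of a nucleon's ~938 MeV mass is QCD binding energy rather than quark rest mass:

```python
# Rough, round numbers for illustration only.
PROTON_MASS_MEV = 938.3
QUARK_REST_MASS_MEV = 2 * 2.2 + 4.7     # two up quarks plus one down quark, roughly 9 MeV
NUCLEAR_BINDING_PER_NUCLEON_MEV = 8.0   # typical binding energy per nucleon in a mid-sized nucleus

qcd_binding_mev = PROTON_MASS_MEV - QUARK_REST_MASS_MEV   # almost all of the proton's mass
print(f"QCD binding energy inside a nucleon: ~{qcd_binding_mev:.0f} MeV")
print(f"nuclear binding energy per nucleon:  ~{NUCLEAR_BINDING_PER_NUCLEON_MEV:.0f} MeV")
print(f"ratio: ~{qcd_binding_mev / NUCLEAR_BINDING_PER_NUCLEON_MEV:.0f} to 1")
```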

Caveat

There are multiple, progressively more complex ways to describe what is going on in a hadron. In addition to the "valence quarks" bound by exchanged gluons that I have described above, there is also a "sea" of quark-antiquark pairs that cancels out for many purposes but can't be ignored if you want to smash protons together at high speeds or to calculate the mass of a hadron from first principles. Thus, for example, you could smash protons only to find that strange or charm or bottom quarks fall out if the collisions have enough energy, even though none of the valence quarks of a proton are of those types. But that level of complexity isn't necessary to understand, theoretically, based on QCD, why the quarks in an atomic nucleus are subdivided into protons and neutrons.

Post script

There are a couple of things that make this question interesting.

First, it demonstrates how, for all its complexity, basic qualitative features of QCD and properties of hadrons suffice both to explain the basic internal structure of atomic nuclei and to render the vast majority of QCD irrelevant in the vast majority of circumstances, despite the subtle and complex edifice it builds from just a handful of basic pieces and rules. This does a good job of putting QCD in the proper context and perspective.

In particular, it demonstrates that some aspects of QCD are primarily relevant to circumstances that prevailed in Nature only immediately following the Big Bang, and hence illustrates the surprisingly close connection between QCD and cosmology.

Second, the fact that the temperature at which the quark-gluon plasma overcomes the usual rule of confinement corresponds to the point at which kinetic energy exceeds the gluon field energy of a hadron is elegant. It also clarifies the somewhat misleading dogma that quarks are always confined, when in fact there are top quark and quark-gluon plasma exceptions to that rule.

Wednesday, October 12, 2016

Melanesians Conquered Tonga and Vanuatu

Razib has the straight and scientific account of a recent ancient DNA study which shows that the early Austronesian settlements in Oceania were made by people of unmixed Taiwanese aboriginal ancestry who were then conquered by Melanesian/Papuan men, resulting in the roughly 25% Melanesian admixture found in Polynesians today.

This ancient DNA result was contrary to widespread expectations that the admixture happened before Oceania was settled, and in a manner less unfavorable to the Lapita people, who were the first wave of Austronesians in the region and widely considered more "advanced" than the Papuans by anthropologists and historians, even if they wouldn't necessarily admit to that in those terms.

But, in this case, a curmudgeonly summary tells the story more quickly and effectively:
Polynesians are mostly descended from a population on Taiwan, represented today by Taiwanese aboriginals, and from a Melanesian population similar to New Guinea or the Solomon Islands. They’re about 25% Melanesian autosomally, 6% Melanesian in mtDNA, 65% Melanesian in Y-chromosomes. . . .
Now they’ve looked at ancient DNA from Tonga and Vanuatu. The old samples don’t have any noticeable amount of Melanesian ancestry. So it was like this: the Lapita derived from Taiwan (thru the Philippines), settled Vanuatu and Tonga – then were conquered by some set of Melanesian men, who killed most of the local men and scooped up the women. Probably their sons extended the process, which resulted in a lower percentage of Melanesian ancestry while keeping the Y-chromosomes mostly Melanesian. 
After this conquest, the Polynesians expanded further east, and those later settlements (Tahiti, Marquesas, Hawaii, etc) all had that ~25% Melanesian component.
From here

A rough look at the numbers fits that scenario, with perhaps 44% of the men and 6% of the women in the first generation of conquest being Melanesian; the remaining roughly 21% of the Melanesian Y-DNA share (just about half of the original percentage of Melanesian men) would then come from later generations of those men's descendants extending the process.
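For what it's worth, here is the back-of-the-envelope arithmetic behind those percentages (my own sketch; the 44% and 6% founding fractions are the assumed inputs, and the observed values come from the quoted summary above):

```python
# Assumed founding fractions from the rough scenario above.
melanesian_men = 0.44     # share of first-generation fathers who were Melanesian
melanesian_women = 0.06   # share of first-generation mothers who were Melanesian

autosomal = (melanesian_men + melanesian_women) / 2   # each parent contributes half the autosomes
mtdna = melanesian_women                              # mtDNA follows the maternal line only
y_first_generation = melanesian_men                   # Y-DNA follows the paternal line only

print(f"autosomal Melanesian fraction: ~{autosomal:.0%}")   # ~25%, matching the observed value
print(f"mtDNA Melanesian fraction:     ~{mtdna:.0%}")       # ~6%, matching the observed value
# Observed Y-DNA is ~65%; the extra ~21 points over the founding 44% would have to come
# from later generations in which sons of Melanesian fathers kept extending the process.
print(f"Y-DNA share left to explain:   ~{0.65 - y_first_generation:.0%}")
```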

Increasingly, this is looking like an all too common scenario in both the Bronze and Iron Ages. In contrast, the Neolithic era appeared to involve mostly whole families or tribes migrating together as a colonial unit, rather than as a conquering group mostly made up of men. For example, ancient DNA from the Bronze Age conquests by Indo-Europeans of Eastern and Central Europe tells much the same story.

Given the dates of the ancient DNA samples, and the dates that subsequent Polynesian expansions occurred to particular islands, the timing of the Melanesian conquest ought to be possible to bracket to a reasonably narrow range of dates. The ancient DNA in Vanuatu was from ca. 1100 BCE to 700 BCE. The ancient DNA from Tonga was from about 700 BCE to 300 BCE. Razib notes: "Looking at the distribution of Melanesian ancestry they concluded this admixture occurred on the order of ~1,500 years before the present (their intervals were wide, but the ancient samples serve as a boundary)." In other words, around 500 CE, around the same time that, halfway around the world, the Roman Empire was falling.

In defense of those who saw the Lapita as more advanced than the Melanesians, however, it is notable that this is one of the rare instances when the language of the conquered people (the Austronesians) was adopted by the conquerors, rather than the other way around. The only other example of this kind of reverse language shift that comes easily to mind is the adoption of Greek by the Romans in the Eastern Roman Empire after the Romans conquered the Greeks.

It could also be that the Melanesian men's language shift to Austronesian languages took place before they arrived in Tonga and Vanuatu, by which time these men had also mastered the Austronesian maritime technology that made their arrival there possible.