Monday, February 29, 2016

Newcomers Transformed Madagascar From Jungle To Grassland 1000 Years Ago

[A]round 1,000 years ago, both stalagmites' calcium carbonate composition shifted suddenly and completely, from carbon isotope ratios typical of trees and shrubs, to those more consistent with grassland, within just 100 years.
From here.

The paper is Stephen J. Burns, et al., "Rapid human-induced landscape transformation in Madagascar at the end of the first millennium of the Common Era," Quaternary Science Reviews (2016).

UPDATE March 2, 2016: The abstract to the paper reveals that the dates are actually a bit earlier than 1,000 years ago, which helps to reconcile this new data point with the archaeological record, which favors a date somewhat earlier than 1000 CE for Austronesian arrival in Madagascar. The abstract states:
The environmental impact of the early human inhabitants of Madagascar remains heavily debated. We present results from a study using two stalagmites collected from Anjohibe Cave in northwestern Madagascar to investigate the paleoecology and paleoclimate of northwestern Madagascar over the past 1800 years. Carbon stable isotopic data indicate a rapid, complete transformation from a flora dominated by C3 plants to a C4 grassland system. This transformation is well replicated in both stalagmites, occurred at 890 CE and was completed within one century. We infer that the change was the result of a dramatic increase in the use of fire to promote the growth of grass for cattle fodder. Further, stalagmite oxygen isotope ratios show no significant variation across the carbon isotope excursion, demonstrating that the landscape transformation was not related to changes in precipitation. Our study illustrates the profound impact early inhabitants had on the environment, and implies that forest loss was one trigger of megafaunal extinction.
It is also plausible that the use of fire to convert forest to grassland would not have begun immediately upon their arrival. The newcomers would have used the modest amount of existing grassland on the island at first, and would have cleared more land only after their growing herds strained the carrying capacity of the existing grasslands.

Friday, February 26, 2016

Progress Made In Search For Direct Evidence Of Planet 9

French researchers have pinned down the most likely location for a hypothetical Planet 9, identified significant stretches of the proposed orbit that are ruled out, and left a lot of places on the proposed orbit that are neither ruled out nor favored. Somebody still has to point a high powered telescope at the right place at the right time, but the latest study has greatly prioritized the places such a telescope should look.

Australian Y-DNA C Link To India Was A False Alarm

Europeans were not the first non-Australians to come into contact with Australian Aborigines after the continent was first reached by modern humans ca. 50,000 years ago.  We know this because the dingo (a type of dog related to Southeast Asian dogs) appeared there ca. 4,000 years ago and led to a secondary mass extinction event in Australia.  Most likely, the dingo arrived via trade with Austronesian mariners. There is also some evidence of South Asian autosomal DNA in some aboriginal Australians, which could date to contact in this time period.

But a link between aboriginal Australians who have Y-DNA C and South Asians with Y-DNA C, originally estimated to suggest a recent common ancestor ca. 5,000 years ago, around the same time as the arrival of the dingo, has upon re-examination been determined to be a false alarm. In fact, the most recent common ancestor of aboriginal Australian Y-DNA C and South Asian Y-DNA C dates to closer to 50,000 years ago, in accordance with a no recent contact hypothesis.

Cultural Evolution Overwhelmingly Prioritizes Social Prestige

Models of cultural evolution can accurately describe processes such as language shift and religion shift among people who have a choice (not necessarily an unfettered free one, however) about adopting a new language or religion. A generalization of those models finds that social prestige, defined as the extent to which one's cultural trait has been picked up by others, is pretty much the exclusive component of selective fitness in cultural evolution.
This paper seeks answers to two questions. First, if a greater social activity of an individual enhances oblique (i.e. to non-relatives) transmission of her cultural traits at the expense of vertical (i.e. to children) transmission as well as family size, which behavior is optimal from cultural evolution standpoint? I formalize a general model that characterizes evolutionarily stable social activity. The proposed model replicates the theory of Newson et al. (2007) that fertility decline is caused by increasing role of oblique cultural transmission. Second, if social activity is a rational choice rather than a culturally inherited trait, and if cultural transmission acts on preferences rather than behaviors, which preferences survive the process of cultural evolution? I arrive at a very simple yet powerful result: under mild assumptions on model structure, only preferences which emphasize exclusively the concern for social prestige, i.e. extent to which one’s cultural trait has been picked up by others, survive.
Roman Zakharenko, "Nothing else matters: Evolution of preference for social prestige", Mathematical Social Sciences (February 23, 2016).

This broad theoretical result has applicability both to the modern evolution of culture and to the cultural, linguistic and religious shifts experienced in historic and prehistoric times.
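A toy illustration of the prestige idea (mine, not the paper's formal model): if carriers of a newly introduced cultural trait are assumed to be more socially active, so that others copy their trait more often via oblique transmission, the trait can spread rapidly even from a small minority. A minimal sketch, with all numbers made up:

```python
import random

def cultural_shift(pop_size=1000, adopters=50, generations=40, prestige=2.0, seed=1):
    """Toy model of prestige-biased oblique transmission (illustrative only).
    Each generation, every agent copies the trait of a 'model' individual
    chosen with probability proportional to that individual's weight, so a
    trait carried by more socially visible individuals spreads faster."""
    random.seed(seed)
    # Trait 'B' (say, a newly introduced language or religion) starts as a
    # minority, but its carriers are assumed to be more socially active.
    population = ['B'] * adopters + ['A'] * (pop_size - adopters)
    history = []
    for _ in range(generations):
        weights = [prestige if trait == 'B' else 1.0 for trait in population]
        population = random.choices(population, weights=weights, k=pop_size)
        history.append(population.count('B') / pop_size)
    return history

print([round(f, 2) for f in cultural_shift()])  # frequency of trait 'B' rising toward fixation
```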

Friday, February 19, 2016

General Relativity Has Pathological Features In Five Or More Dimensions

General relativity, the best description of gravity that we have right now and about 100 years old, is usually formulated in four dimensions (three dimensions in space and one in time), but it is straightforward to consider its equations in more than four, or fewer than four, dimensions.

Real life, of course, can't have fewer dimensions than we observe, so a description of reality in terms of general relativity with one, two or three dimensions cannot be a physically correct description of reality.

A new theoretical analysis of general relativity in five or more dimensions, however, reveals that every version of the theory with five or more dimensions allows for mathematical pathologies called "naked singularities." These pathologies turn up only in rather improbable distributions of matter that might be vanishingly rare in nature even if it had more than four dimensions. And these pathologies also might not be present in a quantum gravity theory with five or more dimensions that reduces to general relativity in the classical limit in most circumstances.

But, this finding still buttresses the conclusion that there is something special about a four dimensional universe that may make it the only possibility that can provide a physically correct description of reality, and disfavors beyond the Standard Model theories in which gravity is united with the other Standard Model forces (and allowed to be as weak as it is) in a space-time with more than the usual four dimensions (with five, ten and eleven dimensional options being particularly popular alternatives).

Thursday, February 18, 2016

Archaic Hominin Genetics Galore

tl;dr: The first modern humans left Africa at least 100,000 years ago, and modern humans have admixed with at least three archaic hominin species since then (once in Africa, in the ancestors of the Pygmies; one or more times with Neanderthals, affecting all populations that reached Eurasia in prehistoric times; and once with Denisovans in Asia, with most traces of that admixture found on the far side of the Wallace line).

It remains unclear why many genetics-based estimates of the common ancestors of non-African modern humans, which tend to show dates almost half as old, are discrepant with evidence from archaeology and from recent Altai Neanderthal DNA analysis.

* All modern humans cluster together as a clade separate from other species of Great Apes, and various clades of Great Apes have comparable levels of Y-DNA and mtDNA diversity.  All archaic hominins, of course, would also cluster with modern humans in the same clade if they were included (which they were not in the linked study discussion).

* Modern human genetic introgression into the ancestors of the Altai Neanderthals (but not the ancestors of the European Neanderthals or Denisovans for whom we have ancient DNA) took place ca. 100,000 years ago. As the abstract of the paper explains:
It has been shown that Neanderthals contributed genetically to modern humans outside Africa 47,000–65,000 years ago. Here we analyse the genomes of a Neanderthal and a Denisovan from the Altai Mountains in Siberia together with the sequences of chromosome 21 of two Neanderthals from Spain and Croatia. We find that a population that diverged early from other modern humans in Africa contributed genetically to the ancestors of Neanderthals from the Altai Mountains roughly 100,000 years ago. By contrast, we do not detect such a genetic contribution in the Denisovan or the two European Neanderthals. We conclude that in addition to later interbreeding events, the ancestors of Neanderthals from the Altai Mountains and early modern humans met and interbred, possibly in the Near East, many thousands of years earlier than previously thought.
Kuhlwilm, et al., "Ancient gene flow from early modern humans into Eastern Neanderthals", Nature (2016).

This is the first direct evidence that Neanderthal-modern human admixture was a two way street. I am among those who think that the transitional Châtelperronian archaeological culture of Neanderthal communities is indirect evidence of the intellectual and cultural influence of hybrid individuals in Neanderthal tribes.

New archaeological evidence from Southeast Asia, Arabia and India has for several years added to evidence from the Levant (the Skhul and Qafzeh hominids) indicating that the initial modern human "Out of Africa" migration took place more than 100,000 years ago, that it shows similarities to modern human archaeological cultures of Sudan, that it reached India prior to 75,000 years ago, and that it reached Southeast Asia at least 65,000 years ago. The best estimates for modern human arrival in Australia and Papua New Guinea are ca. 55,000 years ago.

Hominin bones from a small number of finds in China indicate that modern humans, or hybrids of modern humans and archaic hominins, may have existed there as early as 100,000 years ago, but no ancient DNA has been recovered from those locations and I am still not very confident in that dating of those remains, in part because some of the finds predate modern archaeological dating methodology and, in part, because there isn't any meaningful corroboration from artifacts accompanying the remains in that time period (which should greatly outnumber modern human bones at any given site). Some evidence of more recent archaic hominin or hybrid individuals during the Upper Paleolithic era seems somewhat more credible, however, since the finds are more recent and presumably used modern dating methods, and because H. floresiensis and the African data discussed below provide evidence of parallel instances in which relict populations of archaic hominins survived long after most of their species went extinct.

But, three separate lines of genetic evidence have favored a younger date: (1) the genetic evidence used to estimate the most recent common ancestor of all men with Y-DNA derived from Y-DNA CF and Y-DNA D, with mutation rate estimates fortified by ancient DNA; (2) the genetic evidence used to estimate the most recent common ancestor of all persons with mtDNA derived from mtDNA M and mtDNA N, with mutation rate estimates fortified by ancient DNA; and (3) linkage disequilibrium based estimates of Neanderthal admixture in ancient DNA. All three have pointed to an Out of Africa origin ca. 50,000-65,000 years ago, suggesting a coincidence in time between the Upper Paleolithic technologies adopted by modern humans and the expansion of the source population of all modern humans.

Our interpretation of those uniparental lineages is likewise impacted by a new study of 55 Upper Paleolithic ancient DNA samples, which establishes that mtDNA M, an exclusively Asian uniparental haplogroup today, was present in Europe prior to the Last Glacial Maximum (earlier ancient DNA results have also established that the now predominantly Asian Y-DNA haplogroup C was present in Europe in this era). That paper is Cosimo Posth et al., "Pleistocene Mitochondrial Genomes Suggest a Single Major Dispersal of Non-Africans and a Late Glacial Population Turnover in Europe" (Current Biology 2016). The abstract notes that: "Demographic modeling not only indicates an LGM genetic bottleneck, but also provides surprising evidence of a major population turnover in Europe around 14,500 years ago during the Late Glacial, a period of climatic instability at the end of the Pleistocene." Apparently, mtDNA M women didn't make it into glacial era refugia in sufficient numbers to remain in the gene pool over the many thousands of years during which most of Europe was covered with a thick layer of ice.

Models of Mesolithic and early Neolithic population genetics will also be enhanced by the imminent arrival of Natufian ancient DNA. The Natufian culture was the last pre-Neolithic culture of the Levant, from modern Gaza to Syria, ca. 12,500 to 9,500 years ago, and may have engaged in proto-farming of wild type crops and fishing in a sedentary settlement pattern, in addition to hunting and gathering with the assistance of domesticated dogs. Some commentators have suggested that they may be the historical counterpart of the "basal Eurasian" component identified statistically as a ghost population in whole genome studies of ancient European DNA.

One hypothesis for this disconnect is that a Eurasian population bottleneck makes the most recent common ancestors of modern humans appear artificially recent, and that LD dates for Neanderthal admixture are capturing only the most recent round of a Neanderthal admixture process that occurred in several stages (a known bias of LD based dating, similar to the way effective population size estimates are biased towards the lowest historical reproducing population of a species because they arise from a harmonic mean of the reproducing population size over time).
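To illustrate the harmonic mean point with made-up numbers: a single severe bottleneck generation dominates a harmonic mean, so an estimate built on one ends up close to the smallest historical value rather than the typical one. A quick sketch:

```python
from statistics import harmonic_mean

# Hypothetical per-generation reproducing population sizes: nine generations
# of 10,000 individuals and a single bottleneck generation of 100.
sizes = [10_000] * 9 + [100]

print(sum(sizes) / len(sizes))  # arithmetic mean: 9,010
print(harmonic_mean(sizes))     # harmonic mean: ~917, dragged toward the bottleneck
```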

The alternative, and more widely held, view is that there was an "Out of Africa" event that failed ca. 100,000 years ago or earlier, and that the Out of Africa event that stuck coincides with the Upper Paleolithic revolution ca. 50,000 years ago. I personally don't think that this is the most likely possibility, but it is a common one and can fit the data so far.

But, the new evidence of modern human introgression into Altai Neanderthals ca. 100,000 years ago combined with the new archaeological data mentioned above, has effectively ruled out the possibility that the initial Out of Africa event actually took place ca. 50,000 years ago, as many authoritative sources have previously claimed based mostly upon DNA mutation rate evidence and conservatively young dates for modern human arrivals in Europe and Australia.

Rumor has it that genetic evidence of a ca. 100,000 year old component in the indigenous modern human population genetics of Sahul (i.e. Australia and Papua) is coming soon as well.

* Updated carbon-14 dating of the youngest Neanderthal sites in Western and Central Europe has established that the extinction of Neanderthals in Western and Central Europe took place much earlier and more rapidly than previously assumed (ending 41,030-39,260 years ago), after an overlap with modern humans of about 4,000 years (a roughly 2,600-5,400 year confidence interval).
The timing of Neanderthal disappearance and the extent to which they overlapped with the earliest incoming anatomically modern humans (AMHs) in Eurasia are key questions in palaeoanthropology. Determining the spatiotemporal relationship between the two populations is crucial if we are to understand the processes, timing and reasons leading to the disappearance of Neanderthals and the likelihood of cultural and genetic exchange. Serious technical challenges, however, have hindered reliable dating of the period, as the radiocarbon method reaches its limit at ~50,000 years ago3. Here we apply improved accelerator mass spectrometry 14C techniques to construct robust chronologies from 40 key Mousterian and Neanderthal archaeological sites, ranging from Russia to Spain. Bayesian age modelling was used to generate probability distribution functions to determine the latest appearance date. We show that the Mousterian ended by 41,030–39,260 calibrated years BP (at 95.4% probability) across Europe. We also demonstrate that succeeding ‘transitional’ archaeological industries, one of which has been linked with Neanderthals (Châtelperronian), end at a similar time. Our data indicate that the disappearance of Neanderthals occurred at different times in different regions. Comparing the data with results obtained from the earliest dated AMH sites in Europe, associated with the Uluzzian technocomplex, allows us to quantify the temporal overlap between the two human groups. The results reveal a significant overlap of 2,600–5,400 years (at 95.4% probability). This has important implications for models seeking to explain the cultural, technological and biological elements involved in the replacement of Neanderthals by AMHs. A mosaic of populations in Europe during the Middle to Upper Palaeolithic transition suggests that there was ample time for the transmission of cultural and symbolic behaviours, as well as possible genetic exchanges, between the two groups.
Higham, et al., "The timing and spatiotemporal patterning of Neanderthal disappearance", Nature, (August 21, 2014).

This paper doesn't outright rule out the more recent dates from the Caucasus mountains (ca. 29,000 years ago), but does cast real doubt on those dates given the discovered inaccuracies of dates computed using the same methods elsewhere in Europe.

* We are learning how Neanderthal DNA influences modern humans in observable ways (i.e. their phenotypic impact), which appears to be a mixed bag of good and bad news. Corinne N. Simonti et al., "The phenotypic legacy of admixture between modern humans and Neandertals", Science (February 12, 2016). As the abstract explains:
Many modern human genomes retain DNA inherited from interbreeding with archaic hominins, such as Neandertals, yet the influence of this admixture on human traits is largely unknown. We analyzed the contribution of common Neandertal variants to over 1000 electronic health record (EHR)–derived phenotypes in ~28,000 adults of European ancestry. We discovered and replicated associations of Neandertal alleles with neurological, psychiatric, immunological, and dermatological phenotypes. Neandertal alleles together explained a significant fraction of the variation in risk for depression and skin lesions resulting from sun exposure (actinic keratosis), and individual Neandertal alleles were significantly associated with specific human phenotypes, including hypercoagulation and tobacco use. Our results establish that archaic admixture influences disease risk in modern humans, provide hypotheses about the effects of hundreds of Neandertal haplotypes, and demonstrate the utility of EHR data in evolutionary analyses. 
* A new study also confirms previous results showing that at least one African population (Pygmies) shows genetic evidence of admixture with an archaic hominin species in Africa within the last 30,000 years. The archaic hominin species involved in the admixture event in Africa was neither a Neanderthal nor a Denisovan, as direct comparisons with ancient genomes from those species have ruled out that possibility.

This study by itself, however, sheds little or no light on the nature of these archaic hominins, and limitations on fossil preservation (and a lack of professional archaeological study) in the tropical jungles where Pygmies have lived in historically documented eras reduce efforts to do so to guesswork based upon the more recent archaic hominin remains found elsewhere in Africa in more favorable preservation conditions. The extinction of the Pygmy languages via language shift to West African farmer languages (mostly or entirely Bantu languages) also impairs efforts to look for linguistic clues regarding these archaic hominin admixture events.
Comparisons of whole-genome sequences from ancient and contemporary samples have pointed to several instances of archaic admixture through interbreeding between the ancestors of modern non-Africans and now extinct hominids such as Neanderthals and Denisovans. One implication of these findings is that some adaptive features in contemporary humans may have entered the population via gene flow with archaic forms in Eurasia. Within Africa, fossil evidence suggests that anatomically modern humans (AMH) and various archaic forms coexisted for much of the last 200,000 yr; however, the absence of ancient DNA in Africa has limited our ability to make a direct comparison between archaic and modern human genomes. Here, we use statistical inference based on high coverage whole-genome data (greater than 60×) from contemporary African Pygmy hunter-gatherers as an alternative means to study the evolutionary history of the genus Homo. Using whole-genome simulations that consider demographic histories that include both isolation and gene flow with neighboring farming populations, our inference method rejects the hypothesis that the ancestors of AMH were genetically isolated in Africa, thus providing the first whole genome-level evidence of African archaic admixture. Our inferences also suggest a complex human evolutionary history in Africa, which involves at least a single admixture event from an unknown archaic population into the ancestors of AMH, likely within the last 30,000 yr.
Hsieh et al., "Model-based analyses of whole-genome data reveal a complex evolutionary history involving archaic introgression in Central African Pygmies", Genome Research (February 17, 2016).

Other prior research has also identified a second possible instance of archaic admixture in a modern African population. I read this study as agnostic on the validity of that claim which it does not study.

Similar methods have also detected an unidentified archaic hominin species that introgressed into the Denisovan genome (parsimony would favor an identification with H. erectus for that component, although parsimony has not been a particularly fruitful principle in developing accurate pictures of archaic admixture so far).

But, it looks increasingly likely that there are no more than seven archaic hominin species for which genomic data may ultimately be found (Neanderthals, Denisovans, the archaic contributor to Denisovans, two archaic African hominins, and possibly H. floresiensis and a Chinese archaic species, if the latter is distinct from all of the above and is found from ancient DNA not to be one of the other species above).

Our database of modern and ancient DNA, and the archaeological record, is comprehensive enough to largely rule out additional admixing species that have left us either traces of admixture in modern DNA or ancient DNA samples, although it is still plausible that there were a few other archaic hominin species in the last 250,000 years that did not admix with modern humans who have descendants alive today, but did leave remains or artifacts from which ancient DNA cannot be extracted. Still, there were probably no more than 10 species of hominins anywhere on Earth in the last 10,000 years.

Razib also notes a related paper putting the split between "Paleo-African" Pygmies and the predominant clade of modern Africans at 95,000-150,000 years ago. (Importantly, Asian Negrito populations are not derived from "Paleo-African" populations whom they resemble phenotypically. This is basically a case of convergent evolution.)

Some recent papers' estimates of the split between Khoi-San hunter-gatherers in Africa and other Africans at ca. 200,000 years ago (apart from admixture in the historic era) are also quite a bit older than the ca. 70,000 year split estimate for Paleo-African Pygmy and Khoi-San populations that I have previously seen, based upon mutation rate dating of uniparental haplogroups found in these populations.

There has been some speculation in the blogs that this very early date based upon whole genomes is also really a methodological artifact of archaic admixture, because modern humans themselves are only 200,000-250,000 years old (see, e.g., the Omo remains in Ethiopia) and there would be many opportunities for admixture in early modern human populations between their speciation and modern times.

Tuesday, February 16, 2016

Higgs Boson Is Nearly Purely Scalar

A spin-0 boson may be a pure scalar with even parity, like the Higgs boson predicted by the Standard Model, or may be a pseudo-scalar with odd parity, such as a pion (a meson, i.e., a type of boson made of a first generation quark and antiquark bound by gluons).

In principle, a resonance observed experimentally can also be a mix of scalar and pseudo-scalar bosons of the same spin, and many beyond the Standard Model theories assume the existence of a pseudo-scalar Higgs boson called A, as well as heavy and light neutral scalar Higgs bosons (H and h).

The latest LHC data finds that the Higgs boson we see experimentally is overwhelmingly scalar and not pseudo-scalar, as predicted by the Standard Model. Specifically, there is a 95% probability that the Higgs boson seen experimentally is at least 99.64% scalar and not more than 0.34% pseudo-scalar. The best fit value is even more strongly scalar and less strongly pseudo-scalar.
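One generic way to express such an admixture (a standard parameterization used here purely for illustration, not necessarily the exact observable quoted in the LHC analyses) is a CP-mixing angle \(\alpha\):

\[
|h_{\rm obs}\rangle \;=\; \cos\alpha\,|0^{+}\rangle \;+\; \sin\alpha\,|0^{-}\rangle,
\qquad
f_{\rm odd} \;=\; \sin^{2}\alpha,
\]

where \(f_{\rm odd}\) is the pseudo-scalar fraction, so a pseudo-scalar fraction of 0.34% would correspond to a mixing angle of roughly \(\alpha \approx 3^{\circ}\).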

The precision may improve in future experiments, but making the conceptual leap to assume that the measured Higgs boson is a scalar spin-0 boson is not unreasonable.

Sunday, February 14, 2016

Milky Way Gamma Rays Attributed To Dark Matter Annihilation Probably Aren't

One way to detect dark matter, if it exists, is to observe the products of dark matter annihilation, for example, when dark matter and anti-dark matter particles collide. There are definitely unidentified sources of cosmic rays, including gamma rays, which astronomers have detected coming from the center of the Milky Way galaxy.

Could these be dark matter?

A new study concludes that the answer is, probably not. The most likely sources are millisecond pulsars, although this isn't certain.

The study is: Samuel K. Lee, Mariangela Lisanti, Benjamin R. Safdi, Tracy R. Slatyer, and Wei Xue. "Evidence for unresolved gamma-ray point sources in the Inner Galaxy." Phys. Rev. Lett. (February 3, 2016).

Saturday, February 13, 2016

More Constraints On 750 GeV Resonance Models

The 750 GeV resonance observed at ATLAS and CMS and announced last December has spawned more than 750 papers. One of the more impressive new papers uses the impact that a hypothetical new scalar or pseudoscalar boson and its vector-like fermion intermediate states would have on the running of the strong force coupling constant as measured to date, along with limits from direct LHC searches for decaying heavy particles, to very tightly constrain the parameter space of any realistic model.

Bottom line: if the 750 GeV resonance is real, there need to be more new particles awaiting us at masses of less than 1,000 GeV, which is right around the corner.  Otherwise, this resonance is almost surely a statistical fluke.

Dark matter theorists come close to reproducing the baryonic Tully-Fisher relation with reverse engineered simulation

Dark matter theorists recently made the most successful effort to date to reproduce the baryonic Tully-Fisher relation, which relates the rate at which a galaxy rotates to the amount of ordinary matter in it, within a fairly limited range of galaxy masses that excludes very small and very large galaxies.

Furthermore, they have made a prediction using their model that can be used to test its accuracy against new data for low mass galaxies. Preliminary data, however, tends to show that this prediction does not match observations. The paper's abstract states:
The scaling of disk galaxy rotation velocity with baryonic mass (the "Baryonic Tully-Fisher" relation, BTF) has long confounded galaxy formation models. It is steeper than the M ~ V^3 scaling relating halo virial masses and circular velocities and its zero point implies that galaxies comprise a very small fraction of available baryons. 
Such low galaxy formation efficiencies may in principle be explained by winds driven by evolving stars, but the tightness of the BTF relation argues against the substantial scatter expected from such vigorous feedback mechanism. 
We use the APOSTLE/EAGLE simulations to show that the BTF relation is well reproduced in LCDM simulations that match the size and number of galaxies as a function of stellar mass. In such models, galaxy rotation velocities are proportional to halo virial velocity and the steep velocity-mass dependence results from the decline in galaxy formation efficiency with decreasing halo mass needed to reconcile the CDM halo mass function with the galaxy luminosity function. Despite the strong feedback, the scatter in the simulated BTF is smaller than observed, even when considering all simulated galaxies and not just rotationally-supported ones. 
The simulations predict that the BTF should become increasingly steep at the faint end, although the velocity scatter at fixed mass should remain small. Observed galaxies with rotation speeds below ~40 km/s seem to deviate from this prediction. We discuss observational biases and modeling uncertainties that may help to explain this disagreement in the context of LCDM models of dwarf galaxy formation.
L.V. Sales, et al., "The low-mass end of the baryonic Tully-Fisher relation" (February 5, 2016) (emphasis and paragraph breaks added).
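The M ~ V^3 halo scaling quoted in the abstract is just the textbook relation between a halo's virial mass and its circular velocity (included here for clarity, not taken from the paper): defining the virial mass within the radius enclosing 200 times the critical density,

\[
M_{200} = \frac{4}{3}\pi\,(200\,\rho_{\rm crit})\,r_{200}^{3},
\qquad
V_{200}^{2} = \frac{G\,M_{200}}{r_{200}}
\;\;\Longrightarrow\;\;
M_{200} \propto V_{200}^{3},
\]

whereas the observed baryonic Tully-Fisher relation is steeper, with a slope often quoted as closer to 4.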

It isn't entirely clear from the paper what Sales, et al. did in the APOSTLE/EAGLE simulation that resolved the problems that had confounded previous galaxy formation models for the past several decades. Clearly, previous studies were doing something wrong. As this paper explains (citations omitted):
[T]he literature is littered with failed attempts to reproduce the Tully-Fisher relation in a cold dark matter-dominated universe. Direct galaxy formation simulations,for example, have for many years consistently produced galaxies so massive and compact that their rotation curves were steeply declining and, generally, a poor match to observation. Even semi-analytic models, where galaxy masses and sizes can be adjusted to match observation, have had difficulty reproducing the Tully-Fisher relation, typically predicting velocities at given mass that are significantly higher than observed unless somewhat arbitrary adjustments are made to the response of the dark halo.
There is some explanation of what they have done differently, but it isn't very specific:
The situation, however, has now started to change, notably as a result of improved recipes for the subgrid treatment of star formation and its associated feedback in direct simulations. As a result, recent simulations have shown that rotationally-supported disks with realistic surface density profiles and relatively flat rotation curves can actually form in cold dark matter halos when feedback is strong enough to effectively regulate ongoing star formation by limiting excessive gas accretion and removing low-angular momentum gas.

These results are encouraging but the number of individual systems simulated so far is small, and it is unclear whether the same codes would produce a realistic galaxy stellar mass function or reproduce the scatter of the Tully-Fisher relation when applied to a cosmologically significant volume. The role of the dark halo response to the assembly of the galaxy has remained particularly contentious, with some authors arguing that substantial modification to the innermost structure of the dark halo, in the form of a constant density core or cusp expansion, is needed to explain the disk galaxy scaling relations, while other authors find no compelling need for such adjustment.

The recent completion of ambitious simulation programmes such as the EAGLE project, which follow the formation of thousands of galaxies in cosmological boxes ≈ 100 Mpc on a side, allow for a reassessment of the situation. The subgrid physics modules of the EAGLE code have been calibrated to match the observed galaxy stellar mass function and the sizes of galaxies at z = 0, but no attempt has been made to match the BTF relation, which is therefore a true corollary of the model. The same is true of other relations, such as color bimodality, morphological diversity, or the stellar-mass Tully-Fisher relation of bright galaxies, which are successfully reproduced in the model. Combining EAGLE with multiple realizations of smaller volumes chosen to resemble the surroundings of the Local Group of Galaxies (the APOSTLE project), we are able to study the resulting BTF relation over four decades in galaxy mass. In particular, we are able to examine the simulation predictions for some of the faintest dwarfs, where recent data have highlighted potential deviations from a power-law BTF and/or increased scatter in the relation.
In other words, if you set the code to produce the right sized galaxies using lambdaCDM and cherry pick the data to limit your observations to those whose halos cause them to look like the galaxies near the Milky Way, you get results that match the Tully-Fisher relation and also have other properties that match observation.

But, it isn't clear which variables are being calibrated in what respects, and it isn't clear whether data from realizations of the simulation that don't look like the Local Group of Galaxies are being produced and discarded.

The fact that simply selecting the right size and general pattern of the dark matter halos is sufficient to reproduce baryonic Tully-Fisher and other relations is not a trivial finding.  But, it still doesn't solve the question of how to reproduce these dark matter halo sizes and patterns without putting them into the model by hand through the calibration and selection process.

We learn a little more later on, but it is still very vague:
We refer the reader to the main EAGLE papers for further details, but list here the main code features, for completeness. In brief, the code includes the “Anarchy” version of SPH, which includes the pressure-entropy variant proposed by Hopkins (2013); metal-dependent radiative cooling/heating, reionization of Hydrogen and Helium (at redshift z = 11.5 and z = 3.5, respectively), star formation with a metallicity dependent density threshold, stellar evolution and metal production, stellar feedback via stochastic thermal energy injection, and the growth of, and feedback from, supermassive black holes. The free parameters of the subgrid treatment of these mechanisms in the EAGLE code have been adjusted so as to provide a good match to the galaxy stellar mass function, the typical sizes of disk galaxies, and the stellar mass-black hole mass relation, all at z ≈ 0.
But, we aren't told what choices were made for which free parameters to accomplish this, which would seem to be a vital issue in determining the validity of the model and understanding what is going on to achieve this result, except that we are told that the simulations use:
1504^3 dark matter particles each of mass 9.7 × 10^6 M_⊙; the same number of gas particles each of initial mass 1.8 × 10^6 M_⊙; and a Plummer-equivalent gravitational softening length of 700 proper pc (switching to comoving for redshifts higher than z = 2.8). The cosmology adopted is that of Planck Collaboration et al. (2014), with Ω_M = 0.307, Ω_Λ = 0.693, Ω_b = 0.04825, h = 0.6777 and σ_8 = 0.8288.

The second set of simulations is the APOSTLE suite of zoom-in simulations, which evolve 12 volumes tailored to match the spatial distribution and kinematics of galaxies in the Local Group. Each volume was chosen to contain a pair of halos with individual virial mass in the range 5 × 10^11 to 2.5 × 10^12 M_⊙. The pairs are separated by a distance comparable to that between the Milky Way (MW) and Andromeda (M31) galaxies (800 ± 200 kpc) and approach with radial velocity consistent with that of the MW-M31 pair (0-250 km/s).

The APOSTLE volumes were selected from the DOVE N-body simulation, which evolved a cosmological volume of 100 Mpc on a side in the WMAP-7 cosmology. The APOSTLE runs were performed at three different numerical resolutions; low (AP-LR), medium (AP-MR) and high (AP-HR), differing by successive factors of ≈ 10 in particle mass and ≈ 2 in gravitational force resolution. All 12 volumes have been run at medium and low resolutions, but only two high-res simulation volumes have been completed.

We use the SUBFIND algorithm to identify “galaxies”; i.e., self-bound structures in a catalog of friends-of-friends (FoF) halos built with a linking length of 0.2 times the mean interparticle separation. We retain for analysis only the central galaxy of each FoF halo, and remove from the analysis any system contaminated by lower resolution particles in the APOSTLE runs. Baryonic galaxy masses (stellar plus gas) are computed within a fiducial “galaxy radius”, defined as r_gal = 0.15 r_200. We have verified that this is a large enough radius to include the great majority of the star-forming cold gas and stars bound to each central galaxy.
The masses of the dark matter and gas particles are absurdly large (comparable to intermediate sized black holes), to the point where dark matter or gas particles of this size are observationally ruled out, because the simulation isn't computationally capable of handling a realistic number of particles (at least a trillion times more) each with a far lower mass.

The two particle masses assumed, the "softening" factor, and the minimum virial mass required for convergence, M^conv_200 (a total of four free parameters at each of four levels of resolution, and sixteen free parameters for the simulation as a whole, in addition to parameters inherent in the programs themselves), are arbitrarily set in a results driven manner for each resolution of the EAGLE and APOSTLE simulations. In other words, the results that the simulations produce are reverse engineered from a moderately realistic set of model rules for parts of the process, rather than predicted from first principles in any realistic cold dark matter model.

Again, this isn't to say that the paper hasn't made a breakthrough by coming up with the first simulation that can match key aspects of reality merely by tweaking the parameters of the dark matter halos, at least within its domain of applicability.  And, if subsequent steps allow investigators to devise a dark matter model that works and isn't contradicted by observational evidence, that's great. But, this paper standing alone certainly isn't an unqualified success either.

Modified gravity theories compared.

More than thirty years ago, non-relativistic MOND did pretty much exactly the same thing with a one line equation that has only one free parameter and a larger range of applicability, extending from solar system sized and smaller systems to the largest single galaxies that we observe.
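For reference, a standard statement of the non-relativistic MOND prescription (the usual textbook form, not a quotation from any particular paper): the Newtonian acceleration \(g_N = GM/r^2\) is modified only below a single acceleration scale \(a_0\),

\[
\mu\!\left(\frac{a}{a_0}\right) a = \frac{GM}{r^{2}},
\qquad
\mu(x) \to 1 \;\; (x \gg 1),
\qquad
\mu(x) \to x \;\; (x \ll 1),
\]

so that in the deep-MOND regime, with \(a = v^{2}/r\) for a circular orbit,

\[
\frac{a^{2}}{a_0} = \frac{GM}{r^{2}}
\;\;\Longrightarrow\;\;
v^{4} = G\,M\,a_0,
\]

which is the baryonic Tully-Fisher relation, with the single free parameter \(a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}}\).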

Three decades later, it still works basically as advertised, so long as one uses baryonic rather than luminous matter and one makes an adjustment when one galaxy is in the gravitational field of another galaxy. While it gets the results wrong in galactic clusters (which EAGLE and APOSTLE can't handle either), it underestimates the dark matter effects there, so it can be cured with cluster-specific dark matter (although realistically, the real problem is that the model needs another parameter or two and some adjustments to its formula to address extremely large systems).

Also, non-relativistic MOND made numerous predictions that were consistent with subsequent data collection.  But, this is a test that the EAGLE/APOSTLE approach already seems poised to fail the instant that it is proposed, just like previous dark matter models which have repeatedly proved to be dismal failures at predicting new unobserved phenomena in advance.

This isn't to say that non-relativistic MOND is right. It isn't. Like the EAGLE/APOSTLE simulation, it's a reverse engineered toy model that fails miserably in areas it wasn't designed to address, such as strong gravitational fields and predicting the movement of particles outside the plane of a spiral galaxy's disk. And even the relativistic version of MOND called TeVeS, which resolves many of the most glaring problems with non-relativistic MOND in the strong field limit, probably isn't right either, and still fails at the galactic cluster scale and above.

But, other modified gravity theories, such as those proposed by Moffat, are relativistic and do work at all scales, even if they look a little clunky and have another parameter or two in addition to MOND's single parameter, which is coincidentally similar to a plausible simple combination of relevant physical constants.

Even more remarkably, Mr. Deur's revision of how massless graviton self-interaction works relative to the predictions of classical general relativity models (derived by analogy to QCD, the theory of the strong force transmitted by self-interacting gluons) looks like a promising way to achieve all of the objectives of modified gravity theories with essentially no free parameters other than the gravitational coupling constant. It solves the problems of dark matter, and all or most of the problem of dark energy, at all scales, without creating strong field pathologies, and while making new predictions that are consistent with observation in the case of elliptical galaxies. It does this all in one fell swoop that is theoretically well motivated and doesn't require us to invent any new particles or forces or dimensions of space-time or discrete space-time elements. Sooner or later, my strong intuition is that this solution will turn out to be the right one, even though it may take a generation or so for this to happen.

If Deur is right, the main problems of quantum gravity may turn out to have arisen mainly because we were trying to design a theory that was equivalent to general relativity, when general relativity itself was actually wrong in a subtle way, relevant mostly in the very weak field limit, that ensures that any quantum gravity theory trying to replicate it will be pathological.

Friday, February 12, 2016

Strong Field Predictions Of General Relativity Confirmed

Background

Black holes and the existence of gravity waves were two of the most notable predictions of the theory of general relativity devised by Albert Einstein almost exactly a century ago (although the implications of that theory took much longer to work out, with most of the main conclusions that we have reached so far in place by the 1970s).

Black holes are concentrations of matter that are so strongly bound by gravity that not even light can escape them.* They can range in mass from about 3 times the mass of the Sun to 10,000,000,000 times the mass of the Sun in the supermassive black holes at the center of the largest galaxies (in principle, there is no upper limit on the mass of a black hole, but no larger black holes have ever been inferred to exist).** In Newtonian gravity, photons aren't affected by gravity, and even if they were, gravity could never get strong enough to prevent them from escaping a massive object because Newtonian gravity involves linear rather than non-linear field strengths.

Indirect experimental evidence (such as gravitational lensing) long ago indicated that black holes exist and allowed their masses to be measured.

In Newtonian gravity, gravity's effects are transmitted instantly at all distances.  In general relativity, gravity's effects are transmitted via gravitational waves in space-time that propagate at the speed of light "c".

What did LIGO See?

The LIGO gravity wave experiment formally announced yesterday that it had detected the merger of two roughly equal mass black holes, with a combined mass of 65 times the mass of our Sun, about 1.3 billion light years away from Earth, which converted roughly 5% of their combined mass into gravity waves (of course, there was immense kinetic energy in addition to rest mass present in the binary black hole system). The resulting combined black hole was a Kerr black hole, which means that it has angular momentum (a Schwarzschild black hole is a special case of a Kerr black hole with zero angular momentum).

The black holes were each about 100 miles in diameter before merging, and less than two minutes before their merger this binary black hole system was spiraling around at almost the speed of light with a separation of about 600 miles (sweeping out a disk of space about the size of Alaska).

The power of the gravitational waves emitted by the extraordinary event that LIGO observed was greater than the combined power of the light emitted by every star in the universe at that moment. By comparison, the gravitational waves emitted by the Earth orbiting the Sun have a power of about 200 watts (two ordinary light bulbs). The final ping of gravity waves when the black holes finally merged had a frequency roughly the same as the sound wave of a middle D note on a piano.
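As a sanity check on those orders of magnitude, the standard quadrupole formula for the gravitational wave power radiated by two bodies in a circular orbit, applied to the Earth-Sun system with rounded constants (my numbers, not LIGO's), reproduces the roughly 200 watt figure:

```python
# Quadrupole formula for the gravitational wave power of a circular two-body
# orbit: P = (32/5) * G^4 * (m1*m2)^2 * (m1 + m2) / (c^5 * r^5)
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def gw_power(m1, m2, r):
    """Power (watts) radiated as gravitational waves by masses m1 and m2 (kg)
    in a circular orbit with separation r (meters)."""
    return (32.0 / 5.0) * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * r**5)

m_sun = 1.989e30    # kg
m_earth = 5.972e24  # kg
au = 1.496e11       # Earth-Sun distance, m

print(gw_power(m_sun, m_earth, au))  # ~196 watts, i.e. about two light bulbs
```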

For a matter of seconds or minutes after the merger, the black hole would have been a bit "bumpy," but the combined force of gravity would swiftly smooth it out into an equilibrium, smoothly curved shape.

The statistical significance of the detection event was 5.1 sigma (i.e., 5.1 standard deviations in excess of the null hypothesis that no gravitational wave event was detected), which rates as a scientific discovery. It is the first direct observation of gravity waves (which had previously been inferred from the behavior of binary star systems observed with telescopes) and the most direct observation of black holes made to date.
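For readers unfamiliar with the sigma convention, the one-sided Gaussian tail probability corresponding to a 5.1 sigma excess can be computed directly (a generic statistical conversion, not a number taken from the LIGO papers):

```python
import math

def one_sided_p_value(sigma):
    """One-sided Gaussian tail probability for an excess of 'sigma' standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

print(one_sided_p_value(5.0))  # ~2.9e-7, the conventional particle physics discovery threshold
print(one_sided_p_value(5.1))  # ~1.7e-7
```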

Gravity Wave Detectors

The LIGO experiment detects gravity waves by looking at the interference pattern generated by two laser beams, each traveling about 4 kilometers out and back at right angles to each other, at two locations, one in Washington State and the other in Louisiana, with all manner of background noise screened out.

The LIGO experiment is sensitive to distortions of space-time on a scale of a few thousandths of the diameter of a proton, something made possible only by the immense precision with which we understand and can measure electromagnetic phenomena using the Standard Model. What LIGO detected was a distortion of about that magnitude in the actual physical distance between the mirrors at the ends of each detector's arms, in a pattern that identified the strength and direction of the source generating the gravity waves. The gravity wave event that was detected was not accompanied by a surge in cosmic neutrinos (which are associated with supernovas and star collisions/mergers, but not with black hole mergers).
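Putting the 4 kilometer arms together with the peak strain of roughly 1 x 10^-21 that has been widely quoted for this event (an approximate, assumed figure, used here only to show the arithmetic):

```python
# Arm-length change implied by a gravitational wave strain h: delta_L = h * L.
arm_length_m = 4_000.0       # LIGO arm length, meters
peak_strain = 1e-21          # approximate peak strain of the detected event (assumed round figure)
proton_diameter_m = 1.7e-15  # rough diameter of a proton, meters

delta_L = peak_strain * arm_length_m
print(delta_L)                      # ~4e-18 meters
print(delta_L / proton_diameter_m)  # ~0.002, i.e. a few thousandths of a proton diameter
```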

About half a dozen other gravity wave detection experiments are set to come on line over the next few years. Some are space-based, one more is land based, and one uses continuous observations of pulsars in the Milky Way to create, in effect, a galaxy sized gravity wave telescope.

The experiments are largely complementary to each other rather than being competitors. Each experiment is sensitive to a different range of gravity wave frequencies, with LIGO measuring only the highest frequency gravity waves. For example, the LISA gravity wave experiment (in space) would not have been able to detect this event, because the event's frequency falls outside the range of its instrumentation, which is tuned to lower frequency, less dramatic gravity waves.

Scientists had doubted that LIGO would be the first to detect gravity waves because it takes such an extraordinary event for it to receive a signal that it can confirm is a gravity wave with the 5 sigma significance needed to constitute a discovery.  These events were predicted to be rare and LIGO simply got lucky in having such an event occur at the right time.  Lower frequency gravitational waves are suspected to be more frequent because less dramatic events can create them.  But, LIGO has also detected several more potential gravitational wave events during its several year existence, although those detections were less statistically significant.

Significance

Twelve papers were generated by LIGO based upon the experiment.  The most notable for my purposes was the one examining the extent to which the observed gravitational waves matched the predictions of General Relativity in strong gravitational fields.

Strong gravitational fields have been the subject of a great deal of speculation about ways that general relativity could be tweaked while still remaining consistent with existing observations, in part because experimental evidence did not constrain deviations from general relativity in this regime very strictly.

A couple of recent papers, one 95 pages long and one four pages long, have examined how observations from LIGO could test alternatives to General Relativity, which make different predictions about the kinds of strong field gravity waves that would be generated by events like this one.

The ultra-precise LIGO results are consistent with General Relativity up to the limits of their margin of error, with no real tension between theory and experiment, and accordingly greatly constrain the parameter space of alternative theories of gravity that can still be consistent with experimental observations. For example, these experiments place an experimental bound on how heavy gravitons can be in a "massive graviton" theory.

Similarly, direct experimental observations of gravitational waves, by providing a direct observation of the mechanism by which gravity is transmitted, powerfully disfavor alternative theories of gravity in which gravitational effects are non-local or transmitted instantaneously. These limits may become even more powerful when gravity wave detectors capable of seeing the weaker gravity waves generated by events involving stars come on line, allowing gravitational wave measurements to be correlated with evidence from telescopes that see photons, from cosmic ray detectors, and from neutrino detectors observing the same event.

Understanding strong gravitational fields is relevant to understanding gravitational singularities like the Big Bang, cosmological inflation, black holes, galaxy formation, and the way that galactic clusters work.

It may end up being important to understanding quantum gravity theories, which generally predict that many singularities in general relativity (a non-quantum "classical" theory of physics), meaning circumstances in which infinities show up as results in equations, actually just produce very large but finite numbers when quantum effects are considered. Some quantum gravity theories also reproduce general relativity in the classical limit in medium sized gravitational fields, while deviating from general relativity somewhat in the extreme strong field and extreme weak field limits. So, quantum gravity theories that differ from general relativity in the strong field limit can now be constrained.

Extensive measurements of general relativity at work in the strong field limit may also provide insights to quantum gravity researchers who are looking for some additional, experimentally supported axiom to address the problem of the non-renormalizability of naive quantum gravity theories. For example, if gravity wave observations in the strong field limit become precise enough to determine whether the gravitational constant G runs with energy scale in the way that the Standard Model coupling constants do, this could provide an axiom that could be used to formulate workable quantum gravity theories. But, the LIGO observation, while ultra-precise, isn't sufficiently precise to place strong bounds on that possibility.

What it does not tell us.

On the other hand, not every ill understood aspect of phenomena in which gravity is important can be better understood by looking at the strong field behavior of general relativity that governs black holes and the Big Bang.

Phenomena like dark matter and dark energy are relevant only in the context of extremely weak gravitational fields. An improved understanding of strong gravitational fields constrains resolutions of the dark matter and dark energy phenomena only to the extent that a solution to these weak field issues has side effects that would have phenomenological consequences in the strong field regime as well.

Also, it is important to note that what LIGO has seen is completely different from the tensor mode B-mode polarization signals of primordial gravitational waves that the BICEP-2 experiment reported it had seen in the cosmic background radiation (a claim which later proved to be unsupported by the available data).

Searches like those at BICEP-2 are looking for patterns in the overall distribution of matter and energy in the universe that are associated with particular cosmological inflation scenarios in the early moments after the Big Bang, rather than the gravity waves produced by a single, more recent event in one particular part of the universe that experiments like LIGO and LISA are designed to detect.

Impact Of Future Gravity Wave Observations

Gravity wave telescopes provide a new way to conduct astronomy, supplementing telescopes that look at electromagnetic waves at wavelengths ranging, on the low frequency side, from radio waves and the cosmic background radiation up to, on the high frequency side, frequencies a bit beyond the blue of visible light. Cosmic ray telescopes and neutrino telescopes detect tiny bits of matter, like electrons or neutrinos or individual interstellar gas or dust atoms or molecules, that are hurled across the universe at high speeds from distant stars (the term "cosmic rays" is a misnomer since cosmic rays generally don't involve mere photons).

Over the next few decades, as more events are observed over a wider range of frequencies and the new gravitational wave detection experiments come on line, these constraints on the strong field behavior of Nature relative to General Relativity will become much tighter.

Footnotes Regarding Black Holes

* It is not uncommon to say that black holes are the most dense objects in the universe, and that this is why light cannot escape them. Density means mass divided by volume. And, for the most conventional definition of the volume of a black hole, i.e., the volume within its "event horizon" from which light cannot escape, this is not true except for the smallest of black holes. All but the smallest black holes are not the most dense objects in the universe, and so, necessarily, the reason that light cannot escape a black hole is not that it is the most dense object in the universe.

For example, photons routinely escape from neutron stars and from atomic nuclei which are more dense than all but the smallest of black holes.  Yet, we routinely directly observe the light from neutron stars with telescopes, and the photons emitted from atomic nuclei are what keep the electrons that are moving in a cloud around those atomic nuclei from flying away.

Neutron stars can have a mass of up to about 3 times that of the Sun, packed into a density roughly the same as that of an atomic nucleus, before they collapse under their own gravity to form black holes. The most dense objects in the universe are black holes just over this threshold.

But, the volume of a black hole, as measured by its event horizon, grows more rapidly than its mass, due to the non-linear nature of gravity in general relativity. As a result, black holes that are significantly heavier than the neutron star-black hole threshold, such as the roughly 30 solar mass black holes seen by LIGO, are significantly less dense in mass per event horizon contained volume than neutron stars or atomic nuclei. The density of the supermassive black holes at the center of galaxies like the Milky Way and its satellite galaxies, measured in mass per event horizon contained volume, is on the order of the density of liquid water or ordinary Earth rocks.
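The scaling behind that statement follows directly from the Schwarzschild radius (the standard textbook relation, stated here for clarity):

\[
r_{s} = \frac{2GM}{c^{2}} \propto M,
\qquad
V = \frac{4}{3}\pi r_{s}^{3} \propto M^{3},
\qquad
\bar{\rho} = \frac{M}{V} = \frac{3\,c^{6}}{32\pi\,G^{3}M^{2}} \propto \frac{1}{M^{2}},
\]

so, for example, a 30 solar mass black hole is about 100 times less dense, per event-horizon-enclosed volume, than a 3 solar mass black hole.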

The internal distribution of matter within a black hole is unknown and may be unknowable. In general relativity, all of the observable properties of a Kerr black hole in equilibrium, such as the one created in the merger that LIGO observed (once it settles down, a few minutes to a few years or so after the merger), can be determined from its mass and its angular momentum. (The fact that this is possible is a problem, requiring considerable theoretical attention, for quantum gravity theories in which the law that information cannot be created or destroyed is an axiom.)

The observational reality that there is a well known maximum density of matter approximately equal to the density of an atomic nucleus, neutron star or small stellar mass black hole is generally considered to be a mere empirical fact that emerges from other physical laws.  But, one can imagine a Copernican revolution arising from a theory in which a maximum density of mass-energy per volume (appropriately defined) is a law of nature.  The black hole-neutron star transition point also provides a physical calibration point which is ultimately a function of an equilibrium between a function of the gravitational constant G and a function of the strong force coupling constant of the Standard Model.

** It is theoretically possible for a black hole of less than 3 solar masses to exist, either because it was created by means other than self-generated gravitational collapse, or because it was once larger and evaporated via Hawking radiation (because, actually, what escapes from black holes is not nothing, but merely almost nothing, with a little bit of Hawking radiation escaping). Generally speaking, cosmic background radiation adds more mass to a stellar or larger black hole than Hawking radiation takes away. But, in principle, at some point in time when this wasn't the case, a stellar black hole could evaporate to less than 3 solar masses while retaining its black hole status.

In real life, no one has ever observed a black hole of this kind, which would be called a "primordial black hole," and most primordial black holes created around the time of the Big Bang probably would have evaporated via Hawking radiation by now. But primordial black holes of 10^14 kg or more would not have evaporated, and primordial black holes of 10^23 kg or less can't be excluded by gravitational lensing observations.

Thus, primordial black holes, if they did exist, would have masses comparable to asteroids and have been proposed as dark matter candidates (although few dark matter theorists view them as a very serious dark matter candidate for a variety of reasons).

Primordial black holes in that mass range would have a radius of 145 femtometers (the size of several tightly packed uranium nuclei sitting side by side) to 0.145 millimeters (the thickness of a strand of hair or one coat of paint).
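Those figures are roughly what the Schwarzschild radius formula gives for the mass range quoted above (a quick check with rounded constants):

```python
# Schwarzschild radius r_s = 2GM/c^2 for the primordial black hole mass range above.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius in meters for a given mass in kilograms."""
    return 2.0 * G * mass_kg / c**2

print(schwarzschild_radius(1e14))  # ~1.5e-13 m, consistent with the ~145 femtometer figure above
print(schwarzschild_radius(1e23))  # ~1.5e-4 m, consistent with the ~0.145 millimeter figure above
```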