Sunday, December 29, 2019

The Testimony Of Chewing Gum

A new paper managed to extract a remarkable amount of information from a single sample of very old chewing gum: a date, a complete human genome of the woman who chewed it, ancient DNA showing what she had been eating at the time, and the microbiome of her mouth, including the Epstein-Barr virus, the cause of a mono-like disease that she was fighting at the time.

This rich time capsule of ancient DNA comes from a period when the hunter-gatherers who had repopulated Europe from southern European refugia starting around 14,000 years ago, after the glaciers receded following the Last Glacial Maximum ca. 20,000 years ago, co-existed with the first farmers of Europe. Those farmers, ultimately from Anatolia (i.e. modern Turkey), migrated in slowly advancing waves along watersheds, mostly in whole family units with only modest admixture with local hunter-gatherers, and had begun to arrive in the region only a few centuries before the time of this sample.


An artist's recreation of the woman who used the chewing gum based upon her DNA (from here). 

The image above is our best estimate of what Western European hunter-gatherers looked like before farming and herding were introduced, based upon their DNA. 

This woman's appearance lacks the phenotypic changes in typical Europeans that arose with the first farmers of Europe who descended from Anatolian first farmers (who in turn, were descendants of Fertile Crescent hunter-gatherers).

The modern Scandinavian and Northern European phenotype (i.e. visible traits arising from one's genes), most notably characterized by very light colored skin, hair and eyes, didn't come into existence until the Bronze Age, when Anatolian farmers and metal-using Ukrainian horsemen admixed. 

The share of Western European hunter-gatherer ancestry in modern Europeans is quite modest and mostly comes indirectly from early farmers who admixed with the hunter-gatherers who preceded them in the places they settled (often limited to places with good water supplies and fertile soils, while hunter-gatherers remained dominant, or adopted herding, only on less prime farming land). 

(Ironically, European hunter-gatherers ca. 3700 BCE would look to modern eyes more "African" at first glance than Egyptians of that time period.)
During excavations on Lolland, [Denmark] archaeologists have found a 5,700-year-old type of "chewing gum" made from birch pitch. In a new study, researchers from the University of Copenhagen succeeded in extracting a complete ancient human genome from the pitch. It is the first time that an entire ancient human genome has been extracted from anything other than human bones. . . . [Scientists] also retrieved DNA from oral microbes and several important human pathogens, which makes this a very valuable source of ancient DNA, especially for time periods where we have no human remains[.]
Based on the ancient human genome, the researchers could tell that the birch pitch was chewed by a female. She was genetically more closely related to hunter-gatherers from the mainland Europe than to those who lived in central Scandinavia at the time. They also found that she probably had dark skin, dark hair and blue eyes.
The birch pitch was found during archaeological excavations at Syltholm, east of Rødbyhavn in southern Denmark. . . . Syltholm is completely unique. Almost everything is sealed in mud, which means that the preservation of organic remains is absolutely phenomenal[.]  It is the biggest Stone Age site in Denmark and the archaeological finds suggest that the people who occupied the site were heavily exploiting wild resources well into the Neolithic, which is the period when farming and domesticated animals were first introduced into southern Scandinavia[.] 
This is reflected in the DNA results, as the researchers also identified traces of plant and animal DNA in the pitch -- specifically hazelnuts and duck -- which may have been part of the individual's diet.

In addition, the researchers succeeded in extracting DNA from several oral microbiota from the pitch, including many commensal species and opportunistic pathogens . . . The researchers also found DNA that could be assigned to Epstein-Barr Virus, which is known to cause infectious mononucleosis or glandular fever.  
Birch pitch is a black-brown substance that is produced by heating birch bark. It was commonly used in prehistory for hafting stone tools as an all-purpose glue. The earliest known use of birch pitch dates back to the Palaeolithic.  
Pieces of birch pitch are often found with tooth imprints suggesting that they were chewed. As the pitch solidifies on cooling, it has been suggested that it was chewed to make it malleable again before using it for hafting etc. Other uses for birch pitch have also been suggested. For example, one theory suggests that birch pitch could have been used to relieve toothache or other ailments as it is mildly antiseptic. Other theories suggest, people may have used it as a kind of prehistoric tooth brush, to suppress hunger, or just for fun as a chewing gum.
From here. The source paper is: 

Theis Z. T. Jensen, et al., "A 5700 year-old human genome and oral microbiome from chewed birch pitch." 10(1) Nature Communications (2019). DOI: 10.1038/s41467-019-13549-9

More coverage at Bernard's Blog.

Saturday, December 28, 2019

Mayan Temple Discovered Near Cancun

The find isn't terribly unexpected or paradigm-breaking. But it illustrates that there is still plenty of relatively low-hanging fruit out there to be discovered by archaeologists in much of the world. 
Archeologists have discovered a large palace likely used by the Mayan elite more than 1,000 years ago in the ancient city of Kuluba, near the modern day tourist hot spot of Cancun in eastern Mexico, Mexican anthropology officials said. 
The remains of the six-meter high building, 55 meters (180 feet) long and 15 meters wide, suggest the palace was inhabited for two long periods between 600-1050 A.D., the National Institute of Anthropology and History (INAH) said in a statement. 
The Mayan civilization reached its peak between 250 and 900 A.D., when it ruled large swaths of what is now southern Mexico, Guatemala, Belize and Honduras.
From here

Friday, December 27, 2019

Human Evolution Happened In The Old World 122 To 329 Times In The Last 100,000 Years

A "selective sweep" is basically a discrete instance of a gene variant's frequency changing due to some form of natural selection (including reproductive partner selection, fertility rates, and death effects). A new study discussed below looked at genetic evidence of such sweeps in 5 populations each in Africa, Europe and Asia over the last 100,000 years which is about 3,500 generations based upon modern genomes from a pre-existing database. The populations studies were as follows:


The accuracy of the method was corroborated, among other things, by the fact that the blind statistical methods used located the most famous and intense selective sweeps previously known to have occurred. As the body text explains:
We found that the most enriched genomic regions contain multiple iconic examples of positive selection, e.g., the LCT region in northern Europeans, EDAR in all Asian populations and TLR5 in Africa, as well as various other examples including genes associated with lighter skin pigmentation. This indicates that the information underlying our estimations corresponds to genomic signals in line with natural selection.
Africa has had fewer selective sweeps (instances of gene frequency changes not caused by mere random drift) in the last 100,000 years than Europe, which has in turn had fewer selective sweeps than Asia. As the body text explains:
We also found a trend toward more sweeps among non-African populations, particularly in Asia. For example, we estimated 68 [CI:46-91] and 165 [CI:119-211] sweeps in average in Africa and Asia respectively when including complete sweeps in our model.
This is, in part, due to the fact that Africa has had fewer and less intense population bottlenecks in this time frame. It is also due, in part, to the fact that humans, who evolved in the first place in Africa, needed fewer evolutionary adaptations to continue to live there than to live elsewhere where different environmental conditions were present. 

Most selective sweeps have been continent-wide rather than global: they are typically shared across populations within a continent, but not across continents. As the body text of the paper explains:
We found virtually no overlap between populations from different continents, with some exceptions. 
In contrast, the most enriched regions tend to be shared across populations from the same continent. These results indicate that the total number of sweeps in humans should be close to the summation of the mean X computed per continent; i.e., 221 [CI:122-329], a first order approximation neglecting some selection signals shared across continents and considering sweeps are highly shared within continents.
Figure 4C below shows the results graphically with margins of error for Europe and Asia. Below that is a similar figure for five African populations from the Supplemental Materials. The abstract for the paper and its citation follow.



Over the last 100,000 years, humans have spread across the globe and encountered a highly diverse set of environments to which they have had to adapt. Genome-wide scans of selection are powerful to detect selective sweeps. However, because of unknown fractions of undetected sweeps and false discoveries, the numbers of detected sweeps often poorly reflect actual numbers of selective sweeps in populations. The thousands of soft sweeps on standing variation recently evidenced in humans have also been interpreted as a majority of mis-classified neutral regions. In such a context, the extent of human adaptation remains little understood. 
We present a new rationale to estimate these actual numbers of sweeps expected over the last 100,000 years (denoted by X) from genome-wide population data, both considering hard sweeps and selective sweeps on standing variation. We implemented an approximate Bayesian computation framework and showed, based on computer simulations, that such a method can properly estimate X. We then jointly estimated the number of selective sweeps, their mean intensity and age in several 1000G African, European and Asian populations. 
Our estimations of X, found weakly sensitive to demographic misspecifications, revealed very limited numbers of sweeps regardless the frequency of the selected alleles at the onset of selection and the completion of sweeps. We estimated ∼80 sweeps in average across fifteen 1000G populations when assuming incomplete sweeps only and ∼140 selective sweeps in non-African populations when incorporating complete sweeps in our simulations. The method proposed may help to address controversies on the number of selective sweeps in populations, guiding further genome-wide investigations of recent positive selection.
Guillaume Laval, et al., "A genome-wide Approximate Bayesian Computation approach suggests only limited numbers of soft sweeps in humans over the last 100,000 years" bioRxiv (December 23, 2019) doi: https://doi.org/10.1101/2019.12.22.886234
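For readers unfamiliar with the method, approximate Bayesian computation (ABC) in its simplest rejection-sampling form works roughly as sketched below. This is a generic illustration only, not the authors' actual pipeline (their simulator models sweeps in genome-wide data and uses richer summary statistics); the simulator, the prior range, the observed summary value and the tolerance here are all made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_summary(n_sweeps):
    """Stand-in for a population-genetic simulator: returns a summary
    statistic of simulated genome-wide data for a given number of sweeps.
    (Purely illustrative; the real simulator is far more involved.)"""
    return n_sweeps * 1.0 + rng.normal(scale=10.0)

observed_summary = 150.0   # hypothetical summary statistic from real data
tolerance = 5.0
accepted = []

for _ in range(100_000):
    n_sweeps = rng.integers(0, 500)          # draw from a flat prior on X
    if abs(simulate_summary(n_sweeps) - observed_summary) < tolerance:
        accepted.append(n_sweeps)            # keep draws that reproduce the data

# The accepted draws approximate the posterior distribution of X,
# from which a mean and credible interval can be read off.
print(np.mean(accepted), np.percentile(accepted, [2.5, 97.5]))
```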

Tuesday, December 24, 2019

Quarks Within Protons Are Entangled

A proton is a composite particle made up of three valence quarks (two up quarks and one down quark) and myriad "sea quarks" which are "confined" by gluons in a coherent and discrete unit. 

The internal structure of a proton is a matter of ongoing investigation and one insight reached recently based upon experimental data is that quarks within protons, as previously suspected, appear to be "entangled" which is to say that their behavior is correlated, rather than being independent of each other.

The investigators think that the quantum entanglement may help to explain why quarks and gluons are always confined in composite particles.

[D]ata from the Large Hadron Collider hint that protons’ constituents don’t behave independently. Instead, they are tethered by quantum links known as entanglement, three physicists report in a paper published April 26 at arXiv.org.  
Quantum entanglement has previously been probed on scales much larger than a proton. In experiments, entangled particles seem to instantaneously influence one another, sometimes even when separated by distances as large as thousands of kilometers (SN: 8/5/17, p. 14). Although scientists suspected that entanglement occurs within a proton, signs of that phenomenon hadn’t been experimentally demonstrated inside the particle, which is about a trillionth of a millimeter across. 
“The idea is, this is a quantum mechanical particle which, if you look inside it, … it’s itself entangled,” says theoretical physicist Piet Mulders of Vrije Universiteit Amsterdam, who was not involved with the research. 
In the new study, the team analyzed collisions of protons, which had been accelerated to high speeds and slammed together at the Large Hadron Collider in Geneva. Using data from the CMS experiment there, the researchers studied the entropy resulting from entanglement within the proton. Entropy is a property that depends on the number of possible states a system can take on, on a microscopic level. An analogy is a deck of cards: A shuffled deck has multiple ways that it could be ordered, whereas an ordered deck has only one, so the scrambled cards have higher entropy. 
If entanglement exists within a proton, there will be additional entropy as a result of those linkages. That entropy can be teased out by counting the number of particles produced in each collision. The amount of entropy the researchers found agreed with that expected assuming the quarks and gluons were entangled, the physicists report in their paper[.]
From Science News. The paper is:

Z. Tu, D. Kharzeev and T. Ullrich. The EPR paradox and quantum entanglement at sub-nucleonic scales. arXiv:1904.11974 (April 26, 2019).
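For reference, the entropy at issue here is the standard statistical one; these are textbook definitions rather than anything specific to the paper. The entropy of a system with $\Omega$ equally likely microstates, and the von Neumann entanglement entropy of a subsystem $A$ with reduced density matrix $\rho_A$, are

$$ S = k_B \ln \Omega, \qquad S_A = -\,\mathrm{Tr}\!\left(\rho_A \ln \rho_A\right). $$

Roughly speaking, the analysis compares the entropy implied by the proton's entangled parton content with the entropy of the hadrons produced in each collision, as the quoted Science News passage describes.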

Monday, December 23, 2019

You Can't See The Brightest Star In The Sky

NASA's Fermi Gamma-ray Space Telescope has discovered a faint but sprawling glow of high-energy light around a nearby pulsar. If visible to the human eye, this gamma-ray 'halo' would appear about 40 times bigger in the sky than a full Moon.
From here

"Biggest" would be more accurate than "Brightest", but I claim poetic license for the post title. Gamma rays are in the deep ultraviolet beyond the wavelengths of visible life, which is why you can't see it.

Indian Ocean Sea Level Is Near A Record High (In Recorded History)

Sea levels are nearing an all time high due to global warming.
[S]ea levels in the central Indian Ocean have risen by close to a meter in the last two centuries. . . .

"We know that certain types of fossil corals act as important recorders of past sea levels. By measuring the ages and the depths of these fossil corals, we are identifying that there have been periods several hundred years ago that the sea level has been much lower than we thought in parts of the Indian Ocean." . . .
if such acceleration continues over the next century, sea levels in the Indian Ocean will have risen to their highest level ever in recorded history.
From here based upon this paper:


Paul S. Kench, et al. "Climate-forced sea-level lowstands in the Indian Ocean during the last two millennia." Nature Geoscience (2019); DOI: 10.1038/s41561-019-0503-7

Munda Origins In South Asia Better Understood

There is no serious doubt that the Munda languages of South Asia arrived starting around 3000 BCE from Southeast Asia via rice farmers. A new paper whose abstract and citation appears below fills in some of the details of how that happened.

At least three patrilineal founders, arriving over a period of time and differentiated between North Munda, South Munda and Khasi populations, were involved, while the Tibeto-Burman Y-DNA O2a-M95 arrived separately with a different founding population. In other words, the Tibeto-Burman populations did not arise from a language shift away from Austroasiatic languages in South Asia.




The phylogenetic analysis of Y chromosomal haplogroup O2a-M95 was crucial to determine the nested structure of South Asian branches within the larger tree, predominantly present in East and Southeast Asia. However, it had previously been unclear how many founders brought the haplogroup O2a-M95 to South Asia. On the basis of the updated Y chromosomal tree for haplogroup O2a-M95, we analysed 1,437 male samples from South Asia for various downstream markers, carefully selected from the extant phylogenetic tree. 
With this increased resolution, we were able to identify at least three founders downstream to haplogroup O2a-M95 who are likely to have been associated with the dispersal of Austroasiatic languages to South Asia. 
The fourth founder was exclusively present amongst Tibeto-Burman speakers of Manipur and Bangladesh. 
In sum, our new results suggest the arrival of Austroasiatic languages in South Asia during last five thousand years.
Prajjval Pratap Singh, et al., "Counting the paternal founders of Austroasiatic speakers associated with the language dispersal in South Asia" bioRxiv (November 18, 2019) https://doi.org/10.1101/843672 (emphasis added).

Background from the Introduction (emphasis added):

Among the four major language families present in South Asia, the Austroasiatic language family has the smallest number of speakers. Nevertheless, its geographical location and exclusive distribution amongst tribal populations make Austroasiatic one of the most intriguing language families in the context of the Subcontinent where it is represented by the major Mundari and minor Khasi branches. The advancement of technology has helped to resolve the long standing debate about the arrival of Austroasiatic (Mundari) speakers to South Asia. The three types of genetic markers (mtDNA, Y chromosome and autosomes) revealed the extraordinary migratory phenomenon. Because of the incompatible histories of their paternal and maternal ancestries, the origin and dispersal of Austroasiatic language in these populations had remained a debatable issue for many years. For Mundari speakers, the exclusive South Asian maternal ancestry pinpointed their origin in South Asia, whereas their overwhelming East/Southeast-Asian-specific paternal ancestry associated with haplogroup O2a-M95 suggested a dispersal from East to West. Another Austroasiatic branch, Khasi, evinced mixed ancestries for both the paternal and maternal lines. Over a century of ethnographical and Indological scholarship had identified Austroasiatic language communities as representing the earliest settlers of South Asia, so that this view represented the consensus. Then, the increasing resolution of the Y-chromosomal tree and Y-STR-based coalescent calculations began to challenge our view of these populations. Despite the problems with STR-based coalescent calculations, the Y-STR variance for haplogroup O2a-M95 was consistently reported to be significantly higher in Southeast Asia. 
With the technological advancement, a much clearer picture has now emerged, showing a strong association of largely male mediated gene flow for the dispersal of Mundari speakers from Southeast Asia to South Asia. The autosomal data showed a bidirectional gene flow between South and Southeast Asia. In addition, the global distribution and internal phylogeny of Y-chromosomal haplogroup O2a-M95 decisively demonstrated that this language dispersal transpired from Southeast Asia to South Asia. In a recent genome-wide study of a large number of South and Southeast Asian populations, Dravidian speakers of Kerala and Lao speakers from Laos were found to be the best ancestral proxies for Mundari speakers. The admixture dates for these ancestries were estimated to be between 2-3.8 kya. Moreover, this study also found clear-cut genetic differentiation between North and South Munda groups.
Based on the prevalence of haplogroup O2a-M95, the arrival of Mundari speakers to South Asia was shown to have been largely male-mediated. However, it remained unclear how many paternal founders had brought this haplogroup to South Asia. The recent admixture time estimated through autosomal data has now narrowed down the arrival of Mundari ancestors to South Asia. Therefore, a marker such as M95, which originated >12 kya, would not help us to understand the most recent founding lineages. The association of haplogroup O2a-M95 with the arrival of Mundari speakers has been explored previously in detail. However, the lack of downstream markers made it impossible to discern the number of most recent founders who had originally borne the language to South Asia. Moreover, previous studies on Tibeto-Burman populations of Northeast India and Bangladesh also reported this same haplogroup, giving rise to the contentious question as to whether the expansion of Tibeto-Burmans could have involved the assimilation of Austroasiatic populations. 
Therefore, we took advantage of recent technological advancements and extracted the downstream SNPs associated with Munda-related populations from one of our recent studies. We genotyped these SNPs for large number of Mundari, Tibeto-Burman and Bangladeshi population groups and reconstructed the phylogeographical distribution of downstream branches of haplogroup O2a-M95 in South Asia.
From the body text (emphasis added):

The diverse founders as well as the large number of unclassified samples (41% for Mundari, 38% for Khasi and 1% for Tibeto-Burmans) suggest that the migration of Austroasiatic speakers to South Asia was not associated with the migration of a single clan or a drifted population. 
Neither does the contrasting distribution of various founders discovered in this study amongst both Mundari and Tibeto-Burman populations support the assimilation of the former to the latter. 

Youngest Homo Erectus Remains Are About 100,000 Years Old

This reaffirms the paradigm and status quo, that Homo erectus had a million year run in Asia and vanished shortly (percentage-wise) before modern humans arrived.

Razib reminds us that there were many archaic hominin species in Asia contemporaneously with Homo erectus, including Neanderthals, Denisovans, the "hobbits" of Flores, and an additional species or three in the Philippines and possibly China too. Modern humans reached Sumatra no later than 70,000 years ago, immediately after the Toba eruption.

It wouldn't be surprising to find new Homo erectus remains later, or new modern human remains earlier, that would narrow the gap between the two species or possibly leave evidence of a brief overlap like the 1,000-year overlaps characteristic of Europe at a local level.

This species diversity may have flowed from topographies and ecologies that created many separate and independent ecological micro-niches that could not be sustained, for example, on a broad flat plain or in an area with thin tree cover and rolling hills.

My personal conjecture is that the Toba eruption resulted in temporary ecological changes that devastated the already low density H. erectus population and cleared the way for an influx of modern humans from South Asia via Myanmar. Modern humans eventually killed almost all of the stragglers, although relict populations of H. erectus may have survived many thousands or tens of thousands of years past first contact and I don't rule out the possibility that there is still a small relict population of archaic hominins or archaic hominin hybrids deep in some Indonesian jungle. If archaic hominins continue to exist anywhere, it is probably there.  
Homo erectus is the founding early hominin species of Island Southeast Asia, and reached Java (Indonesia) more than 1.5 million years ago. Twelve H. erectus calvaria (skull caps) and two tibiae (lower leg bones) were discovered from a bone bed located about 20 m above the Solo River at Ngandong (Central Java) between 1931 and 1933, and are of the youngest, most-advanced form of H. erectus. Despite the importance of the Ngandong fossils, the relationship between the fossils, terrace fill and ages have been heavily debated. 
Here, to resolve the age of the Ngandong evidence, we use Bayesian modelling of 52 radiometric age estimates to establish—to our knowledge—the first robust chronology at regional, valley and local scales. We used uranium-series dating of speleothems to constrain regional landscape evolution; luminescence, 40argon/39argon (40Ar/39Ar) and uranium-series dating to constrain the sequence of terrace evolution; and applied uranium-series and uranium series–electron-spin resonance (US–ESR) dating to non-human fossils to directly date our re-excavation of Ngandong.
We show that at least by 500 thousand years ago (ka) the Solo River was diverted into the Kendeng Hills, and that it formed the Solo terrace sequence between 316 and 31 ka and the Ngandong terrace between about 140 and 92 ka. Non-human fossils recovered during the re-excavation of Ngandong date to between 109 and 106 ka (uranium-series minimum) and 134 and 118 ka (US–ESR), with modelled ages of 117 to 108 thousand years (kyr) for the H. erectus bone bed, which accumulated during flood conditions. These results negate the extreme ages that have been proposed for the site and solidify Ngandong as the last known occurrence of this long-lived species.
Yan Rizal, et al., "Last appearance of Homo erectus at Ngandong, Java, 117,000–108,000 years ago" Nature (December 18, 2019). doi:10.1038/s41586-019-1863-2

Tuesday, December 3, 2019

Astronomy Observations Supporting Dark Energy's Existence May Actually See Large Scale Structure

Executive summary

Is dark energy the dominant kind of stuff in the universe? 

Maybe not. 

A credible analysis of a data set much larger than the one that supported the existence of dark energy when it was first proposed implies that what looked like dark energy may actually be an observation of large scale structure in the universe. The key point is that the supernovae data supporting the existence of dark energy have a skewed distribution favoring certain parts of the sky.

Background

The Standard Model of Cosmology

The "Standard Model of Cosmology" also known as the ΛCDM model, where Λ (pronounced "lambda") is the cosmological constant of General Relativity, and CDM stands for "cold dark matter" (confusingly a term defined in a manner consistent with both "warm dark matter" and "cold dark matter" in non-cosmology contexts).

The ΛCDM model is the leading overarching framework for understanding the structure and logic of our astronomy observations at the scale of the universe as a whole and its large scale structure. Its prominence derives from the fact that it can be fit consistently to astronomy observations at very large cosmological scales with just seven or so parameters determined based upon astronomy observations. 

The ΛCDM model also provides a plausible framework that can produce results reasonably similar to the structure of the existing universe at the scale of galaxies and galaxy clusters, although fitting the model to astronomy observations at this scale has been more problematic.

Even at the cosmological level, the ΛCDM model has critics who are credible, smart, well-qualified PhD astronomers, astrophysicists and cosmologists, who argue that new data requires modification of this model. This post is about one such criticism.

Dark Energy

In General Relativity with a cosmological constant, the cosmological constant is a constant baseline amount of mass-energy per volume of spacetime that exists in a vacuum (in models where this mass-energy is something separate from spacetime itself, it is most often called "quintessence" or "dark energy"). In this model, the universe constantly expands in volume from a mere volumeless point at the moment of the Big Bang and expands at the speed of light, and the amount of dark energy in the universe is proportionate to the volume of the universe. 

Dark energy violates conservation of mass-energy in the ordinary sense by increasing as the volume of universe expands.

Ordinary matter and dark matter

In contrast, in the ΛCDM model, the aggregate amount of ordinary matter and cold dark matter in the universe is constant, so the density of ordinary matter and cold dark matter declines as the universe expands, in proportion to one over the volume of the universe (which is currently about 14 billion years old in round numbers).
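In equations, the standard textbook statement of this scaling (not taken from the new paper discussed below) is that, in terms of the scale factor a (with a = 1 today), matter density dilutes with volume while the dark energy density stays fixed, and the expansion rate in a spatially flat ΛCDM universe follows from the Friedmann equation:

$$ \rho_m(a) \propto a^{-3}, \qquad \rho_\Lambda = \frac{\Lambda c^2}{8\pi G} = \text{constant}, \qquad H^2(a) \equiv \left(\frac{\dot a}{a}\right)^2 = H_0^2\left[\Omega_m a^{-3} + \Omega_\Lambda\right]. $$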

Ordinary matter and cold dark matter (and also all forms of mass-energy in the universe other than dark energy, such as kinetic energy and photons) perfectly obey the conservation of mass-energy (although quantum mechanics allows mass-energy to be temporarily "borrowed" in non-observable intermediate steps of physics phenomena, making possible phenomena like quantum tunneling, which is necessary, for example, for transistors to work as they do, something that would be impossible in the classical electromagnetic theory summarized in Maxwell's equations). 

Everything other than dark energy in the universe is constantly expanding in all directions with the momentum of the Big Bang, modified by gravitational pulls in all directions from everything else in the universe, although everything other than massless particles (basically photons) is doing this at less than the speed of light. So, clumps of matter like galaxies constantly get more distant from each other, on average.

In the absence of dark energy, the average rate of this expansion would be constant, driven at first order by the momentum of the Big Bang, and modified up or down at second order (in an amount that would average out over time) by the proximity of other ordinary matter, dark matter and radiation in its vicinity.

The phenomenological implications of dark energy

But, in the ΛCDM model, the constant increase in the aggregate mass-energy of the universe, by continually adding dark energy at the outer boundary of the universe as the volume of space in the universe expands at the speed of light away from the Big Bang, causes the rate at which ordinary matter and dark matter in the universe expands away from other ordinary matter and dark matter in the universe to accelerate because the newly created dark energy is pulling it outward.

Dark energy's existence is inferred in the ΛCDM model from the observation that the rate at which ordinary matter and dark matter in the universe expand away from other ordinary matter and dark matter is accelerating.

Measuring dark energy

This acceleration can be inferred from astronomy observations because the finite speed of light means that our astronomy observations of things that are further away (something that can be determined using a phenomenon known as "redshift") are further back in time. So, if the rate at which distant objects are expanding away from each other is slower than the rate at which close objects are expanding away from each other, then we can use the difference between those rates to determine the cosmological constant and the amount of dark energy as a proportion of the total mass-energy of the universe at the present time.

Using astronomy observations to determine the relative amounts of dark energy, cold dark matter and ordinary matter in the universe, in a model dependent way using the ΛCDM model, implies that right now, 14 billion years after the Big Bang, dark energy accounts for about 70% of the aggregate mass-energy of the universe at the present time, while about 30% of the aggregate mass-energy of the universe at the present time is other stuff, mostly ordinary matter (about 5% in round numbers) and dark matter (about 25% in round numbers).

The main astronomy observation we use to estimate the amount of dark energy in the universe is the distribution of Type Ia supernovae in the observable universe. These supernovae produce a very distinctive electromagnetic signal, consistent among supernovae of this type, that makes it possible to accurately determine their distance, and hence how long ago they occurred, based upon how redshifted the electromagnetic signal is (mostly visible light, but also other wavelengths). That distance can be combined with the location in the sky where each supernova is observed to make a four-dimensional map of Type Ia supernovae from which expansion rates can be determined. (These supernovae also happen at predictable rates in a well understood astrophysical process.)
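For reference, the quantity that encodes this acceleration in the standard analysis is the deceleration parameter. This is a textbook definition rather than anything specific to the new paper:

$$ q \equiv -\frac{\ddot a \, a}{\dot a^2}, \qquad q_0 = \tfrac{1}{2}\Omega_m - \Omega_\Lambda \;\; \text{(flat } \Lambda\text{CDM)}, $$

so a negative q0 means accelerating expansion; with Ω_m ≈ 0.3 and Ω_Λ ≈ 0.7, q0 ≈ -0.55.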

Similarly, according to the ΛCDM model, a little more than 7 billion years ago, about 34% of the mass energy of the universe was dark energy, about 11% was ordinary matter, and about 55% was cold dark matter.

Isotropy v. Anisotropy

But, in order to accurately calculate the amount of dark energy in the universe we have to make some key assumptions which we use to fit our astronomy observations to the ΛCDM model.

One of the key assumptions that goes into converting astronomy observations into a cosmological constant value (which, combined with the age of the universe, can be used to determine the amount of dark energy in the universe) is that the universe is isotropic. This means that at cosmological scales (i.e. the scale of large chunks of the entire universe), the observable properties of any given large chunk of the universe are basically identical and symmetrical. 

In contrast, if the universe is anisotropic, then there is very large scale structure in the universe (presumably an imprint of trends put in place by the way that random outcomes of very early events in the universe, possibly at a quantum mechanical level in the first few seconds of the universe even, turned out).

An anisotropic universe isn't a horribly outrageous or absurd idea. At the scale of our local few dozen galaxies, the universe is undeniably anisotropic. There are big clumps of matter that make up galaxies and galaxy clusters, and vast comparatively empty spaces with only a little hydrogen gas and dust and a few stray isolated stars and rocks in them. And the number of galaxies in each spatial direction is not the same.

Whether the universe is isotropic or anisotropic at the cosmological scale of the entire observable universe is not a question with a Platonically right or wrong answer as a matter of pure reason; it is, in principle at least, a question that can be determined empirically from astronomy observations.

The New Paper

A six page paper published in a peer reviewed scientific journal two weeks ago suggests that the astronomy observations indicating that the expansion of the ordinary matter and cold dark matter in the universe is accelerating are just an optical illusion arising from the fact that the universe is actually anisotropic at cosmological scales. 

As a result, from our vantage point on Earth, we may actually be seeing the large scale structure of the universe and misinterpreting those observations as an acceleration of the universe due to dark energy. The paper asserts that this happens because the ΛCDM model used to fit our observations counterfactually assumes that the universe is isotropic. 

While this would be bad news for a core tenet of the ΛCDM model and mean that everything we've been told about cosmology for decades is significantly inaccurate, this could be good news for fundamental physics. This is because there are deep problems involved in figuring out the fundamental laws of physics in models where the core law of conservation of mass-energy is not obeyed globally by gravity, even though the Standard Model of Particle Physics obeys this law and all local observations of gravity obey it. And the assumption of isotropy that would be displaced is an assumption about empirical reality, rather than itself being a fundamental law of physics that needs to be true.

In particular, it is much easier to devise a model of quantum gravity in which gravity obeys the same law of conservation of mass-energy that the Standard Model does than one in which some gravitational phenomenon does not obey that law.

The abstract of the paper and its citation are as follows:
Observations reveal a “bulk flow” in the local Universe which is faster and extends to much larger scales than are expected around a typical observer in the standard ΛCDM cosmology. This is expected to result in a scale-dependent dipolar modulation of the acceleration of the expansion rate inferred from observations of objects within the bulk flow.

From a maximum-likelihood analysis of the Joint Light-curve Analysis catalogue of Type Ia supernovae, we find that the deceleration parameter, in addition to a small monopole, indeed has a much bigger dipole component aligned with the cosmic microwave background dipole, which falls exponentially with redshift z: q0 = qm + qd.n̂ exp(-z/S). The best fit to data yields qd = −8.03 and S = 0.0262 (⇒d ∼ 100 Mpc), rejecting isotropy (qd = 0) with 3.9σ statistical significance, while qm = −0.157 and consistent with no acceleration (qm = 0) at 1.4σ. Thus the cosmic acceleration deduced from supernovae may be an artefact of our being non-Copernican observers, rather than evidence for a dominant component of “dark energy” in the Universe.
Jacques Colin, et al., "Evidence for anisotropy of cosmic acceleration", 631 Astronomy and Astrophysics L13 (November 20, 2019) DOI: https://doi.org/10.1051/0004-6361/201936373
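As a concrete illustration of what the quoted dipole model means, here is a minimal sketch that simply evaluates the fitted form of the deceleration parameter. The best-fit numbers come from the abstract above; the angles and redshifts fed in are illustrative inputs, and this is not the authors' actual maximum-likelihood fitting code.

```python
import numpy as np

# Best-fit values reported in the Colin et al. abstract.
q_m = -0.157      # monopole component of the deceleration parameter
q_d = -8.03       # dipole component, aligned with the CMB dipole
S = 0.0262        # exponential scale in redshift (~100 Mpc)

def deceleration(z, cos_theta):
    """Deceleration parameter for a supernova at redshift z whose direction
    makes an angle theta with the CMB dipole direction, per the model
    q = q_m + q_d * (n.d) * exp(-z / S)."""
    return q_m + q_d * cos_theta * np.exp(-z / S)

# A nearby supernova aligned with the dipole looks strongly "accelerating"
# (large negative q), while for a distant one the dipole term has decayed
# away and q is consistent with little or no acceleration.
print(deceleration(0.02, 1.0))
print(deceleration(0.5, 1.0))
```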

Hat tip to Backreaction. As Sabine Hossenfelder explains in this blog post:
The most important evidence we have for the existence of dark energy comes from supernova redshifts. Saul Perlmutter and Adam Riess won a Nobel Prize for this observation in 2011. . . . They used the distance inferred from the brightness and the redshift of type 1a supernovae, and found that the only way to explain both types of measurements is that the expansion of the universe is getting faster. And this means that dark energy must exist. 
Now, Perlmutter and Riess did their analysis 20 years ago and they used a fairly small sample of about 110 supernovae. Meanwhile, we have data for more than 1000 supernovae. For the new paper, the researchers used 740 supernovae from the JLA catalogue. But they also explain that if one just uses the data from this catalogue as it is, one gets a wrong result. The reason is that the data has been “corrected” already.

This correction is made because the story that I just told you about the redshift is more complicated than I made it sound. That’s because the frequency of light from a distant source can also shift just because our galaxy moves relative to the source. More generally, both our galaxy and the source move relative to the average restframe of stuff in the universe. And it is this latter frame that one wants to make a statement about when it comes to the expansion of the universe.

How do you even make such a correction? Well, you need to have some information about the motion of our galaxy from observations other than supernovae. You can do that by relying on regularities in the emission of light from galaxies and galaxy clusters. This allow astrophysicist to create a map with the velocities of galaxies around us, called the “bulk flow” . 
But the details don’t matter all that much. To understand this new paper you only need to know that the authors had to go and reverse this correction to get the original data. And then they fitted the original data rather than using data that were, basically, assumed to converge to the cosmological average. 
What they found is that the best fit to the data is that the redshift of supernovae is not the same in all directions, but that it depends on the direction. This direction is aligned with the direction in which we move through the cosmic microwave background. And – most importantly – you do not need further redshift to explain the observations. 
If what they say is correct, then it is unnecessary to postulate dark energy which means that the expansion of the universe might not speed up after all. 
Why didn’t Perlmutter and Riess come to this conclusions? 
They could not, because the supernovae that they looked were skewed in direction. The ones with low redshift were in the direction of the CMB dipole; and high redshift ones away from it. With a skewed sample like this, you can’t tell if the effect you see is the same in all directions. 
What is with the other evidence for dark energy? 
Well, all the other evidence for dark energy is not evidence for dark energy in particular, but for a certain combination of parameters in the concordance model of cosmology. These parameters include, among other things, the amount of dark matter, the amount of normal matter, and the Hubble rate. 
There is for example the data from baryon acoustic oscillations and from the cosmic microwave background which are currently best fit by the presence of dark energy. But if the new paper is correct, then the current best-fit parameters for those other measurements no longer agree with those of the supernovae measurements. This does not mean that the new paper is wrong. It means that one has to re-analyze the complete set of data to find out what is overall the combination of parameters that makes the best fit.
The post there concludes as follows:
This paper, I have to emphasize, has been peer reviewed, is published in a high quality journal, and the analysis meets the current scientific standard of the field. It is not a result that can be easily dismissed and it deserves to be taken very seriously, especially because it calls into question a Nobel Prize winning discovery. This analysis has of course to be checked by other groups and I am sure we will hear about this again, so stay tuned.
Footnote

This is basically a rehash of an analysis previously published by an overlapping group of authors in 2016. A more technical discussion of the 2016 paper can be found at 4gravitons. 

There is also an unrelated major 2015 paper revealing a systemic issue involved in dark energy calculations, which is that there are basically two kinds of type Ia supernovae which have to be distinguished and that grouping them in one pool of data undermines its precision and reliability. I summarized that paper in the linked post as follows:
It turns out that there are two different subtypes of type 1a supernovas, with one more common in the early universe, and the other more common recently. They are very hard to distinguish in the visible light spectrum, but have clear differences in the UV spectrum. As a result, the rate at which the universe is expanding, if indeed it is expanding, and the amount of dark energy in the universe, are systemically overestimated by a significant amount. 
Less dark energy may, however, mean that another cosmology mystery is more profound. This could bring the relative amounts of dark matter and dark energy in the universe closer together, something that is already called the cosmic coincidence problem because there is no obvious theoretical reason for the two dark components of cosmology to be so similar in aggregate amount.
It isn't clear to me if the 2016 and 2019 papers account for the issue discovered in the 2015 paper. If they don't, the case for there not being net dark energy may be even stronger. 

Monday, December 2, 2019

European Barbarians Had Varied Y-DNA Within Populations

I'll comment more if I have time, but the headline points out what I think is the most notable point. This kind of Y-DNA diversity doesn't happen overnight. It takes generations. This points to a greater diversity of ancestry among discrete groups of "barbarians" in Europe, and a more cosmopolitan culture among these groups, than is usually appreciated.

Also, while the Hungarian example is usually considered an extreme counterexample of demic diffusion of language, with language shift secured despite minimal genetic introgression, there is lots of Uralic Y-DNA (see also Eurogenes) in this ancient DNA sample from Hungary.


Via Bernard's Blog. From that post (translated by Google from the French original):
Endre Neparáczki and his colleagues have just published a paper entitled: Y-chromosome haplogroups from Hun, Avar and conquering Hungarian nomadic period people of the Carpathian Basin. They analyzed the Y chromosome haplogroups of several individuals from 8 cemeteries in Hungary hosting Hungarian conquerors but also individuals from the previous Hun and Avar periods. The authors also tested some autosomal markers associated with eye, hair and skin color, and lactose tolerance. The authors obtained results for 46 men including three from the Hun period and 14 from the Avar period.

Monday, November 25, 2019

Hungarian Scientists Almost Surely Didn't Discover A Fifth Force

Scientists at the Institute for Nuclear Research at the Hungarian Academy of Sciences (Atomki) have posted findings showing what could be an example of that fifth force at work. 
The scientists were closely watching how an excited helium atom emitted light as it decayed. The particles split at an unusual angle -- 115 degrees -- which couldn't be explained by known physics. 
The study's lead scientist, Attila Krasznahorkay, told CNN that this was the second time his team had detected a new particle, which they call X17, because they calculated its mass at 17 megaelectronvolts. 
"X17 could be a particle, which connects our visible world with the dark matter," he said in an email.
From CNN (previous papers by the same group are linked to in the story with hyperlinks).

The odds are at least 98% that the Hungarian scientists have made some sort of experimental error and have not discovered a fifth force (98% is roughly the confidence level at which non-Standard Model physics is typically ruled out in broad searches for "new physics" that aren't specifically predicted by a particular hypothesis of interest).

Why? 

Because there are a great many experiments that could have detected the 17 MeV mass particle that they claim to have seen (e.g. LEP, the Tevatron, LHCb, ATLAS, CMS, Jefferson Lab, etc.) and none of the other experiments have replicated this result. ATLAS and CMS are sensitive to new physics particles up to about 1,000,000,000 MeV of mass. The two experiments at the Tevatron collider were sensitive to particles up to about 200,000 MeV of mass. LEP was sensitive to particles up to about 100,000 MeV of mass. Particles with masses on the order of 17 MeV have been possible to observe in high energy physics experiments since the 1960s, if not earlier, and nobody has claimed to see them, until now.

More generally, myriad other high energy physics calculations would be affected by such a fifth force but are adequately explained by the plain old three forces of the Standard Model (also ignoring gravity).

17 MeV is about 33 times the mass of an electron, a muon is about 6 times as massive as 17 MeV, a pion (the lightest particle made out of quarks, which never appear in isolation in the real world) is about 8 times as massive as 17 MeV, and a proton or neutron is about 55 times as massive as 17 MeV. Neutrinos are on the order of a million or more times less massive than 17 MeV.
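A quick back-of-the-envelope check of those ratios, using rounded standard particle masses (values as I recall them from the PDG listings):

```python
# Approximate particle masses in MeV (rounded PDG values).
masses = {
    "electron": 0.511,
    "muon": 105.7,
    "charged pion": 139.6,
    "proton": 938.3,
    "neutron": 939.6,
}
X17 = 17.0  # claimed mass of the hypothetical particle, in MeV

print(f"X17 / electron ≈ {X17 / masses['electron']:.0f}")   # ~33
for name in ("muon", "charged pion", "proton", "neutron"):
    print(f"{name} / X17 ≈ {masses[name] / X17:.1f}")        # ~6.2, ~8.2, ~55.2, ~55.3
```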

Particles such as this hypothetical particle, which purportedly have effects which can be seen electromagnetically (such as interactions involving the emission of light as claimed in this article), are particularly easy to see experimentally, because experimental electromagnetic measurements are among the most precise of all kinds of scientific measurements. The polarization and energies of single photons can be measured with modern physics instrumentation.

17 MeV is also a poor fit to most astronomy data related to hypothetical dark matter particle candidates based upon the inferred mean velocity of dark matter particle candidates in thermal freeze out scenarios, which tend to favor "warm dark matter" candidates on the order of 10,000 times less massive than 17 MeV.

Also, while a helium atom may seem pretty simple (usually two protons and two neutrons), this atom is close to the boundary of what it is possible to calculate from first principles in the Standard Model of Physics. "Hadronic matter" (i.e. particles made up of bound compound particles made of quarks), which is described with a combination of the strong force of quantum chromodynamics (QCD), the electromagnetic force of quantum electrodynamics (QED), and the Standard Model physics of the weak force, is exceedingly challenging to model mathematically from first principles. Almost all such calculations, in practice, are done using numerical approximations that each have their flaws and are least reliable in the low energy regimes (i.e. for masses and energies on the order of 2 GeV or so, plus or minus, with a helium atom having a mass of approximately 4 GeV) in which QCD calculations become "non-perturbative" (which basically means highly non-linear).

How else can you tell?

Any genuinely plausible experimental anomaly routinely results in hundreds of research papers attempting to explain the phenomenon, written by physicists all over the world, in a matter of weeks. This paper has generated no such academic interest, effectively illustrating an implicit form of negative peer review. If the claims of a fifth force held water, the physics pre-print repository arXiv would be full of papers trying to explain this anomaly, and it isn't.

Friday, November 22, 2019

Why Do Some LSB Dwarf Galaxies Have Lots Of Dark Matter While Some Seemingly Have None?

The paper does not consider whether the MOND external field effect (EFE) can adequately explain the departure from the mass discrepancy-acceleration relation that MOND codifies in the form of a phenomenological toy-model gravity modification.
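For context, the mass discrepancy-acceleration relation referred to above (also called the radial acceleration relation) is usually written in roughly the following form, with a single acceleration scale g† ≈ 1.2 × 10⁻¹⁰ m s⁻². I quote it from memory of McGaugh et al. (2016), so treat the exact functional form as approximate:

$$ g_{\rm obs} = \frac{g_{\rm bar}}{1 - e^{-\sqrt{g_{\rm bar}/g_\dagger}}}, $$

where g_bar is the Newtonian acceleration expected from the observed baryons alone and g_obs is the acceleration inferred from the rotation curve.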
We use a sample of galaxies with high-quality rotation curves to assess the role of the luminous component ("baryons") in the dwarf galaxy rotation curve diversity problem. 
As in earlier work, we find that the shape of the rotation curve correlates with baryonic surface density; high surface density galaxies have rapidly-rising rotation curves consistent with cuspy cold dark matter halos, slowly-rising rotation curves (characteristic of galaxies with inner mass deficits or "cores") occur only in low surface density galaxies. 
The correlation, however, seems too weak in the dwarf galaxy regime to be the main driver of the diversity. In particular, the observed dwarf galaxy sample includes "cuspy" systems where baryons are unimportant in the inner regions and "cored" galaxies where baryons actually dominate the inner mass budget. 
These features are important diagnostics of the viability of various scenarios proposed to explain the diversity, such as (i) baryonic inflows and outflows; (ii) dark matter self-interactions (SIDM); (iii) variations in the baryonic acceleration through the "mass discrepancy-acceleration relation" (MDAR); or (iv) non-circular motions in gaseous discs. A reanalysis of existing data shows that MDAR does not hold in the inner regions of dwarf galaxies and thus cannot explain the diversity.
Together with analytical modeling and cosmological hydrodynamical simulations, our analysis shows that each of the remaining scenarios has promising features, but none seems to fully account for the observed diversity. The origin of the dwarf galaxy rotation curve diversity and its relation to the small structure of cold dark matter remains an open issue.
Isabel M.E. Santos-Santos, et al., "Baryonic clues to the puzzling diversity of dwarf galaxy rotation curves" (November 20, 2019) (submitted to MNRAS).

The conclusion in the body text states:
SUMMARY AND CONCLUSIONS

Dwarf galaxy rotation curves are challenging to reproduce in the standard Lambda Cold Dark Matter (LCDM) cosmogony. In some galaxies, rotation speeds rise rapidly to their maximum value, consistent with the circular velocity curves expected of cuspy LCDM halos. In others, however, rotation speeds rise more slowly, revealing large “inner mass deficits” or “cores” when compared with LCDM halos (e.g., de Blok 2010). This diversity is unexpected in LCDM, where, in the absence of modifications by baryons, circular velocity curves are expected to be simple, self-similar functions of the total halo mass (Navarro et al. 1996b, 1997; Oman et al. 2015). We examine in this paper the viability of different scenarios proposed to explain the diversity, and, in particular, the apparent presence of both cusps and cores in dwarfs.
 
In one scenario the diversity is caused by variations in the baryonic contribution to the acceleration in the inner regions, perhaps linked to rotation velocities through the “mass discrepancy acceleration relation” (MDAR; McGaugh et al. 2016). In agreement with previous work, we show here that the inner regions of many dwarf galaxies deviate from such relation, especially those where the evidence for “cores” is most compelling. We conclude that the MDAR does not hold in the inner regions of low-mass galaxies and, therefore, it cannot be responsible for the observed diversity. 
A second scenario (BICC; “baryon-induced cores/cusps”) envisions the diversity as caused by the effect of baryonic inflows and outflows during the formation of the galaxy, which may rearrange the inner dark matter profiles: cores are created by baryonic blowouts but cusps can be recreated by further baryonic infall (see; e.g., Navarro et al. 1996a; Pontzen & Governato 2012; Di Cintio et al. 2014a; Tollet et al. 2016; Benítez-Llambay et al. 2019). 
A third scenario (SIDM) argues that dark matter self-interactions may reduce the central DM densities relative to CDM, creating cores. As in BICC, cusps may be re-formed in galaxies where baryons are gravitationally important enough to deepen substantially the central potential (see, e.g., Tulin & Yu 2018, for a recent review). 
We have analyzed cosmological simulations of these two scenarios and find that, although they both show promise explaining systems with cores, neither reproduces the observed diversity in full detail. Indeed, both scenarios have difficulty reproducing an intriguing feature of the observed diversity, namely the existence of galaxies with fast-rising rotation curves where the gravitational effects of baryons in the inner regions is unimportant. They also face difficulty explaining slowly-rising rotation curves where baryons actually dominate in the inner regions, which are also present in the observational sample we analyze. 
We argue that these issues present a difficult problem for any scenario where most halos are expected to develop a sizable core and where baryons are supposed to be responsible for the observed diversity. This is especially so because the relation between baryon surface density and rotation curve shape is quite weak in the dwarf galaxy regime, and thus unlikely to drive the diversity. We emphasize that, strictly speaking, this conclusion applies only to the particular implementations of BICC and SIDM we have tested here. These are by no means the only possible realizations of these scenarios, and it is definitely possible that further refinements may lead to improvements in their accounting of the rotation curve diversity. 
Our conclusions regarding SIDM may seem at odds with recent work that reports good agreement between SIDM predictions and dwarf galaxy rotation curves (see; e.g., the recent preprint of Kaplinghat et al. 2019, which appeared as we were readying this paper for submission, and references therein). That work, however, was meant to address whether observed rotation curves can be reproduced by adjusting the SIDM halo parameters freely in the fitting procedure, with promising results. Our analysis, on the other hand, explores whether the observed galaxies, if placed in average (random) SIDM halos, would exhibit the observed diversity. Our results do show, in agreement with earlier work, that SIDM leads to a wide distribution of rotation curve shapes. However they also highlight the fact that outliers, be they large cores or cuspy systems, are not readily accounted for in this scenario, an issue that was also raised by Creasey et al. (2017). Whether this is a critical flaw of the SIDM scenario, or just signals the need for further elaboration, is still unclear. 
We end by noting that the rather peculiar relation between inner baryon dominance and rotation curve shapes could be naturally explained if non-circular motions were a driving cause of the diversity. For this scenario to succeed, however, it would need to explain why such motions affect solely low surface brightness galaxies, the systems where the evidence for “cores” is most compelling. Further progress in this regard would require a detailed reanalysis of the data to uncover evidence for non-circular motions, and a clear elaboration of the reason why non-circular motions do not affect massive, high surface brightness galaxies. Until then, we would argue that the dwarf galaxy rotation curve diversity problem remains, for the time being, open.

All Top Quark Mass Measurements And Some Notable Predictions For It Summarized

A Summary Of Top Quark Mass Measurements
Direct measurement of the top quark mass 
ATLAS: lepton+jets events at 8 TeV (20.2 fb⁻¹). ... 172.69 ± 0.48 GeV, with a relative uncertainty of 0.28%. ...
CMS: dilepton events at 13 TeV (35.9 fb⁻¹). ... 172.33 ± 0.14 (stat) +0.66 −0.72 (syst) GeV, with a total relative uncertainty of approximately 0.42%. ...
CMS: all-jets events at 13 TeV (35.9 fb⁻¹). ... 172.26 ± 0.61 GeV, with a relative uncertainty of 0.36%. ...
ATLAS: lepton+jets with an additional soft µ at 13 TeV (36.1 fb⁻¹). ... 174.48 ± 0.40 (stat) ± 0.67 (syst) GeV, with a total relative uncertainty of 0.45%. ...
Indirect determination of the top quark mass 
ATLAS: inclusive tt cross section in the eµ channel at 13 TeV (36.1 fb⁻¹). ... 173.1 +2.0 −2.1 GeV. ...
ATLAS: differential cross section for lepton+jets tt+1jet events at 8 TeV (20.2 fb⁻¹). ... 171.1 ± 0.4 (stat) ± 0.9 (syst) +0.7 −0.3 (theo) GeV [pole mass]. ...
CMS: triple-differential cross section in dilepton events at 13 TeV (35.9 fb⁻¹). ... 170.5 ± 0.8 GeV [pole mass]. ... 
CMS: invariant jet mass distribution for boosted jets in lepton+jets events at 13 TeV (35.9 fb⁻¹). ... 172.56 ± 2.47 GeV. ...
CMS: running of the top quark mass in eµ events at 13 TeV (35.9 fb⁻¹). ... The observed evolution of the m_t(µ_k) values agrees with the prediction from renormalization group equations at one-loop precision within 1.1 standard deviations. The no-running hypothesis is excluded at 95% confidence level.
Abstract
The ATLAS and CMS Collaborations have performed a variety of measurements of the top quark mass, taking advantage of the abundant production of top quarks at the LHC. The most recent measurements are reported here, based on data collected at 8 and 13 TeV, with particular emphasis on the distinction between the so-called "direct" measurements and the "indirect" evaluations obtained from cross sections and differential cross sections.
Andrea Castro (on behalf of the ATLAS and CMS Collaborations), "Top Quark Mass Measurements in ATLAS and CMS" (November 21, 2019).

The final combined value from the two Tevatron experiments, which were the first to measure the top quark mass, was 174.30 ± 0.35 ± 0.54 GeV.

The Particle Data Group entry for this is here, but it is not as up to date. The latest direct measurement average from the PDG is 172.9 ± 0.4 GeV. The latest indirect measurement average from the PDG is 173.1 ± 0.9 GeV. The weighted average of those two figures is 172.96 GeV.

Combining The Results For A New World Average

To recap, the nine independent measurements from the Tevatron and the LHC to date are:

172.69 ± 0.48 GeV
172.33 ± 0.14 (stat) +0.66 −0.72 (syst) GeV
172.26 ± 0.61 GeV
174.48 ± 0.40 (stat) ± 0.67 (syst) GeV
173.1 + 2.0 − 2.1 GeV
171.1 ± 0.4 (stat) ± 0.9 (syst) +0.7 −0.3 (theo) GeV
170.5 ± 0.8 GeV 
172.56 ± 2.47 GeV
174.30 ± 0.35 ± 0.54 GeV

Feel free to calculate a rigorous error-weighted world average at your leisure (which I would do myself if I had the time); a rough back-of-the-envelope version follows. 

Step One

First, combine the errors for each measurement by taking the square root of the sum of the squares of the error components (taking the arithmetic average of the upper and lower bounds where they differ, before combining the different types of error). This gives the following, with the two sigma range for each measurement noted in parentheses; a short script reproducing this combination appears after the lists. 

LHC Results (likely to have some correlated errors)

172.69 ± 0.48 GeV (171.73 - 173.65 GeV) ATLAS direct
172.33 ± 0.70 GeV (170.93 - 173.73 GeV) CMS direct
172.26 ± 0.61 GeV (171.04 - 173.48 GeV) CMS direct
174.48 ± 0.78 GeV (172.92 - 176.04 GeV) ATLAS direct
173.1 ± 2.05 GeV (169.0 - 177.2 GeV) ATLAS indirect
171.1 ± 1.10 GeV (168.9 - 173.3 GeV) ATLAS indirect
170.5 ± 0.80 GeV (168.9 - 172.1 GeV) CMS indirect
172.56 ± 2.47 GeV (167.62 - 177.50 GeV) CMS indirect

Tevatron Results

174.30 ± 0.64 GeV (173.02 - 175.58 GeV)
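
For concreteness, here is a minimal sketch in Python of the quadrature combination just described, using a few of the error components from the measurements quoted above (the combine helper below is just an illustration of the arithmetic, not any official prescription):

from math import sqrt

def symmetrize(err):
    # Average asymmetric (upper, lower) bounds; symmetric errors pass through unchanged.
    return sum(err) / 2 if isinstance(err, tuple) else err

def combine(*components):
    # Add the (symmetrized) error components in quadrature.
    return sqrt(sum(symmetrize(c) ** 2 for c in components))

# CMS dilepton direct measurement: 0.14 (stat), +0.66/-0.72 (syst)
print(f"{combine(0.14, (0.66, 0.72)):.2f}")    # 0.70
# ATLAS lepton+jets tt+1jet indirect result: 0.4 (stat), 0.9 (syst), +0.7/-0.3 (theo)
print(f"{combine(0.4, 0.9, (0.7, 0.3)):.2f}")  # 1.10
# Tevatron combination: 0.35 and 0.54
print(f"{combine(0.35, 0.54):.2f}")            # 0.64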

Step Two

Then, use the inverse of the margin of error for each measurement to construct a weight for that measurement and take the weighted average. This gives you:

172.65 GeV

Thus, the latest experimental results from the LHC have pulled the global average top quark mass down by about 0.31 GeV relative to the 172.96 GeV blend of the values currently listed by the Particle Data Group.
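
A minimal sketch of the Step Two arithmetic, plugging the nine central values and the combined one sigma errors from the lists above into the inverse-error weighting described (an illustration of this post's method, not an official combination), reproduces the 172.65 GeV figure:

values = [172.69, 172.33, 172.26, 174.48, 173.1, 171.1, 170.5, 172.56, 174.30]
errors = [0.48, 0.70, 0.61, 0.78, 2.05, 1.10, 0.80, 2.47, 0.64]

# Weight each measurement by the inverse of its combined margin of error.
weights = [1 / e for e in errors]
average = sum(w * v for w, v in zip(weights, values)) / sum(weights)
print(f"{average:.2f} GeV")  # 172.65 GeV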

Step Three

Then, calculate the margin of error of the weighted average from the inputs. A prior post somewhere at this blog explains how to do this with simplifying assumptions. 

But to really do it rigorously, you have to account for two complications. First, the systematic and theoretical errors are not fully independent of each other, particularly among the results that all come from the LHC, which tends to make the total combined margin of error greater. Second, in this kind of experiment the actual errors have historically had fatter tails than a Gaussian (i.e. "normal") distribution, and are distributed in something closer to a Student's t-distribution with parameters set by the historical data, which also tends to increase the total combined margin of error. The combined error should be a little lower than, but fairly close in magnitude to, the lowest margin of error of any of the individual entries, unless there is very wide scatter among the precise measurements, in which case the combined error should be higher than the lowest individual margin of error, because such a pattern implies that one or more of the claimed margins of error is underestimated.
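
As a point of reference, the naive combination that treats the nine errors as fully independent and Gaussian (the standard inverse-variance rule, ignoring the correlation and fat-tail issues just discussed) gives roughly 0.25 GeV, which should be read as a floor rather than a realistic estimate:

errors = [0.48, 0.70, 0.61, 0.78, 2.05, 1.10, 0.80, 2.47, 0.64]
# One over the square root of the sum of the inverse variances.
naive_error = sum(1 / e ** 2 for e in errors) ** -0.5
print(f"{naive_error:.2f} GeV")  # about 0.25 GeV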

I suspect that the combined margin of error is on the order of 0.35 GeV to 0.45 GeV. Using the higher figure to account for issues like correlated errors and non-Gaussian errors, the two sigma range for the top quark mass given current data is 171.75 GeV to 173.55 GeV.

Comparing the two sigma bands of the individual results, and giving slightly more weight to the Tevatron value because it is more independent of the other values, I suspect that the true value of the top quark pole mass is probably between 172.92 GeV and 173.30 GeV, which favors the higher end of the combined two sigma range.

Theoretical Comparison Points (Highly Speculative)

Other reference points from theory include the following conjectures, none of which is widely accepted among physicists, but each of which is innocent enough to compare to the experimental results. The world average is at the low end of these predictions, but the first two are nonetheless consistent with the current world average to within two sigma, while the third theoretical number is not consistent at two sigma with the world average that includes all of the measurements.

The Extended Koide's Rule Estimate

An extended Koide's rule estimate of the top quark mass, using only the electron and muon masses as inputs, predicts a top quark mass of 173.263947 ± 0.000006 GeV. This is 173.26 GeV to the greatest precision that is non-spurious to compare to current experimental results. This is probably within 1.74 sigma of the current world average and is also within the preferred region I identify above. 

If you read my prior posts about the extended Koide's rule, there is good reason to think that this value should receive a second order correction in the form of a small downward adjustment, because the rule does not reflect top quark to down quark transitions, which are rare but not impossible. The rule as a whole should also be given something less than full confidence, because both the charm quark and up quark values estimated in this fashion are quite far from the experimentally measured masses of those quarks, so it clearly needs some adjustment to accurately reflect reality and is only a first order approximation of the fermion mass matrix. This suggests that a corrected extended Koide's rule would need a roughly 151 MeV downward adjustment to the top quark mass (which, perhaps not coincidentally, would bring it to about the 173.11 GeV figure discussed below).
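
To give a flavor of the arithmetic behind this kind of chained estimate (without attempting to reproduce the full quark-sector chain from those prior posts), here is a minimal sketch in Python of the basic building block: given two masses, solve for the third mass that makes the Koide ratio exactly 2/3. Fed the PDG electron and muon masses (assumed inputs), it returns Koide's classic tau lepton prediction of about 1776.97 MeV; the extended rule applies triples of this general kind repeatedly to reach the top quark.

from math import sqrt

def koide_third_mass(m1, m2):
    # Solve (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))**2 == 2/3 for m3,
    # taking the larger root. Substituting x = sqrt(m3) gives the quadratic
    # x**2 - 4*s*x + 3*(m1 + m2) - 2*s**2 = 0, with s = sqrt(m1) + sqrt(m2).
    s = sqrt(m1) + sqrt(m2)
    x = 2 * s + sqrt(6 * s ** 2 - 3 * (m1 + m2))
    return x ** 2

m_e, m_mu = 0.51099895, 105.6583745  # charged lepton masses in MeV (PDG values)
print(f"{koide_third_mass(m_e, m_mu):.2f} MeV")  # about 1776.97 MeV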

The Higgs Vacuum Expectation Value Based (LP&C) Estimates

* The weak version of the LP & C hypothesis is probably true.

The sum of the squares of the pole masses of the Standard Model fundamental particles is almost precisely identical to the square of the vacuum expectation value of the Higgs field.
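
Stated compactly (this is just the sentence above in symbols, with v ≈ 246 GeV denoting the Higgs vacuum expectation value and the sum running over all of the fundamental particles of the Standard Model):

\sum_{i \,\in\, \mathrm{SM}} m_i^2 \;\approx\; v^2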

A fit comparing the two predicts a top quark mass of 173.1125 ± 0.0025 GeV. Most of the uncertainty in this value is due to the uncertainty in the Higgs boson mass. This is 173.11 GeV to the greatest precision that is non-spurious to compare to current experimental results. This is probably within 1.31 sigma of the current world average and also right in the middle of the preferred region I identify above based on harmonizing the two sigma ranges of the nine available mass measurements.

The boson side is also consistent with the weak version of this hypothesis at a roughly 1.3 sigma level.

I strongly suspect that this relationship is a true and accurate law of physics and that the top quark, as a result, has a true pole mass of 173.11 GeV.

* The strong version of the LP & C hypothesis is probably false.

The value of the top quark mass needed to make the sum of the squares of the fermion masses equal to the sum of the squares of the boson masses, with the combined amount equal to the square of the vacuum expectation value of the Higgs field, is 174.974 GeV (with less than 0.0005 GeV of uncertainty). This is 174.97 GeV to the greatest precision that is non-spurious to compare to current experimental results. This is probably 6.6 sigma from the current world average and is at least 5.1 sigma from it. It is also more than 5.1 sigma from the PDG direct measurement world average. 
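
In the same notation as above, the strong version further requires the fermion and boson contributions to each account for exactly half of the total:

\sum_{\mathrm{fermions}} m_f^2 \;=\; \sum_{\mathrm{bosons}} m_b^2 \;=\; \frac{v^2}{2}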

On the boson side, the strong version of the hypothesis would require a Higgs boson mass of 124.66 GeV, which is about 3.1 sigma away from the current PDG value of 125.10 ± 0.14 GeV, and is therefore strongly disfavored by the experimental data as well.

Thus, the strong version of this hypothesis is ruled out by the experimental data at more than five sigma on the fermion side, and is strongly disfavored, at roughly three sigma, on the boson side.

Without other new physics, this would most directly imply a lack of perfect harmony between the fundamental fermion pole masses and the fundamental boson masses in the universe, even though they are very nearly balanced (in much the way that the pion is almost, but not quite, a Goldstone boson and is instead a pseudo-Goldstone boson), with a slight imbalance in favor of the bosons, for reasons unknown.

To the extent that one thinks about this approximate balance of masses as some sort of "supersymmetric" balance (in a loose sense) between fermions and bosons in the universe, this broad-sense supersymmetry is an approximate, rather than an exact, symmetry of the universe. This approximate symmetry may help explain why supersymmetry theories can provide informative and useful predictions despite the fact that there is no positive evidence for the existence of any of the new particles or other new physics that they predict.