Tuesday, May 28, 2013

130 GeV Fermi Line Inconsistent With Dark Matter

Whatever the Fermi line is, it isn't a dark matter signal
The cusp in the dark matter distribution required to explain the recently found excess in the gamma-ray spectrum at energies of 130 GeV in terms of the dark matter annihilations cannot survive the tidal forces if it is offset by 1.5° from the Galactic center as suggested by observations.
From Dmitry Gorbunov, Peter Tinyakov, "On the offset of a DM Cusp and the interpretation of the 130 GeV line as a DM signal" via Tommaso Dorigo's blog.

One of the strongest bits of experimental data favoring WIMP (weakly interacting massive particle) dark matter with particle masses at the electroweak scale is the Fermi line, i.e. 130 GeV photon signals detected by the Fermi Gamma-Ray Space Telescope that have no understood astronomical process as their source.

These signals have been hypothesized by theorists to be evidence of the annihilation of matter and antimatter dark matter particles of about 130 GeV mass with each other. But, even if these signals are evidence of such annihilations, they can only be the dark matter that everyone is looking for, the biggest gap in fundamental physics today, if the signals come mostly from the right direction. The latest study indicates that they do not. Thus, the Fermi line joins an ever lengthening list of possible direct detections of dark matter or dark matter annihilation signals that have been ruled out as true dark matter signals.

The Fermi line could still be real; it just can't be dark matter

The study still doesn't tell us if the Fermi line is "real" or just some sort of statistical or systematic error in the observations.  So far, no plausible way to explain the Fermi line as an experimental error has been identified. 

But, for example, suppose there was a highly unstable supersymmetric particle with that mass, or a second kind of Higgs boson with a mass slightly different from the one already discovered.  If such a particle existed, its annihilation could produce this signal via some unknown process, such as high energy interactions near the Milky Way's central black hole's event horizon, even though such an unstable particle can't be an important source of the phenomena attributed to dark matter.

A true SUSY optimist could see both the Fermi line and the AMS-02 positron excess as signatures of SUSY particle annihilations.  But, even for a SUSY optimist, the likelihood that a canonical sparticle or SUSY Higgs boson can provide a dark matter candidate that fits the experimental evidence is rapidly waning. 

The best hope in a SUSY theory for a dark matter candidate is now the same as it is in minimal Standard Model extensions - some sort of sterile (i.e. right handed) neutrino with a mass on the order of 2 keV (i.e. warm dark matter).  These models are highly constrained and it hasn't been fully established that they can really reproduce all observed dark matter phenomena.  But, these particles are the only game in town using the dark matter particle paradigm that hasn't been pretty definitively ruled out by observational evidence to date.

Independent lines of experimental evidence disfavor WIMP dark matter

This isn't too surprising. 

Multiple lines of evidence disfavor weakly interacting dark matter particles with masses of roughly 10 GeV or more.  For example, heavy WIMPs are disfavored by (1) the small scale structure of the universe (i.e. the fact that there aren't enough dwarf satellite galaxies), (2) the exclusion ranges in multiple direct dark matter detection experiments, which reach cross-sections of interaction many orders of magnitude weaker than those of neutrinos, (3) the "cuspy halo" problem (heavy WIMP dark matter doesn't naturally distribute itself in the shapes necessary to match observed galactic rotation curves), and (4) the non-detection of particles in the appropriate mass ranges at particle accelerators like the LHC. 

A determination that the directional source of the Fermi line gamma-rays is inconsistent with dark matter just adds one more independent line of experimental data to the others.

While the refutation of the AMS-02 positron excess as a possible dark matter annihilation signal at 300 GeV or more isn't yet complete, the astronomy data problems with cold dark matter apply to particles this heavy with especially great force, and there is other circumstantial evidence against it: other things we would expect to discover at the same time as the annihilation of a dark matter particle that heavy have not been seen.  Cosmic rays from quasars continue to be a more plausible source for this signal.

A personal conjecture

For what it is worth, my own intuition, informed by studies that disfavor dark matter models with more than one kind of dark matter particle in any significant frequency, is that a warm dark matter sterile neutrino, if there is one, is not a right handed neutrino in the usual sense.  Instead, it is a singlet particle that is taxonomically part of the gravity sector in a gravi-weak unification theory, or some other particle outside the domain of the three Standard Model forces and their interactions, rather than a missing piece within the SU(3)*SU(2)*U(1) group structure of the Standard Model.  Such a particle would have almost nothing to do with any Standard Model particle other than the Higgs boson (which might interact with dark matter since it seems to couple to mass).

Also, while warm dark matter is the best prospect in the dark matter paradigm, I believe that it is still hasty to rule out theories outside that paradigm.  The best runner up would be some sort of gravity modification, possibly rooted in quantum gravity effects or in limitations on the wavelengths of gravity waves imposed by the finite size of the universe.  Another would be that the phenomena attributed to dark matter actually consist, at least in part, of multiple kinds of "dim matter" made of ordinary matter, quite possibly maintained in some sort of dynamic equilibrium by ill understood astrophysical processes, particularly in galactic clusters and/or galaxy formation, about which we have the least solid understanding.

Thursday, May 16, 2013

How Long Was A Trip From Sweden To Cyprus And Back By Sail?

As noted in the previous post at this blog, there appears to have been a maritime trade network that extended from Southern Sweden to Cyprus in the Bronze Age (ca. 1500 BCE and later).

How long would it have taken to sail the entire distance and back?

Resorting to an atlas (and giving the benefit of the doubt to the ability of ancient sailors to take straight oversea routes rather than hugging the coast) shows that the total distance by sea is about 4,000 nautical miles one way, and hence an 8,000 nautical mile round trip.

Based upon historically attested reports of travel times by sail in the Roman era, sailing speeds averaged about four knots with favorable winds and about two knots with unfavorable winds. 

Canny pre-modern sailors probably knew the wind patterns well enough to time their trips so that they had favorable winds at least half of the time, suggesting an average speed of at least three knots over such a long, multistage journey.

This would suggest a one way travel time of about 56 days, and a round trip travel time of about 112 days. 

But, while the trips used to calibrate these speeds in the Mediterranean may have been mostly direct trips, on a voyage of this distance probably at least two days a week, and possibly more, would have been spent in port to conduct the trade that was the point of these cruises. 
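As a sanity check, here is a minimal sketch of that arithmetic in Python. The 4,000 nautical mile distance, the 3 knot average speed, and the two-port-days-per-week figure come from the estimates above; the way the port days are folded into the total is my own illustrative assumption, not a figure from any source.

```python
# Back-of-the-envelope sketch of the round-trip estimate discussed above.
# Assumptions (mine, for illustration): straight-line atlas distances, a
# 3-knot average speed, and roughly 2 port days per 7 days of travel.

ONE_WAY_NM = 4000          # nautical miles, Southern Sweden to Cyprus
AVG_SPEED_KNOTS = 3.0      # knots = nautical miles per hour
PORT_DAYS_PER_WEEK = 2     # illustrative guess at time spent trading in port

def sailing_days(distance_nm: float, speed_knots: float) -> float:
    """Days spent actually under sail for a given distance and average speed."""
    return distance_nm / speed_knots / 24

one_way = sailing_days(ONE_WAY_NM, AVG_SPEED_KNOTS)   # ~56 days
round_trip = 2 * one_way                               # ~111 days (the post rounds to 112)

# If 2 days in port accompany every 5 days under sail, scale the total up.
total_with_port_calls = round_trip * 7 / (7 - PORT_DAYS_PER_WEEK)

print(f"one way under sail: {one_way:.0f} days")
print(f"round trip under sail: {round_trip:.0f} days")
print(f"round trip including port calls: {total_with_port_calls:.0f} days")
```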

This gives us our answer (below the jump):

Tuesday, May 14, 2013

Bronze Age Long Distance Trade In Europe

Dienekes' Anthropology blog notes three papers showing imports of early Nordic Bronze Age metals from as far as Cyprus, in exchange for amber making its way as far as the Mycenaean Greeks. 

Maju, in turn, gives these papers a more critical analysis, looking at them in a wider archaeological context in light of additional data points.  He emphasizes the likely intermediate role played by Iberians in these long trade connections (perhaps as the hub, with the Scandinavian and Eastern Mediterranean connections as the ends of spokes of the network).  He notes the problems with a chronology that gives the Nordic Bronze Age a Mycenaean source (the Nordic Bronze Age begins two hundred years before its proposed Mycenaean source, and this part of the link is thinner than the abstracts to the papers would suggest). 

Maju also tempers the implications of the papers suggesting that the Minoans did not engage in East-West trade in the Mediterranean and that this connection was made for the first time by the Mycenaeans, even though it does appear that the Minoans may not have been part of the Eastern Mediterranean-Iberian bronze for Nordic amber trade route that these papers have documented.  He also makes the useful observation that any trade in copper must also have involved trade in tin when the end products traded were bronze artifacts, and that the papers are silent about the source of the tin in these artifacts even though they do identify the sources of the copper.

Also, Maju notes the spread of culture specific symbols (in particular, four interlocking spirals) of not entirely certain cultural origins within this network.  This provides a parallel line of evidence that helps to discriminate between alternative theories concerning the nature of the trade links joining the end points of the network from Southern Sweden to Greece and Cyprus.  He also notes the important timing of the appearance of Greek cultural influences in Iberia, which coincides with the earliest evidence of this pan-European Bronze Age trade network.

While Maju's analysis casts serious doubt on the naive implications of these three papers, his criticism is constructive and does a great deal to illuminate an alternative vision of the way that distant cultures in Bronze Age Europe interacted economically and via cultural contact in this era. 

A naive reading of the three papers could easily lead to my first impression, which was that this link might be powerful evidence that the Indo-European Germanic language and peoples are directly derived from the Indo-European Mycenaeans.  These cultures are the first attested Indo-European cultures in their respective geographic locations.  Prior to reading these papers, it had not been clear to me, at least, that the first half of the Nordic Bronze Age was culturally (and presumably linguistically) Germanic at all (for the second half of the Nordic Bronze Age this is much more clear).  A first read of the papers suggested, astoundingly, that the Germanic languages might actually be derived from ancient Greek.  Closer inspection, aided by Maju's analysis, has led me to largely dismiss that theory as implausible, and has muddied the waters as to whether the early Nordic Bronze Age peoples of Southern Sweden really were Indo-European at all (which is the view with which I had started).  But, these new papers do show that there really was a regular pan-European maritime trade network in existence as far back as 1500 BCE, along the Southern coast of Europe and all of the way up the Atlantic coast to the Baltic Sea.

Maju's Iberio-centric suggestion that copper and tin mines in Iberia may well have formed the principal source of metal production for this trade network, and that Iberia may have been the network's hub (perhaps indeed even providing a source for Plato's legendary account of "Atlantis"), in the end provides a very credible "forest" level big picture interpretive lens that can explain all of the archaeological data without requiring any unreasonably far-fetched assumptions.

In terms of questions of historical linguistics and the associated clash of great prehistoric European civilizations, Maju's reading tends to suggest that the Atlantic coast and the early Nordic Bronze Age may have been in a Vasconic, rather than Indo-European, sphere of influence, with Mediterranean influences transmitted via Atlantic maritime trade routes mediated through the Iberian civilization of that era.  Still, these papers do provide fuel for the conjecture that Bronze Age Indo-European civilizations may have been far more conscious of their cultural counterparts at vast distances from them than it had previously been safe to assume.  The world was smaller and better understood, earlier on, than we have given the residents of prehistory credit for.

Wednesday, May 8, 2013

Study Purporting To Link Seven Language Families At 15 kya Time Depth Based On Iffy Data

On the web site of the Proceedings of the National Academy of Sciences, in the "Early Edition" section, is an article by Mark Pagel, Quentin D. Atkinson, Andreea S. Calude, and Andrew Meade: "Ultraconserved words point to deep language ancestry across Eurasia". The authors claim that a set of 23 especially frequent words can be used to establish genetic relationships of languages that go way, way back — too far back for successful application of the standard historical linguistics methodology for establishing language families, the Comparative Method. The idea is that, once you've determined that these 23 words are super-stable (because they're used so often), you don't need systematic sound/meaning correspondences at all; finding resemblances among these words across several language families is enough to prove that the languages are related, descended with modification from a single parent language (a.k.a. proto-language).

From Language Log.

As the trenchant analysis in the linked post explains, the study is deeply flawed because it is based on dubious choices of cognates in the seven proto-languages which were compared. In many cases, there are several candidate cognates for the allegedly ultraconserved word and there is no consensus on which is correct in the proto-language, but the authors simply choose one by whim.

The data also cast serious doubt on the hypothesis that these words are truly ultra-conserved. For example, only a quarter of the ultraconserved Indo-European cognates survive in English, despite a time depth from proto-Indo-European to English estimated by more reliable means on the order of 6,000 years. The survival rates of these "ultraconserved" words are even lower in some of the other language families examined.
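To put that survival figure in perspective, here is a back-of-the-envelope sketch (my own arithmetic, not from the paper or the linked post) of the per-millennium retention rate and half-life implied by a roughly 25% survival rate over roughly 6,000 years.

```python
# Implied retention rate and half-life for the "ultraconserved" word list,
# using only the two figures quoted above: ~25% survival into English over
# a ~6,000 year time depth from Proto-Indo-European.
import math

survival_fraction = 0.25   # share of the word list surviving into English (from the post)
elapsed_millennia = 6.0    # rough PIE-to-English time depth (from the post)

retention_per_millennium = survival_fraction ** (1 / elapsed_millennia)
half_life_millennia = math.log(0.5) / math.log(retention_per_millennium)

print(f"implied retention per 1,000 years: {retention_per_millennium:.2f}")  # ~0.79
print(f"implied half-life: {half_life_millennia * 1000:.0f} years")          # ~3,000
```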

The hypothesis that the seven families discussed in the paper are part of a single macro-language family is not widely accepted among linguists, and isn't a good fit to genetic data for the populations who speak them either, as explained in comments to this post at Dienekes' Anthropology blog.

There is lots of very good research in the area of historical linguistics, but this paper, like a number of other papers with Quentin D. Atkinson as one of the authors, is not an example of it.

GR Uniquely Determined By Properties Of Hypothetical Graviton

On the Origin of Gravitational Lorentz Covariance

Justin Khoury, Godfrey E. J. Miller, Andrew J. Tolley (Submitted on 3 May 2013)

We provide evidence that general relativity is the unique spatially covariant effective field theory of the transverse, traceless graviton degrees of freedom. The Lorentz covariance of general relativity, having not been assumed in our analysis, is thus plausibly interpreted as an accidental or emergent symmetry of the gravitational sector. . . . Lorentz covariance is a central pillar of the modern field-theoretic interpretation of general relativity (GR). From this point of view, GR is no more and no less than the unique Lorentz covariant theory of an interacting massless spin-2 particle. In this paper, we show that GR can be derived without assuming Lorentz covariance. Our approach relies on the weaker assumption of spatial covariance, within the context of the effective field theory of the transverse, traceless graviton degrees of freedom[.]

The massless spin-2 graviton, while hypothetical and never directly observed, has strong theoretical support for this reason. No other hypothetical particle is so widely believed to really exist.


Friday, May 3, 2013

WIMP Dark Matter Does Not Exist

For decades, the prime candidates for dark matter were WIMPs (weakly interacting massive particles), which interact only via the weak nuclear force and gravity, and which have masses in the GeV to thousands of GeV range.  Evidence from multiple experiments essentially rules out the existence of such particles as an important constituent of dark matter.

Direct searches for WIMPs have excluded the entire parameter space of WIMP dark matter candidates, and all hints of WIMP dark matter in any given experiment have been contradicted by many other independent experiments.  Astronomy data compel the conclusion that if dark matter exists, its particles must look like 2 keV sterile neutrinos, rather than GeV scale WIMPs.  Weakly interacting particles and SUSY particles have been ruled out in the appropriate mass ranges.

Note also that the direct dark matter searches, by pushing down the maximum possible cross-section of interaction of any heavy WIMPs to about 10^-44 cm^2 (versus roughly 10^-36 cm^2 for neutrinos), rule out Cold Dark Matter models in which the CDM particles deviate meaningfully from being sterile and collisionless.  This makes astronomy simulation exclusions ruling out pure collisionless CDM conclusive for pretty much all kinds of CDM that couldn't be detected by Xenon-100.

Direct Searches For WIMPs have come up empty

The Xenon-100 experiment is the most sensitive of all of the direct dark matter detection experiments that has reported results. Xenon-100 contradicts all positive results for direct dark matter detection.

Its results from 2012 ruled out the existence of dark matter particles with the properties purportedly seen in five other experiments, only a couple of which are consistent with each other (as well as the particles associated with the Fermi line (ca. 130 GeV) and the AMS-02 experiment (at least 350 GeV), which are looking for evidence of dark matter-antimatter annihilation).

For comparison purposes, the cross-sections of WIMP-nucleon interaction probed are many orders of magnitude weaker than neutrino-nucleon cross-sections of interaction over almost all of the region excluded.

[Figure omitted: a WIMP mass versus cross-section exclusion plot showing the XENON100 exclusion curves and the claimed detection regions discussed below.]
As Lubos Motl notes:

You see various claims of this kind – not really compatible with each other – made by DAMA/I, DAMA/Na, CoGeNT, CRESST-II. An extra shape could perhaps be added to reflect the data from PAMELA, Fermi, and the newest three events from CDMS II (which wouldn't be too far from the CoGeNT potato).

On the other hand, the long lines depict the statements by the "negative experiments" that claim that all the points above their curve are excluded: SIMPLE, COUPP, ZEPLIN-III, EDELWEISS, CDMS (2010/2011: later betrayed the axis), and – most famously – XENON100. I say "most famously" because XENON100 is by far the most powerful experiment of this kind, at least among the negative ones.

The newest exclusion curve makes this priority even more obvious. Note the blue line inside the green-and-yellow (Brazil) band at the bottom (the exclusion is about a sigma stronger than expected). It is safely below all the "positive" potato ellipses and it is also well below the other exclusion curves. The contradiction between the latest XENON100 results and the "positive" experiments couldn't be stronger. Well, it could but it's already strong enough! ;-) One may say that pretty much all the preferred regions are disfavored by XENON100 at 5 sigma or more.

Note that the liquid xenon is a relatively diverse mixture of many isotopes (or do they filter which ones they use?). So the absence of signals is probably not due to some special properties of a xenon nucleus. On the other hand, the absence could be explained if the signals ultimately involved the interaction of a particle with the electrons – because xenon (unlike germanium, silicon, and all the other elements used in the experiments) is an inert gas with full electron shells and L=S=J=0 , when it comes to atomic physics. The events don't look like interactions with the electrons but there could be some subtleties. The tension between XENON100 and others seems so strong that the inert character of xenon seems "almost necessary" for me to understand the apparent xenophobia of the dark matter particle – but the de Broglie wavelength of the new particle would have to be of atomic size or longer for the vanishing atomic angular momentum to matter at all (which seems like an insanely low momentum, too). Also note that the collisions with the electrons are supposed to be "background" and distinguished from the dark-matter-like collisions with the nuclei but there could be a reason why some particle's collisions with the electrons look nucleus-like.
Each experiment that claims to have seen a dark matter particle with a particular cross-section of interaction and mass has about ten other experiments that have seen nothing in the same parameter space.  The case that all of the claimed direct dark matter detections reported to date are erroneous is quite powerful.  This is particularly so because the kinds of signals purportedly seen are more or less identical to the kinds of signals that would be seen if an unaccounted for source of background noise was omitted from the analysis:
Yet one more sobering fact (NYU’s Neal Weiner emphasized this in his talk last week in Princeton) is that in all of these underground experiments, a failure to account for a small background will typically show up as a few extra low-energy collision candidates, which will then closely resemble what you’d expect for a low-mass dark matter particle. In other words, lightweight dark matter is what an oops! will look like.
Efforts are currently underway to build a Xenon-based detector that is more sensitive by the same factor as the leap from Xenon-10 to Xenon-100 illustrated above.

Astronomy data rule out Cold Dark Matter models

Astronomy simulations that show galactic scale structures and halo shapes different from those that cold dark matter would produce also rule out most cold dark matter in the 1 GeV and up mass range, and instead favor warm dark matter models with particles closer to a single kind of 2 keV mass particle that behaves like a sterile neutrino, although the exclusions of heavier and lighter dark matter particles are more definitive than the positive evidence that a 2 keV (give or take) dark matter particle could fit the data.

A good summary of the reasons that cold dark matter is experimentally excluded can be found at H.J. de Vega and N.G. Sanchez, "Warm dark matter in the galaxies: theoretical and observational progresses. Highlights and conclusions of the Chalonge Meudon workshop 2011" (14 Sept 2011) http://arxiv.org/abs/1109.3187. Here are some key quotes from the abstract and body text:

Warm Dark Matter (WDM) . . . essentially works, naturally reproducing the astronomical observations over all scales: small (galactic) and large (cosmological) scales (LambdaWDM). Evidence that Cold Dark Matter (LambdaCDM) and its proposed tailored cures do not work at small scales is staggering. . . .
The most troubling signs of the failure of the CDM paradigm have to do with the tight coupling between baryonic matter and the dynamical signatures of DM in galaxies, e.g. the Tully-Fisher relation, the stellar disc-halo conspiracy, the maximum disc phenomenon, the MOdified Newtonian Dynamics (MOND) phenomenon, the baryonic Tully-Fisher relation, the baryonic mass discrepancy-acceleration relation, the 1-parameter dimensionality of galaxies, and the presence of both a DM and a baryonic mean surface density. . . .
It should be recalled that the connection between small scale structure features and the mass of the DM particle follows mainly from the value of the free-streaming length lfs. Structures smaller than lfs are erased by free-streaming. WDM particles with mass in the keV scale produce lfs ∼ 100 kpc while 100 GeV CDM particles produce an extremely small lfs ∼ 0.1 pc. While the keV WDM lfs ∼ 100 kpc is in nice agreement with the astronomical observations, the GeV CDM lfs is a million times smaller and produces the existence of too many small scale structures till distances of the size of the Oort’s cloud in the solar system. No structures of such type have ever been observed. Also, the name CDM precisely refers to simulations with heavy DM particles in the GeV scale. . . . The mass of the DM particle with the free-streaming length naturally enters in the initial power spectrum used in the N-body simulations and in the initial velocity. The power spectrum for large scales beyond 100 kpc is identical for WDM and CDM particles, while the WDM spectrum is naturally cut off at scales below 100 kpc, corresponding to the keV particle mass free-streaming length. In contrast, the CDM spectrum smoothly continues for smaller and smaller scales till ∼ 0.1 pc, which gives rise to the overabundance of predicted CDM structures at such scales. . . . 
Overall, seen in perspective today, the reasons why CDM does not work are simple: the heavy wimps are excessively non-relativistic (too heavy, too cold, too slow), and thus frozen, which preclude them to erase the structures below the kpc scale, while the eV particles (HDM) are excessively relativistic, too light and fast, (its free streaming length is too large), which erase all structures below the Mpc scale; in between, WDM keV particles produce the right answer. 
See also in accord: S. Tulin, et al. “Beyond Collisionless Dark Matter: Particle Physics Dynamics for Dark Matter Halo Structure” (15 Feb 2013) http://arxiv.org/abs/1302.3898:
As is well known, the collisionless cold DM (CCDM) paradigm has been highly successful in accounting for large scale structure of the Universe. . . . Precision observations of dwarf galaxies show DM distributions with cores, in contrast to cusps predicted by CCDM simulations. It has also been shown that the most massive subhalos in CCDM simulations of Milky Way (MW) size halos are too dense to host the observed brightest satellites of the MW. Lastly, chemo-dynamic measurements in at least two MW dwarf galaxies show that the slopes of the DM density profiles are shallower than predicted by CCDM simulations.
A number of more recent papers have highly constrained the mass range for warm dark matter and have disfavored models with multiple kinds of dark matter.

deVega and Sanchez, for example, offer up, "Dark matter in galaxies: the dark matter particle mass is about 2 keV" (Submitted on 2 Apr 2013) http://arxiv.org/abs/1304.0759
Warm dark matter (WDM) means DM particles with mass m in the keV scale. For large scales, for structures beyond 100 kpc, WDM and CDM yield identical results which agree with observations. For intermediate scales, WDM gives the correct abundance of substructures. Inside galaxy cores, below 100 pc, N-body classical physics simulations are incorrect for WDM because at such scales quantum effects are important for WDM. Quantum calculations (Thomas-Fermi approach) provide galaxy cores, galaxy masses, velocity dispersions and density profiles in agreement with the observations. All evidences point to a dark matter particle mass around 2 keV. Baryons, which represent 16% of DM, are expected to give a correction to pure WDM results. The detection of the DM particle depends upon the particle physics model.  . . . So far, not a single valid objection arose against WDM.
See also, for example, C. Watson, et al., "Constraining Sterile Neutrino Warm Dark Matter with Chandra Observations of the Andromeda Galaxy" (10 Jan 2012) http://arxiv.org/abs/1111.4217 (WDM mass capped at 2.2 keV); R. de Souza, A. Mesinger, A. Ferrara, Z. Haiman, R. Perna, N. Yoshida, "Constraints on Warm Dark Matter models from high-redshift long gamma-ray bursts" (17 Apr 2013) http://arxiv.org/abs/1303.5060 (WDM mass at least 1.6 keV); D. Anderhalden, et al., "Hints on the Nature of Dark Matter from the Properties of Milky Way Satellites" (12 Dec 2012) http://arxiv.org/pdf/1212.2967v1.pdf (mixed CDM/WDM models disfavored); J. Viñas, et al., "Typical density profile for warm dark matter haloes" (9 Jul 2012) http://arxiv.org/abs/1202.2860 (models with more than one WDM species disfavored); Xi Kang, Andrea V. Maccio, Aaron A. Dutton, "The effect of Warm Dark Matter on galaxy properties: constraints from the stellar mass function and the Tully-Fisher relation" (8 April 2013) http://arxiv.org/abs/1208.0008 (WDM mass of more than 0.75 keV and consistent with 2 keV).

Weakly interacting particles light enough to be dark matter are ruled out experimentally

Particles that interact via the Standard Model weak force with masses of less than 45 GeV have been excluded for many years by precision electro-weak measurements at LEP (and this will soon rise to 62.5 GeV as Higgs boson decays are analyzed).

This means that any dark matter particle must interact with ordinary matter, if it interacts at all other than via gravity, via a force other than the three Standard Model forces (although it could conceivably couple to the Higgs boson).

SUSY can't supply particles that could be dark matter

Searches for SUSY particles at colliders like the LHC have likewise established that there are no sparticles with masses below the hundreds of GeV in fairly simple MSSM and NMSSM SUSY models.

None of the SUSY models propose particles consistent with experimental data describing dark matter phenomena. Any viable explanation of dark matter effects needs to come from a source outside SUSY and SUGRA theories.

Experiments do not rule out the possibility that ephemeral and unstable heavy SUSY particles exist, but even if they do, they cannot be the source of dark matter. SUSY theories, generically, have no features which could help solve this most glaring of the remaining unsolved problems in fundamental physics.

Wednesday, May 1, 2013

Some Important Open Issues In Human Prehistory

1. When and where did Neanderthal admixture take place?

One, two and three admixture event scenarios are all plausible (and, of course, more complex scenarios are also possible).

The simplest model assumes that admixture took place ca. 100 kya to 75 kya, at a time when the archaeology shows the two species overlapping in the Levant and the effective population size would have been smallest, and not thereafter. But, this would mean that a schism into West Eurasian and East Eurasian populations would have had to happen very early on, to prevent the admixed Neanderthal genes from coming closer to fixation across proto-West Eurasians and proto-East Eurasians than the observed differences in which Neanderthal genes are found in each population would suggest.

Another one admixture model puts the date closer to 50-75 kya, when the modern human population was at a post-Toba low point after an initial Out of Africa surge, perhaps in an Arabian or Persian Gulf refugium. This reduces the effective population size, and allows for a sustained period of declining population during which genetic drift can lose some of the Neanderthal inheritance and give rise to founder effects not likely to be seen in a rapidly expanding population that has reached fixation.

In a simple two event model, West Eurasians and East Eurasians split before admixture takes place and Neanderthal admixture takes place in parallel processes that produce similar overall levels of admixture in each clade of Eurasians in different places - perhaps one in Anatolia, and another in Arabia, Persia or South Asia. This also has the virtue of driving down effective population sizes in each source population.

A three event model combines these two models, with some admixture taking place early on, pre-schism, and some taking place later in parallel.

An example of a more complex model would be one with a common admixture event, additional East Eurasian admixture, and then additional West Eurasian admixture that is diluted in the Upper Paleolithic to Neolithic transition by West Asian populations that did not experience the additional West Eurasian admixture experienced by Europeans with prolonged co-existence with Neanderthals.

2. Were there cases of archaic admixture in Africa?

Preliminary population genetic analysis, and possibly the Y-DNA haplogroup A00 that looks older than the species, make this look plausible, and such admixture may have happened in two or three separate cases in Africa quite recently.

The case for Neanderthal and Denisovan admixture where observed is in my opinion rock solid and not adequately explained by any other mechanism (such as deep population structure in Africa within modern humans).

The window of time in which additional non-African, non-Neanderthal, non-Denisovan admixture could be discovered in modern humans alive today hasn't completely closed but is well on its way to doing so. It is quite clear that this isn't present outside modern relict populations of Eurasia and the Americas.

3. It is clear that there was a major modern human population genetic transition in Europe between the Upper Paleolithic and the early Iron Age. How much of this transition in any given part of Europe took place:
* during the Mesolithic era on the eve of the Neolithic revolution;
* when the first farmers arrived with the Neolithic revolution;
* in the mid- to late Neolithic (i.e. the Copper Age and early Bronze Age);
* in the early Iron Age.

Two subquestions here are particularly unclear.

First, were Y-DNA haplogroup R1b and mtDNA haplogroup H, respectively, which expanded from Iberia during the Neolithic era and Bronze Age, indigenous haplogroups in continuity with the Paleolithic era, or did they arrive with folk migrations in the Mesolithic, the early Neolithic, or with the Bell Beaker people?

Tentatively, I favor an arrival of R1b predominantly with the Bell Beaker people, and an arrival of mtDNA haplogroup H in Iberia mostly in the Mesolithic or with the Bell Beaker people. But, there is no ancient Y-DNA evidence old enough to definitively resolve the question, mutation rate based dates are too uncertain to resolve the issue, and the ancient mtDNA evidence is thin. Publicly available data from France on this point is particularly thin.

Second, what were the relative contributions of the first farmer early Neolithic megalithic culture and of the Bell Beaker expansion to the spread of Y-DNA haplogroup R1b and mtDNA haplogroup H, respectively, from Iberia during the Neolithic era and Bronze Age?

I suspect that the R1b and H expansions in Western Europe are predominantly Bell Beaker rather than Megalithic, but the evidence is not yet secure enough to be sure.

This goes hand in hand with the question of whether the Bell Beaker expansion gave rise to language shift from previous languages of the first farmers of that region. I suspect that it did, but resolution of this prehistoric genetic question would go a long way towards shoring up or undermining that assumption.

On the other hand, many points are fairly settled.

In my mind, there is increasingly little reason to doubt that Indo-European languages didn't arrive in Western Europe until at least the Urnfield culture, and that prior to that point almost the entire maximal range of Bell Beaker influence had at some point been linguistically Vasconic.

In my mind, it is all but settled that the Indo-European Iron Age tweaks to the gene pool of Western Europe were quite modest.

It is also likely all but settled in my mind that mtDNA H was at the very least confined to far Southern regions of Europe until the Neolithic at the earliest, and that mtDNA H bearers did not participate materially in the repopulation of Europe immediately after the retreat of the glaciers following the Last Glacial Maximum. This in turn implies that mtDNA H was probably not an important component of the refugia populations that participated in this repopulation, and that its bearers were probably rare in, or absent from, Southern Europe until at least the Mesolithic (e.g. arriving as a pilot wave from the Fertile Crescent and vicinity in advance of the first farmers, on the eve of the Neolithic revolution).

4. What were the geographic and archaeological culture sources for the first wave Neolithic populations of Europe and the Bell Beaker peoples?

Modern Europe is, to a crude approximation, a composite of a West Asian, Mediterranean and Northeast Asian signal. New ancient DNA data from Central Europe makes that region an unlikely Bell Beaker source population. A region roughly stretching from the Balkans to Anatolia to the Caucasus to the Zagros Mountains and Western Persia to the remainder of the Fertile Crescent contains the most plausible candidates. But, some of these areas are blanks in the ancient DNA map, and we don't have a perfectly solid sense of how the ancient populations of these regions differed from the modern ones, other than to be forewarned that there has been a lot of historic and late prehistoric era population movement. It is also possible that at least one signal from a first wave Neolithic population (most likely the LBK) has been so obscured that it can no longer be distinguished from noise in modern populations.

The emerging notion of LBK as "the Neolithic revolution wave that failed" is looking more plausible.

Data from Ukraine, Central Asia and Iran are particularly thin.

5. Are the outlier dates for modern humans in the New World from 20,000 years ago and older valid or not?

There is good reason to scrutinize this data, as explained in a previous post, but the evidence is what it is, and if there is really no methodological flaw then we have to explain it. The possibility that outlier sites could represent archaic hominin populations that made the trip before modern humans, but left a very modest footprint because their lithic technology was less advanced and they were less ecology-disrupting top predators, is particularly intriguing if the evidence becomes strong enough to force us to fit the theory to it.

6. What archaic hominin species is Denisovan DNA associated with?

Denisovan DNA was found in a Siberian cave. Denisovan admixture is found from roughly the Wallace line and beyond. At least two known archaic hominin species were present in between: Homo Erectus, in most of that region, and Homo Floresiensis, on Flores and perhaps a couple of neighboring islands only. Theories about the existence as a separate species, and the range, of Homo Heidelbergensis are also sketchy.

There is no solid indication of Neanderthals beyond South Asia, but they aren't entirely ruled out, particularly via a Northern route to the Denisovan cave. Some clade analysis of the Denisovan DNA suggests a clade shared with Neanderthals apart from Homo Erectus.

7. When did modern humans make it from South Asia to Southeast Asia?

The window admitted by archaeology is approximately 100,000 years ago to 45,000 years ago, with a time frame of 65,000 to 75,000 years ago most strongly favored by the evidence, but not very conclusively established.

8. How many waves of mass migration were there to Asia in the Upper Paleolithic and when did they take place?

There is little evidence of prehistoric mass population genetic transfer between East Eurasia and West Eurasia (with a couple of notable exceptions, e.g., for Uralic populations and in the case of mtDNA haplogroup X) prior to the historic era after this initial first order split of Eurasians after leaving Africa.

The data is probably a better fit to several waves of pre-Neolithic migration, rather than just one (although the major Neolithic waves need to be understood to parse the earlier waves from the genetic data). But, the timing and paths of these waves are subject to reasonable dispute.

9. How did Homo Erectus go extinct?

While the evidence is very thin indeed on this point, extinction due to warfare and/or an inability to compete for food and territory with modern humans upon first contact, outside of Flores where a cooperative mode emerged, possibly boosted by the Toba eruption or a climate shift, seems most plausible.

10. When, if ever, did Homo Floresiensis go extinct?

The earliest possible date for the extinction is about 10,000 years ago, which is the date of the most recent skeletal evidence, but there is good reason to think that they persisted even after first contact with Europeans and that there may even be a tiny population of this species extant on one known part of one Indonesian island.

11. Is the Inuit language linguistically related to the Uralic languages as part of a transpolar language family?

There is some scholarly evidence to suggest that this might be the case, but it is not widely accepted at this point.

12. Was the Na-Dene a separate migration wave to the New World, distinct from the Paleo-Eskimo Dorset, or do they represent admixture of Dorset and first wave indigenous American populations?

Historically, these have been treated as two separate post-first wave indigenous population events, but some suggestive ancient DNA analysis points to collapsing them into one wave. This population is the only one strongly linked linguistically to an Old World Paleo-Siberian population to date.

13. To what extent can known historical cultures of the New World in the last several thousand years properly be said to be derived from each other? What is the sequence of historical cultures in North America?

There is accumulating evidence that a lot of separate archaeological cultures in the New World can all be traced back to an early one near Monroe, Louisiana.

14. What are the linguistic origins of Japanese?

The Yayoi people came from Korea and the evidence strongly favors one of the languages of Korea at about that time as a source, but there is not agreement on which one. There is likewise dispute over whether Korean and Japanese languages are related to the Altaic languages with some evidence, particularly statistical analysis of lexemes, favoring that position.

15. Is the Dravidian language family indigenous or did it have an outside source, and if so which one?

Multiple theories exist. I tend to favor an Afro-Dravidian hypothesis with a source language not that different from early Swahili, although distinct from it. This is because the South Asian Neolithic involved a heavy component of Sahel African crops. Some archaeological evidence suggests that it could be that these crops were brought to South Asia, domesticated by the Harappans or related populations, and then returned to Africa.

16. How did Y-DNA haplogroup T end up in the eastern part of South Asia?

This distribution is suggestively similar to the expansion of the Dravidian language and South Asian Neolithic. But, the resolution of the Y-DNA haplogroup T data collected so far is not high enough to discern its source (and possibly with it a source for Dravidian crop transfer).

Monday, April 29, 2013

More Direct Dark Matter Detection Experiment Result Skepticism

Direct dark matter detection experiments appear to have cried wolf so many times that skepticism of their results is (appropriately) mounting.
Apparent effects of dark matter have been “discovered” so many times in the last decade that you may by now feel a bit jaded, or at least dispassionate. Certainly I do. Some day, some year, one of these many hints may turn out to be the real thing. But of the current hints? We’ve got at least six, and they can’t all be real, because they’re not consistent with one another. It’s certain that several of them are false alarms; and once you open that door a crack, you have to consider flinging it wide open, asking: why can’t “several” be “all six”? 
Professor Matt Strassler.

The evidence that something is causing the universe to behave at astronomy scales in a manner inconsistent with general relativity acting primarily on luminous matter is overwhelming and largely internally consistent.  Most commonly these effects are attributed to "dark matter" and "dark energy."

But, the evidence that particular experiments have actually discovered new kinds of particles that cause these effects is not currently compelling.  They are seeing "something" that gets tagged as an event, but it is very hard to distinguish those events from experimental error and unknown but pedestrian background effects.

Monday, April 22, 2013

Evidence Of 22,000 Year Old Human Habitation In Brazil Is Weak

In the extraordinary claims require extraordinary evidence department, I am deeply skeptical of the claim that archaeologists have found human made stone tools in a Brazilian rock-shelter that date to more than 22,000 years ago. (The linked story's source is: C. Lahaye et al., Human occupation in South America by 20,000 BC: The Toca da Tira Peia site, Piaui, Brazil, Journal of Archaeological Science. (March 4, 2013)).

While there very likely were pre-Clovis modern humans in the New World, the evidence that there were humans in Brazil nine thousand years pre-Clovis is not strong.  The new evidence at the Toca da Tira Peia rock-shelter is in the same Brazilian national park as the site of a previous claim at Pedra Furada alleged to be 50,000 years old by their investigators.

Skeptics have argued that the "unearthed burned wood and sharp-edged stones" dated to such ancient time periods, "could have resulted from natural fires and rock slides." 
[The new] site’s location at the base of a steep cliff raises the possibility that crude, sharp-edged stones resulted from falling rocks, not human handiwork, says archaeologist Gary Haynes of the University of Nevada, Reno. Another possibility is that capuchins or other monkeys produced the tools, says archaeologist Stuart Fiedel of Louis Berger Group, an environmental consulting firm in Richmond, Va. 
Other researchers think that the discoveries are human-made implements similar to those found in Chile and Peru, at the Monte Verde site dated to 14,000 years ago and at another site possibly as old as 33,000 years ago (the dating method for the older dates is likewise widely questioned).

The dating methods are also suspect. 
Dating the artifacts hinges on calculations of how long ago objects were buried by soil. Various environmental conditions, including fluctuations in soil moisture, could have distorted these age estimates, Haynes says. . . . An absence of burned wood or other finds suitable for radiocarbon dating at Toca da Tira Peia is a problem, because that’s the standard method for estimating the age of sites up to around 40,000 years ago, Dillehay says. But if people reached South America by 20,000 years ago, “this is the type of archaeological record we might expect: ephemeral and lightly scattered material in local shelters.” 

The dates of the 113 putative human artifacts were obtained with a "technique that measures natural radiation damage in excavated quartz grains, the scientists estimated that the last exposure of soil to sunlight ranged from about 4,000 years ago in the top layer to 22,000 years ago in the third layer. Lahaye says that 15 human-altered stones from the bottom two soil layers must be older than 22,000 years."

Fundamentally, the dates are questionable because:

* There is no historical precedent for modern humans moving into virgin territory on a sustained basis for thousands of years without expanding exponentially and leaving an obvious sign of their presence.  If the site showed the signs of a marginal community that lasted a few hundred years and collapsed, that might be imaginable.  But, this site purports to show continuous habitation for eighteen thousand years or more.

* There is an absence of intermediate sites between South America and any possible source of humans in the appropriate time frame.  (There is one alleged older site in the American Southeast with similar dating and hominin identification issues).

* There is no skeletal evidence matching the remains definitively to modern humans prior to 14,000 years ago or so.  Even if the dates were indisputably that old and the tools were made by hominins, in that time frame a small band of archaic hominins with less of a capacity to dominate their surroundings might be more plausible.

* No radiocarbon dating has been possible and it is not well established that the dating method used is really accurate to the necessary degree of precision.

Friday, April 19, 2013

Local Dark Matter Density

A detailed model of the inferred dark matter halo of the Milky Way galaxy based on the observed motions of stars in our galaxy implies that the density of dark matter in the vicinity of our solar system is 0.49 +0.08/-0.09 GeV/cm^3. 

If a dark matter particle is 2 keV as implied by other studies, then the density in dark matter particles per volume in this vicinity of the galaxy is 245,000 per cm^3, which is 245 dark matter particles per millimeter^3.

At an 8 GeV mass (which is disfavored by other measurements) there would be one particle per sixteen cm^3 (about one per cubic inch).  At a 130 GeV mass (also disfavored by other measurements) there would be one particle per 260 cm^3 (a cube slightly more than 6 cm on each side).
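A minimal sketch of the arithmetic behind these figures, using the 0.49 GeV/cm^3 central value from the halo model cited above; the particle masses are the ones discussed in this post.

```python
# Local dark matter number density: mass density divided by an assumed
# particle mass gives particles per unit volume.

LOCAL_DM_DENSITY_GEV_PER_CM3 = 0.49   # central value from the halo model cited above

def particles_per_cm3(mass_gev: float) -> float:
    """Number of dark matter particles per cubic centimeter for a given particle mass."""
    return LOCAL_DM_DENSITY_GEV_PER_CM3 / mass_gev

for label, mass_gev in [("2 keV", 2e-6), ("8 GeV", 8.0), ("130 GeV", 130.0)]:
    n = particles_per_cm3(mass_gev)
    if n >= 1:
        print(f"{label}: about {n:,.0f} particles per cm^3")    # 2 keV: ~245,000
    else:
        print(f"{label}: about one particle per {1/n:.0f} cm^3")  # 8 GeV: ~16; 130 GeV: ~265
```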

Wednesday, April 17, 2013

Two Physics Quick Hits

* Dark Matter
Among the hints of dark matter, I believe that the three apparently decidedly non-background-like events seen by CDMS II represent the strongest hint of a dark matter particle we have seen so far.
However, there are other hints, too. . . . CDMS suggests a WIMP mass of 8.6 GeV; AMS-02 indicates 300 GeV or more; and we also have the Weniger line at Fermi which would imply a WIMP mass around 130-150 GeV. These numbers are apparently inconsistent with each other.
An obvious interpretation is that at most one of these signals is genuine and the other ones – or all of them – are illusions, signs of mundane astrophysics, or flukes. But there's also another possibility: two signals in the list or more may be legitimate and they could imply that the dark sector is complicated and there are several or many new particles in it, waiting to be firmly discovered.
From here.

Lubos Motl greatly understates the weight of the experimental difficulties facing potential dark matter particles in these mass ranges if these indications are anything more than experimental error or statistical flukes.

There are other contradictory results from other experiments at the same light dark matter scale as CDMS II.  Some purport to exclude the result that it claims to see, and either see nothing at all or see something else at a different mass or cross-section of interaction.  One other experiment seems to confirm the CDMS-II result.

Even if there are dark matter particles, however, none of the hints of possible dark matter particles are consistent with the dark matter particle needed to explain the astronomy data, which is the reason we think dark matter exists at all.

Hot Dark Matter (i.e. neutrinos) does not fit the astronomy evidence.

Cosmic microwave background radiation measurements (most recently by the Planck satellite, which are as precise as it is theoretically possible to be in these measurements from the vicinity of Earth) establish that dark matter particles must have a mass significantly in excess of 1 eV (i.e. dark matter can't be "hot dark matter"), and the standard cosmology hypothesizes that a bit more than a quarter of the mass-energy of the universe (and the lion's share of the matter in the universe) consists of unspecified stable, collisionless dark matter particle relics with masses greater than 1 eV.

Cold Dark Matter (e.g. heavy WIMPs) does not fit the astronomy evidence.

Simulations of galaxy formation in the presence of dark matter are inconsistent with "cold dark matter" of roughly 8 GeV and up of particle mass.  Cold dark matter models, generically, predict far more dwarf galaxies than we observe, and also predict dark matter halo shapes inconsistent with those inferred from the movement of luminous objects in galaxies.

The conventional wisdom in the dark matter field has not necessarily fully come to terms with this fact, in part because a whole lot of very expensive experiments which are currently underway, and have many years of observations ahead of them, were designed under the old dominant WIMP dark matter paradigm that theoretical work now makes clear can't fit the data.

Warm Dark Matter could fit the astronomy evidence.

The best fit collisionless particle to the astronomy data at both CMB and galaxy level is a single particle of approximately 2000 eV (i.e. 0.000002 GeV) mass, with an uncertainty of not much more than +/- 40%.  This is called "warm dark matter."

There is not yet a consensus, however, that has determined if this model can actually explain observed dark matter phenomena.  A particle of this kind can definitely explain many dark matter phenomena, but it isn't at all clear that it can explain all of it.  Most importantly, it isn't clear that this model produces the right shape for dark matter halos in all of the different kinds of galaxies where these are observed (something that MOND models can do with a single parameter that has a strong track record of making accurate predictions in advance of astronomy evidence).

The simulations that prefer warm dark matter models over cold dark matter models also strongly disfavor models with multiple kinds of warm dark matter, although some kind of self-interaction of dark matter particles is not ruled out.  So, more than just the aesthetic considerations that Lubos Motl discusses in his post disfavor a complicated dark matter sector. 

This simulation based inconsistency with multiple kinds of dark matter disfavors dark matter scenarios in which the Standard Model is naively extended by creating three "right handed neutrinos" corresponding on a one to one basis with the weakly interacting left handed neutrinos of the Standard Model, but with different and higher masses, and with some sort of new interaction governing left handed neutrino to right handed neutrino interactions.

But, as explained below, any warm dark matter would have to be "sterile" with respect to the weak force.

These limitations on dark matter particles make models in which sterile neutrino-like dark matter particles arise in the quantum gravity part of a theory, rather than in the Standard Model, electro-weak plus strong force part of the theory, attractive.

Light WIMPs are ruled out by particle physics, although light "sterile" particles are not.

No dark matter particle that is weakly interacting and has a mass of less than 45 GeV is consistent with experimental evidence.  Any weakly interacting particle with less than half of the Z boson mass would have been produced in either W boson or Z boson decays.  Moreover, if there were some new weakly interacting particle with a mass of between 45 GeV and 63 GeV, it would have been impossible to observe, as the LHC has to date, Higgs boson decays that are consistent with a Standard Model prediction that does not include such a particle to within +/- 2% or so on a global basis. 

In other words, any dark matter particle of less than 63 GeV would have to be "sterile" which is to say that it does not interact via the weak force.  This theoretical consideration strongly disfavors purported observations of weakly interacting matter in the 8 GeV to 20 GeV range over a wide range of cross-sections of interaction, where direct dark matter detection experiments are producing contradictory results.

It is also widely recognized that dark matter cannot fit current models if it interacts via the electromagnetic force (i.e. if it is "charged") or if it has a quantum chromodynamic color charge (as quarks and gluons do). This kind of dark matter would have too strong a cross-section of interaction with ordinary matter to fit the collisionless or solely self-interacting nature it must have to fit the dark matter paradigm for explaining the astronomy data.

So, sterile dark matter can't interact via any of the three Standard Model forces, although it must interact via gravity and could interact via some currently undiscovered fifth force via some currently undiscovered particle (or if dark matter were bosonic, via self-interactions).

Neither heavy nor light WIMPs fit the experimental evidence as dark matter candidates.

Since light WIMPs (weakly interacting massive particles) under 63 GeV are ruled out by particle physics, and all WIMPs in the MeV mass range and up are ruled out by the astronomy data, the WIMP paradigm for dark matter that reigned for most of the last couple of decades is dead except for the funeral. 

Even if there are heavy WIMPs out there with masses in excess of 63 GeV which are being detected (e.g. by the Fermi line, or AMS-02, or at the LHC at some future date), these WIMPs can't be the dark matter that accounts for the bulk of the matter in the universe.

Direct detection of WIMP dark matter candidates is hard.

It is possible to estimate with reasonable accuracy the density of dark matter that should be present in the vicinity of Earth if the shape of the Milky Way's dark matter halo is consistent with the gravitational effects attributed to dark matter that we observe in our own galaxy. 

So, for any given dark matter particle mass it is elementary to convert that dark matter density into the number of dark matter particles in a given volume of space.  It takes a few more assumptions, but not many, to predict the number of dark matter particles that should pass through a given area in a given amount of time in the vicinity of the Earth, if dark matter particles have a given mass.
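For concreteness, here is a minimal sketch of that conversion, using conventional ballpark inputs (a local dark matter density of roughly 0.3 GeV per cubic centimeter and a typical halo velocity of roughly 230 km/s; both numbers are assumptions used only for illustration):

```python
# Rough dark matter flux estimate near Earth (order-of-magnitude sketch).
rho_local = 0.3     # GeV per cm^3, assumed local dark matter density
v_typical = 230e5   # cm per second (~230 km/s), assumed typical halo speed

def wimp_flux(mass_gev):
    """Particles per square centimeter per second for a given particle mass."""
    number_density = rho_local / mass_gev   # particles per cm^3
    return number_density * v_typical       # flux ~ n * v

for m in (10, 100, 1000):   # candidate masses in GeV
    print(f"m = {m:5d} GeV -> flux ~ {wimp_flux(m):.1e} per cm^2 per second")
```

The heavier the assumed particle, the fewer of them pass through the detector, which is one reason detector exposure matters so much in heavy WIMP searches.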

If you are looking for weakly interacting particles (WIMPs), you use essentially the same methodologies used to directly detect weakly interacting neutrinos, but calibrated so that the expected mass of the incoming particles is much greater.  The cross-section of weak force interaction, adjusted to fit constraints from astronomy observations such as colliding galactic clusters, further tunes your potential signal range.  You do your best to either shield the detector from background interactions or statistically subtract background interactions from your data, and then you wait for dark matter particles to interact via the weak force (i.e. collide) with your detector in a way that produces an event that your detector measures, which should happen only very infrequently because the cross-section of interaction is so small.

For heavy WIMPs in the GeV to hundreds of GeV mass ranges the signal should be pretty unmistakable, because it would be neutrino-like but much more powerful.  But, this is greatly complicated by a lack of an exhaustive understanding of the background.  We do not, for example, have a comprehensive list of all sources that create high energy leptonic cosmic rays.

Direct detection of sterile dark matter candidates is virtually impossible.

The trouble, however, is that sterile dark matter should have a cross-section of interaction of zero and never (or at least almost never) collide with the particles in your detector, unless there is some new fundamental force that governs interactions between dark matter and non-dark matter, rather than merely governing interactions between two dark matter particles.  Simple sterile dark matter particles, which only interact with non-dark matter via gravity, should be impossible to detect directly following the paradigm used to detect neutrinos directly.

Moreover, while it is possible that sterile dark matter particles might have annihilation interactions with each other, this happens only in models with very non-generic choices about their properties.  If relic dark matter is all matter and not antimatter, and exists in only a single kind of particle, it might not annihilate at all, and even if it does, the signal of a two particle warm dark matter annihilation, which would have an energy on the order of 4 keV, would be very subtle and hard to distinguish from all sorts of other low energy background effects.

And, measuring individual dark matter particles via their gravitational effects is effectively impossible as well, because everything from dust to photons to cosmic microwave background radiation to uncharted mini-meteoroids to planets to stars near and far contributes to the background, and the gravitational pull of an individual dark matter particle is so slight.  It might be possible to directly measure the collective impact of all local dark matter in the vicinity via gravity with enough precision, but directly measuring individual sterile dark matter particles via gravity is so hard that it verges on being even theoretically impossible.

If sterile dark matter exists, its properties, like those of gluons which have never been directly observed in isolation, would have to be entirely inferred from indirect evidence.

* The Higgs boson self-coupling.

One of the properties of the Standard Model Higgs boson is that it couples not only to other particles, but also to itself, with a self-coupling constant lambda of about 0.13 at the energy scale of its own mass if it has a mass of about 125 GeV, as it has been measured to have at the LHC (see also, in accord, this paper of January 15, 2013).  The Higgs boson self-coupling constant, like the other Standard Model coupling constants and masses, varies with the energy scale of the interactions involved in a systematic way determined by the "beta function" of the particular coupling constant.

A Difficult To Measure Quantity

Many properties of the Higgs boson have already been measured experimentally with considerable precision, and closely match the Standard Model expectation.  The Higgs boson self-coupling, however, is one of the hardest properties of the Higgs boson to measure precisely in an experimental context.  With the current LHC data set, the January 15, 2013 paper notes that "if we assume or believe that the 'true' value of the triple Higgs coupling lambda is lambda_true = 1, then . . . We can conclude that the expected experimental result should lie within lambda (0.62, 1.52) with 68% confidence (1 sigma), and lambda (0.31, 3.08) at 95% (2 sigma) confidence."

Thus, if the Standard Model prediction is accurate, the measured value of lambda based on all LHC data to date should fall between roughly 0.08 and 0.20 at the one sigma confidence level, and between roughly 0.04 and 0.40 at the two sigma confidence level.
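To see where those numbers come from, here is a minimal sketch, assuming the usual normalization in which the Standard Model value is lambda = m_H^2 / (2 v^2), and simply rescaling the paper's quoted relative intervals by that value:

```python
# Standard Model Higgs self-coupling and the rescaled confidence intervals.
m_H = 125.0    # GeV, Higgs boson mass (approximate)
v = 246.22     # GeV, Higgs field vacuum expectation value

lam_sm = m_H**2 / (2 * v**2)
print(f"lambda_SM ~ {lam_sm:.2f}")   # ~0.13

# The paper quotes intervals in units of the true (Standard Model) value,
# so multiplying by lambda_SM converts them into absolute values.
for lo, hi, label in [(0.62, 1.52, "1 sigma"), (0.31, 3.08, "2 sigma")]:
    print(f"{label}: {lo * lam_sm:.2f} to {hi * lam_sm:.2f}")
# 1 sigma: ~0.08 to 0.20; 2 sigma: ~0.04 to 0.40
```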

With an incomplete LHC data set at the time their preprint was prepared that included only the first half of the LHC data to date, the authors were willing only to assert that the Higgs boson self-coupling constant lambda was positive, rather than zero or negative.  But, even a non-zero value of this self-coupling constant rules out many beyond the Standard Model theories. 

After the full LHC run (i.e. 3000/fb, far more data than has been collected so far), it should be possible to obtain a +30%, -20% uncertainty on the Higgs boson self-coupling constant lambda.  If the Standard Model prediction is correct, that would be a measured value at the two sigma level of roughly 0.10 to 0.21.

BSM Models With Different Higgs Boson Self-Coupling Constant Values

There are some quite subtle variations on the Standard Model that are identical in all experimentally measurable respects except that the Higgs boson self-coupling constant is different.  A larger self-coupling would have the effect of giving rise to an elevated level of Higgs boson pair production while not altering any other observable feature of Higgs boson decays.  So, if decay products derived from Higgs boson pair decays are more common, relative to decay products derived from other means of Higgs boson production, than expected in the Standard Model, then the Higgs boson self-coupling constant is higher than expected.

A recent post at Marco Frasca's Gauge Connection blog discusses two of these subtle Standard Model variants which are described at greater length in the following papers:

* Steele, T., & Wang, Z. (2013). Is Radiative Electroweak Symmetry Breaking Consistent with a 125 GeV Higgs Mass? Physical Review Letters, 110 (15) DOI: 10.1103/PhysRevLett.110.151601  (open access version available here).

This model, sometimes called the conformal formulation, predicts a different Higgs boson self-coupling constant value of lambda=0.23 at the Higgs boson mass for a Higgs boson of 125 GeV (77% more than the Standard Model prediction).  In this scenario, electroweak symmetry breaking takes place radiatively, with an ultraviolet (i.e. high energy) GUT or Planck scale boundary condition at which the Higgs boson self-coupling takes on a particular value (for example, zero), and its value at lower energy scales is determined by the beta function of the Higgs boson self-coupling.

This model, in addition to providing a formula for the Higgs boson mass, thereby reducing the number of experimentally measured Standard Model constants by one, dispenses with the quadratic term in the Standard Model Higgs boson equation that generates the hierarchy problem.  The hierarchy problem, in turn, is a major part of the motivation for supersymmetry.  But, it changes no experimental observables other than the Higgs boson pair production rate, which is hard to measure precisely, even with a full set of LHC data that provides quite accurate measurements of most other Higgs boson properties.

Similar models are explored here.

* Marco Frasca (2013). Revisiting the Higgs sector of the Standard Model arXiv arXiv: 1303.3158v1

This model predicts the existence of excited higher energy "generations" of the Higgs boson at energies much higher than those that can be produced experimentally, giving rise to a predicted Higgs boson self-coupling constant value of lambda=0.36 (which is 177% greater than the Standard Model prediction).

Both of these alternative Higgs boson theories propose Higgs boson self-coupling constant values that are outside the one sigma confidence interval of the Standard Model value based upon the full LHC data to date, but are within the two sigma confidence interval of that value.  The 0.13 v. 0.23 Higgs self-coupling distinction looks like it won't be possible to resolve at more than a 2.6 sigma level even after much more data collection at the LHC, although it should ultimately be possible to distinguish Frasca's estimate from the SM estimate at the 5.9 sigma level before the LHC is done, and at the 2 sigma level much sooner.
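A crude way to see where sigma levels of that order come from (a sketch that treats the projected +30% upward uncertainty on lambda as a rough one sigma error; the analyses in the papers themselves are more involved):

```python
# Crude significance estimate for distinguishing alternative self-coupling
# values from the Standard Model value, given a projected fractional error.
lam_sm = 0.13
sigma_up = 0.30 * lam_sm   # projected +30% uncertainty, treated as one sigma

for lam_alt, label in [(0.23, "radiative (conformal) model"),
                       (0.36, "Frasca's model")]:
    n_sigma = (lam_alt - lam_sm) / sigma_up
    print(f"{label}: ~{n_sigma:.1f} sigma from the Standard Model value")
# ~2.6 sigma for lambda = 0.23 and ~5.9 sigma for lambda = 0.36
```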

SUSY and the Higgs Boson Self-Coupling Constant

Supersymmetry (SUSY) models generically have at least five Higgs bosons, two of which have the neutral electromagnetic charge and even parity of the 125 GeV particle observed at the LHC.  The others might be quite heavy and have only a modest impact on experimental results at sub-TeV energy scales.

The closer the measured Higgs boson self-coupling is to the Standard Model expectation, and the more precise that measurement is, the more constrained the properties of the other four Higgs bosons and the other supersymmetric particles must be in those models, since the other four can't be contributing much to the scalar Higgs field in low energy interactions if almost all of the observational data is explained by a Higgs boson that looks almost exactly like the Standard Model one.

The mean value of the Higgs boson contribution to electroweak symmetry breaking is about 96%, with a precision of plus or minus about 14 percentage points.  If the actual value, consistent with experiment, is 100%, then other Higgs bosons either do not exist or are "inert" and do not contribute to electroweak symmetry breaking.



Tuesday, April 16, 2013

Ainu Origins

Razib has flagged a December 2012 study on autosomal DNA in Ainu and other Japanese populations.  The full study is pay per view, but the abstract is as follows:
57 Journal of Human Genetics 787-795 (December 2012)
The history of human populations in the Japanese Archipelago inferred from genome-wide SNP data with a special reference to the Ainu and the Ryukyuan populations
Japanese Archipelago Human Population Genetics Consortium
Abstract
The Japanese Archipelago stretches over 4000km from north to south, and is the homeland of the three human populations; the Ainu, the Mainland Japanese and the Ryukyuan. The archeological evidence of human residence on this Archipelago goes back to >30000 years, and various migration routes and root populations have been proposed. Here, we determined close to one million single-nucleotide polymorphisms (SNPs) for the Ainu and the Ryukyuan, and compared these with existing data sets. This is the first report of these genome-wide SNP data. Major findings are: (1) Recent admixture with the Mainland Japanese was observed for more than one third of the Ainu individuals from principal component analysis and frappe analyses; (2) The Ainu population seems to have experienced admixture with another population, and a combination of two types of admixtures is the unique characteristics of this population; (3) The Ainu and the Ryukyuan are tightly clustered with 100% bootstrap probability followed by the Mainland Japanese in the phylogenetic trees of East Eurasian populations. These results clearly support the dual structure model on the Japanese Archipelago populations, though the origins of the Jomon and the Yayoi people still remain to be solved.
The Yayoi people arrived in Japan from Korea immediately following the Jomon period in Japan, around 900 BCE to 800 BCE.  They brought with them the core of what would become the modern Japanese language, cavalry warriors, and the rice farming method of food production used on the mainland.  The precise culture on the then balkanized Korean peninsula that was ancestral to the Yayoi is disputed, but linguistically they were not a Tibeto-Burman people, although they were a people who had experienced considerable Chinese cultural influence.

The culture created by the fusion of the Yayoi superstrate and the Jomon substrate upon the arrival of the Yayoi in Japan did not extend to all of Japan's main island of Honshu until about 1000 CE or later.

The genetic evidence shows that, while the Jomon language and much of its culture were wiped out on Honshu, a very substantial proportion of the genetic ancestry of the modern Japanese people is Jomon, in comparison to other historically or archaeologically attested encounters between hunter-gatherer populations and farmers.  The Jomon had pottery long before they were farmers, contrary to the experience in the Fertile Crescent, where there was a long pre-Pottery Neolithic period, and in most other places.

The Ainu and Ryukyuan ethnic minorities in Japan are widely believed to have significantly more indigenous Japanese (i.e. Jomon) ancestry and less Yayoi ancestry than the majority ethnicity in Japan. This autosomal genetic study appears to confirm this conclusion.

But, the genetics of the Ainu come with a twist. The Ainu appear to have another ancestral component not present in the also Jomon-derived Ryukyuan people. The obvious guess in the absence of the closed access paper, based on the uniparental data available about the Ainu, would be that this other component comes from some existing, extinct, or moribund Northeast Asian Paleo-Siberian population.

The Jomon are also very notable for being the apparent source of Y-DNA haplogroup D, a paternal lineage that is virtually absent in mainland East Asia and Southeast Asia, and that outside of Tibet and the Andaman Islands is found only at trace to moderate frequencies in North Asia (in forms more similar to the Tibetan than to the Japanese varieties).  Y-DNA haplogroup D is more closely related to Y-DNA haplogroup E, which is the predominant Y-DNA haplogroup in modern Africa, than to any of the other Eurasian Y-DNA haplogroups.  This suggests that this population may have been part of a migration wave distinct from the main Out of Africa migration that was ancestral to most of the rest of Eurasia.

There are two possible "two wave" scenarios.  In one, the people of Y-DNA haplogroup D came first and were brought to extinction in the remainder of East Eurasia, Australia, Melanesia and Oceania by a later wave of migrants who arrived in Australia and Papua New Guinea not later than 45,000 years ago.  In the other scenario, the people of Y-DNA haplogroup D were a secondary wave of Out of Africa migration to Asia that was left with the territory that the first wave populations didn't occupy, didn't want, or couldn't defend, not later than about 30,000 years ago when Japan and Tibet were first populated, which is still well before the last glacial maximum ca. 20,000 years ago.

In the latter scenario, which I think is more likely, the Y-DNA haplogroup D people could have migrated either via a coastal maritime "Southern route" along the southern coast of mainland Asia, or via a "Northern route" to Tibet and Japan through Central Asia and/or Siberia, with a later migration from Tibet to the parts of South Asia and the Andaman Islands where Y-DNA haplogroup D is now found.  I am increasingly coming around to the Northern route, rather than the maritime coastal Southern route, as the more plausible of the two possibilities.  Other routes, such as a migration first to South India, then to Tibet, and from Tibet on to Japan, are possible, but not necessarily persuasive, since the archaeological evidence points to Tibet being populated from the direction of China, rather than India.

However, there is strong circumstantial evidence to suggest that the original Y-DNA haplogroup D people overwhelmingly had mtDNA haplogroup M.  Neither Y-DNA haplogroup D nor mtDNA haplogroup M (or its descendants) is associated with any West Eurasian populations.  So, if this D/M population did migrate via a Northern route, it is not easy to explain why they left no West Eurasian relic populations.





An Atom Drawn To Scale

Fig. 2: A more accurate depiction of an atom, showing it is mostly empty space (grey area) traversed by rapidly moving electrons (blue dots, drawn much larger than to scale) with the nucleus (red and white dot, drawn larger than scale) at center.  This is somewhat analogous to a rural community, with expanses of uninhabited land, a few scattered farm houses, and a small village with closely packed houses at its center.

From here.

Wednesday, April 10, 2013

When Should Cosmology Begin?

Cosmology is, roughly speaking, the scientific study of the history of the universe.  This is a worthy pursuit, but only to a point.

Right now, the universe has certain laws that it obeys.  Nothing moves faster than the speed of light.  The universe is expanding in a manner consistent with a simple cosmological constant.  General relativity governs gravity and describes the nature of space-time.  Baryon number and lepton number are almost perfectly preserved as separate quantities.  Mass-energy is conserved.  The quantum physical laws of the universe obey CPT symmetry, even though they are neither CP symmetric nor T symmetric.  Entropy increases over time.  Baryon number conservation and lepton number conservation severely limit the creation of antimatter.  The universe is predominantly made of matter and not antimatter.  The physical laws and physical constants of the Standard Model and General Relativity are invariant and do not change.

Extrapolating these rules of physics back in time can take you a very long way.  It can carry you through the formation of all of the atoms in the universe.  It can take you back to before the "radiation era" more than thirteen billion years ago.  It can take you back to a point in time where the mass-energy in the universe was extremely smoothly distributed in a universe that fills a far smaller volume than it does today and the ambient temperature in the universe was close to the GUT (grand unified theory) scale where all of the forces of nature start to look very similar to each other.

There are questions, however, that one cannot answer by simply extrapolating back the rules of physics without making up new ones.  You can't answer the question, "why do we have precisely the amount of mass-energy in the universe that we do?"  You can't answer the question, "why is the universe mostly matter and not antimatter?"  You can't come up with a principled answer to the question of how our current baryon number and lepton number in the universe came to be what they are today.  You can't answer the question of why the physical constants are what they are today.  You have to violate laws of physics, like the speed of light limitation, to get the universe to be sufficiently smooth in its first second or two.

Rolling back the clock can, at most, give you a set of initial conditions.  At proper time T, when the universe was X meters across, the laws of physics and physical constants were what they are today, there was this much mass-energy in the universe, the baryon number and lepton number of the universe respectively were Y and Z, the universe was A% antimatter and O% ordinary matter, and so on and so forth.

It is conceivable that this extrapolation backwards in time may even make it possible to get back to the first few minutes, or even seconds, of the universe.  But, from decades of trying, we have learned that there are questions that can't be answered simply by extrapolating back in time with the existing laws of physics; there are limits that can't be crossed without new physics.

My own bias and prejudice is to stop when we reach those limits.  Cosmology should legitimately take us back as far as possible, using the existing laws of nature, to a set of initial conditions that had to exist that far back in time.  This is a very sensible place to call "the beginning" from the point of view of scientific cosmology.  Indeed, the initial conditions themselves may be suggestive of possible new physics that could bring them about.  But, past that point, we start to engage in the process of scientific mythmaking, and stop engaging in the process of science itself.

Given that we have a thirteen billion plus year Big Bang cosmology that can't take us back before a singularity at t=0 in any case, who cares if we choose to start counting at t=0 or t=two seconds or t=ten minutes or t=one week or t=100,000 years?  As long as we go back as far as we can with existing laws of physics and set initial conditions for that point in time, any earlier initial conditions that require new physics are just question begging.  If you have to begin somewhere, why not choose a point of beginning that goes back as far as your expertise can support, but no further.

This may mean that we never get a satisfactory answer to some of these questions, but so what?  We will know what is important and will have a conclusion around which a scientific consensus can be built.  If that means leaving the source of those initial conditions unknown and unnatural, then so be it.  Life doesn't promise us answers to every question.

Why Three Generations Of Fermions?

Why are there three and only three generations of fermions?  Here is a conjecture.

One heuristic way to think about it is that the mass of a fundamental fermion beyond the stable first generation and its rate of decay via the weak force are strongly intertwined.  The heavier something is, the faster it decays.  The lighter it is, the less rapidly it decays.

But, nothing can decay via the weak force any faster than the W boson, which facilitates those decays in the Standard Model.

The top quark decays almost, but not quite, as quickly as the W boson does, and any particle much heavier would have to decay faster than the W boson.  But, because the W boson is what makes such decays possible, this can't happen.  Therefore, there can be no fundamental particles significantly heavier than the top quark.
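As a rough quantitative check of that statement, here is a sketch that converts decay widths into mean lifetimes via tau = hbar / Gamma, using the measured W boson width of about 2.085 GeV and a predicted top quark width of roughly 1.4 GeV (both inputs are approximate reference values, not results from this post):

```python
# Convert total decay widths (GeV) into mean lifetimes: tau = hbar / Gamma.
HBAR_GEV_S = 6.582e-25   # GeV * s, reduced Planck constant

widths = {"W boson": 2.085, "top quark": 1.4}   # approximate total widths in GeV
for name, gamma in widths.items():
    print(f"{name}: tau ~ {HBAR_GEV_S / gamma:.1e} seconds")
# W boson ~3.2e-25 s; top quark ~4.7e-25 s, i.e. it decays slightly more slowly.
```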

Also, there is something to Koide's formula which seems to apply quite accurately to the heavier quark masses and the charged leptons.  If one extends the formula based upon recent data on the mass of the bottom and top quarks and presumes that there is a b', t, b triple, and uses masses of 173.4 GeV for the top quark and 4.190 GeV for the bottom quark, then the predicted b' mass would be 3.563 TeV (i.e. 3,563 GeV) and the predicted t' mass would be about 83.75 TeV (i.e. 83,750 GeV).  If the relationship between decay time for fundamental fermions and mass were extrapolated in any reasonable way to these masses, they would have decay times far shorter than that of the W boson that facilitates this process.  Thus, the bar to fourth generation quarks is similar to the physics that prevents top quarks from hadronizing.
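For readers who want to reproduce these numbers, here is a minimal sketch of the extrapolation.  The Koide relation requires a triple of masses to satisfy (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))^2 = 2/3, so given two masses the third can be solved for, and the heavier root is the one quoted (the input masses and the assumption that the relation extends to these triples are the conjecture being discussed, not established facts):

```python
import math

def koide_third_mass(m1, m2):
    """Given two masses (GeV), return the heavier solution for m3 in the Koide
    relation (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))**2 = 2/3."""
    s = math.sqrt(m1) + math.sqrt(m2)
    # With x = sqrt(m3), the relation reduces to x**2 - 4*s*x + 3*(m1 + m2) - 2*s**2 = 0.
    x = 2 * s + math.sqrt(6 * s**2 - 3 * (m1 + m2))   # heavier root of the quadratic
    return x**2

m_b, m_t = 4.190, 173.4                       # bottom and top quark masses in GeV
m_bprime = koide_third_mass(m_b, m_t)         # (b, t, b') triple
m_tprime = koide_third_mass(m_t, m_bprime)    # (t, b', t') triple
print(f"b' ~ {m_bprime:,.0f} GeV, t' ~ {m_tprime:,.0f} GeV")   # ~3,563 GeV and ~83,750 GeV

m_mu, m_tau = 0.10566, 1.77682                # muon and tau masses in GeV
print(f"tau' ~ {koide_third_mass(m_mu, m_tau):.1f} GeV")       # ~43.7 GeV, as discussed below
```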

Of course, even if Koide's formula is not correct in this domain, it is suggestive of the kinds of masses for fourth generation quarks that one would expect and the estimated masses need not be very precise to give rise to the same conclusion.

This reasoning also disfavors SUSY scenarios with superpartners that are universally heavier than the top quark, as increasingly seems to be the case for the currently experimentally allowed part of the SUSY parameter space, to the extent that SUSY particle decays and ordinary particle decays both take place via the weak force, which, to some extent, is the whole point of SUSY in the first place.  A SUSY theory whose particles decay by means other than the force described in electroweak unification doesn't solve the hierarchy problem, which is its raison d'etre.

This reasoning also almost rules out annihilations of fundamental dark matter particles in the 300 GeV to 400 GeV+ mass range, suggested as one possible although quite implausible reading of the AMS-02 observations of positron proportions in high energy cosmic rays.  If no fundamental particle can be much heavier than a top quark, then this scenario is ruled out, and pair production via gamma-rays interacting with electromagnetic fields is all that remains.

The extension of a Koide triple for charged leptons (a muon, tau, tau prime triple), however, would imply a 43.7 GeV tau prime, which has been excluded at the 95% confidence level for masses of less than 100.8 GeV, and with far greater confidence at 43.7 GeV (a mass that would be produced at a significant and easy to measure frequency in Z boson decays).  This is far from the mass level at which W boson decay rates would impose a boundary on charged lepton mass.  So, one has to infer that fundamental fermion generations, by virtue of some symmetry, are all or nothing affairs, and that one cannot have just three generations of quarks while having four generations of leptons.

This kind of symmetry, if it exists, suggests that the more common sterile neutrino theories are misguided.  Even if there is a massive particle that accounts for dark matter and doesn't interact weakly or electromagnetically or via the strong force, there is no place for it dangling from the neutrinos of the Standard Model at different fermion masses.  Neutrino mass and the source of a dark matter particle very likely are not two birds that can be killed with one unified theoretical solution.

Graviweak unification models, which create a singlet sterile neutrino, whose mass is not very tightly constrained theoretically, within the gravitational sector rather than the electroweak sector, thus seem more attractive from this perspective.  These models have only left handed neutrinos and only right handed antineutrinos as a fundamental part of the theory, embracing rather than fighting what observation has told us, and the neutrinos, therefore, must acquire mass via the same essential mechanism as all of the other Standard Model fundamental fermions do.

Rather than filling the right handed neutrino gap with mere right handed sterile neutrinos, graviweak unification models fill the right handed neutrino gap with the entire gravitational sector operating in parallel to the electroweak sector, with the graviton and gravitational fields, a sterile neutrino, an intra-sterile neutrino U(1) force, and a gravity sector Higgs boson-like scalar (perhaps the very same Higgs boson extending across both the electroweak and gravitational sectors) that could be attributed to dark energy, the inflaton, inertia, or all of the above.

About Hadrons

This post recaps a few basic facts about Standard Model particle physics that are neither particularly controversial nor at the cutting edge of experimental discoveries, to provide context when discoveries are made in the future. This analysis has consulted standard reference sources, my notes from a number of QCD journal articles, and a spreadsheet created for comparison purposes.

Quarks and other fundamental particles

There are six kinds of quarks (up, down, strange, charm, bottom (formerly also known as "beauty") and top), and all of them except the top quark are always observed confined to two quark (meson) or three quark (baryon) composite structures bound by gluons.  The top quark, which is the most massive of all of the fundamental particles with a mass of about 173.3 +/- 1.4 GeV based on the most recent LHC measurements, decays via the weak nuclear force too quickly to form observable hadrons, although in principle a vanishingly small share of top quarks produced might last long enough to allow hadronization, since particle lifetimes are probabilities of decay per time period and not certainties.  (The central value of the Tevatron measurement of the top quark mass was 173.2 +/- 0.8 GeV; the combined estimate is a bit closer to 173.2 GeV than 173.3 GeV, with a two sigma confidence interval of about 172.8 GeV to 174 GeV, which probably overstates the true error, since the independent measurements are much closer to each other than we would expect if the error bars were as large as they are stated to be.)

There are eight basic kinds of gluons, which are defined by the combinations of color charges that they carry, since they are otherwise identical. Gluons have a zero rest mass, but can acquire mass dynamically as they interact with each other and quarks.

Leptons (electrons, muons, tauons, and neutrinos) can interact with particles made of quarks via the exchange of five particles that are associated with the electromagnetic and weak forces, including the Higgs boson (although they don't interact via the strong nuclear force), but they don't form composite particles with quarks of the kind that gluons bind together. The heaviest quark that forms hadrons, the bottom quark, has a rest mass of about 4.2 GeV. Up and down quarks are believed to have rest masses in the single digit MeV range (about a thousand times lighter).

Each kind of quark comes in three color charges, has an electric charge of either +2/3 (for up, charm and top quarks, with -2/3 for their antiquarks) or -1/3 (for down, strange and bottom quarks, with +1/3 for their antiquarks), can have a left or right handed chirality, and can come in matter or antimatter varieties. A particular quark has a particular rest mass associated with it, which is the same for both the particle and its antiparticle (which also have opposite electric charges). Apart from these properties, quarks are entirely identical except for their current momentum and location (both of which can't be determined at the same time beyond a certain level of precision as a fundamental principle of physics) and their history of entanglement with other quantum particles.

Quark "color", which is neutral for every confined hadron, is, like the five lightest quark masses, something that is never directly observed. We know what observable results would flow from a different number of color charges, and those predictions are inconsistent with what we see in experiments, but there is no device that exists to directly tell you if a particular quark has a red, green or blue QCD charge.

Most importantly, a three color charge system (with three corresponding anticolors) constrains all hadrons to have integer electromagnetic charges and to have particular combinations of matter and antimatter in baryons and mesons, while forbidding all other combinations.

Hadrons

Only A Finite Number Of Hadrons Are Theoretically Possible

There are roughly one hundred theoretically possible kinds of hadrons and their quantum numbers (charge, spin, etc.), which can be set down from first principles by any graduate student in physics in an afternoon from the basic rules of quantum chromodynamics, although a handful of observed states which are combinations of different electromagnetically neutral hadron states, or are excited states, are not obvious from a mere rudimentary understanding of the laws of quantum chromodynamics.  With the more sophisticated nuances like excited states and a high but not utterly unlimited bound on energy levels, maybe you can get to twice that number.
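A rough flavor-counting sketch of where a number of that order comes from (ground states only, five hadron-forming quark flavors, ignoring excited states and the mixing of neutral mesons; this is an illustration of the counting, not a rigorous enumeration):

```python
from itertools import combinations_with_replacement, product

flavors = ["u", "d", "s", "c", "b"]   # quark flavors that form hadrons (no top)

# Mesons: a quark and an antiquark, each of any flavor, with each flavor
# combination coming in a spin-0 and a spin-1 ground state.
mesons = len(list(product(flavors, flavors))) * 2              # 25 * 2 = 50

# Baryons: three quarks, order irrelevant.  Every flavor multiset has a
# spin-3/2 ground state; spin-1/2 ground states exist except when all three
# flavors are identical (two states for three distinct flavors, one when a
# flavor is repeated, as with the Lambda and Sigma-zero for u, d, s).
spin_3_2 = len(list(combinations_with_replacement(flavors, 3)))   # 35
spin_1_2 = sum({1: 0, 2: 1, 3: 2}[len(set(combo))]
               for combo in combinations_with_replacement(flavors, 3))   # 40
baryons = spin_3_2 + spin_1_2                                   # 75

print(mesons, baryons, mesons + baryons)   # 50 75 125 -> "roughly one hundred"
```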

In many practical applications, an approximation of reality that ignores the masses of the lightest three kinds of quarks, and the existence of some or all of the heaviest three kinds of quarks, is adequate to provide results that are as accurate as can be calculated, because the ephemeral particles made of heavier quarks are hard or impossible to form due to mass-energy conservation, and often have only a minor impact on lower energy physical systems. These models exclude the vast majority of these exotic hadrons.

Several other kinds of composite quark and gluon particles are not obviously forbidden by the Standard Model, but have not been observed and definitively identified. These include quarkless "glueballs" made entirely of gluons, tetraquarks and pentaquarks.

Potential tetraquark resonances seen to date have turned out to be, in fact, "molecules" of discrete mesons rather than single coherent four quark composite particles. Numerous theoretical papers have described the properties that glueballs ought to have, but in the absence of experimental evidence (which is hard to amass since glueballs would be similar in many ways to hadrons with neutral electrical charge), we can't be certain that some law of nature not currently known to us forbids their formation or makes them so absurdly rare that we will never see one.

Mean Hadron and Fundamental Particle Lifetimes

The only hadrons that are stable are the proton and the bound neutron. The proton is stable with a mean lifetime at least as long as the age of the universe, and the neutron, which is not stable when not confined within an atom of a stable isotope, has a mean lifetime of about 886 seconds (about fourteen minutes and 46 seconds). The runner up, the charged pion, has a mean lifetime of about 2.6*10^-8 seconds, followed closely by the charged kaon with a mean lifetime of about 1.2*10^-8 seconds, followed by others with mean lifetimes hundreds to trillions of times shorter. Protons, neutrons and pions are comprised only of up quarks, down quarks and their antiparticles. Kaons also incorporate strange quarks. The longest lived hadrons that contain charm or bottom quarks have mean lifetimes on the order of 10^-12 seconds (about one trillionth of a second).

By way of comparison, electrons are stable, second generation heavy electrons which are called muons have a mean lifetime on the order of 10^-6 seconds, and third generation even heavier electrons called tauons have a mean lifetime on the order of 10^-13 seconds. The massive bosons of the electroweak force (the W+, W-, and Z bosons and the Higgs boson) are likewise ephemeral, as are solitary top quarks, which essentially always decay before they can form hadrons.

Hadron Volume

While quarks and other fundamental particles in the Standard Model are conceived of as being point-like, hadrons have a radius (that can be defined in several ways) on the order of 0.85*10^-15 meters (roughly one femtometer), the experimentally measured size of a proton, which is small, but is about twenty orders of magnitude longer than the hypothetical minimum Planck length favored in many quantum gravity theories.

This scale is set largely by the form of the equations of the strong nuclear force.  At very small distances relative to this distance it is repulsive.  At longer distances it grows incredibly strong.  In between, the quarks bound by it are "asymptotically free".

While exotic hadron volume is rarely directly measured, it can be expected to vary to a similar extent as the strong force field binding energy of hadrons, which is pretty much all within an order of magnitude.

Electron orbits around atomic nuclei are much tighter than the gravitationally bound orbits of objects in our solar system, which are often used as an analogy to them.  But, like our solar system, the vast majority of an atom, and of the matter made of atoms and molecules, is empty space.

Hadron Masses

The lightest of the hadrons (the "neutral pion") has a mass of about 0.1349766(6) GeV. Both the proton and the neutron have masses that are almost, but not quite, identical: about 0.938272013(23) GeV for the proton and 0.939565346(23) GeV for the neutron (a difference of about 0.14%). Approximately thirty meson masses and forty-two baryon masses have been measured to date. Several dozen more combinations are theoretically possible, but belong to the mountain of experimental data for which some basic properties and approximate mass resonances have been observed, but which have not been susceptible to a definitive identification with a particular predicted composite particle of QCD.

The heaviest observed three quark particle (i.e. baryon) whose mass has been precisely measured, called the "bottom omega" and made of two strange quarks and a bottom quark bound by gluons, has a mass of about 6.165(23) GeV (and is the least precisely measured mass of the lot, at an accuracy of about a third of a percent). The heaviest observed two quark particle (i.e. meson) whose mass has been precisely measured, called the upsilon, is made of a bottom quark and an anti-bottom quark bound by gluons and has a mass of about 9.46030(26) GeV. The heaviest theoretically possible meson or baryon (that does not have a top quark as a component), which has not yet been observed and is called the triple bottom omega baryon, should have a mass of about 15 GeV.

The heaviest theoretically possible hadron is about 100 times as heavy as the lightest possible hadron, a much narrower range of masses than the range of masses for the fundamental particles of the Standard Model (which range over about 21 orders of magnitude from the top quark to the lightest neutrino), or even the quarks themselves (which have a range of masses of about 100,000 to 1). The range of hadron masses is bounded in part because the heaviest quark, the top quark, does not form hadrons. The range of rest masses of the five quarks that form hadrons is about 3,000 to one.

Equally or more importantly, this is because the color charge interactions of any kinds of quarks in two and three quark particles, respectively, are almost (but not exactly) the same. The amount of strong nuclear force field energy necessary to bind an exotic spin-3/2 baryon made of the heaviest quarks is only about 30% greater than the amount of energy necessary to bind an ordinary proton or neutron, and is basically the same for all spin-3/2 baryons (the ones with the most binding energy have only about 3% more binding energy than the ones with the least, a difference about ten to one hundred times larger than the uncertainty in the experimental measurements themselves).

There is more variation in the amount of strong nuclear force field energy that binds together spin-1/2 baryons, but none have a binding energy that is more than about 40% greater than that of an ordinary proton or neutron (which have the least).

Moreover, this range is greatly inflated by a handful of the heaviest and most rare varieties of hadrons.

The stability of the amount of hadron mass attributable to this binding energy matters a great deal because, in a proton or neutron, the sum of the three fundamental up and down quark rest masses is equal to roughly 1% of the total mass of the nucleon. In contrast, in a bottom omega baryon, the sum of the rest masses of the constituent quarks is about 71% of the whole particle's mass, and in the heaviest experimentally measured hadron, the upsilon, the sum of the masses of the constituent quarks is about 89% of the whole particle's mass.  In the heaviest theoretically possible hadrons, the ratio of fundamental particle mass to hadron mass would be even greater.
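To illustrate those percentages, here is a short sketch using approximate current quark masses at the conventional 2 GeV scale (the exact fractions shift somewhat with the quark mass scheme assumed):

```python
# Fraction of a hadron's mass attributable to its constituent quarks' rest masses.
quark_mass = {"u": 0.0023, "d": 0.0048, "s": 0.095, "b": 4.19}   # GeV, approximate

hadrons = {
    "proton (uud)":       (["u", "u", "d"], 0.938),
    "bottom omega (ssb)": (["s", "s", "b"], 6.165),
    "upsilon (b bbar)":   (["b", "b"],      9.460),
}
for name, (quarks, hadron_mass) in hadrons.items():
    fraction = sum(quark_mass[q] for q in quarks) / hadron_mass
    print(f"{name}: quark rest masses ~ {fraction:.0%} of the hadron's mass")
# proton ~1%, bottom omega ~71%, upsilon ~89%; the rest is binding field energy.
```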

Of course, hadrons in turn bind themselves into one of about 120 different kinds of atoms, in a wide variety of isotopes, i.e. numbers of neutrons in an atom of N protons (only a small portion of which are stable), whose nuclei are made entirely of protons and neutrons.

Hadron Density

Atomic nuclei, in general, have approximately the same density as neutron stars, which are the most dense known objects in the universe outside of black holes.  Indeed, large black holes have less mass per volume defined by their event horizons than neutron stars do.  Black holes have a declining mass per event horizon volume as they acquire more mass.  Atomic nuclei are significantly below the density needed to form a black hole at their scale according to General Relativity, although it isn't obvious that General Relativity applies at such small scales without modification, since it is a classical rather than a quantum theory that is being applied to a quantum scale in this context.
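A quick sketch of why the mean density inside the event horizon falls as a black hole grows: the Schwarzschild radius scales linearly with mass, so the mean density scales as 1/M^2 (the neutron star density used below is a round reference value assumed only for comparison):

```python
import math

G = 6.674e-11              # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8                # m/s, speed of light
M_SUN = 1.989e30           # kg, solar mass
RHO_NEUTRON_STAR = 5e17    # kg/m^3, rough typical neutron star density

def mean_density(mass_kg):
    """Mass divided by the volume enclosed by the Schwarzschild radius."""
    r = 2 * G * mass_kg / c**2
    return mass_kg / ((4.0 / 3.0) * math.pi * r**3)

for solar_masses in (3, 10, 4e6):   # stellar-mass examples and a Sgr A*-sized one
    rho = mean_density(solar_masses * M_SUN)
    print(f"{solar_masses:>9} M_sun: ~{rho:.1e} kg/m^3 "
          f"({rho / RHO_NEUTRON_STAR:.1e} x neutron star density)")
```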

It is possible that higher generation quarks like strange quarks or hadrons made of them may be stable in extreme circumstances like extremely dense neutron stars (or perhaps utterly beyond our observation inside black holes), but there is no solid evidence that such quark stars actually exist.

Theory Lags Behind Experiment In Hadron Physics

First principles theoretical calculations of proton and neutron masses are accurate to about 1% in absolute terms, although more precise theoretical predictions can be made for heavier exotic hadrons.  It is also possible to calculate from first principles, to an order of magnitude accuracy, the much smaller mass difference between the proton and neutron masses, even though this is only about 0.1% of the mass of the proton. Even the more precise theoretical determination of the difference of the two masses is about 4,000 times less precise than experimental measurements of this mass difference.

Quantum chromodynamics is the virtually unchallenged contender for this part of the Standard Model, mostly because none of its theoretical predictions have been contradicted and because no one else has come up with any really credible alternatives that make more precise predictions. The only particles it predicts that we haven't seen are particles that are hard to observe and classify, which we may actually have seen already. Every particle that we have been able to observe carefully enough to classify has been susceptible to being fit into QCD's built in taxonomy with a modicum of ingenuity.

An important reason for this difficulty in making accurate calculations is that it is quite difficult to determine the quark masses precisely from the roughly seventy-two available hadron data points, some less direct data points (like measurements of the strong force coupling constant, which are accurate to at least about four significant digits), and the known values of the Standard Model coupling constants. The lighter the quark, the less precisely its mass is known, because the mass attributable to gluon interactions so profoundly overwhelms the fundamental quark masses. Turning those data points into theoretical constant values involves great computational difficulties, so mostly physicists resort to methods only moderately more sophisticated than a basic spreadsheet comparing the seventy-two data points with the known composition of the particles in question (determined based upon their other properties like charge and spin).

These percentages are a bit slippery because fundamental particle masses "run" with the energy level of the circumstances in which they are observed, so any single mass value for them necessarily includes contextual assumptions about the measurement that may be inconsistent with the context in which the hadron that contains them is observed. Often the 2 GeV energy scale, about the mass of two protons at rest, is used as a standard. Likewise, the conventional view that the additional mass associated with quarks within hadrons is localized in the dynamically generated gluon masses within the confined quark system is to a great extent a model dependent feature, and one could imagine a coherent model in which that additional mass was apportioned to the constituent quarks, which would be hard (although not necessarily impossible) to distinguish experimentally.

In a calculation like the calculation of the proton-neutron mass difference, uncertainty regarding the values of the fundamental constants gives rise to something on the order of two-thirds of the uncertainty in the theoretical prediction, while roughly the remaining third of the uncertainty is due to truncation of the infinitely long series of terms that the QCD equations tell us gives the exact value, which can be approximated numerically but not calculated precisely in all but the most simple cases.