Wednesday, November 30, 2011

Ritual Human Sacrifice In Pagan Iceland

There is archaeological evidence in Iceland suggestive of ritual human sacrifice in the period prior to its conversion to Christianity ca. 1000 CE.

Tuesday, November 29, 2011

Is Dali Man Really A 209,000 Year Old Modern Human? No.

There are only a few purportedly modern human skeletal remains outside Africa from before 100,000 years ago, when the first modern human remains appear in the Levant. Among these are the Dali Man remains "discovered by Shuntang Liu in 1978 in Dali County in the Shaanxi Province of China," which are claimed to be 209,000 years old. Images of these remains can be found here.

Shaanxi Province (via Wikipedia)

The Dali Man find is described in the Wikipedia link above as follows:

The Dali cranium is interesting to modern anthropologists as it is possibly an ideal specimen of an archaic Homo sapiens. It has a mixture of traits from both Homo erectus and Homo sapiens. The details of the face and skull are however distinct from European Neandertals and earlier European hominids like the finds from Petralona and Atapuerca.

Skull vault

The skull is low and long, though the posterior end of the skull is rounded, unlike the contemporary broad-based H. erectus or top-wide skull of modern humans. It does however bear a prominent sagittal crest, a trait found in H. erectus but in few modern humans. The brain appears to have been sitting mainly behind the face, giving an extremely low forehead. The cranial capacity is estimated to around 1,120 cc, at the lower end of the modern human range, and upper end of the H. erectus range. The base of the cranium is less robust than in H. erectus. The posterior margin lacks the heavy neck muscle attachment seen in that group. Unlike the distinct tubular form seen in H. erectus, the tympanic plate is thin and foreshortened, a condition similar to that of modern humans.

Unlike H. erectus skulls, the Dali skulls lack the "pinched" look between the face and the cranial vault.


The face is topped by massive brow ridges. The ridges curve over each eye, unlike the straight bar-like ridges seen at the Peking man material from Zhoukoudian. The curvature is more similar structurally to the brow ridges in archaic humans from Europe and Africa. The cheek bones are delicate, and the nasal bone flattened, again a curious combination of traits. During fossilization, the upper jaw has been fractured and dislocated upwards, giving the cranium the appearance of having a very short face. If reconstructed, the face would probably be similar in overall dimensions to that of the Jinniushan man skull.

The Jinniushan cranium, found in China in 1985, is 250,000-260,000 years old and similarly shows strong Homo Erectus traits.

Dali Man and the Jinniushan cranium rival in age some of the oldest anatomically modern human remains from Africa. But, my own take is that Dali Man is simply a highly evolved descendant of Homo Erectus, arguably deserving a new species name, perhaps Homo Denisovia, if he is of the same species as the Denisovans, which seems likely given his proximity to them in time and geography. He may have evolved some Homo Sapiens-like traits in parallel with modern humans, but the more plausible reading of the totality of the facts is that neither Dali Man nor Jinniushan man is in any way genetically descended from modern humans.

Then, there is also "Maba Man, a 120 to 140,000 year old fragmentary skull from Guangdong in China shows the same general contours of the forehead." As explained in the image link above:

The Maba cranium, dated to approximately 120,000 years ago was discovered in 1958 in the southern Chinese province of Guangdong. . . . It was initially thought to be an Asian Neandertal but does not in fact show any of the derived features of Neandertals as known from Europe and the Near East. The Maba skull is similar to other more complete finds . . . subsequently found in China, differing only in minor ways, such as the size and shape of the orbits and nasal bones. Maba is also somewhat reminiscent of the recently discovered Narmada skull from India.

Maba Man too seems more likely to be a late evolutionary stage of Asian Homo Erectus than any form of Homo Sapiens, hybrid or otherwise. The Narmada cranium from central India, dated to 236,000 years ago, to which these finds are often compared, is likewise probably a Homo Erectus, although from a West Eurasian, rather than an East Asian, subspecies.

Indeed, Maba Man shows continuity of the Homo Erectus species in Asia pretty much right up until modern humans arrived and admixed with a species genetically akin to the Denisovan remains in Siberia, a species which then went extinct upon contact with modern humans, just like the Neanderthals and many species of megafauna.

Megafauna Extinctions In Japan

As an aside, it is interesting to note that Japan's major megafauna extinction took place around 12,000 years ago, even though it was inhabited by the modern human Jomon people from perhaps as much as 35,000 years ago. In other places, like North America and Australia, megafauna extinctions followed much more quickly after the arrival of modern humans and it isn't clear why Japan was different.

UPDATE: John Hawks discusses how anthropology resolves these kinds of issues, opening with a quote arising from the study of Asian Homo Erectus.

Monday, November 28, 2011

Extended Particle Models

Between preon models, which assume that the so-called fundamental particles are really composites of smaller pieces, and point particle models, which are inconsistent with general relativity, lie extended particle models, whose particles occupy a non-point-like volume, although the line between the categories is a fine one.

A recent example of such a model is Chih-Hsun Lin, Jurgen Ulbricht, Jian Wu, Jiawei Zhao, "Experimental and Theoretical Evidence for Extended Particle Models" (2010). The abstract of the 147 page paper (which has 41 figures) reads, in part, as follows:

We review the experimental searches on those interactions where the fundamental particles could exhibit a non point-like behavior. In particular we focus on the QED reaction measuring the differential cross sections for the process e+e- → γγ at energies from sqrt(s) = 55 GeV to 207 GeV using the data collected with the VENUS, TOPAZ, ALEPH, DELPHI, L3 and OPAL detectors from 1989 to 2003.

The global fit to the data is 5 standard deviations away from the standard model expectation for the hypothesis of an excited state of the electron, corresponding to the cut-off scale Lambda =12.5 TeV.

Assuming that this cut-off scale restricts the characteristic size of the QED interaction to 15.7x10^{-18} cm, we perform an effort to assign, in a semi-mechanical way, all available properties of fundamental particles to a hypothetical classical object. Such an object can be modeled as a classical gyroscope consisting of a non-rotating inner massive kernel surrounded by an outer rotating massive layer equipped with charges sorted in a way to match the charge contents for different interactions. The model size of an electron, 1.86x10^{-17} cm, agrees with the experiment. The introduction of a particle-like structure related to gravity allows the inner mass kernel of an electron to be estimated at 1.7x10^{-19} cm and the mass of a scalar at 154 GeV. The extension of the model to electrically charged particle-like structures in nonlinear electrodynamics coupled to General Relativity confirms the model in the global geometrical structure of mass and field distribution.

Some of the same authors have explored the same ideas in papers here (2009), here (2003), here (2001), and here (1999).

The 2009 paper is much shorter and has a nice survey of the research in the field, with this particular approach related closely to non-linear electrodynamics coupled to gravity (NED-GR) theories and the Born-Infeld Lagrangian, discussed, for example, here (2010) and here (2009). Most of the literature related to this focuses on modeling atypical hypothetical types of black holes.

Ultimately, the matter goes to fundamental issues of quantum gravity discussed, for example, in this paper (2000) that systematically explores different possible couplings of the electromagnetic field and gravity.

Ancient DNA Suggests Multiple Neolithic Population Waves In Iberia

We successfully extracted and amplified mitochondrial DNA from 13 human specimens, found at three archaeological sites dated back to the Cardial culture in the Early Neolithic (Can Sadurní and Chaves) and to the Late Early Neolithic (Sant Pau del Camp). We found that haplogroups with a low frequency in modern populations—N* and X1—are found at higher frequencies in our Early Neolithic population (∼31%). Genetic differentiation between Early and Middle Neolithic populations was significant (FST∼0.13, P less than 10−5), suggesting that genetic drift played an important role at this time.

To improve our understanding of the Neolithic demographic processes, we used a Bayesian coalescence-based simulation approach to identify the most likely of three demographic scenarios that might explain the genetic data. The three scenarios were chosen to reflect archaeological knowledge and previous genetic studies using similar inferential approaches. We found that models that ignore population structure, as previously used in aDNA studies, are unlikely to explain the data. Our results are compatible with a pioneer colonization of northeastern Iberia at the Early Neolithic characterized by the arrival of small genetically distinctive groups, showing cultural and genetic connections with the Near East.

Via Dienekes.

Based on ancient DNA, this study makes three notable claims:

(1) The Cardial Pottery culture had a strong demic component; it cannot be explained by cultural diffusion in the absence of migration.

(2) There was a demographically distinct wave of migration to Iberia after the initial early Neolithic Cardial Pottery wave and before the Bronze Age, i.e. in the Middle Neolithic.

(3) There is a lack of genetic continuity between the early Cardial Pottery culture and modern Iberians (or for that matter, modern Europeans). A model in which modern Iberians are descendants of the first farmers and herders, or of hunter-gatherers who learned to farm and herd from their neighbors and trade partners via cultural diffusion, is wrong.


Maju reports in the comments to the linked post that the full haplogroup breakdown in this sample was "3H, 2N*, 1U5, 1K and 1X1" (and implicitly that five samples didn't produce definitive haplogroup identifications) and that "another paper (published in Spanish language only, from 2009 but unknown) on very early Neolithic DNA from Navarre [found mtDNA haplogroups] . . . 3H*, 2H3, 1U, 1K, 1I and 1 HV."

Without considering subtypes, and lumping the two samples together, one gets a total sixteen data point sample from two contemporaneous but independent sources:

7H, 2N, 2U, 2K, 1I, 1 HV and 1 X1.

The pre-Neolithic origins of the mtDNA U samples in each group are relatively uncontroversial. Maju argues, and Dienekes disputes, that H is also pre-Neolithic.

The notion that one can make a p=0.00001 claim about anything with a sample size of thirteen or less is doubtful. Small samples are inherently prone to contain flukes. The margin of error at the 95% confidence level in a sample of this size, for a trait upon which there is an even split in the total population, is +/- 27 percentage points (i.e. a 50% result is consistent with an actual percentage in the population of 23%-77%).
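For what it's worth, that margin of error figure can be reproduced with a quick back-of-the-envelope sketch (a standard normal-approximation binomial interval; the n=13 and even-split values are the ones discussed above):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the normal-approximation 95% confidence
    interval for a proportion p observed in a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (an even split, p = 0.5) for a sample of thirteen
# ancient DNA specimens.
moe = margin_of_error(0.5, 13)
print(f"+/- {moe * 100:.0f} percentage points")  # roughly +/- 27 points
```

The same function gives about +/- 35 points for the n=8 subsample and still about +/- 24 points for the pooled n=16 sample, which is why pooling the two sources helps only modestly.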

Also, unlike ordinary sample populations, ancient DNA samples from a single region are very likely not independent of each other; there is a good chance that some samples come from members of the same family who were buried together precisely because they were family, while burial sites from other families in the same community are absent from the sample because they weren't as well preserved.

Thus, the fact that a group of haplogroups that are rare today made up 31% of a non-random sample then could easily be consistent with a frequency in the overall early Neolithic population of well under 10%.

You can be certain that each of the haplogroups in the sample was present at the time. You can be reasonably confident that groupings of haplogroups that are common, if there is any sound reason for the grouping, were not rare at the time. You can say that it is possible that haplogroups that don't show up at all might not have been present at the time and can be more confident in that conclusion if that haplogroup suddenly appears in reasonable frequency at a more shallow time depth and as the total sample size at the older time depth grows. You can't say a whole lot else.

The overlap of mtDNA haplogroups H, U and K at reasonably similar frequencies in two early Neolithic Iberian ancient DNA samples that probably are independent of each other does suggest that there is a good chance that the percentages of mtDNA H, U and K in these samples are not wildly atypical and that N*, I, HV and X1 were all present in the larger early Neolithic Iberian population, probably at more than vanishingly small frequencies.

While the early Neolithic ancient Iberian mtDNA isn't really consistent with modern distributions, since the odds of drawing that many now-rare haplogroups from a modern Iberian population wouldn't be that great, the mismatch between modern Iberian mtDNA and early Neolithic Iberian mtDNA isn't huge either. An identically sized sample from modern Iberians would probably overlap more than 65% in broad haplogroup type with this early Neolithic sample, and finer distinctions could simply represent mutations within a population in continuity with the modern one. I suspect that in this case, the mtDNA has been less volatile than the Y-DNA over the last 7,500 years or so.

We think we know that the mtDNA X1 v. X2 split dates to a time prior to the arrival of proto-Native Americans in the Americas, thousands of years before the Neolithic revolution. The Druze of the Levant are the only modern population of which I am aware that has significant proportions of both X1 and X2. This suggests that the Druze are either descendants of the ancestral X* population, or are an admixed population with sources in a population with significant X1 percentages and a population with significant X2 percentages in the place where Druze ethnogenesis took place, which by tradition would be in the mountains of Turkey or Iran.

But, tracing an mtDNA lineage to the Near East doesn't necessarily tell you very much about its time depth. It is undisputed that all modern humans in Europe arrived from a source in common with the Near East in the last 40,000 years or so, and that there was a major reduction in the modern human population of Europe at the Last Glacial Maximum around 20,000 years ago. There is little evidence to tell us how much of a demographic influx, if any, Europe experienced between the Last Glacial Maximum and the early Neolithic, but it wouldn't be controversial to argue that there was at least some population influx into Europe from the Near East in that time period. Mutation rate based dating also provides very little insight into whether a particular haplogroup had Epipaleolithic or early Neolithic origins.

The ancient DNA evidence for anything other than a couple of subhaplogroups of mtDNA haplogroup U (e.g. U4 and U5) prior to the early Neolithic in Europe is very thin, and as Dienekes notes in the comments to the linked post, there is reason to be skeptical of the one unpublished study that has made that claim. (As far as I know, ancient DNA samples from North Africa placing other haplogroups commonly found in Europe there as far back as 12,000 years ago are less controversial.) Certainly, non-U mtDNA haplogroups were very rare outside Southern Europe for most of the Upper Paleolithic era. But, it is harder to rule out the possibility that there was an influx of people more genetically similar to modern Southern Europeans into Southern Europe, probably via a mostly coastal route, perhaps 14,000 years ago or so: a relatively late and geographically confined part of the Upper Paleolithic, but still many thousands of years before farmers or herders from the Near East arrived on the scene.

In an ancient mtDNA sample from pre-Roman Iron Age Iberia, (apparently N=17), "[t]he most frequent haplogroup is H (52.9%), followed by U (17.6%), J (11.8%), and pre-HV, K and T at the same frequency (5.9%). No samples were found to correspond to other haplogroups that are widely present in the Iberian peninsula populations (Table 7), such as V, X, I or W. The North African U6 subhaplogroup and Sub-Saharan African L lineages are also absent from the ancient Iberians analyzed so far[.]" More Iberian ancient DNA links can be found here.

The overlap between this sample and the early Neolithic sample is even stronger. The H, U and K percentages in the Iron Age sample are well within sampling error ranges of either of the early Neolithic samples (or both combined), and those three haplogroups make up 76.4% of the Iron Age sample. This would suggest that the metal age matriline contribution to the Iberian gene pool was pretty modest, despite suggestive evidence that the Y-DNA contribution was greater. Given the proven antiquity of HV in Iberia, the likelihood that pre-HV is not a relatively recent arrival either seems pretty strong, suggesting an early Neolithic to Iron Age mtDNA shift in Iberia of less than 20% (subject to a significant margin of error).
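The "well within sampling error" point can be illustrated with a crude sketch (normal-approximation intervals, which are rough at N=17; the 52.9% figure is from the quoted Iron Age study, and 7/16 is the pooled early Neolithic H tally discussed earlier):

```python
import math

def ci95(p, n):
    """Normal-approximation 95% confidence interval for a proportion."""
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return (p - half, p + half)

lo, hi = ci95(9 / 17, 17)   # H at 52.9% (9 of 17) in the Iron Age sample
neolithic_h = 7 / 16        # pooled early Neolithic H frequency (43.75%)

print(f"Iron Age H interval: {lo:.0%} to {hi:.0%}")
print("Neolithic H frequency inside it:", lo <= neolithic_h <= hi)
```

With intervals this wide, only quite large frequency shifts between periods would be statistically detectable.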

There was also more mtDNA continuity from ancient mtDNA of the early Neolithic era to the present in Iberia than in Central Europe where the ancient DNA of LBK Neolithic peoples is in stark contrast to that of both modern Central Europeans and Central European hunter-gatherers.

The mtDNA case for a repopulation of Europe after the Last Glacial Maximum from a Southwest European refugium, with population expansion taking hold around 14,000 years ago, can be seen here. That case is calibrated to a great extent by mtDNA mutation rate dating of the modern distributions of haplogroups H and V and their subtypes (which, while subject to serious doubts and error bars, is certainly more reliable than Y-DNA mutation rate dating).

Ultimately, the question of how much of Iberia's (and Europe's) mtDNA is Epipaleolithic and how much is Neolithic or later mostly boils down to your assessment of when mtDNA H (and probably HV) arrived. If these haplogroups are Epipaleolithic, then 80% or so of Europe's matrilines are ancient; if they are Neolithic, then 80% or so of Europe's matrilines were replaced in the Neolithic revolution or afterwards.

The ancient DNA makes clear that H has been in Iberia at least since the early Neolithic, so if it is Neolithic or later, it was there from the start. But, it doesn't tell us if 5500 BCE or 14000 BCE is closer to the time when it arrived.

The mutation rate dating of mtDNA haplogroup H subhaplogroups in the Iberian refugium link above, along with the older Upper Paleolithic ancient mtDNA, does, however, make a pretty strong case that H's predominance in Europe is, at the very least, a post-LGM event, and probably many thousands of years later than that.

It also bears a mention in passing that no modern humans have ever been found to have Neanderthal mtDNA. (Query if populations with high levels of mtDNA haplogroup U5 have elevated Neanderthal DNA percentages.)

Sunday, November 27, 2011

Venezuelan Maternal Ancestry Mostly Native American

We analyzed two admixed populations that have experienced different demographic histories, namely, Caracas (n = 131) and Pueblo Llano (n = 219). The native American component of admixed Venezuelans accounted for 80% (46% haplogroup [hg] A2, 7% hg B2, 21% hg C1, and 6% hg D1) of all mtDNAs; while the sub-Saharan and European contributions made up ∼10% each, indicating that Trans-Atlantic immigrants have only partially erased the native American nature of Venezuelans. A Bayesian-based model allowed the different contributions of European countries to admixed Venezuelans to be disentangled (Spain: ∼38.4%, Portugal: ∼35.5%, Italy: ∼27.0%), in good agreement with the documented history.

From here (hat tip to Dienekes).

Thus, approximately 80% of the matrilineal ancestry of urban Venezuelans is Native American, 10% is African (presumably descendants of women brought to the New World in the slave trade), 3.8% is Spanish, 3.6% is Portuguese, and 2.7% is Italian.
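Those country-level figures are just the roughly 10% European matrilineal share multiplied by the Bayesian country proportions quoted in the abstract; a trivial sketch of the arithmetic:

```python
# Roughly 10% of Venezuelan matrilines are European; split that share
# by the country proportions reported in the study's abstract.
european_share = 0.10
country_proportions = {"Spain": 0.384, "Portugal": 0.355, "Italy": 0.270}

for country, prop in country_proportions.items():
    share = european_share * prop
    print(f"{country}: {share * 100:.2f}% of all matrilines")
```

This matches the 3.8%, 3.6% and 2.7% figures above after rounding.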

One would expect that the patrilineal ancestry of Venezuelans (and also autosomal ancestry which is derived from all ancestors, rather than merely the patriline and matriline ancestors) would have a much larger Southern European component. Also, by implication, the percentage of urban Venezuelans who have exclusively European ancestry is less than (and probably significantly less than) 10%.

The Native American component, consistent with other studies of Native American mtDNA in North America and in Latin America, shows less genetic diversity than many North American Native American populations. It remains unclear how much of that difference is due to serial founder effects and how much is due to the impact of possible subsequent waves of migration to the Americas that did not penetrate to Central and South America in significant numbers.

It is also reasonable to expect, given historical patterns of colonization in Venezuela, that the Native American component among rural populations in Venezuela is probably larger than in the populations of two of its largest cities.

This data point also supports the frequent reality that a paternal superstrate population can give rise to language shift even in the absence of a large maternal genetic contribution.

Wednesday, November 23, 2011

New Evidence Favors SE Asian or South Chinese Dog Domestication

Dogs were almost surely the first domesticated animal. But, there is dispute over where dogs were domesticated, with different studies based on different evidence reaching different conclusions. New evidence based on Y-DNA and mtDNA diversity suggests that the lion's share of modern dogs have roots in dogs domesticated in Southeast Asia or South China.

Dead Sea Scroll Writers Wore White Linen Clothes

New data points on the kind of clothing favored by the writers of the Dead Sea Scrolls (white linen, rather than the more common wool and colorful clothes of the era) support the already leading theory that the more recent of the scrolls in the group were written by members of the ancient Jewish religious group called the Essenes. The absence at the site of the equipment necessary to make linen (work that was almost exclusively done by women at the time) also suggests that the Dead Sea Scrolls may have been stored at a male dominated religious center, rather than at a place that housed a whole community of complete families.

Tuesday, November 22, 2011

Soybeans Domesticated 3500 BCE

Examination of charred soybean remains in Northern China, Japan and South Korea has pushed the date of soybean domestication back from around 0 CE (with a connection to the Zhou dynasty), the best estimate until now, to the much earlier 3500 BCE at a location in Central China, early in the East Asian Neolithic. Earlier studies of the East Asian Neolithic had found millet that old in these regions, but not soybeans, which are less easily preserved.

Soybeans (Glycine max) have been found in the wild, in a predomesticated small bean version, in the region from 7000 BCE.

Monday, November 21, 2011

Instanton Liquid Vacuum Models In QCD

Let us briefly summarize the basic picture of the QCD vacuum that has emerged from the instanton liquid model. The main point is that the gauge fields are very inhomogeneous. Strong, polarized fields are concentrated in small regions of space-time. Quark fields, on the other hand, cannot be localized inside instantons. In order to have a finite tunneling probability, quarks have to be exchanged between instantons and anti-instantons. This difference leads to significant differences between gluonic and fermionic correlation functions. Gluonic correlators are short-range and the mass scale for glueballs is significantly larger than that for mesons. In addition to that, the fact that the gluonic fields are strongly polarized leads to large spin splittings in both glueballs and light mesons.

From this 1997 review.

Basically, the strong force is concentrated in globs within the vacuum about 0.35 femtometers (fm) across that interact with each other and take up a bit less than 2% of the space-time in a hadron. Quarks hop from one of these globs to the next in quantum tunneling interactions (i.e. they can move from point A to point B in a field even though they lack the energy to cross the field potentials between those two points) at predictable rates.
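That "a bit less than 2%" figure is consistent with the conventional instanton liquid parameters; a quick sketch (the density value n ≈ 1 per fm⁴ is the standard instanton liquid model assumption, not a number taken from the review's text):

```python
# Space-time packing fraction of the instanton liquid: density times
# the fourth power of the typical instanton size (both in fm units,
# so the product is dimensionless).
rho = 0.35  # typical instanton size in fm (the "glob" size above)
n = 1.0     # instanton density in fm^-4 (standard instanton liquid value)

diluteness = n * rho**4
print(f"packing fraction ~ {diluteness:.1%}")  # about 1.5%
```

This diluteness is what makes the instanton liquid picture tractable: the globs are strong but sparse, so they can be treated as a dilute ensemble.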

This model remains relevant and accurate in predicting a wide variety of experimental and lattice QCD results, such as the conditions under which quark-gluon plasmas form, and provides a way to mediate between raw QCD equations for interactions between individual quarks and gluons and the phenomenological models that fit the experimental results.

Friday, November 18, 2011

Julien Riel-Salvatore on Neanderthal Assimilation To Modern Humans

The Denver Post is hyping a new finding by hometown Neanderthal expert Julien Riel-Salvatore (author of the blog "A Very Remote Period Indeed" in the sidebar), which shows how Neanderthals could have been wiped out as a species simply by admixture with modern humans who had superior population densities.

"Neanderthals were not inferior to modern humans," said Julien Riel-Salvatore, assistant professor of anthropology at UC Denver and co-author of the study published in Human Ecology.

"They were just as adaptable and in many ways simply victims of their own success," he said. . . .

Riel-Salvatore said the new computer model takes the added step of showing how Neanderthals and homo sapiens roamed farther afield than assumed.

The computer models ran for 1,500 generations, following estimated samples of humans and Neanderthals roving across the two continents.

"You set some baseline parameters and see how these 'agents' interact," said Riel- Salvatore, who co-wrote the study with C. Michael Barton of Arizona State University. One parameter included researchers' knowledge that the last ice age forced Neanderthals and humans to widen their search for food.

They interacted, and because there were far more humans, over time that species diluted Neanderthals until they appear as just a fraction of our DNA.

Evidence of Neanderthal prowess as mates follows a March study co-authored by CU-Boulder, showing Neanderthals were experts at keeping fires burning continually. That skill, key to flourishing, had previously been thought beyond their reach.

"It shows you don't need to depend on the old cliches to explain the disappearance of Neanderthals," Riel-Salvatore said. "You don't need to make assumptions about them being stupid or less flexible."

I have two main criticisms of Riel-Salvatore's take on the matter.

First, the absence of any Neanderthal Y-DNA or mtDNA despite the fact that 1-4% of autosomal DNA in modern non-Africans has a Neanderthal source does not support the kind of admixture dilution model that he is proposing. You need a very specific kind of admixture that takes into account factors like differences between hybrid fertility and intraspecies fertility to get the observed result, and you also need to take into account post-Neanderthal replacements of much of the modern human population of Europe to make the geographic uniformity of the Neanderthal percentages fit what is observed. Also, there is pretty solid evidence to indicate that there were no significantly mixed Neanderthal-modern human communities; tribes from the different kinds of hominin mostly kept their distance from each other. There is not a single case of a Neanderthal skeleton and a modern human skeleton buried in the same graveyard in the same culturally continuous time frame, although there is evidence of modern humans using caves previously inhabited in prior eras by Neanderthals.

Second, there is a quite strong case to be made that Neanderthals, while more sophisticated than Homo Erectus, for example, were not the intellectual equals of modern humans. Their material culture remained stagnant for 200,000 years, while modern human technologies evolved continuously and at a much higher rate. Some of the technological advances attributed to them during the contact period with modern humans (e.g. the Uluzzian) have been determined to very likely have been modern human tool cultures, and in the cases where there was change in Neanderthal tool culture, the change was probably a lower quality imitation of modern humans and didn't happen until contact took place. Their food sources were much less diverse (they were pretty much exclusively apex predators, while modern humans ate more fish, more small game, and more gathered fruits and vegetables, although Neanderthals did do some non-harpoon fishing). Their geographic range was much narrower (they didn't make it into Southeast or East Asia, nor did they make it to Africa, and they had a more limited presence in the far North and Siberia, although there are some traces of them in parts of these places). The quantity and quality of Neanderthal artistic expression and symbolic activity was a pale shadow of that of modern humans (they had simple burials but no elaborate paintings or sculptures). The range of materials that Neanderthals used in tools was smaller (e.g. they used far less processed bone and twine). They had less of an ecological impact on megafauna than modern humans, suggesting that they were less successful hunters. And their social groupings were smaller from a quite early point.

You may not have to make assumptions about Neanderthals being stupid or less flexible than modern humans to explain their demise. But, despite having relatively large brains, and genes that would have given them more language abilities than some of their hominin predecessors, there is ample evidence from the material culture that they left behind that they were not as adaptable as modern humans. And, if one can determine that from independent evidence, it isn't unreasonable to include that in models of how Neanderthals went extinct.

Mid-November And No Definitive Higgs Boson Signals Yet

The latest Higgs boson exclusion plot in the interesting mass region vs. viXra crude combinations from months ago (in red)

Phil Gibbs has the latest combination plot from the Higgs boson search at the LHC, which closely confirms his much earlier back-of-napkin combination in all the places where it matters.

According to Jester, the latest report is that "the Standard Model Higgs is excluded at 95% confidence level for all masses between 141 GeV and 476 GeV." The exclusion is 132-476+ GeV at 90% confidence, and electroweak vacuum stability considerations provide a further bias towards a low end mass range if there is a Higgs boson (as do electroweak precision measurements). In fact, that is conservative. There really isn't a plausible Higgs boson signal at any mass up to 600 GeV, and some of the potential masses between 120 GeV and 140 GeV also seem highly unlikely given the data assembled to date from all sources.

A Higgs boson mass of less than 115 GeV was ruled out before the LHC started its work.

The absence of a Higgs boson of less than 476 GeV would rule out both the Standard Model Higgs boson and a large swath of the more popular SUSY models. Theorists are prepared, of course, with "heavy Higgs" (see, e.g. Marco Frasca) and "Higgsless" models if they become necessary to explain the data, but this result would clear all of the front runners out of the race to be the correct theory of particle physics, at least in their unmodified forms. There are also a variety of plausible theories that could suppress the signal of a Higgs boson, while allowing it to still do the job that it does in the Standard Model.

One hot spot where a Standard Model Higgs boson could appear is a little under 120 GeV, where there is perhaps a bit over a 2 sigma excess over the expected value (1.6 sigma with look elsewhere effects considered). The excess there has a fairly narrow width, the electroweak precision predictions are favorable, and the resolution of the data to date is lowest, so there couldn't be a strong signal there no matter what, due to sample size limitations.

The other hot spot is around a 140 GeV mass, where there is a wide excess of about two sigma, but the resolution of the data to date would suggest that a real SM Higgs boson would produce a stronger and sharper signal by this point (the mean result we would expect at this mass, if there were a SM Higgs boson there, would be about 4 sigma for the amount of data collected to date). The only explanation I've heard for a wide excess from about 125 GeV to 150 GeV in the combined plot is that there is probably something wrong in the Standard Model background modeling, or some other sort of unknown systematic error, as no particle predicted by any model that anyone is taking very seriously produces that kind of extremely wide excess. CMS and ATLAS show signals in opposite directions in some subcomponents of their experiments, which is less encouraging than weak signals in the same direction from both.

Data from the diphoton channel disfavors a Standard Model Higgs boson with a mass from about 120 GeV to 137 GeV, leaving only the windows from 115 GeV to 120 GeV and from 137 GeV to 141 GeV looking possible in that channel. The so-called "golden channel" data also disfavor Higgs boson masses in the high 120s and low 130s GeV mass range.

Corrected for the "Look Elsewhere Effect" (LEE), the maximum statistical significance of the combined result is a 1.6 sigma result just under 120 GeV and well under 1 sigma elsewhere (even if there were a Higgs boson, an exclusion below 124 GeV was predicted to be possible with the amount of data collected so far, and in actuality the exclusion was pushed down to 141 GeV). The broad width of the excess in the low mass range comes from the mass uncertainty in the ZZ and WW channels, modulated by the diphoton channel, and the number of excess events there in absolute terms isn't that big. The excess events are closer to the Standard Model background prediction than to the signal of a Higgs boson at that mass.
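To illustrate why the look elsewhere correction matters, here is a toy Python sketch. The number of quasi-independent mass bins below is an illustrative assumption, not the actual resolution of the ATLAS and CMS searches; the point is just that a 2 sigma local excess somewhere in a scan is not at all unlikely.

```python
import math

# Toy illustration of the Look Elsewhere Effect (LEE): the chance of a
# >= 2 sigma local fluctuation *somewhere* among N independent mass bins
# is much larger than the 2 sigma tail probability for a single bin.

def one_sided_p(sigma):
    """One-sided Gaussian tail probability for a given significance."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

local_p = one_sided_p(2.0)              # ~0.023 for a single bin
n_bins = 20                             # assumed number of quasi-independent bins
global_p = 1 - (1 - local_p) ** n_bins  # chance of >= 2 sigma anywhere

print(f"local p-value:  {local_p:.4f}")
print(f"global p-value: {global_p:.4f}")  # ~0.37: a 2 sigma bump somewhere is unremarkable
```

With twenty assumed bins, a bump that looks like a one-in-forty fluke locally is actually better than a one-in-three proposition globally, which is why the 2-plus sigma local excess deflates to 1.6 sigma after correction.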

A CERN meeting in December would be the first possible date to announce preliminary results from the full 2011 run of both experiments, although official announcements of the analysis of that data could be as late as a conference planned for March 2012. Yet, even if there is no official announcement until March, it is hard to believe that credible rumors won't leak sometime before then.

The more time passes from the end of the 2011 data collection run without a credible rumor (this is an experiment that involves thousands of people who are very clever when it comes to physics and office politics), the more likely it is that the Standard Model Higgs boson could be ruled out entirely.

Bottom line: 117-119 GeV Higgs bosons or bust! And, either outcome would be interesting.

I have been, and continue to be, ambivalent about the prospects that a Standard Model Higgs boson really exists, or even that one is necessary. My own intuition is that we've found all of the fundamental particles, but that we aren't doing all the math right and may be missing some key relationships that serve the same function as the Higgs does in the Standard Model.

Meanwhile, the latest report out of the OPERA experiment discounts some of the more popular possible sources of experimental error in their result showing neutrinos travelling faster than the canonical experimentally measured speed of light by a factor of about one part per 10^5.
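For concreteness, the rough arithmetic behind that figure can be sketched in a few lines of Python, using the widely reported approximate OPERA numbers (a roughly 60 ns early arrival over the roughly 730 km CERN to Gran Sasso baseline; both figures should be treated as approximations):

```python
# Rough arithmetic behind "one part per 10^5": OPERA reported neutrinos
# arriving ~60 ns earlier than light would over the ~730 km CERN-to-Gran
# Sasso baseline (approximate figures as widely reported).

c = 299_792_458.0        # speed of light, m/s
baseline = 730_000.0     # CERN to Gran Sasso distance, m (approximate)
early = 60e-9            # early arrival time, s (approximate)

light_time = baseline / c
fractional_excess = early / light_time   # (v - c) / c to first order

print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"(v - c)/c ≈ {fractional_excess:.2e}")  # ~2.5e-5, a few parts per 10^5
```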

Thursday, November 17, 2011

Siberian and North American Prehistory and Ancient History Observations

The Ket people, currently numbering about 1,100 in the face of assimilation with neighboring populations, who live along the middle course of the Yenisei River in central Siberia, are widely viewed as the most ancient Paleosiberian population still in existence.

According to their own legends, they were exiled to the Yenisei River area where they now live, perhaps around 0 CE, in the face of fierce invaders (perhaps Indo-Europeans or Turks), probably from the other side of the Sayan Mountains, which together with the Altai Mountains (aka Altay) and Lake Baikal form the geographic barrier between Siberia to the north and Mongolia to the south. This area has also been proposed as a homeland for the Altaic languages (which comprise the Turkic, Mongolian and Tungusic language families and may also be distant ancestors of Korean and Japanese). And, this area has been proposed as an area in which a pre-Uralic protolanguage may have had a formative period.

They, in turn, were followed by more than one later wave of migration. Most recently, since about the 18th century, Russian influence spread east across Siberia. Roughly in the 13th century, the Mongolians had an empire that spanned Siberia, but it swiftly collapsed without leaving much of a trace in the region. Before that, from the middle of the 1st millennium until at least about 900 CE, the Turks expanded across much of Siberia. Indo-Europeans had spread as far as the Tarim Basin by about 2000 BCE (which they held until about 600 CE), and their migration east across Siberia was probably underway by 3500 BCE.

At least in Northeastern Europe, the Indo-Europeans were preceded by the Uralic peoples, whose languages survive, for example, in Finnish and Hungarian. The Pitted Ware culture (3200 BCE to 2300 BCE in Southern Scandinavia) is sometimes seen as linguistically Uralic, on the grounds that it was a Mesolithic fishing culture, like its Comb Ceramic neighbors to the east, rather than a Neolithic farming one.

The Pit-Comb Ware culture (aka Comb Ceramic, 4200 BCE to 2000 BCE) is almost surely linguistically Uralic. Although, according to James P. Mallory and Douglas Q. Adams, "Pit-Comb Ware Culture", in Encyclopedia of Indo-European Culture (Fitzroy Dearborn, 1997), pp. 429-30, via Wikipedia: "some toponyms and hydronyms may indicate also a non-Uralic, non-Indo-European language at work in some areas." One can imagine those toponyms and hydronyms as being associated, for example, with a dead language of a lost culture bearing mtDNA haplogroup V (also found in Iberia and among Northeast African Berbers), that was lost when the predecessors of the modern Saami of Northern Sweden and Finland (perhaps associated with the Pitted Ware culture) were assimilated into a Uralic culture from which mtDNA N1a made its way on a "great circle" route into Northern Scandinavia from Siberia, and before that from Southeast Asia.

The origins of Comb Ceramic culture may be reflected in its pottery, as well as genetically. "It appears that the Comb Ceramic Culture reflects influences from Siberia and distant China." (per Marek Zvelebil, Pitted Ware and related cultures of Neolithic Northern Europe, in P. Bogucki and P.J. Crabtree (eds.), Ancient Europe 8000 BC–AD 1000: Encyclopaedia of the Barbarian World, Vol. I The Mesolithic to Copper Age (c. 8000-2000 B.C.) (2004).) The 4200 BCE-2300 BCE date for Comb Ceramic may be an underestimate. "Calibrated radiocarbon dates for the comb-ware fragments found e.g. in the Karelian isthmus give a total interval of 5600 BCE – 2300 BCE (Geochronometria Vol. 23, pp 93–99, 2004)," and some accounts suggest Comb Ceramic culture was still in place at 2000 BCE.

Where was this culture found?

Finnmark (Norway) in the north, the Kalix River (Sweden) and the Gulf of Bothnia (Finland) in the west and the Vistula River (Poland) in the south. In the east the Comb Ceramic pottery of northwestern Russia merges with a continuum of similar ceramic styles ranging towards the Ural mountains. It would include the Narva culture of Estonia and the Sperrings culture in Finland, among others. They are thought to have been essentially hunter-gatherers, though e.g. the Narva culture in Estonia shows some evidence of agriculture. Some of this region was absorbed by the later Corded Ware horizon.

The Yeniseian languages, of which Ket is the last surviving member, are probably part of the same language family as the Na-Dene languages, whose most famous member is Apache. The Apache people moved to the American Southwest around 1000 CE from Northwest Canada, where the rest of the Na-Dene live. (The genetic links between the Na-Dene and the Ket are weak, however, possibly because each group assimilated neighboring outgroups to the point where the outgroups are a more important source of genetic ancestry than the original core of each group.)

To their north are the Eskimo-Aleut language speakers, most famously the Inuit; an indigenous population found around Vancouver Island, intermixed with Na-Dene speaking groups, speaks a language that may also be a part of the Eskimo-Aleut family. The Proto-Eskimo populations reached the fringes of Alaska from East Asia around 500 CE. Quite credible linguistic evidence associates the Eskimo-Aleut languages, the Uralic languages, and two other Siberian language groups (Yukaghir, with only a moribund linguistic community of about 80 speakers left, sometimes classed as part of Uralic proper, and Chukotko-Kamchatkan, the main living indigenous language group of Northeast Siberia) with a pre-Uralic protolanguage, joining them in a group of Uralo-Siberian languages.

Fortescue argues that the Uralo-Siberian proto-language (or a complex of related proto-languages) may have been spoken by Mesolithic hunting and fishing people in south-central Siberia (roughly, from the upper Yenisei river to Lake Baikal) between 8000 and 6000 BC, and that the proto-languages of the derived families may have been carried northward out of this homeland in several successive waves down to about 4000 BC, leaving the Samoyedic branch of Uralic in occupation of the Urheimat thereafter.

In this scenario, Inuit migration to North America coincides roughly with the Altaic expansion, as does the Yeniseian exile to central Siberia.

The emerging sense is that several groups made migrations to North America subsequent to that of the remainder of the indigenous American founding population. These include the Na-Dene; the Inuits (who are the same people as the Thule culture); before them, the Dorset Paleoeskimo culture, which no longer exists and was probably replaced to a great extent after flourishing from perhaps 500 BCE to 1500 CE in much of Arctic North America, although a relict population called the Sadlermiut "survived until 1902-1903 at Hudson Bay on Coats, Walrus, and Southampton islands"; and, before the Dorset, the Saqqaq Paleoeskimo culture (2500 BCE until about 800 BCE), whom ancient DNA links to the modern Chukchi and Koryak peoples of Siberia. Ancient DNA samples show (see page 34) mtDNA haplogroup D in Dorset remains and mtDNA haplogroup A in Thule remains. The timing of the Na-Dene arrival is not too certain, but it was probably many thousands of years after the founding group of modern humans arrived in the Americas from Beringia ca. 14,000 years ago, probably sometime between 8,000 years ago and 1,000 years ago.

The possibility of a late arrival of the Na-Dene to North America could also help to explain the fact that the Na-Dene are rich in some haplogroups that are relatively rare in the Americas and almost absent from indigenous people in Latin America. Those Na-Dene-enhanced haplogroups could be the haplogroups of the original Na-Dene population, diluted over time as the Na-Dene admixed with local populations while their genes, in turn, introgressed into geographically adjacent populations that were not linguistically related to the Na-Dene. Thus, Inuits and Na-Dene might be the main sources of Y-DNA Q (xQ-M3), especially Q-M242, and C3b, and of mtDNA haplogroup D2a1, while Y-DNA R1a and mtDNA X2a show very similar North American distributions, although not necessarily Na-Dene and Inuit centered, and could be a result of either introgression from later migrations (e.g. one of the two waves of Paleoeskimos that had ceased to be present in the pre-Columbian era) or a split in the original source population. Notably, this reading also suggests that the initial founding population of the Americas was probably smaller than estimates that rely on genetic contributions that actually arrived only with the Inuits, the Na-Dene, or other later population waves.

Some mtDNA studies have supported this more than one wave theory, with a Na-Dene wave perhaps 6,000-10,000 years ago. Another study based on internal genetic variation likewise weakly suggested a more recent Na-Dene origin than other Amerindian populations.

There is not a consensus on this point from genetic studies although the Inuit and Na-Dene populations do show stronger genetic links to Siberia than other Amerindian populations. For example, the above linked study from 2010 shows that:

[T]here is a clear genetic HLA relatedness between isolated populations close to Beringia: Eskimos, Udegeys, Nivkhs (North East coast of Siberia) and Koryaks and Chukchi from extreme North East Siberia (Fig. 2), and North West American populations: Athabaskan, Alaskan Eskimos (Yupik) and Tlingit. . . . Asian populations which are geographically not close to Beringia (Japanese, Ainu, Manchu, Singapore Chinese, Buyi) do not cluster with North Americans neither in NJ dendrogram (Fig. 2) or correspondence analysis (not shown).

Finally, Lakota-Sioux Amerindians which have inhabited in North United States, are not related with Asians and West Siberians (Fig. 2) but with Meso and South Americans.

- Meso-Americans. Most frequent haplotypes (not shown), relatedness dendrograms (Fig. 2) and correspondence (not shown) do not relate these Amerindians with any Asian population, including North East Siberians. Haplotypes of Meso-Americans are shared with other Amerindians and one of them with Alaskan Eskimo (Yupik): A*02-B*35-*DRB1*0802-DQB1*0402.

- South-Americans. These Amerindian speaking groups are related to other South-American Amerindians and to Meso-American Amerindians (Fig. 2). Most frequent haplotypes are shared with other American Amerindians, but not with Asians.

An overview of indigenous Native American genetics recaps some of the relevant facts and issues.

This scenario also means that there are extant linguistic counterparts in Siberia to both the Eskimo-Aleut and Na-Dene languages. That leaves only the remaining languages, which Greenberg lumps together as Amerind, despite the existence of distinct linguistic families within this category and a lack of obvious proto-language links to bind them into a single language family, together with the assumption, supported by archaeology and genetics, that the populations speaking these languages once formed a single founding population ca. 14,000 years ago that encountered no significant outside influence that left any potential linguistic trace until the arrival of Columbus and, in North America, the Na-Dene, Eskimo-Aleut, and two waves of Paleo-Eskimo encounters.

(Leif Erikson's group from Iceland, ca. 1000 CE, apparently did not leave any linguistic or genetic trace in North America, although it left some material culture remains.)

Efforts to link the Na-Dene and Yeniseian languages to the Caucasian languages or to the Basque language are, in my view, utterly unsupported. There is no genetic overlap, the geography and timing of the historical cultures don't match, and the linguistic similarities, if any, are remote. There may be a remote link between Basque and some Caucasian languages, at a time depth of about 5,000 years (about the time depth between the most remote of the Indo-European languages), although I wouldn't necessarily consider it to be solidly established.

Efforts to link the Caucasian languages to the Uralic languages likewise fall short, and suggestions that the Indo-European languages and the Uralic languages have a common origin are, at the very least, ill established, although the two language families would have been spoken by adjacent populations on the Eurasian Steppe for thousands of years.

The Altaic language hypothesis is very credible for the core of the Turkic, Mongolian and Tungusic languages, and plausible when it comes to further links to Korean and Japanese.

The notion that the Sino-Tibetan languages are derived from, or genetically related at any reasonable time depth to, the Altaic, Na-Dene-Yeniseian, or Uralo-Siberian protolanguages has very weak support, although the timing and geography are somewhat less prohibitive. The balance of the evidence puts the origins of all four linguistic groups roughly in the northeastern quadrant of Asia (and surely one or more proto-Amerindian languages must at some point have been spoken in that region as well).

Wednesday, November 16, 2011

Eskimo-Aleut Languages

I've added a section on the origins of the Eskimo-Aleut languages to the Urheimat page at Wikipedia.

Excess D Meson CP Violation Possible With SM Physics?

A 1989 publication, now touted by its author in comments at a physics blog, predicted the high levels of CP violation seen in D meson decays at the LHC from Standard Model physics. The details of the calculation are rather arcane, and resort to a rule used in kaon decay calculations that wasn't well understood in 1989, but the bottom line is that the article concludes that unexpectedly large CP violation could be the result of an error in the Standard Model theoretical prediction, because certain terms of the equation are more important than would naively be expected, rather than a case of experiment deviating from expectation.

Monday, November 14, 2011

AdS/CFT fails for heavy ion QCD

Born of string theory, the AdS/CFT correspondence has been applied to condensed matter physics, and it in theory has applications to very hard QCD calculations. Except, it doesn't work.

Turtles and Physics

Blog posts about turtles and physics always deserve a mention at this blog.

Top Quark Hadrons and CP Violation Rumors

Top Quark Hadrons

The conventional account states that top quarks don't hadronize because the time frame in which hadrons form is longer than the mean lifetime of a top quark before it decays pursuant to the weak force.

But, surely, this can't be the whole story. The top quark lifetime is just that, a mean. Some top quarks live longer. If you make enough top quarks, some of them will live for arbitrarily long time periods. A few should live long enough to hadronize; most should not. The top quarks that hadronize should be a small percentage of all top quarks, but the difference between the characteristic width of the strong force and the decay width of the top quark is not so great that the effect should be undetectably negligible in a particle accelerator that is generating large numbers of top quarks. We haven't seen a top hadron yet, but that doesn't mean that we shouldn't at some point observe one. After all, there are also a handful of hadrons without top quarks that we have not yet observed.

Of course, these hadrons should have very short lifetimes, on average. And, I don't know the QCD equations well enough to have a sense of the relative frequency of different species of mesons and baryons with top quarks in them, although I would expect them to rather closely mirror the spectrum one sees in mesons and baryons containing charm quarks, perhaps exaggerating the differences between the spectrum of kaons, pions and other hadrons that pair up quarks with down, strange or bottom quarks, and the spectrum of those that pair charm quarks with down, strange or bottom quarks.

Conversely, some of the other five kinds of quarks, which usually live long enough to hadronize, should sometimes live lifetimes too short to do so and transform immediately into another kind of quark via the weak force before having an opportunity to hadronize.

Calculating the frequency of both events ought to be, if not precisely elementary, certainly less challenging than lots of the calculations that go into figuring out QCD backgrounds apart from these effects. It could be that hadrons with top quarks are so rare that they are below current experimental sensitivities. But, the standard explanation of the lack of top quark hadrons shouldn't rule them out entirely, just make them very rare.

Back-of-napkin, in-my-head calculations would suggest that the suppression of top quark hadrons ought to be on the order of 80%-99%, a major impediment to finding them, but far from a complete decree of non-existence.
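A sketch of that estimate in Python, assuming a simple exponential survival probability and taking the top quark width and the QCD scale at round illustrative values; the answer is exquisitely sensitive to the assumed hadronization timescale, so this is order-of-magnitude only:

```python
import math

# Sketch of the back-of-napkin estimate: if top quark decay is exponential
# with mean lifetime tau, the fraction surviving past a hadronization
# timescale t_had is exp(-t_had / tau). The timescales below are
# order-of-magnitude assumptions, not measured values.

hbar = 6.582e-25          # reduced Planck constant, GeV * s
gamma_top = 1.5           # top quark decay width, GeV (approximate)
lambda_qcd = 0.2          # QCD scale, GeV (approximate)

tau_top = hbar / gamma_top   # ~4.4e-25 s mean top quark lifetime
t_had = hbar / lambda_qcd    # ~3.3e-24 s characteristic hadronization time

surviving = math.exp(-t_had / tau_top)
print(f"tau_top = {tau_top:.2e} s, t_had = {t_had:.2e} s")
print(f"fraction surviving long enough to hadronize ≈ {surviving:.1e}")
```

With these particular assumed inputs the surviving fraction comes out even smaller than the 1%-20% range above, which just underlines how much the estimate turns on what one takes as the hadronization timescale.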

D Meson Rumors

Rumor has it (via Jester, commenting at Woit's blog) that we will be treated before the end of the year to a report from the LHCb experiment (a b quark factory) that D meson to kaon-kaon decays and D meson to pion-pion decays exhibit unexpected CP violation at a three sigma level.

D mesons are charm quark-(up/down/strange quark) mesons with antiparticle permutations. Kaons are up-strange mesons with antiparticle permutations. Pions are up-down mesons with antiparticle permutations.

The implication would be a flaw in the CKM matrix of the Standard Model, which sets forth the likelihood of a quark of one kind changing flavor into a quark of another kind via W boson emission or absorption.

Conventionally, the four parameters of the CKM matrix (a 3x3 unitary mixing matrix can be fully described with four physical parameters, but there are many ways to go about choosing them) involve three parameters that are CP symmetric and one CP violating phase.

An inconsistency with the Standard Model in the CP violating phases between D meson to kaon-kaon decays and D meson to pion-pion decays would suggest that it might take at least two parameters with a CP violating component (hence at least one that is a mix of a non-CP violating mixing angle and a CP violating phase) if there are only three generations of particles, or a CKM matrix that is more than 3x3, so that it would have more degrees of freedom, one or more of which could be devoted to an additional CP violating phase (e.g. in a four-or-more-generation extension of the Standard Model, which has 9 rather than 4 degrees of freedom in the CKM matrix). Alternately, perhaps the one CP violating phase also has a non-CP violating element, while three completely non-CP violating parameters could be salvaged.
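For the curious, the conventional three-angles-plus-one-phase structure can be written out and checked numerically. The angle values below are illustrative, roughly the measured magnitudes; all CP violation enters through the single phase delta:

```python
import cmath
import math

# Standard parametrization of a 3x3 unitary CKM matrix: three mixing
# angles (theta12, theta23, theta13) and one CP-violating phase delta.

def ckm(theta12, theta23, theta13, delta):
    s12, c12 = math.sin(theta12), math.cos(theta12)
    s23, c23 = math.sin(theta23), math.cos(theta23)
    s13, c13 = math.sin(theta13), math.cos(theta13)
    eid = cmath.exp(1j * delta)
    return [
        [c12 * c13, s12 * c13, s13 / eid],
        [-s12 * c23 - c12 * s23 * s13 * eid, c12 * c23 - s12 * s23 * s13 * eid, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * eid, -c12 * s23 - s12 * c23 * s13 * eid, c23 * c13],
    ]

def max_unitarity_violation(v):
    """Largest deviation of V times V-dagger from the identity matrix."""
    worst = 0.0
    for i in range(3):
        for j in range(3):
            s = sum(v[i][k] * v[j][k].conjugate() for k in range(3))
            worst = max(worst, abs(s - (1 if i == j else 0)))
    return worst

# Roughly the measured angle magnitudes (radians); illustrative only.
v = ckm(0.227, 0.042, 0.0037, 1.2)
print(f"|V_ud| ≈ {abs(v[0][0]):.4f}")
print(f"unitarity violation ≤ {max_unitarity_violation(v):.1e}")  # ~1e-16, i.e. unitary
```

A fourth generation would enlarge this to a 4x4 matrix with six angles and three phases, which is where the extra CP violating degrees of freedom mentioned above would come from.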

Experimental data have wavered. Some experiments have seemed to show two CP violating phases, while others have seemed to rule that out. Early reports from the LHC on B meson decays (which involve mesons that have b quarks in them) had seemed to be consistent with the absence of more than one CP violating phase in the CKM matrix, as the Standard Model predicts, and seemingly contrary to hints of more than one CP violating phase in earlier B meson decay experiments. But, rumor has it that the new data kick us back to a more-than-one-CP-violating-phase situation at the three sigma level, which, while not really a discovery, starts to arouse strong interest.

An early CDF finding of a 3.4 sigma top-antitop quark asymmetry, which has since been contradicted by the latest LHC data, produced a paper this past spring detailing various kinds of new physics (W' bosons, Z' bosons or axigluons) that could explain it.

UPDATE: More details on the rumor here: "LHCb observed direct CP violation in neutral D-mesons decay. More precisely, using 580 pb-1 of data they measured the difference of time-integrated CP asymmetries of D→ π+π- and D→ K+K- decays. The result is ΔA = (-0.82 ± 0.25)% [which is] 3.5 sigma away from the Standard Model prediction which is approximately zero! . . . the CP asymmetry estimated at the level of 0.01-0.1%. On the other hand, LHCb measured it is closer to 1%."

Another nice account here, referencing the PowerPoint slideshow for the announcement, with page 6 notably showing prior measurements of equal but opposite asymmetries, though at much lower levels than in this experiment, and page 7 showing higher past direct CP violation measurements possibly consistent with this result. Also notably, this is the first evidence of CP violation in D mesons (with charm quarks), although it has previously been seen in B mesons (with bottom quarks) and kaons (with strange quarks).
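The significance arithmetic behind the quoted numbers is simple enough to do in a few lines. The split of the uncertainty into a statistical and a systematic component shown below matches the figures reported alongside the announcement, but treat the split as approximate:

```python
import math

# Significance of the quoted LHCb result, Delta A = (-0.82 ± 0.25)%,
# against a Standard Model prediction of approximately zero.

delta_a = -0.82      # measured asymmetry difference, percent
sigma_total = 0.25   # total uncertainty as quoted, percent

significance = abs(delta_a - 0.0) / sigma_total
print(f"significance ≈ {significance:.1f} sigma")  # ~3.3 sigma from these rounded numbers

# With the reported components combined in quadrature (approximate split):
stat, syst = 0.21, 0.11   # percent
combined = math.hypot(stat, syst)
print(f"combined uncertainty ≈ {combined:.2f}% -> {abs(delta_a) / combined:.1f} sigma")
```

The second calculation shows how the "3.5 sigma" headline number follows from the unrounded uncertainty even though the rounded ±0.25% gives closer to 3.3 sigma.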

Friday, November 11, 2011

Mass Matrix Musings And The Higgs Mechanism

A very completely realized paper from December 2008, discussing the phenomenology that might relate the CKM matrix, the PMNS matrix and the masses of the fundamental fermions, is introduced and concludes with these observations (note that the authors' native language is French, not English; emphasis added and some obvious misspellings corrected):

If the Higgs particle turns out to be exactly as expected, then the Standard Model is closed from a mathematical point of view. In that case, it is conceivable that any new physics will be far beyond the reach of future colliders. On the contrary, if the data do not reveal any scalar particle, it is likely that the LHC will spot some unexpected or unexplained events since the unitarity of the theory is broken around 1 TeV. From a mathematical point of view, the first possibility is probably more appealing but the latter is certainly the most interesting one for us, now. Indeed, in spite of its simplicity, the Brout-Englert-Higgs mechanism also raises some questions.

In the Standard Model, all forces are explained in terms of boson exchange. These bosons are associated with a gauge symmetry. However, the scalar sector of the theory as it stands cannot be considered as a fifth force. The Higgs boson is moreover the only particle that knows the difference between the fermion families as well as between the generations. The coupling is indeed proportional to the mass instead of some conserved charge. This particular status is quite intriguing. Furthermore, these masses seem to be completely arbitrary and display a huge hierarchy, not to mention the astonishingly small mass of the neutrinos.

The mass generation mechanism is also intimately connected with the SU(2)L gauge symmetry. This connection is at the root of some of the most interesting properties of the Standard Model, namely flavour mixing and CP violation.

The Standard Model requires thus some new flavour physics, in particular to explain the fermion mass spectrum and the number of families and generations. These numbers must be somehow connected with the mass generation mechanism. Most of the proposed extensions of the Standard Model fail to meet these criteria. Some problems of the Standard Model have indeed been solved by grand unification theories, supersymmetry, technicolour, horizontal symmetries and others. However, it has always been at the expense of a complexification of the theory, for instance a zoo of new particles or a rather large group of symmetry. . . .

Fermion mixings arise naturally in the theory of electroweak interactions and result from a mismatch between mass- and weak-eigenstates. Within the assumption of a Higgs mechanism, mixings and masses have the same origin and are bound together in the Yukawa couplings. This mechanism however fails to give an explanation of the observed fermion spectrum. Therefore, the Standard Model is probably not the end of the story but only a low energy effective theory.

In this work, we do not aim at finding a new mechanism that could explain this spectrum, but we rather assume that fermion masses and mixings are calculable in a yet-to-be-found more fundamental theory. Our goal is to glean as much information as possible from the observed fermion masses and mixings in order to identify some hidden structures that could significantly lower the number of free parameters and help us to get some clues about what could be this fundamental theory.

To achieve this goal, we follow two distinct paths:

• The analysis of the various parametrizations of the flavour mixing matrix points us to a specific decomposition. We note that the parameters of this decomposition can be independently and accurately computed if we impose some simple textures to the Yukawa couplings. We propose then a straightforward combination of these interesting textures in order to recover the observed quark flavour mixing.

• We study the properties of a successful mass relation for the charged leptons [i.e. Koide's formula]. We propose some generalizations of this relation which also apply to the neutrinos and the quarks. One of them successfully combines the masses and mixings in a kind of weak eigenstate mass. Another one describes the lepton masses through a well-defined geometric picture. . . .

[T]hese two paths lead to similar conclusions and allow us to speculate about some interesting properties new flavour physics should display. . . .

If the mass matrices are actually built by squaring a more fundamental matrix, the mixing angles seem a priori computable from simple ratios. We have proposed a simple texture for this “square root” matrix which leads to a quark flavour mixing similar to the observed one. . . .

All the observations we have done plead for a deeper modification of the Standard Model than just adding new symmetries or particles to the Lagrangian. In regard to all these results, our guess is that the masses do not result from a coupling to an elementary Higgs field.

If we speculate on the mechanism responsible for the electroweak symmetry breaking, we would say that preon models could fulfill most of the properties presented here. On the one hand, a dynamical symmetry breaking could in principle lead to some relations between the pole masses. On the other hand, preons constitute a suitable framework where square roots of masses may appear. The practical way to implement such a dynamical model is beyond the scope of this work but constitutes its natural outcome.

Another great observation from the same paper is that "One can also wonder how the top quark is so heavy while it is as point-like as the electron in the Standard Model for electroweak and strong interactions. It has indeed the same mass as one molecule of vitamin C (C6H8O6)[.]"
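The charged lepton mass relation the paper builds on (Koide's formula) is easy to verify numerically, using PDG central values for the lepton masses:

```python
import math

# Numerical check of Koide's charged-lepton mass relation:
#   Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2 = 2/3
# Lepton masses in MeV (PDG central values).

m_e, m_mu, m_tau = 0.5109989, 105.6583745, 1776.86

q = (m_e + m_mu + m_tau) / sum(math.sqrt(m) for m in (m_e, m_mu, m_tau)) ** 2
print(f"Koide Q = {q:.6f} (vs. 2/3 = {2/3:.6f})")
```

The measured value agrees with 2/3 to within about one part in 10^5, which is what makes the relation "successful" enough to motivate the generalizations the paper proposes.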


Another way of putting the apparent connection between the mixing matrices and the masses is to say that we seem to know, to the extent of our ability to empirically test our theories, everything that there is to know about the strong force and the electromagnetic force (i.e. the SU(3) and U(1) parts of the Standard Model, although this is muddled a bit by the intertwining of the weak and electromagnetic components of the unified electroweak force). Even if we discovered, for example, that CP violation occurs at some very low level in quantum chromodynamics, it would be trivial to include a natural term in that equation to account for those experiments.

We don't have experiments indicating the presence of unexplained fundamental forces, missing fundamental particles, or statistically significant disagreements between experiment and calculation at the energy levels of the LHC to date or any prior high energy physics experiments. We can perfectly easily fit all observed dark energy phenomena simply by assigning a value to the cosmological constant in the equations of general relativity. We do need some mechanism that explains observed dark matter effects, which aren't a good fit to any of the forces, fundamental particles or hadronic matter of the Standard Model, but that is all we are experimentally motivated to look for at this point.

Everything we don't know about fundamental particle mass and the CKM/PMNS matrix parameters, and the Higgs mechanism, seems to arise from an incomplete description of the SU(2)L weak force part of the theory.

The existing Standard Model SU(2)L theory seems to be both underconstrained in its parameters and missing some subtle term or piece that solves its problems at higher energies. It seems as if we have left a rule or two, and a Lagrangian term or two (or perhaps a renormalization calculation step), out of what we are working with now. There ought to be an explicit quark-lepton complementarity rule, or a mechanism that gives rise to it, and there ought to be some sort of mass generation mechanism, perhaps in the nature of a see-saw type mechanism, that gives rise to fundamental particle mass from the weak force mixings themselves and that should be capable of being described by far fewer free parameters.

The problem doesn't seem to be so much that what we know is wrong, at least as a low energy effective theory, but that our current formulation is incomplete. It doesn't make explicit the deeper connections between its parameters that seem to exist. We are having trouble finding the Higgs boson it predicts, which we may not need in a proper formulation. It ceases to predict unitary decays at the TeV level. We observe just three generations of particles but don't have good theoretical reasons for believing that there aren't more. And we're not sure why it makes sense that there don't seem to be right handed neutrinos (or even if we can really say with confidence that there aren't any).

The other lingering issues are pretty much purely theoretical. The hypothetical point-like nature of massive particles in quantum mechanics (which gives rise to singularities) and its non-local effects are at odds with general relativity, which is continuous, local, and causal. Moreover, while describing gravity as a boson mediated force carried by a spin-2 graviton would seem, in principle, to be capable of replicating general relativity, efforts to realize this in practice have not produced a consensus solution. But these quantum gravity problems don't obviously need to be resolved in order to fix the other less than ideal features of the Standard Model.

They may, of course, be related. There may be something subtly unsound about shoehorning reality into a toy model of massless fundamental point-like particles when it may actually involve non-point-like particles that acquire mass through a mechanism more natural and emergent than the Higgs field.

There is also a lingering sense that a Grand Unified Theory in which the three coupling constants of the Standard Model and its three Lie Groups can be understood as a spontaneously broken symmetry of a single Lie Group with a single coupling constant is possible, but if we've done our job properly with the Standard Model characterizations of the three fundamental forces, this is pretty much icing on the cake.

Indeed, one way to read the persistent failure of efforts to produce a GUT that doesn't predict particles and forces that we don't see, while still not explaining everything that we do see, is that a flaw in the way we have formulated the weak force is preventing the pieces from fitting together into the coherent whole that they should form.

It seems as if there ought to be a more elegant way to formulate this part of the Standard Model, one that might remove all of its pathologies in one fell swoop. With more data to rule out many of the alternatives, we are so close to really grasping the connections that have so far eluded the entire global theoretical physics community, which has been stymied by group think pursuing dead end paths like SUSY, String Theory, and Technicolor.

Thursday, November 10, 2011

PMNS Theta 13

The parameter theta 13 in the PMNS matrix appears to be non-zero according to the latest reactor data.

A lot of modeling of neutrino oscillations assumes this parameter is zero, since it is very small, but a non-zero value would be more consistent with quark-lepton complementarity: the corresponding angle in the CKM matrix is off by about two tenths of a degree from the value that would imply a zero theta13 in the PMNS matrix. The prediction from quark-lepton complementarity (as of 2007, from the CKM matrix figures known at that time) would be a theta13 of 8-10 degrees. This result implies a theta13 of about 8.5 degrees (working backward from sine squared of two theta equal to 0.085).
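The 8.5 degree figure can be checked with a few lines of arithmetic (a sketch; the 0.085 value is the reactor result quoted above):

```python
import math

# Back out theta13 from the reactor measurement sin^2(2*theta13) = 0.085.
sin_sq_2theta = 0.085
theta13 = 0.5 * math.degrees(math.asin(math.sqrt(sin_sq_2theta)))
print(f"theta13 = {theta13:.1f} degrees")  # about 8.5 degrees
```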

More at Resonaances, poking fun at the low level of statistical significance involved while not dismissing out of hand a result that is in line with other experimental measurements and is plausible from a theoretical point of view.

Musings On Higgs Boson Coupling Constants

The Higgs mechanism for creating mass in the Standard Model supposes that each fundamental particle has a coupling constant with the Higgs boson that is a function of its mass. In other words, mass is to the Higgs boson what weak isospin is to the weak force, what electric charge is to electromagnetism, what color charge is to the strong force.

This is really quite odd.

All fundamental particles have one of five values for weak isospin: zero, +/- 1/2, or +/- 1. Every fundamental particle has one of seven values for electromagnetic charge: zero, +/- 1/3, +/- 2/3, or +/- 1. Any given quark has one of six possible color charges; there are eight different possible color charges for gluons, all of which are some kind of linear combination of the six color charges available for quarks; and all other fundamental particles have no color charge.

The notion that Higgs boson coupling constants could come in values so wildly at odds with the pattern for all other force carrying bosons is striking. The fundamental particles of the Standard Model come in seventeen different values for mass, and no two of those masses are neat integer or simple fraction multiples of each other.

There is obviously some method to the madness that produces the seventeen particle masses (twelve fermion masses, two weak force masses, the zero photon and gluon masses, and a Higgs boson mass) in the Standard Model. The two mass values associated with the weak force bosons could be predicted with considerable accuracy before their discovery from the other constants of the Standard Model. Higher generation particles are always heavier than lower generation particles of the same type. All neutrinos are lighter than all other fundamental particles. Charged leptons appear to be lighter than either of the quarks of the same generation (although the uncertainty in the empirically measured mass of the strange quark, which has a central value almost identical to the muon but an uncertainty of about +/- 25%, makes it somewhat unclear if it is heavier or lighter than the muon). Charged particles of one generation have more mass than all charged particles of all lower generations. Particles and their antiparticles have the same mass. Quarks and gluons have the same mass, regardless of color charge. Some of the ratios of one kind of particle to another approximate the dimensionless coupling constants of the fundamental forces. One can devise formulas that approximately predict the masses of different fundamental particles relative to each other. In the case of Koide's formula relating the masses of the charged leptons to each other, the relationship appears to be exact. There seems to be a relationship between the values of the CKM matrix and the PMNS matrix to the masses of the fundamental fermions.
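The exactness claimed for Koide's formula is easy to verify numerically. A minimal check, using approximate charged lepton masses in MeV (round PDG-style figures, not exact values):

```python
import math

# Koide's formula for the charged leptons:
# (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2 = 2/3
# Approximate masses in MeV.
m_e, m_mu, m_tau = 0.511, 105.658, 1776.82

koide = (m_e + m_mu + m_tau) / sum(math.sqrt(m) for m in (m_e, m_mu, m_tau)) ** 2
print(f"Koide ratio = {koide:.6f}, 2/3 = {2/3:.6f}")
```

With these inputs the ratio agrees with 2/3 to about five decimal places, limited only by the mass uncertainties.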

All unitary three by three matrixes in which every entry must be positive or zero, including the CKM matrix and the PMNS matrix (which must meet this condition because they encode the probabilities of a complete set of possibilities), have four degrees of freedom - i.e. it takes four numbers to determine the value of all nine cells. One could do this by simply determining the value of any four arbitrary entries in the three by three matrix, but a parameterization of the matrixes in terms of four different angles is the more conventional way to go about characterizing them.

A hypothesis called quark-lepton complementarity, first proposed in 1990, states that if you parameterize the CKM matrix and the PMNS matrix in a particular way, then for each angle in the CKM matrix parameterization there is a corresponding angle in the PMNS matrix parameterization, and the sum of each pair of corresponding angles equals 45 degrees.

If this hypothesis is correct, then the CKM matrix and PMNS matrix combined actually have only four degrees of freedom and can be fully characterized by a single unit vector in the +,+,+,+ quadrant of four dimensional space (an appropriate quadrant limitation, because probabilities are always positive). It also implies that the CKM matrix and PMNS matrix, when looked at in a particular way in this four dimensional probability space, are orthogonal, or half-orthogonal (i.e. at 45 degree angles), to each other.
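The best known instance of this hypothesis can be checked numerically: the sum of the CKM and PMNS theta12 angles (the Cabibbo and solar angles). The values below are approximate circa-2011 fits, used purely for illustration:

```python
# Quark-lepton complementarity check for the first mixing angle: the
# Cabibbo angle (CKM theta12) plus the solar angle (PMNS theta12) should
# sum to roughly 45 degrees. Approximate 2011-era values, in degrees.
theta12_ckm = 13.0
theta12_pmns = 33.6

total = theta12_ckm + theta12_pmns
print(f"theta12(CKM) + theta12(PMNS) = {total:.1f} degrees")
```

The sum comes out within a couple of degrees of 45, consistent with the hypothesis at the current level of precision.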

The same line of thinking involved in the notion of quark-lepton complementarity also implies that a suitable form of matrix multiplication of the CKM matrix, the PMNS matrix, and another trivial matrix (the correlation matrix) that transforms the product into the proper form can produce the mass matrix of all of the fundamental fermions as a function of an arbitrarily chosen mass of any one particle, in some versions with some modest complicating analysis (also here by the same author).

It is a bit hard to test this hypothesis empirically because there is a great deal of uncertainty in the empirically determined masses of the five lightest quarks, because they can only be measured as part of composite particles, and because the values of the entries in the PMNS matrix are not known to any great accuracy. For example, this method has been used to predict one of the values of the PMNS matrix angles.

But, suppose for a moment that this hypothesis is true, and that knowing these values (and suitable coupling constants), one can infer the rest masses of the W and Z bosons (as was actually done in fact). This implies that:

* The seventeen Higgs boson couplings and eight degrees of freedom in the CKM and PMNS matrixes are actually fully determined by just a single four dimensional, all positive valued unit vector (one might call it the "weak force transition vector"), and an arbitrarily chosen single reference mass.

* In this situation, the mass matrix of the fundamental fermions and weak force bosons is predominantly a function of three of the four Standard Model Higgs bosons that are "eaten" by the W+ boson, W- boson, and Z boson respectively. The remaining scalar Higgs boson simply sets the overall mass scale of the fundamental particles generally, without having any impact on the relative masses of particular fundamental particles to each other.

* Twenty-four of these constants, determined by five independent constants (and perhaps also the weak and electromagnetic force coupling constants, but probably independent of the strong force coupling constant), seem to be independent of the Higgs boson mass (although, as I have suggested in a prior post, it isn't too hard to imagine formulas by which one could derive a Higgs boson mass from the other masses, just as one can in the case of the W and Z bosons).

* If one uses this system to express Higgs boson couplings of fundamental particles functionally, it would be possible to attribute the arbitrarily chosen single reference mass to a single universal coupling constant of the Higgs boson with all massive fundamental particles. If one was truly clever and lucky one might even be able to express this universal coupling constant in terms of something else.

* In this schema, the constants of the Standard Model (in addition to the various equations of the three fundamental forces that it describes and the Lie groups of the particles that it contains) are four coupling constants (strong, weak, electromagnetic, Higgs boson fundamental particle mass scale) which could perhaps themselves be expressed as a four component vector, the four component weak force transition vector, the speed of light, and possibly some constants involved in the running of the strong, weak, and electromagnetic force coupling constants.

* It also seems possible that one could dispense with the Higgs boson entirely and derive fundamental particle masses solely from weak force interactions involving the four electroweak bosons, through some mechanism other than the Higgs mechanism. If such a mechanism were devised, it might, if we were lucky, also "coincidentally" resolve other pathologies of the Standard Model, such as TeV scale mathematical pathologies and the hierarchy problem.

I have some doubts about the viability of establishing the fundamental fermion masses from the CKM and PMNS matrixes, although it is a beautiful idea, and even more doubt about the grand unification models that inspired the concept (e.g. here), but I am inclined to think it likely that quark-lepton complementarity will be confirmed experimentally as the precision with which we can determine the values in the PMNS matrix improves.

My guess is that calculations made on this assumption, using the combination of CKM and PMNS angle experimental values to reduce the uncertainty in the theoretical values of each, will consistently be more accurate than calculations made using the raw values of each matrix entry independently.

We already have enough data to quite precisely describe a four component weak force transition vector, and pooling the data for corresponding components would slightly improve that precision (the CKM matrix entries are mostly much more precisely known than the PMNS matrix entries, but for a couple of the parameters, some enhancement of the CKM matrix values might be possible). So far, that data is consistent with the hypothesis.

Wednesday, November 9, 2011

Adding To the Fundamental Particle And Periodic Tables

The Periodic Table

The periodic table of the elements is old news (some interesting variant typographic representations of it exist, by the way), but I'll summarize it anyway.

Atoms with the same number of protons bound together in a nucleus by the nuclear binding force have the same number of electrons in orbit around them, and as a result have similar chemical properties. This group of similar atoms is called an "element" with a particular atomic number. Atoms of the same element can have different numbers of neutrons, which are called "isotopes" of the element. Apart from their mass per atom and their varying likelihoods of decaying into something else, isotopes of the same element behave identically chemically.

Electrons arrange themselves around atoms in concentric "shells." For elements 1-118, there are up to four subshells (S, P, D, and F) in play across the seven periods. An additional subshell (G) is theorized to exist for elements of atomic number greater than 118. Hydrogen and helium have only the S subshell. Elements 3-18 (lithium to argon) have both S and P subshells, with the P subshell outermost. Elements 19-54 (potassium to xenon) also have the D subshell. Elements 55-118 (all discovered elements from cesium on up) also have an F subshell. The chemical properties of elements are determined mostly by the number of unfilled positions in the outermost (a.k.a. valence) shell.

For example, all elements with full valence shells are noble gases that are extremely unreactive chemically; those with just one electron vacancy in their outermost P subshells are highly reactive halogens. Alkali metals have one electron in their outermost S subshell, and alkaline earth metals have full outermost S subshells. Ordinary transition metals are filling their outermost D subshells. The inner transition metals (lanthanides and actinides) are in the process of filling their outermost F subshells. The post-transition metals, metalloids, and non-metals are in the process of filling their outermost P subshells.
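The shell-filling pattern described above can be sketched with the Madelung (n + l) rule. This is an illustrative toy that assumes the idealized filling order and ignores real-world exceptions such as chromium and copper:

```python
# A minimal Aufbau (Madelung-rule) sketch: fill subshells in order of
# increasing n + l (ties broken by lower n) to get the idealized
# ground-state electron configuration for a neutral atom of atomic
# number z. Real atoms have a handful of exceptions (e.g. Cr, Cu).
CAPACITY = {'s': 2, 'p': 6, 'd': 10, 'f': 14}
L_OF = {0: 's', 1: 'p', 2: 'd', 3: 'f'}

def configuration(z):
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(0, min(n, 4))),
        key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts, remaining = [], z
    for n, l in subshells:
        if remaining <= 0:
            break
        fill = min(remaining, CAPACITY[L_OF[l]])
        parts.append(f"{n}{L_OF[l]}{fill}")
        remaining -= fill
    return ' '.join(parts)

print(configuration(18))  # argon: 1s2 2s2 2p6 3s2 3p6
```

Calling `configuration(19)` (potassium) shows the single outer 4s electron that makes it an alkali metal.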

Some isotopes are unstable: their nuclei have a particular likelihood of having a neutron decay into a proton plus additional decay products (beta decay), or of jettisoning a group of nucleons together (alpha decay), because the nuclear binding force isn't sufficient to hold them together in a stable way. The resulting new number of protons turns the old isotope of one element into an isotope of another element with a different atomic number. Atomic nuclei can be split in other ways too, which is called nuclear fission and emits energy if the total binding energy of the fragments exceeds the binding energy of the original nucleus. Atomic nuclei can also be forcibly joined at high energy, and if the resulting nucleus is more tightly bound than the nuclei that were joined, this nuclear fusion releases energy.

Generally speaking, binding energy per nucleon increases through iron (atomic number 26) and declines thereafter. So, atoms heavier than iron can be split to release nuclear energy, while atoms lighter than iron can be fused to release nuclear energy.
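The rise and fall of binding energy per nucleon can be illustrated with the semi-empirical (Weizsaecker) mass formula, using standard textbook coefficients; the formula is rough for the lightest nuclei but captures the peak near iron:

```python
# Semi-empirical (Weizsaecker) mass formula estimate of binding energy
# per nucleon, with typical textbook coefficients in MeV. The pairing
# term is omitted; all three sample nuclei are even-even, so its effect
# is small for this illustration.
def be_per_nucleon(z, a):
    a_v, a_s, a_c, a_sym = 15.75, 17.8, 0.711, 23.7
    be = (a_v * a
          - a_s * a ** (2 / 3)
          - a_c * z * (z - 1) / a ** (1 / 3)
          - a_sym * (a - 2 * z) ** 2 / a)
    return be / a

for name, z, a in [("O-16", 8, 16), ("Fe-56", 26, 56), ("U-238", 92, 238)]:
    print(f"{name}: {be_per_nucleon(z, a):.2f} MeV per nucleon")
```

Iron-56 comes out most tightly bound of the three, which is why fusion pays below iron and fission pays above it.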

Elements beyond 94, plutonium, do not occur in nature (the last two elements to be discovered in nature were francium, discovered in 1939, and plutonium, which was synthesized in 1940 but discovered in nature in 1971); only artificially synthesized examples, produced in 1939 or later, of the twenty-four elements 95-118 and of element 93 (neptunium) exist. Elements with atomic numbers greater than 82 (lead), as well as technetium (43) and promethium (61), have no stable isotopes, and neither technetium nor promethium is found in nature, although both have been synthesized.

There is no generally accepted theoretical limit to the maximum atomic number that a synthetically made atom can form. No chemist or physicist seriously doubts, for example, that we can create element 119.

There is some argument over what chemical properties the heavier synthetic elements would have, particularly beyond atomic number 138, and there is a great deal of interest in locating "islands of stability" consisting of synthetic elements, in the vicinity of atomic number 126, that are metastable relative to the elements that came before them. But no one really expects to discover any more completely stable isotopes in addition to the 255 known stable isotopes, all of which belong to elements with atomic numbers of 82 or less. Another 84 isotopes are found in nature but have observed radioactivity.

In general, the higher the atomic number an element has, the less stable its isotopes, and isotopes tend to be less stable as they acquire more or fewer neutrons than the most stable isotope of an element. Physicists extrapolating the causes of this growing instability have hypothesized that the maximum atomic number may be somewhere in the range of 130-173. "The light-speed limit on electrons orbiting in ever-bigger electron shells theoretically limits neutral atoms" to an atomic number of approximately 137 in a Bohr model of the atom, although under the Dirac equation, which takes relativistic effects into account, an atom might be able to hold as many as 173 electrons coherently. Nuclei with more protons than this might be possible only as electrically charged ions, as they couldn't maintain their full electron shells.
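The origin of the 137 figure can be seen in the naive Bohr-model estimate that the innermost electron moves at roughly Z times the fine structure constant times c (a heuristic, not a relativistic calculation):

```python
# Naive Bohr-model estimate: the innermost (1s) electron's speed is
# roughly Z * alpha * c, so at Z ~ 1/alpha ~ 137 it would approach
# the speed of light.
alpha = 1 / 137.035999  # fine structure constant

for z in (1, 26, 92, 137):
    print(f"Z = {z:3d}: v/c ~ {z * alpha:.3f}")
```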

But the issue is largely academic because, as the table of nuclides (i.e. isotopes) indicates, atoms with more nucleons become more unstable anyway. No isotope of atomic number 106 or greater has a half life of as much as a day. All but 905 of the roughly 3000 experimentally characterized isotopes have half lives of less than an hour.

The roughly 2400 well characterized isotopes with half lives of less than one hour are all synthetic, as are 556 isotopes with half lives of more than one hour. The longest lived isotopes of atoms of atomic number 118, for example, have half lives of a bit less than a millisecond (i.e. 10^-3 seconds). Heavier isotopes would decay more quickly.

Fundamental Particles

The known synthetic elements, all made up of ordinary protons and neutrons, last much longer than any of the unstable fundamental particles (i.e. second or third generation fermions, W bosons and Z bosons), the longest lived of which, the muon, has a mean lifetime of about 10^-6 seconds (although in fairness, accurate estimates for muon neutrinos and tau neutrinos are not available). No hadron other than a proton or a neutron lasts longer than about 10^-8 seconds.

Most "exotic" atoms, such as muonic hydrogen, are as unstable as their exotic component, although in principle anti-hydrogen, made up entirely of antiparticles, ought to be stable so long as it is kept away from ordinary matter.

Fourth generation quarks (conventionally labeled t' and b') would presumably be significantly heavier than the top quark (about 174 GeV); they are experimentally excluded below 199 GeV for the b' and below 256 GeV for the t'. They would presumably have half lives of quite a bit less than 7*10^-25 seconds and thus would presumably not hadronize. Presumably, like the top quark, the t' would almost always and instantly decay to a W+ boson and a bottom quark, while the b' would almost always and instantly decay to a W- boson and a top quark (with the antiparticles, of course, experiencing the mirror-image reactions).

Thus, in a collider, a t' would look just like a top quark decay with additional very energetic W+ and W- boson decays (although not energetic enough to give rise to t' and anti-t' pairs), and a b' would look just like a top quark decay with an additional very energetic W- boson decay (although not energetic enough to give rise to t' and anti-t' or b' and anti-b' pairs). Neither the t' nor the b' would give us anything more interesting than these decays, although W bosons at these energies might be expected to produce fourth generation leptons as well.

If the t' to t quark mass ratio were similar to the t quark to c quark mass ratio, one would expect the t' quark to have a mass of about 20 TeV; the b quark to s quark ratio would imply a t' quark mass of about 7 TeV; the s quark to u quark mass ratio would imply a t' quark mass of about 3.5 TeV; and the tau to muon mass ratio would imply a t' mass of about 3 TeV. The ratio of c quark mass to u quark mass, or of muon mass to electron mass, would suggest an even greater t' mass.

Of the b/c, c/s, and s/d mass ratios, none are smaller than a factor of about 3, so one would expect a b' mass of not less than about 522 GeV and much heavier masses on the order of 1 TeV or more would be plausible given expectations for the t' mass.
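These extrapolations are simple ratio arithmetic. A sketch using approximate quark and lepton masses in GeV (light quark masses are quite uncertain, so the results differ somewhat from the round figures above depending on which estimates are used):

```python
# Ratio-based extrapolations of a hypothetical t' mass: scale the top
# mass by various generation-to-generation mass ratios. Approximate
# masses in GeV; light quark masses are especially uncertain.
m = {'u': 0.0023, 's': 0.095, 'c': 1.28, 'b': 4.18, 't': 173.0,
     'mu': 0.1057, 'tau': 1.777}

ratios = {'t/c': m['t'] / m['c'], 'b/s': m['b'] / m['s'],
          's/u': m['s'] / m['u'], 'tau/mu': m['tau'] / m['mu']}

for name, r in ratios.items():
    print(f"{name:6s} ratio = {r:6.1f} -> t' ~ {m['t'] * r / 1000:5.1f} TeV")
```

The answers land in the single-digit to low-double-digit TeV range, the same ballpark as the figures in the text.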

The lower experimental bound on a tau prime (i.e. fourth generation charged lepton) mass is about 100 GeV, which is about 55 times the tau mass, and down-type (d/s/b) quarks appear to tend to be within an order of magnitude of the mass of their same generation charged lepton, so a mass in the high hundreds of GeV would not be particularly unexpected.

These values are not updated for the latest 2011 LHC bounds, which are higher. The LHC puts a lower bound on the b' mass of 385 GeV, and puts the reasonably expected t' mass values, if there is a t' quark, higher still.

One possibility that could explain the thus far observed three-and-only-three generations of fundamental fermions rule in particle physics is that there are deep reasons prohibiting particles that decay faster than the W boson, a bound that the top quark already approaches and that any heavier quark would likely exceed.

Another possibility would be that there is some fundamental limit on the amount of energy that a W boson can hold. For example, the combination of the short lifetime of a W boson and its high mass translates into a maximum travel distance for the W boson (does a W boson or Z boson or gluon have a frequency in the way that a photon does?), which might be on the order of c (3*10^8 m/s) times the half life of a W boson (3*10^-25 s), or about 10^-16 meters, within a couple of orders of magnitude of the approximate effective range of the weak force (about 10^-18 meters).
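The two length scales in play here can be computed directly; a sketch, assuming a W mass of about 80.4 GeV and a lifetime of about 3*10^-25 seconds (the second line is the reduced Compton wavelength, the usual estimate of the weak force's range):

```python
# Two length scales for the W boson: the naive travel distance c * tau,
# and the reduced Compton wavelength hbar / (m_W * c), the conventional
# estimate of the weak force's effective range. Assumed inputs:
# m_W ~ 80.4 GeV, lifetime ~ 3e-25 s.
c = 3.0e8                    # speed of light, m/s
tau_w = 3.0e-25              # W boson lifetime, s
hbar_c = 197.327e6 * 1e-15   # hbar*c = 197.327 MeV*fm, in eV*m
m_w = 80.4e9                 # W mass, eV

print(f"c * tau       = {c * tau_w:.1e} m")
print(f"hbar/(m_W c)  = {hbar_c / m_w:.1e} m")
```

The naive travel distance comes out around 10^-16 m, while the Compton-wavelength estimate gives the familiar ~2*10^-18 m range.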

We know that a Z boson can give rise to a top/antitop pair with a combined mass of 348 GeV, which far exceeds the 90 GeV of the Z boson rest mass. But perhaps at some point there is a limit, and if that limit is less than the correct mass for a fourth generation fermion, if there were one, then that limit would prevent fourth generation fermions from arising.

Honestly, the greatest bound on fourth generation fermions, in my mind, is the fact that we have seen Z boson decays energetic enough to produce top/antitop pairs, but have not seen any evidence of a fourth generation neutrino in Z boson decays, which would seem to be energetically permitted for neutrinos of up to 173-174 GeV. All previous experience leads us to think that generations of fermions come in fours, and a 173 GeV or heavier neutrino would be so far outside the range of what we would expect, given the experimental bounds on the first three generations of neutrinos, that it would seem to rule out a fourth generation of fermions entirely, or at least to rule out a fourth generation with masses low enough to be experimentally detected any time in the next century or so. The conventional statement of the experimental limitation on fourth generation neutrino mass from precision electroweak measurements is 45 GeV, which is still 2*10^6 times the approximate bound on third generation neutrino mass, when no other one generation mass increase of the same kind of fermion is even a factor of 10^3. (Of course, this limitation wouldn't apply to a sterile neutrino that doesn't interact with the weak force because it is a right handed, non-antiparticle.)

If we knew the true laws of quantum mechanics better and knew that we had them right, we probably wouldn't care that much about finding the t' or b' or a fourth generation charged lepton experimentally. The stakes have more to do with learning the rules of the game by extending the available patterns than with finding those incredibly hard to create, anti-social, and extremely ephemeral particles themselves. But, since we have lots of question marks about how the Standard Model functions at high energy levels, we look anyway.