Tuesday, December 31, 2013

Y-DNA Haplogroup I Predominant in European Hunter-Gatherers

We have long known that mtDNA haplogroup U (especially U5) was predominant in European hunter-gatherers.  We now have a substantial set of ancient Y-DNA from early European hunter-gatherers that establishes that Y-DNA haplogroup I (particularly I2) was predominant in those same European hunter-gatherers.

The data is, of course, incomplete.  It isn't entirely clear how true this is of European hunter-gatherers in Southwest Europe, for example.  But, the fact that all of the half a dozen Mesolithic European hunter-gatherers from the two sites for which data is available have Y-DNA haplogroup I is quite powerful evidence (none of the samples exclude I2, but not all are that specific).

Today, Y-DNA haplogroup I accounts for a little less than 20% of European men, with much higher percentages among Scandinavians (among the last populations of Europe to adopt farming) and in the Balkans. I1, which is not present in the ancient samples, is currently more common than I2.  I1 appears to have expanded much later, perhaps around the time of the Nordic Bronze Age, or maybe even in a later phase of it.  Thus, less than 10% of European men are in the I2 Y-DNA clade.  Its mtDNA counterpart, mtDNA haplogroup U, is found in about 11% of Europeans (the non-U5 clades are mostly rare or found outside of Europe).  These estimates aren't perfect, since some clades of each haplogroup may have been late arrivals, and there may be other haplogroups in the ancient DNA of that period that have not yet been discovered.

But, it is a fair guess that the Paleolithic European hunter-gatherer population in which both Y-DNA I and mtDNA U were predominant is the source of something on the order of one-ninth or one-tenth of the modern European gene pool.  Likewise, it is fair to say that the modern Uralic and Nordic populations of far Northern Europe, which have high concentrations of both haplogroups, are probably the closest within Europe to these ancestral populations.  The corollary of this observation is that the contribution of this Paleolithic hunter-gatherer population in other parts of Europe may be even shallower.

Y-DNA haplogroup R1 is the dominant Y-DNA haplogroup in almost all of the rest of Europe (with more R1b in the West and more R1a in the East, generally speaking); similarly mtDNA haplogroup H is dominant in the modern European gene pool.

The Eurogenes blog recaps a lot of the key figures relating to the overall autosomal data from the recent paper.  Sardinians, unsurprisingly, are one of the closest matches to Early European Farmers.  Early European Farmers are estimated by the authors to be about 44 +/- 10% "Basal Eurasian" farmer stock, with the remainder being assimilated indigenous European foragers.  But, Sardinians' exceptionally low level of Ancient North Eurasian (ANE) admixture could actually reflect a higher percentage of this basal component, as almost all other Europeans fall in a much narrower ANE admixture band that is very low across the board.  Thus, while a "three component" mixture (Western Hunter Gatherer, Ancient North Eurasian, and Early European Farmer, with a possible additional Scandinavian Hunter Gatherer component) is proposed, for most Europeans a simple two component (Western Hunter Gatherer-Early European Farmer) admixture does a good job of explaining the European autosomal data.

Archaeology and modern population genetics tend to support a Mesolithic expansion of a maritime hunter-gatherer population associated with mtDNA haplogroup V, possibly from a Franco-Cantabrian refugium, whose impact stretched in coastal areas from Northwest Africa to Arctic Scandinavia along the Atlantic and Baltic coasts.  This was a minor contributing population alongside the more dominant mtDNA haplogroup U population, which was present in Europe before the Last Glacial Maximum ca. 20,000 years ago, retreated to European refugia as Northern Europe was completely depopulated at that time, and then repopulated Europe as soon as the glaciers retreated.

We don't know which Y-DNA haplogroups accompanied the mtDNA haplogroup V peoples.  The ancient DNA from the Mal'ta boy and the locations of modern Y-DNA R1a, R1b and R2 bearers all support the notion that R1b had origins far to the east of its current distribution.  But when it arrived is unclear, given the unreliability of mutation rate dating for Y-DNA.

Eurogenes also notes that the Ancient North Eurasian component appears to be a relatively recent arrival, admixing just 5,500 to 4,000 years ago - a timing strongly suggestive of the expansion of the Uralic languages (such as Finnish, Estonian and Hungarian).  The following observation is particularly notable:
Nevertheless, if not for the ANE, we'd simply have a two-way mixture model between indigenous European foragers and migrant Near Eastern farmers, at least for most Europeans anyway. Moreover, the seemingly late arrival of ANE in much of Europe is fascinating, because it's yet another smoking gun for a major genetic upheaval across the continent during the Copper Age (aka. Late Neolithic/Early Bronze Age). Interestingly, archeological data suggest that this was also the period which saw the introduction of new social organization and perhaps Indo-European languages across most of the continent. None of this was lost on the authors of the paper, but it appears they'd rather be cautious pending more ancient genomic data, because they chose not to explicitly mention the Indo-Europeans.
He then quotes the study itself (from which I've omitted end note citations for easier reading):
The absence of Y-haplogroup R1b in our two sample locations is striking given that it is, at present, the major west European lineage. Importantly, however, it has not yet been found in ancient European contexts prior to a Bell Beaker burial from Germany (2,800-2,000 BC), while the related R1a lineage has a first known occurrence in a Corded Ware burial also from Germany (2,600 BC). This casts doubt on early suggestions associating these haplogroups with Paleolithic Europeans, and is more consistent with their Neolithic entry into Europe at least in the case of R1b. More research is needed to document the time and place of their earliest occurrence in Europe. Interestingly, the Mal’ta boy belonged to haplogroup R* and we tentatively suggest that some haplogroup R bearers may be responsible for the wider dissemination of Ancient North Eurasian ancestry into Europe, as their haplogroup Q relatives may have plausibly done into the Americas.
Regular readers of this blog know that my working hypothesis, although not an established fact, is that R1b does expand in Europe via the Bell Beaker culture and its successors, while Indo-Europeans exemplified by the Corded Ware culture were probably the source of R1a in Europe.  Further, my working hypothesis is that the Bell Beaker culture was Vasconic (i.e. Basque-esque) rather than Indo-European linguistically, although it and its successor cultures were a technological and military peer of the Corded Ware Indo-Europeans for about a thousand years or so (basically until Bronze Age collapse ca. 1200 BCE).  Eurogenes has a post largely in accord with this hypothesis.  See also here and here and here and here and here.

My working hypothesis is that the Bell Beaker culture had origins geographically near those of some early Indo-European culture (somewhere east of France) and absorbed many of the key technological innovations associated with the Indo-Europeans, but retained its own language and culture and then rapidly expanded, in population genetic terms, in parallel to the Indo-Europeans.  Thus, when the Indo-European Urnfield and Celtic peoples arrived in Western Europe on the eve of and after Bronze Age collapse, the already technologically advanced Western Europeans adopted the language and much of the culture of the conquering Indo-Europeans, but the newcomers did not have a huge demographic impact when they arrived.

This working hypothesis is not without its flaws.  There is not strong archaeological evidence for substantial population replacement in the Bell Beaker archaeological record.  But, R1b is present at very high levels in the Basque, who all agree were a pre-Indo-European population of Western Europe, and it is absent from European hunter-gatherer ancient DNA to date and from all available early Neolithic ancient Y-DNA from Europe.  There aren't a lot of other good candidates to bring about this change in the right time and place.

Of course, it looks likely that a flood of new early and middle Neolithic ancient DNA will be analyzed in the next several years, and this may resolve the question of the source of R1a and R1b in Europe more definitively.  If my working hypothesis is correct, there will be little or no R1b in Southwest Europe prior to the arrival of the Bell Beaker culture there ca. 3000 BCE, with earlier Neolithic ancient Y-DNA in Southwestern Europe being mostly Y-DNA haplogroup G2 and other early Neolithic Y-DNA clades, and R1b will also be absent from Mesolithic (aka Epipaleolithic) ancient Y-DNA in Southwest Europe.


Sunday, December 22, 2013

On the CKM Matrix

The Cabibbo-Kobayashi-Maskawa (CKM) matrix is a three by three unitary matrix, each element of which has a magnitude equal to the square root of the probability that the up-type quark associated with its row will be converted into the down-type quark associated with its column when it emits a W+ boson.
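
For orientation, here is the layout being described, with approximate magnitudes of the elements from the 2013 Particle Data Group fit (rounded; rows are the up-type quarks u, c, t and columns are the down-type quarks d, s, b, so, for example, |Vus|^2 is the probability that an up quark converts to a strange quark):

\[
|V_{\mathrm{CKM}}| =
\begin{pmatrix}
|V_{ud}| & |V_{us}| & |V_{ub}| \\
|V_{cd}| & |V_{cs}| & |V_{cb}| \\
|V_{td}| & |V_{ts}| & |V_{tb}|
\end{pmatrix}
\approx
\begin{pmatrix}
0.974 & 0.225 & 0.004 \\
0.225 & 0.973 & 0.041 \\
0.009 & 0.040 & 0.999
\end{pmatrix}
\]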

Seven of the nine elements can be explained by an utterly transparent theory.

For each of the up, charm and top quarks, the ratio of the probability that it will transition into a strange quark to the probability that it will transition into a down quark is, within the margin of experimental error, equal to the ratio of the mass of the strange quark to the mass of the down quark.

Likewise, the ratio of the probability that an up quark will transition into a bottom quark to the probability that an up quark will transition into a strange quark is within the margin of experimental error of the ratio of the mass of the bottom quark to the mass of the strange quark.

Alas, this very simple relationship between the quark masses and the CKM transition probabilities breaks down in two instances. The ratio of the probability that a top quark will transition into a bottom quark to the probability that it will transition into a strange quark is about fourteen times greater than the simple formula predicts (the observed ratio is about 611:1, when it should be about 44:1).

Likewise, the ratio of the probability that a charm quark will transition into a bottom quark to the probability that it will transition into a strange quark is about 588:1, roughly 13 times as great as the 44:1 ratio that the simple formula would predict.

The top to bottom transition probability, and the charm to bottom transition probability, are both about what one would expect if the bottom quark had a mass of 51-63 GeV (notably, more than 1/2 of the Z boson mass but less than 1/2 of the Higgs boson mass), rather than the canonical 4.2 GeV.
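
A back-of-the-envelope sketch of that arithmetic (in Python, assuming the simple mass-ratio rule described above and a strange quark mass of roughly 95 MeV; the exact inputs are not pinned down here):

m_s = 0.095        # strange quark mass in GeV (approximate PDG value; an assumption)
m_b = 4.18         # canonical bottom quark mass in GeV
print(m_b / m_s)   # ~44, the expected b:s transition probability ratio under the simple rule
print(611 * m_s)   # ~58 GeV: the "effective" bottom mass implied by the top quark ratio
print(588 * m_s)   # ~56 GeV: the "effective" bottom mass implied by the charm quark ratio

Both implied values fall inside the 51-63 GeV window mentioned above.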

These discrepancies are too great to address with various combinations of the running of quark masses with energy scales.

One almost imagines that there are really two otherwise identical bottom quarks: one with a mass of 4.2 GeV that couples to up quarks but not to charm and top quarks, and another with a mass of about 55 GeV that couples to charm and top quarks but not to up quarks.  This is, of course, ridiculous, and one would suppose that it would have been discovered long ago if it were true.

While this transparent theory of the CKM matrix elements clearly doesn't work unmodified, it is worth noting, before looking for an elusive fix to this flawed theory, how well it fits seven of the nine elements, in a manner that suggests that the CKM matrix elements flow very directly from the quark masses alone.

Friday, December 20, 2013

The Latest CMS Update On The Higgs Boson (And Other New Experimental Data)

The CMS experiment at the Large Hadron Collider has confirmed, in yet another set of measurements released at year end, that the particle discovered is consistent in all respects measured with the properties of a 125.6 GeV mass Standard Model Higgs boson.

The properties of a Higgs boson candidate are measured in the H to ZZ to 4l decay channel, with l=e,mu, using data from pp collisions corresponding to an integrated luminosity of 5.1 inverse femtobarns at center-of-mass energy of sqrt(s)=7 TeV and 19.7 inverse femtobarns at sqrt(s)=8 TeV, recorded with the CMS detector at the LHC. The new boson is observed as a narrow resonance with a local significance of 6.8 standard deviations, a measured mass of 125.6+/-0.4 (stat.) +/-0.2 (syst.) GeV, and a total width less than 3.4 GeV at a 95% confidence level. The production cross section of the new boson times the branching fraction to four leptons is measured to be 0.93 +0.26/-0.23 (stat.) +0.13/-0.09 (syst.) times that predicted by the standard model. Its spin-parity properties are found to be consistent with the expectations for the standard model Higgs boson. The hypotheses of a pseudoscalar and all tested spin-one boson hypotheses are excluded at a 99% confidence level or higher. All tested spin-two boson hypotheses are excluded at a 95% confidence level or higher.

"Measurement of the properties of a Higgs boson in the four-lepton final state", CMS Collaboration (Submitted on 18 Dec 2013).

Collapsing the multiple error estimates into a single error estimate by adding the error sources in quadrature (i.e., taking the square root of the sum of their squares), this means that:

* The mass of the Higgs boson is 125.6 +/- 0.45 GeV.

This is essentially unchanged from previous estimates, and is based on the aggregation of estimates from four-electron decays (126.2 +1.5/-1.8 GeV, N=4), two-electron and two-muon decays (126.3 +0.9/-0.7 GeV, N=13), and four-muon decays (125.1 +0.6/-0.9 GeV, N=8).
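
As a check of the quadrature combination mentioned above (a one-line sketch in Python):

import math
stat, syst = 0.4, 0.2                 # GeV, from the CMS four-lepton measurement quoted above
print(math.sqrt(stat**2 + syst**2))   # ~0.447 GeV, the combined +/- 0.45 GeV figure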

Based upon the conjecture that two times the Higgs boson mass equals the sum of two times the W boson mass plus the Z boson mass, and using global fit values of the W and Z boson masses, my personal expectation is that the true Higgs boson mass is 125.97 GeV, which is at the high end of the experimentally permitted range, but is certainly not ruled out at this point.
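
A minimal sketch of that arithmetic (assuming global fit values of roughly 80.376 GeV for the W boson and 91.1876 GeV for the Z boson; slightly different fit inputs shift the result by a hundredth of a GeV or so):

m_W = 80.376                # GeV, approximate global fit value (assumed)
m_Z = 91.1876               # GeV
m_H = (2 * m_W + m_Z) / 2   # from the conjecture 2 * m_H = 2 * m_W + m_Z
print(m_H)                  # ~125.97 GeV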

While an uncertainty of 0.3% is impressive for a particle discovered less than two years ago, the uncertainty in the Higgs boson mass measurement and in the top quark mass measurement (which is also known to better than 1% precision) are the dominant sources of uncertainty in efforts to understand global properties of the mass matrix of the Standard Model fundamental particles and couplings, such as: (1) the question of whether the Higgs boson Yukawa vector is unitary, (2) the question of the extent to which fermion and boson masses in the Standard Model are, or are not quite, in balance, (3) the possibility of hidden relationships between the Higgs boson mass and other Standard Model constants, and (4) the detection of indirect evidence of particles, if any, that are missing from the Standard Model set.

CMS experimenters realistically hope to bring the uncertainty in this measurement down to 0.1 GeV or so by the time that the LHC experiment is concluded, a narrowing of the 1 sigma mass range of the Higgs boson by 77%.

* The width of the Higgs boson is less than 3.4 GeV at the 95% confidence level.

This is also no surprise. The Standard Model expectation is about 4 MeV, which is about 0.13% of the directly measured two sigma limit, and the measured result is perfectly consistent with the Standard Model expectation.

This large gap between the limit and the expectation is entirely due to the inability of the experimental apparatus at the LHC to resolve a direct measurement of the Higgs boson width more precisely.

The indirect and model dependent way to measure the Higgs boson total width is to estimate the extent to which particular measured decays in decay paths expected in the Standard Model deviate from the Standard Model expectation, and to use some form of weighted average of the measurable decay paths to estimate the percentage by which the total decay width of the Higgs boson deviates from the Standard Model expectation. Using this approach, the experimental measurements match the theoretically predicted values to within 10%-20% or so, implying a total decay width that probably deviates by less than 1 MeV from the theoretically predicted value.
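
A toy version of the last step of that reasoning (a sketch only; the real analysis is a weighted fit across channels, and the roughly 4 MeV Standard Model width for a ~125.6 GeV Higgs boson is itself a theoretical input):

gamma_sm = 4.07                   # MeV, approximate Standard Model total width at ~125.6 GeV
for deviation in (0.10, 0.20):    # the 10%-20% agreement band described above
    print(deviation * gamma_sm)   # ~0.4 to ~0.8 MeV, i.e. an implied deviation of under 1 MeV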

By way of comparison, the width of the W boson is about 2 GeV, the width of the Z boson is about 2.5 GeV, and the width of the top quark is a few percent less than 2 GeV. The expected width of the Higgs boson implies that it has a mean lifetime about 666 times as long as that of the W boson, although this is still shorter than, for example, the tau lepton's mean lifetime, which is roughly 10^8 to 10^9 times as long as the expected Higgs boson lifetime.

Generally, heavier particles have shorter lifetimes than lighter particles, but the Higgs boson's expected mean lifetime is much longer than those of the lighter W and Z bosons and is also much longer than that of the only modestly heavier top quark.

* The measured four lepton Higgs boson decay branching fraction, relative to a Standard Model expectation value of 1.0, is 0.93 +0.29/-0.24.

This is within about a quarter of a standard deviation of the expected value.  The sample sizes used to develop these estimates are small enough that there is a great deal of statistical uncertainty in the measurements. The analysis centered on just 25 four lepton events attributable to Higgs boson decays (based upon the inferred mass of the source particle) that were culled from just 470 four lepton observations (over a much larger inferred mass range), with both the 25 and 470 event sets further broken down into three subcategories of events.
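
A quick way to see how much of that uncertainty is simply counting statistics is a naive Poisson estimate (a sketch assuming the 25 signal events are the whole story, ignoring background subtraction and systematics):

import math
n_signal = 25
rel_stat = 1.0 / math.sqrt(n_signal)   # relative statistical uncertainty from counting alone
print(rel_stat)                        # 0.20, i.e. about 20%
print(0.93 * rel_stat)                 # ~0.19, comparable in size to the quoted +0.26/-0.23 statistical errors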

In the long run, the hardest part of the process of confirming that the experimentally observed particle matches the Standard Model Higgs boson is the task of determining if any of its branching fractions differ significantly from the theoretically expected values.  For each of the half dozen or so most important Higgs boson branching fractions measured so far, there is no statistically significant difference from the theoretically expected value.

The branching fractions of the Higgs boson are highly sensitive to the existence of beyond the Standard Model massive fundamental particles.  So, the closer the Higgs boson branching fractions are to their expected Standard Model values, the less likely it is that there are undiscovered massive fundamental particles of an electroweak scale mass.  Experimental uncertainty in mass measurements makes it hard to rule out very light particles on this basis, and there isn't enough mass-energy in even a very energetic Higgs boson for its decays to produce particles much, much heavier than a Higgs boson (e.g. at the 10 TeV mass scale), or even for the existence of those particles to have very much impact on observed Higgs boson branching fractions.

* The confidence that the Higgs boson is a spin-0 scalar boson (which is the Standard Model expectation) rather than a pseudo-scalar boson is at least three sigma, as is the confidence that it is not a spin-1 boson. The confidence that the Higgs boson is not a spin-2 boson is at least two sigma.

Realistically, the important distinction is that it is not pseudo-scalar, which is the most subtle difference from the Standard Model expectation.  This is ruled out at the 99.9% level.

One very plausible possibility in theories with multiple Higgs bosons (including all supersymmetry theories) is that there are two Higgs bosons of identical or almost identical mass, one of which is scalar (typically called H or h) and one of which is pseudo-scalar (typically called A), and that, combined, the H and A bosons (or h and A bosons) act like a Standard Model Higgs boson on average. A three sigma or better conclusion that the particle observed is scalar rather than pseudo-scalar largely rules out this scenario.

The spin-1 models are actually ruled out at the 99.97% level.  The exclusions aren't quite so powerful for the spin-2 models, but there is very little theoretical motivation for the Higgs boson to be spin-2 and they are still strongly disfavored relative to the spin-0 model.

Further, it is important to observe that the CMS spin and parity determinations are confirmed by the ATLAS experiment's published results:
The pseudoscalar hypothesis is excluded by CMS and ATLAS experiments at a 95% CL or higher. ATLAS has also excluded at 99% CL the hypotheses of vector, pseudovector, and graviton-like spin-two bosons, under certain assumptions on their production mechanisms.
This confirmation makes the CMS conclusions more powerful.

A non-Standard Model scalar Higgs boson model with the same spin and parity as the Standard Model Higgs boson, but that does not participate in electroweak symmetry breaking and has higher dimensional operator terms than the Standard Model Higgs boson, is the least powerfully excluded of the models at CMS (unsurprisingly since it differs so subtly from the Standard Model Higgs boson), and this model has also apparently not yet been tested at ATLAS. This is excluded at a 93% confidence level.

Keep in mind that these strong exclusions have been possible despite the fact that so far CMS is using only 25 data points to reach these conclusions.  As the sample sizes increase, the extent to which non-Standard Model spin and parity models can be excluded is likely to increase a great deal until the samples are about 20-40 times as large as they are now, at which point the marginal benefits of larger sample sizes start to taper off.

Other Results Today:

* Higgs data and SUSY Fits

Another new paper by John Ellis evaluates the impact of new Higgs boson data (although probably not today's results) on SUSY parameter space using a broader set of experimental input data, but a less generic set of models, than the recent LHC specific analysis by Matt Strassler, et al. that was conducted in a fairly model independent manner.

As Ellis explains, other than the discovery of a Standard Model-like Higgs boson, the two LHC experiments, ATLAS and CMS, "have found no trace of any other new physics, in particular no sign of supersymmetry."  Not every supersymmetry model has been excluded by these negative results.  But, the failure of a theory that has been around since the early 1970s to have even a single clear experimental confirmation in the intervening four decades isn't very impressive.

Some of the notable charts in his paper show:

(1) the very small non-exclusion range for SUSY theories with a lightest supersymmetric particle lighter than a top quark and a stop (SUSY top quark partner) of less than 800 GeV,

(2) the strong exclusion of an LSP of less than 400 GeV together with a gluino of less than 1200 GeV, and

(3) best fits of two simplified SUSY models to a wide range of collider and astronomy data involve characteristic masses for supersymmetric fermions and bosons in the single digit TeV range (as the heavy mass scale best fit), or alternately with fermion masses on the order of 1 TeV and boson masses on the order of half that amount (as the light mass scale best fit).

The best fit of the parameter space of mSUGRA is very tightly confined, with a characteristic fermion mass around 1,400 +/- 200 GeV, a characteristic boson mass of around 1,000 +/- 100 GeV, and tan beta of about 42 (implying a heavy scalar Higgs boson of about 5,250 GeV).  Dark matter considerations nudge the parameters to favor a point around a 1,600 GeV characteristic fermion mass (e.g. applicable to gluinos) and a 900 GeV characteristic boson mass (e.g. applicable to stops).

Ellis finds best fits for SUSY dark matter at the high end of the range from 10 GeV to 100 GeV, with exceedingly low cross-sections of interaction with other matter on the order of 10^-45 to 10^-48 cm^2 (well below that of neutrinos, for example).  This is a poor fit, however, to the astronomy data that discriminate between cold dark matter and warm dark matter, which favor dark matter particles roughly a million times lighter than those best fit values.  Neither the CMSSM nor the NUHM1 models he examines have any meaningful capacity to accommodate dark matter candidates with masses consistent with warm dark matter models.

It is increasingly hard to stomach theories that put new, heavy SUSY particles just around the corner from discovery at the electroweak scale without giving rise to even the faintest real experimental indicator of their existence at energy scales that we can measure.  SUSY was invented to tame electroweak symmetry breaking, yet seems to have virtually no phenomenological manifestations at that energy scale.

* Why 125.6 GeV?

Is it significant that the Higgs boson mass is very nearly the one that maximizes the rate at which a Higgs boson decays to photons?

It is remarkable that the measured Higgs boson mass is so close to the value which maximizes the Higgs decay rate to photons as predicted by the Standard Model. In this letter we explore the consequences to assume that an ∼126 GeV Higgs boson mass is not accidental, but fixed by some fundamental principle that enforces it to maximize its decay rate into photons. We provide evidence that only a very narrow slice of the parameters space of the Standard Model, which contains their measured values, could lead to a maximal Higgs boson with that mass. If the principle actually holds, several Standard Model features get fixed, as the number of fermion families, quark colors, and the CP nature of the new boson, for example. We also ilustrate how such principle can place strong bounds on new physics scenarios as a Higgs dark portal model, for example.

"Is There a Hidden Principle in the Higgs Boson Decay to Photons?", Alexandre Alves, E. Ramirez Barreto, A. G. Dias (Submitted on 18 Dec 2013).

The paper notes that the combined CMS-ATLAS Higgs boson mass is 125.66+/-0.34 GeV. The photon decay maximizing value is estimated to be 126.16 +/- 0.35 GeV. It notes that, consistent with that principle, any "scalar dark matter" would have to have a mass of 55-63 GeV and a coupling to the Higgs boson of less than 0.01. The paper also observes that the peak gluon-gluon and peak photon-Z boson decay rates likewise occur quite close to the measured Higgs boson mass.

This analysis has a similar flavor to the observation that the Higgs boson mass sits squarely in the metastable region between vacuum instability and a fully stable vacuum, at a point where the expected lifetime of the universe's vacuum is on the order of the age of the universe but is not infinite, which was the basis of one of the most successful advance predictions of the Higgs boson mass. It also prompts a further question: why should a Higgs boson mass that maximizes photon production also imply a metastable vacuum?

* FCNC's in Top Quarks.

The proportion of top quark decays showing a flavor changing neutral current is expected in the Standard Model to be 10^-12 to 10^-17.  In some BSM models, it is expected to be as much as 10^-3.  The latest experiments establish that FCNC branching fractions in top quark decays are less than 10^-2, and ultimately, if none are found, the LHC experiments will be able to rule out FCNC branching fractions down to the 10^-4 to 10^-5 level, excluding some BSM models on that basis.  But, this is not a measurement likely to rule out whole classes of models any time soon.  Suppressing FCNC's in top quarks by a factor of 100 from a naively expected value in a BSM theory simply isn't all that hard to do.

Thus, the current results aren't very interesting because nobody has theories that predict FCNC's in top quark decays at rates that could be detected so far at the LHC.  But, we are very close to the threshold where that will start to happen.

* Testing QCD Predictions.

Almost every collision at the LHC tests some aspect of QCD (i.e. the Standard Model theory of the Strong Force which is known as quantum chromodynamics). A recent paper reviews the results.

The four best available independent estimates of the strong force coupling constant to date, normalized to the energy scale of the Z boson mass (one from each of the LHC experiments, one from Fermilab, and one from another experiment) are consistent with the world average value of 0.1184(7). But, uncertainty in theoretical QCD modeling is the dominant source of uncertainty in each of the estimates.

Equally important, energy scales from hundreds of MeVs to 1 TeV have been studied and there has been no deviation from the expected running behavior of the strong force coupling constant with energy scale from the Standard Model expectation for this coupling constant's beta function.

This conclusion, if it continues to hold at higher energy scales, provides a clear way to discriminate between the Standard Model and SUSY alternatives, because supersymmetry theories, generically, have a running of the strong force coupling constant that is very different from that of the Standard Model, which is what makes gauge coupling unification (which does not naively appear in the Standard Model) possible in them. In the case of the running of the strong force coupling constant, the differences are quite stark and should be discernible even at energy scales of a few TeV, which are potentially within the reach of the LHC.

In principle, there are material differences between SM and SUSY predictions for the strong force coupling constant even at energy scales of just 1-2 TeV that have already been probed, but getting sufficient precision in this subset of QCD measurements and theoretical calculations at the extreme fringe of the existing data set is challenging and has thus far not been achieved.

More generally, while the experimental results across the board in countless experiments at colliders to date and the LHC in particular match the theoretical predictions of QCD, this is less impressive a feat than it is in the precision electroweak measurements.  This is because the theoretical predictions of QCD have so much uncertainty, because doing the math involved in making a QCD prediction is so hard.

For example, even though we have huge data sets measuring the properties of hadrons containing up, down, strange and charm quarks, with the properties of the composite mesons and baryons often measured with extreme precision, turning those inputs into fundamental QCD theoretical inputs is extremely challenging, because the calculations that have to be used to reverse engineer first and second generation quark properties from the hadrons that they form are very difficult to pin down correctly.  The proton and neutron masses, for example, are known to many significant digits of accuracy, but the masses of the up and down quarks that combine to form protons and neutrons are known only to roughly 25% and 10% precision, respectively, and theoretical estimates of nucleon masses from QCD first principles have only about 1% precision.  The theoretical values match the experimental ones, but the theoretical predictions aren't very specific at all.

This was the situation when Richard Feynman wrote his book "QED" in 1988, and while progress has been made in the intervening 25 years, it remains the basic problem facing QCD physicists today.

This is why theoretical developments in how to do QCD calculations more efficiently, like the Amplituhedron or new Monte Carlo methods for doing QCD calculations, hold so much promise for the future of high energy physics.  The former works by discovering hidden symmetries in the calculations that make it possible to ignore redundant calculations, while the latter uses a statistical sampling of the vast number of sub-calculations that go into the final result to make a reasonable estimate of it.  Should one or more of these approaches work, the experimental power of almost every collider experiment ever conducted will be greatly enhanced, because the amount of uncertainty in the background estimates will be greatly reduced, making the signals of the phenomena that experimenters are trying to observe much more clear.

Thursday, December 19, 2013

Exotic Higgs Decays Bounded

Matt Strassler and others have published a new paper on various generic forms of beyond the Standard Model Higgs decays, mostly with an eye towards analyzing existing and future data to see whether such decays are present, but also deriving some limits on these kinds of decays from existing data.  The paper is summarized here with the teaser that:

[O]ne of the strongest limits we obtained, as an estimate based on reinterpreting published ATLAS and CMS data, is that no more than a few × 10-4 of Higgs particles decay to a pair of neutral spin-one particles with mass in the 20 – 62 GeV/c2 range… and the experimentalists themselves, by re-analyzing their data, could surely do better than we did!

I've argued for similar limitations, based upon the apparent unitary nature of the Yukawa couplings of the Higgs boson and other data, that would limit all undiscovered fundamental particles with non-zero rest mass (i.e. that couple to the Higgs boson), of any spin, to less than 8 GeV in combined mass (more accurately, a combined sum of squared masses of not more than 64 GeV^2, which isn't precisely the same thing), each of which must be "sterile" with respect to all three of the Standard Model forces.
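
To illustrate why those two phrasings aren't precisely the same thing, here is a toy comparison (a sketch; the 8 GeV and 64 GeV^2 figures are the ones from my own argument above, not from the Strassler et al. paper):

# one new particle saturating the bound versus two lighter ones that also satisfy it
for masses in ([8.0], [5.6, 5.6]):                 # GeV
    print(sum(m**2 for m in masses), sum(masses))  # 64.0 GeV^2 / 8.0 GeV vs ~62.7 GeV^2 / 11.2 GeV

Two 5.6 GeV particles respect the 64 GeV^2 sum-of-squares bound even though their combined mass is 11.2 GeV.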

Proton and Neutron and Electron Masses In Various Formats

Relative Mass

Neutron = 1
Proton = 0.99862349 (99.862349%)
N-P= 0.00137651 (0.137651%)
Electron = 0.00054386734

Neutron = 1838.6837
Proton = 1836.1527
N-P = 2.53096
Electron = 1

Kilograms

Neutron = 1.6749286*10^-27 kg
Proton = 1.6726231*10^-27 kg
N-P = 2.3052*10^-30 kg
Electron = 9.1093897*10^-31 kg

MeV

Neutron = 939.56563 MeV
Proton = 938.27231 MeV
N-P= 1.29332 MeV
Electron = 0.51099906 MeV
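
The formats above are internally consistent, as a couple of quick checks show (a sketch starting from the MeV figures):

m_n, m_p, m_e = 939.56563, 938.27231, 0.51099906   # MeV
print(m_n - m_p)   # 1.29332 MeV, the neutron-proton difference
print(m_p / m_n)   # 0.99862349, the proton's relative mass with the neutron set to 1
print(m_n / m_e)   # ~1838.68, the neutron's mass with the electron set to 1
print(m_e / m_n)   # ~0.000543867, the electron's relative mass with the neutron set to 1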

The relevant portion of the PDG entry states:

u-QUARK MASS = 2.3 +0.7/-0.5 MeV
d-QUARK MASS = 4.8 +0.5/-0.3 MeV
(m_u + m_d)/2 = 3.5 +0.7/-0.2 MeV
m_u/m_d MASS RATIO = 0.38 to 0.58

As a first order approximation, hadron masses are equal to the sum of the masses of the constituent quarks plus an amount that is roughly the same for any hadron with the same number of quarks and the same spin.  Thus, this binding energy contribution is roughly the same for all spin-3/2 baryons.  This approximation understates hadron mass by an amount that is a function of the particular quarks in the hadron, which increases, but less than linearly, with the sum of the constituent quark masses (and possibly also varies slightly with the combination of quark charges in the hadron).

I have seen a paper (which I will try to find a reference for) which suggests that QCD models predict a proton and neutron mass of 870 MeV in a model with massless up quarks and down quarks - the portion of the mass purely attributable to gluon interactions via the color force between the three light quarks in each nucleon.  The mass of the component quarks in the proton is about 9.3 MeV, and the mass of the component quarks in the neutron is about 11.8 MeV, a difference of about 2.5 MeV (using PDG values).  Thus, the presence of massive quarks in the proton has a non-linear contribution to proton mass of about 59 MeV and a non-linear contribution to neutron mass of about 58 MeV.
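
The arithmetic behind those last figures, as a sketch (assuming the PDG central values of 2.3 MeV for the up quark and 4.8 MeV for the down quark, and the 870 MeV massless-light-quark figure from the paper referred to above; slightly different quark mass inputs reproduce the 9.3 MeV and 11.8 MeV figures quoted):

m_u, m_d = 2.3, 4.8             # MeV, PDG central values (assumed)
gluon_part = 870.0              # MeV, the massless-quark QCD estimate
m_p, m_n = 938.272, 939.566     # MeV
quarks_p = 2 * m_u + m_d        # proton = uud, ~9.4 MeV of quark mass
quarks_n = m_u + 2 * m_d        # neutron = udd, ~11.9 MeV of quark mass
print(quarks_n - quarks_p)            # ~2.5 MeV
print(m_p - gluon_part - quarks_p)    # ~59 MeV non-linear contribution in the proton
print(m_n - gluon_part - quarks_n)    # ~58 MeV non-linear contribution in the neutron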

More LC & P Papers

Lopez Castro and Pestieau have published a number of papers on interesting numerical coincidences in physics in addition to the one paper that I previously blogged about. Footnote 6 of that paper is also notable (but was not part of the material that I previously discussed):

(1) Higgs mass from electric charge, W boson mass, and Z boson mass.
It is interesting to note first that before the discovery of the scalar boson, the mass value mH = (124.5 ± 0.8) GeV was obtained using electroweak data available at that time (see J. Erler, arXiv:1201.0695 [hep-ph]). Second, using the measured masses of gauge bosons and the empirical relation mZ = emH/(sin θW cos θW ), with cos θW = mW /mZ and e =√4πα the electron charge, we have obtained mH = 125.4 GeV.
The experimentally measured value of mH is currently 125.6 +/- 0.4 GeV.  These values are consistent at a two sigma level.
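
A minimal numerical check of the second relation (a sketch; I assume inputs of roughly 80.36 GeV for the W boson, 91.1876 GeV for the Z boson and a fine structure constant of about 1/137.036, which reproduce the quoted figure, though the footnote's exact inputs are not stated):

import math
m_W, m_Z = 80.36, 91.1876         # GeV (assumed inputs)
e = math.sqrt(4 * math.pi / 137.036)
cos_w = m_W / m_Z
sin_w = math.sqrt(1 - cos_w**2)
m_H = m_Z * sin_w * cos_w / e     # from m_Z = e * m_H / (sin(theta_W) * cos(theta_W))
print(m_H)                        # ~125.4 GeV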

As the comments below tend to show, much of this may be more numerology than something that indicates a deeper theoretical relationship - making these relations more useful as tricks to remember approximate values than as tools for penetrating fundamental physics.  But, they are still worth noting.

Here are some of them:

(2) Pure math values for electric charge and the weak mixing angle.
We propose the relations 1/e - e =3 and tan(2theta_W)=3/2, where e is the positron charge and theta_W is the weak angle. Present experimental data support these relations to a very high accuracy. We suggest that some duality relates the weak isospin and hypercharge gauge groups of the standard electroweak theory.
- "Arithmetic and the standard electroweak theory", G. Lopez Castro, J. Pestieau (Submitted on 10 Apr 1998).

This implies a theta W of about 28.155 degrees (the experimentally measured value is 28.196 degrees using current global fits for the W and Z boson masses linked below), a difference of about 0.041 degrees, with source inputs that are precise to about five significant digits.  The experimentally measured value of tan(2theta_W) is about 1.504665.

This implies an e of about 0.3027756377, which compares to an experimentally measured value of 0.30282212.  While this is close in absolute terms, since the experimental value is accurate to about 8 significant digits, this is still hundreds of standard deviations or more from the experimentally measured value.
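
Both pure-math relations are easy to unpack numerically (a sketch; the comparison values are the experimental figures quoted above):

import math
theta_w = math.degrees(math.atan(1.5)) / 2   # from tan(2*theta_W) = 3/2
print(theta_w)                               # ~28.155 degrees
e = (-3 + math.sqrt(13)) / 2                 # positive root of 1/e - e = 3, i.e. e^2 + 3e - 1 = 0
print(e)                                     # ~0.3027756377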

(3) Higgs vev and electric charge from W and Z boson masses.
In the electroweak standard model we observe two remarkable empirical mass relations, m_W + m_B = v/2 and m_W - m_B = e v/2 where m^2_Z = m^2_W + m^2_B, e is the positron electric charge and v, the strength of the Higgs condensate.
- "Remarkable Mass Relations in the Electroweak Model", Jean Pestieau (Submitted on 29 May 2001)

Note that, using current global fits for the W and Z boson masses, the best estimate for the theoretical B mass discussed above is 43.0855 GeV, implying by their relations that the Higgs vacuum expectation value v = 246.905 GeV and e = 0.301999.  This e differs from their 1/e - e = 3 formula value by about 0.3% and is further from the measured value than the original formula, which really isn't too impressive for a formula with five significant digit inputs.  The accepted value of the Higgs vev is 246.2279579 GeV.  These figures can be tweaked a fair bit, however, if one uses global electroweak best fit values for the source parameters.
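
The arithmetic behind those figures, as a sketch (assuming global fit values of roughly 80.367 GeV for the W boson and 91.1876 GeV for the Z boson, which reproduce the numbers quoted above; slightly different fit inputs move the results a little):

import math
m_W, m_Z = 80.367, 91.1876          # GeV (assumed global fit inputs)
m_B = math.sqrt(m_Z**2 - m_W**2)    # from m_Z^2 = m_W^2 + m_B^2
v = 2 * (m_W + m_B)                 # from m_W + m_B = v/2
e = (m_W - m_B) / (m_W + m_B)       # from m_W - m_B = e*v/2
print(m_B, v, e)                    # ~43.085 GeV, ~246.90 GeV, ~0.3020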

(4) Effective Electroweak Mixing Angle 
A precise empirical relation between the electromagnetic coupling alpha(m_Z) and sin^2(theta) --where theta is the effective electroweak mixing angle extracted from Z leptonic decays-- is made manifest: alpha(m_Z) = sin^3(theta)*cos(theta)/(4*pi).
- "A Remarkable Relation in the Gauge Sector of Electroweakdynamics", Jean Pestieau (Submitted on 17 Jan 2003) (abstract edited to provide more readable, but equivalent, notation).

This is confirmed by the experimental data at this point.
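
A rough check (a sketch assuming an effective leptonic mixing angle of sin^2(theta) = 0.23153; the measured running electromagnetic coupling at the Z mass is roughly 1/129):

import math
sin2 = 0.23153                  # effective leptonic sin^2(theta_W), approximate (assumed)
s, c = math.sqrt(sin2), math.sqrt(1 - sin2)
alpha_pred = s**3 * c / (4 * math.pi)
print(1 / alpha_pred)           # ~128.7, close to the measured value of roughly 129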

(5) Formulas For Top Quark Mass, Higgs Boson Mass, W Boson Mass and Z Boson Mass from Electric Charge and Higgs vev.
We propose some empirical formulae relating the masses of the heaviest particles in the standard model (the W,Z,H bosons and the t quark) to the charge of the positron e and the Higgs condensate v. The relations for the masses of gauge bosons m_W = (1+e)v/4 and m_Z=sqrt{(1+e^2)/2}*(v/2) are in excellent agreement with experimental values. By requiring the electroweak standard model to be free from quadratic divergencies at the one-loop level, we find: m_t=v/sqrt{2} and m_H=v/sqrt{2e}, or the very simple ratio (m_t/m_H)^2=e.
- "The unit of electric charge and the mass hierarchy of heavy particles", G. Lopez Castro, J. Pestieau (Submitted on 13 Sep 2006) (this paper predicted, incorrectly, that the Higgs boson mass would be 317.2 GeV, a mass that had already largely been ruled out at the time).