Monday, June 30, 2014

Cosmic Ray Flux Related To Atmospheric Temperature

Cosmic rays, a term that includes not just photons but also fast-moving particles headed towards Earth, would kill us all were it not for the protective shield provided by our atmosphere.  Analysis of cosmic rays hitting Earth is also key to efforts to detect dark matter, to measure the properties of neutrinos, and more.

Seasonal variation in these cosmic rays has often been proposed as a key to distinguishing dark matter-derived signals from other data.  But, it turns out that the strength of the shield that the atmosphere provides against cosmic rays is a function of its temperature.

This makes sense.  But, a result from the MINOS experiment quantifies this seasonal background factor based on differences in the intensity of muon fluxes (which are often produced when cosmic rays strike the atmosphere).  This is important because it strongly suggests that seasonal variation in cosmic ray detection that might otherwise be assumed to be a significant signal is really just due to atmospheric temperature variations.

Also, detection of sub-GeV energy dark matter signals requires that neutrinos and other cosmic ray backgrounds be well understood.  This kind of stepping stone makes that possible.

* * *

Precision measurements of neutrinos can also serve as a probe of Lorentz invariance violations (i.e. violations of special relativity) that are expected in some quantum gravity theories.  The experimental signatures of Lorentz invariance violation in neutrino experiments are discussed in another new paper.

Friday, June 27, 2014

Up Quark, Down Quark Mass Difference Known With Unprecedented Precision

The News

A June 18, 2014 paper estimates that differences in electromagnetic field strength between the proton and neutron account for 1.04 +/- 0.11 MeV of the proton-neutron mass difference using lattice QCD methods, an estimate three or four times more precise than previous state of the art estimates of this component.  The proton gets more of its mass from its electromagnetic field than the neutron does.  (An April 11, 2014 PowerPoint description of this paper is also available.)

The same paper estimates, using this calculation, that the difference between the up quark mass and the down quark mass is 2.33 +/- 0.11 MeV.  This is the most precise estimate of the up quark-down quark mass difference to date.  The previous state of the art estimate had a best fit value of 2.5 MeV (within the two sigma confidence interval of the new result), but with an uncertainty of about 0.7 MeV.

Some Details For Experts

The paper assumes that the difference in the strength of the strong force field between the proton and neutron is negligible relative to the differences in quark masses and electromagnetic field strengths.  This seems intuitively reasonable, because up quarks and down quarks have identical strong force couplings to each other and both have very small masses relative to the total mass of a nucleon, while having very different electromagnetic couplings.

Put another way, QCD contributions to the mass splitting between the proton and the neutron are almost entirely a function of the difference between the rest masses of the up and down quarks.  This is true even though the gluon field accounts for more than 98% of the mass of protons and neutrons, while the contribution of the rest masses of the quarks themselves is modest.

The dominant source of uncertainty in this estimate arises from a lack of clarity over whether the dipole form factor of the inelastic subtraction term in the contribution of the electromagnetic field strength to the mass of the proton and neutron scales as a cubic or quartic polynomial (i.e. whether an exponent in an obscure subpart of the overall equation is 3 or 4).  If it is cubic, then the actual value is higher by 0.09 MeV and the uncertainty drops from +/- 0.11 MeV to +/- 0.04 MeV.  If it is quartic, then the actual value is lower by 0.09 MeV and the uncertainty drops by the same amount.

This paper replicates the result of another paper released in pre-print form by an independent group of lattice QCD investigators just two days earlier, on June 16, 2014, using a moderately different approach, so the result can be considered quite reliable.  The earlier paper's result for the difference in mass due to electromagnetic field strength between the proton and neutron is 1.00 +/- 0.16 MeV, and its up quark-down quark mass difference is 2.52 +/- 0.29 MeV.  But, it predicts a total proton-neutron mass difference of 1.51 +/- 0.28 MeV (compared to the physical value of 1.2933322 MeV), which is not inconsistent with experiment but has a central value that is high by about 16%.

Incidentally, while both papers have produced highly precise measurements of the difference in nucleon mass attributable to the electromagnetic field energy in these baryons, I am not able to cite any source regarding the absolute value of this contribution to the masses of the proton and neutron, relative to the contribution from the gluon field in these baryons (and to the extent that the two are interrelated this could conceivably be an ill-defined quantity).

Both results exploit the Coleman-Glashow relation (which dates to at least 1982 or earlier), which holds that the sum of the mass differences in three different pairs of charged and neutral hadrons (one of which is the proton and neutron pair), with carefully chosen combinations of light scalar differences and other factors that cancel out due to symmetries, should equal zero.  This hypothesis is confirmed experimentally to the current limits of experimental precision (something that a 2000 paper called a "miracle", but which flows quite naturally from the quark model of QCD and symmetry considerations).

The Masses And Mass Differences Of Exclusively or Predominantly Light Quark Hadrons

Protons and Neutrons

A neutron has a rest mass of 939.565379(21) MeV.  A proton has a rest mass of 938.272046(21) MeV. The difference between the rest mass of the neutron and the rest mass of a proton is known to considerably greater precision; it is 1.2933322(4) MeV.

The rest mass of an electron is 0.510998928(11) MeV.

The difference between the rest mass of a neutron and the sum of the rest masses of a proton and an electron is 0.7823332(4) MeV.  In other words, this is the energy released in ordinary beta decay (carried off mostly by the kinetic energy of the electron and by the antineutrino).  About one part in 1201 of the rest mass of a neutron is converted into energy in ordinary beta decay.  The other 1200 out of 1201 parts of the rest mass of a neutron end up as the rest mass of the decay products.
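
For readers who want to check the arithmetic, here is a minimal Python sketch, using only the rest masses quoted above as inputs (the variable names are my own), that reproduces both the 0.782 MeV figure and the one part in 1201 figure:

# Back-of-the-envelope check of the beta decay figures quoted above (all in MeV).
m_neutron = 939.565379
m_proton = 938.272046
m_electron = 0.510998928
q_value = m_neutron - (m_proton + m_electron)   # energy released in n -> p + e- + antineutrino
print(q_value)                                  # ~0.782 MeV
print(m_neutron / q_value)                      # ~1201, i.e. about one part in 1201 of the neutron's rest mass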

A neutron has one up quark and two down quarks, producing a zero net electric charge.  A proton has two up quarks and one down quark, producing a +1 net electric charge.  Both have total angular momentum equal to 1/2.

Quantum chromodynamics (QCD) estimates of the proton and neutron masses from first principles are accurate to about +/- 1% (i.e. roughly +/- 10 MeV), although QCD estimates of the proton-neutron mass difference are now significantly more precise than that (while still remaining far less precise than experimental measurements of this quantity).

Other Hadrons Made Up Only Of Up and Down Quarks

Due to confinement, no light quark is ever observed at an energy scale less than that of a pion (about 140 MeV for charged pions and 135 MeV for neutral pions).  The pions are the lightest particles made up of quarks, and the only pseudoscalar mesons (total angular momentum of 0 and negative parity) made up only of light quarks.

The vector mesons, with total angular momentum 1, that are made up only of up and down quarks are the rho meson (ρ), which has a mass of 775.11 MeV when charged and 775.49 MeV when neutral, and the omega meson (ω), which has a mass of 782.65 MeV.

All four of the delta baryons, which are three quark particles made up of combinations of up and/or down quarks with total angular momentum 3/2 (unlike the 1/2 of the proton and neutron) have masses of about 1232 MeV.

The lightest scalar meson (with total angular momentum of 0 and positive parity), the f0(500), has a mass of about 500 MeV and does not have a consensus interpretation of its makeup in a simple quark composite model, although it is sometimes interpreted as primarily consisting of linear combinations of pions, which are in turn made up only of up and down quarks.

Interpreting the Proton-Neutron Mass Difference

Some of the difference in rest mass between a neutron and proton may be attributable to a difference in mass between an up quark and a down quark.  Some of the difference in rest mass between a neutron and a proton may be attributable to a difference in the amount of energy in the strong force field and electromagnetic field in the proton and neutron respectively.

To only slightly oversimplify, using the canonical values for the up and down quark masses (discussed further below), the quarks in a proton (about 9.4 MeV) are 2.5 MeV lighter than in the neutron (about 11.9 MeV), but the proton has combined contributions of the strong force fields and electromagnetic fields that are 1.2 MeV stronger (928.9 MeV in the proton v. 927.7 MeV in the neutron), a difference of about a tenth of a percent in field strength.
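
A minimal Python sketch of that back-of-the-envelope decomposition, using the canonical central values for the quark masses discussed below (the variable names and rounding are my own), looks like this:

# Rough decomposition of the proton and neutron masses into quark rest masses and
# field energy, using the canonical (PDG central) quark masses quoted in this post (MeV).
m_up, m_down = 2.3, 4.8
m_proton, m_neutron = 938.272, 939.565
quarks_in_proton = 2 * m_up + m_down        # ~9.4 MeV
quarks_in_neutron = m_up + 2 * m_down       # ~11.9 MeV
fields_in_proton = m_proton - quarks_in_proton      # ~928.9 MeV
fields_in_neutron = m_neutron - quarks_in_neutron   # ~927.7 MeV
print(quarks_in_neutron - quarks_in_proton)  # ~2.5 MeV more quark rest mass in the neutron
print(fields_in_proton - fields_in_neutron)  # ~1.2 MeV more field energy in the proton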

The question of how much of this difference is due to differences between up quark and down quark rest masses, and how much of this difference is due to gluon and photon fields in the proton and neutron, is highly model dependent.

The Particle Data Group estimate of the mass difference between the up and down quarks is 2.5 MeV, rather than the 1.3 MeV that one would naively expect from the difference between the neutron mass and the proton mass, a value that would struggle to fit within the two sigma error bars of current estimates of the absolute masses and mass ratios of these quarks.

According to the Particle Data Group, the up quark has an estimated mass of 2.3 +0.7/-0.5 MeV, and the down quark has an estimated mass of 4.8 +0.5/-0.3 MeV.  Viewed together, rather than in isolation, the up quark rest mass is estimated to be from 0.38 to 0.58 of the rest mass of the down quark.  The precision with which we know the up and down quark masses is roughly ten million times less than the precision with which we know the difference in rest masses between the proton and neutron.  These estimates are little improved from estimates made at the very dawn of the quark model in the 1970s.

In late March or early April of 2010, a paper in Physical Review Letters by Christine Davies and others (pre-print here and pdf here) made a much more precise estimate of 2.01 +/- 0.14 MeV for the up quark and 4.79 +/- 0.16 MeV for the down quark, which implies a difference between the two rest masses of 2.78 +/- 0.21 MeV (and a low end 0.42 mass ratio), which is just barely consistent with the new results at the two sigma level.  But, the PDG worldwide average analysis has not proclaimed this result to be correct at that level of precision.

Definitional Issues In Light Quark Mass Determinations

Light Quark Pole Masses

Even the definitions of the light quark masses are fraught with problems.  In the Standard Model, the mass of a particle varies with the energy-momentum scale at which it is measured.  In the case of top quarks, bottom quarks, and charm quarks, it is possible and sensible to measure the "pole mass" of a particle - i.e. the mass of a quark measured at an energy scale equal to its rest mass.

In the case of the light quarks, the rest masses of the quarks are customarily evaluated at energy scales of 2 GeV (i.e. at about the mass of two nucleons that collide with each other), at which they are observed (in a confined context), rather than at the rest mass of the quarks themselves.

But, for example, extrapolation of the running of the quark masses suggests that the light quarks should be about 35% heavier at a 1 GeV energy scale.  Naively extrapolating the running of the light quark masses downward from 2 GeV to estimate their "pole mass" produces masses in the hundreds of MeVs, but a more accurate interpretation is that the pole masses of the light quarks are simply ill defined and the extrapolation is being applied beyond the scale where it is valid.  Instead, definitions of quark mass other than the pole mass, such as the MS scheme, are used to generalize the concept of quark mass to the light quarks.  But, it isn't obvious that the MS mass is as fundamental a quantity as the pole mass.
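
To give a concrete, if crude, sense of what "running" means here, the Python sketch below runs a light quark mass from 2 GeV down to 1 GeV using only the leading-order (one-loop) formulas. The reference value alpha_s(2 GeV) ≈ 0.30 is an assumed round number, and this leading-order treatment gives a somewhat smaller enhancement than the roughly 35% figure quoted above, which comes from more careful higher-order treatments:

import math

# One-loop running of the strong coupling and of a quark mass, for illustration only.
# At this order m(mu) scales as alpha_s(mu)**(12/(33 - 2*nf)).
def alpha_s(mu, nf=3, alpha_ref=0.30, mu_ref=2.0):
    # one-loop running from an assumed reference value alpha_s(2 GeV) ~ 0.30
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_ref / (1.0 + alpha_ref * b0 / (2.0 * math.pi) * math.log(mu / mu_ref))

def run_mass(m_ref, mu, nf=3, mu_ref=2.0):
    exponent = 12.0 / (33.0 - 2.0 * nf)
    return m_ref * (alpha_s(mu, nf) / alpha_s(mu_ref, nf)) ** exponent

print(run_mass(4.8, 1.0) / 4.8)   # ~1.17 at this crude order, i.e. the mass grows as the scale falls
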
Koide (1994) calculates the running of the three light quark masses down to their pole masses, even though these values have little practical application, in both a five quark flavor and a three quark flavor model. In the five quark flavor model he comes up with pole masses of 346.3 MeV for the up quark, 352.4 MeV for the down quark and 489 MeV for the strange quark. In the three quark flavor model he comes up with pole masses of 163.1 MeV for the up quark, 169 MeV for the down quark, and 338 MeV for the strange quark. One could get somewhat lower light quark pole mass values still in a two quark flavor model.

Koide updated these calculations in 1997 and concluded that the pole mass of the up quark was 0.501 GeV, the pole mass of the down quark was 0.517 GeV and the pole mass of the strange quark was 0.687 GeV (based on their measured values at other energy scales), although all sub-1 GeV values were noted with an "*" mark.

A more recent update of these calculations can be found in Xing (2008), which does not consider the running of the light quark masses to very low energy scales, explaining that "The pole masses of three light quarks are not listed, simply because the perturbative QCD calculation is not reliable in that energy region."

Notably, these perturbative QCD calculations become ill defined at masses higher than the mass of the pion.  So, perturbative QCD calculations break down to the point of being unreliable at energy scales somewhere between 140 MeV and 1000 MeV.

The pole masses in the five quark flavor models are quite similar to the conventional dressed quark masses for the up and down quarks discussed below, however.

Dressed Masses For Light Quarks

Another approach is to look at the "dressed quark" mass, which for the up and down quarks is about 0.32 GeV (i.e. about a third of the proton mass, the proton being the lightest three quark composite particle containing only light quarks), and which includes a proportionate share of the gluon field mass of a baryon that contains a light quark.  This is also sometimes described as a "constituent quark model."  (The link in this paragraph is to a 2013 PowerPoint presentation that is quite a good starting point for understanding the state of the effort to determine the quark masses.)

Recent state of the art estimates of dressed quark masses determined with Lattice QCD, however, are closer to 0.25 GeV for up and down quarks, and 0.502 GeV for strange quarks.  This implies fundamental light quark masses of about 8.6 MeV and fundamental strange quark masses of about 227 MeV at an energy scale of about 840 MeV.  These values are about 2.45 times the canonical values for these masses at 2 GeV energy scales.  I may be misinterpreting this analysis, but this seems to me to indicate that the total contribution of the electromagnetic field of a proton to a proton's mass is about 188 MeV.
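
The following Python lines simply reproduce the paragraph's own back-of-the-envelope numbers (the inputs are the rough figures quoted above, and the interpretation of the roughly 188 MeV remainder is this post's tentative one, not something taken from the underlying papers):

# Reproducing the rough figures above (all in MeV).
dressed_light = 250.0       # lattice-based dressed mass estimate for an up or down quark
fundamental_light = 8.6     # implied fundamental light quark mass at ~840 MeV
canonical_light = 3.5       # rough average of the canonical 2 GeV up and down quark masses
m_proton = 938.3
print(fundamental_light / canonical_light)   # ~2.46, the roughly 2.45x scaling factor
print(m_proton - 3 * dressed_light)          # ~188 MeV remainder beyond three dressed quarks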

While "dressed quark" masses for the up and down quark can produce not obviously wrong estimates of hadron masses for most hadrons containing just up and down quarks (and also, for example, for the eta meson which is a mix of up, down and strange quarks with a mass of 547.85 MeV).

But, dressed quark masses for the up and down quarks do seem inconsistent with the masses of the pions.  Charged pions have two light quarks but have a mass less than the dressed mass of even one light quark, and neutral pions, which are linear combinations of up-antiup and down-antidown quark pairs, are even lighter.  In constituent quark models, the binding energy of the pion implied by the dressed quark masses for the up and down quarks must be approximately negative 360 MeV.  But, it isn't obvious that negative energies are a physical concept in this context.

An alternate way of understanding the light hadron masses, which explains why the pion can be so light, is explained in a fairly straightforward way here.  Pions play a special role as "Goldstone" bosons in QCD, i.e. force carriers that arise as a result of a broken symmetry, which gives them special properties.

Basically, the right formula for the mass of a meson is a constant scale factor times the positive square root of ((two times the average mass of the quarks in the meson) times (the appropriate value for the QCD scale)).  The QCD scale value, in turn, is a function of the number of quark flavors that are accessible at the energy level involved.  It is about 217 MeV when there is sufficient energy to involve five active quark flavors and about 350 MeV when there is sufficient energy to involve three active quark flavors.  Arguably, it may be necessary in some cases to invoke a two active quark flavor energy scale, although this involves complex self-referential calculations related to the strange quark mass.
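
A minimal numerical sketch of that word formula, in Python, is below. Fitting the constant scale factor to the charged pion, and then reusing it for a meson with one light and one strange quark, is my own illustrative choice rather than anything taken from the linked source; the quark masses and QCD scale are the rough values quoted in this post:

import math

# m_meson ~ C * sqrt(2 * (average quark mass) * Lambda_QCD), per the word formula above.
Lambda_3f = 350.0                 # approximate 3-flavor QCD scale, MeV
m_u, m_d, m_s = 2.3, 4.8, 95.0    # rough 2 GeV quark masses, MeV
m_pion = 139.6                    # charged pion mass, MeV
avg_ud = (m_u + m_d) / 2.0
C = m_pion / math.sqrt(2.0 * avg_ud * Lambda_3f)
print(C)    # roughly 2.8 with these inputs
# Reusing the same C for a meson with one light and one strange quark:
avg_us = (m_u + m_s) / 2.0
print(C * math.sqrt(2.0 * avg_us * Lambda_3f))   # ~520 MeV, in the ballpark of the kaon's ~494 MeV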

Footnote

Another interesting amateur mass hierarchy paper with a numerological flair can be found here.

UPDATED July 2, 2014 for formatting purposes and to add material to the definitions section of this post.

Wednesday, June 25, 2014

Stray Physics Musings

* In the simple Platonic ideal, a black hole is a singularity from which not even light can escape.  In other words, photons can't cross the event horizon.  In this ideal, whether or not the matter and energy sucked up by a black hole had a net electric charge, the electric and magnetic fields generated by particles with electric charge would not escape the black hole because the photons that give rise to those fields would be trapped behind the event horizon.

Now, we know empirically, there is a strong magnetic field in the vicinity of Sagittarius A*,  the black hole at the center of the Milky Way.  The only way this could be possible in the Platonic ideal of a black hole is if the magnetic field is generated by the movement of charged particles just outside the event horizon as they are pulled toward the black hole.

Now, in an ad hoc blend of General Relativity reasoning and quantum mechanics reasoning, physicists have concluded that black holes are leaky, and rather than preventing anything from escaping at all, actually emit Hawking radiation.  But, it doesn't necessarily follow that Hawking radiation is a large contributor to the magnetic fields observed around black holes.

* In principle, mesons and baryons have an infinite number of excited, higher energy states with higher masses than the ordinary versions of these hadrons.  It appears, in contrast, that the fundamental fermions come in exactly three generations, rather than an infinite number, which means that an intuitive sense of higher generation fundamental fermions as excitations in the same sense as excited states of hadrons is probably flawed - unless the same process is at work but there is some competing factor that places an upper bound on the amount of excitation that a fundamental fermion can have, but not on a hadron (at least at the energy scales we can observe - one could imagine that there is some fundamental number of excited states much greater than three that are possible for any given hadron at energies too high to observe).

Many factors argue in favor of exactly three generations with current experimental evidence.  There are direct lower limits on the possible mass of each of the four possible kinds of fourth generation fundamental fermions.  There are also strong theoretical considerations that require each generation to have a complete set of four fundamental fermions.

Measured as the ratio of the experimental limit to the mass of the corresponding third generation fermion, the bound is greatest in the case of the neutrinos.  The practical upper bound on the mass of the heaviest Standard Model neutrino rest mass eigenstate is on the order of 0.1 eV/c^2.  The lower bound on the mass of a fourth generation neutrino is about 45,000,000,000 eV/c^2.  The ratio of the two masses is 450,000,000,000 to 1.  This is profoundly greater than any of the ratios of masses between a particle of one generation and a particle of the next generation among the three generations of four kinds of Standard Model fermions.
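
The arithmetic behind that ratio is trivial, but for completeness, here it is in Python (both bounds being the rough values quoted above, in eV):

third_generation_bound = 0.1     # rough upper bound on the heaviest SM neutrino mass eigenstate, eV
fourth_generation_bound = 45e9   # rough lower bound on a fourth generation neutrino mass (45 GeV), eV
print(fourth_generation_bound / third_generation_bound)   # 4.5e11, i.e. 450,000,000,000 to 1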

* The Higgs field couples to the rest mass of massive Standard Model particles.  But, unlike gravity, it does not couple to mass arising from gluon fields in hadrons, to gluons, to photons, to kinetic energy, to angular momentum, to pressure, or any other quantity besides rest mass that gravity impacts in General Relativity.  Yet, since General Relativity describes gravity, and the gravitational effects of rest mass swamp all other sources of gravity in most circumstances, General Relativity must have some fairly deep relationships to the Higgs field.  But, the Higgs vacuum expectation value has no seeming correspondence to the cosmological constant aka dark energy.

* The most definitive evidence we have for dark matter comes from comparing the behavior of objects observed by astronomers with the behavior we would expect if only General Relativity applied.  I strongly suspect that this kind of evidence will be the most powerful means of ultimately discriminating between potential models explaining the mechanism of these effects.

For example, we are very near to having a detailed precision map of the shape of the Milky Way dark matter halo that is inferred from a clever method of exploiting the movement of a handful of very special stars which, when measured precisely, reveal this information in an elaborate variant on triangulation.  Increasingly precise measurements of the shape and size of the Milky Way's dark matter halo already permit quite precise estimates of the matter density of dark matter in the vicinity of our sun, which fixes one key parameter in direct dark matter detection experiments.  Knowing this from direct halo observations eliminates the model dependence of dark matter detection experiments that merely assume a theoretically estimated dark matter density in our vicinity.  Only the mass of individual dark matter particles and their cross section of interaction remain to be determined.

Current methods have come up empty in the GeV mass vicinity down to very low cross sections of interaction, and to move to lower mass candidates, experiments need to better distinguish and characterize neutrino and cosmic ray backgrounds, although good progress is being made on these fronts.  Of course, all is for naught if dark matter is genuinely collisionless, at least with matter other than other dark matter.

Accelerator experiments like the LHC also narrow the dark matter particle parameter space.  These experiments have likewise come up empty.  This strongly disfavors dark matter that interacts via any of the three Standard Model forces, again suggesting that it may be effectively collisionless if it is fermionic (the leading theory) except for Fermi contact forces (since fermions can't occupy the same space at the same time), and any dark matter specific interactions.  The lack of Standard Model forces or particles, or General Relativistic equation terms to explain dark matter is really frustrating.  We are missing something huge and we don't know why.

There are some hints at dark matter annihilation, but these are very model dependent and assume we know about all possible sources of cosmic ray signals when we don't.

Computer models that show what the universe would look like given various dark matter properties are also making great strides as greater computational power becomes available.

Friday, June 20, 2014

BICEP-2 Hedges

After peer review, the BICEP-2 paper claiming to observe primordial gravity waves has hedged its conclusions.  A presenter at the Moscow Cosmology Conference claimed that he has unpublished results due to be released in about six weeks that will show that all or most of the BICEP-2 result is attributable to polarized dust.

Saturday, June 14, 2014

A Conflict Between General Relativity and Quantum Mechanics

One of the basic issues in fundamental physics is that general relativity and quantum mechanics are theoretically inconsistent.  Pointing out exactly why that is the case isn't always easy.  One of the key issues is that quantum mechanics relies on a universal standard of time subject only to special relativity, something that does not exist in general relativity.  The issue is explored at a physics forum discussion here.

UPDATED June 16, 2014:

You can capture some of the flavor in the following discussions:
"When one introduces realistic clocks, quantum mechanics ceases to be unitary and a fundamental mechanism of decoherence of quantum states arises. We estimate the rate of universal loss of unitarity using optimal realistic clocks. In particular we observe that the rate is rapid enough to eliminate the black hole information puzzle: all information is lost through the fundamental decoherence before the black hole can evaporate."(http://arxiv.org/abs/hep-th/0406260)

"...general relativity is a generally covariant theory where one needs to describe the evolution in a relational way. One ends up describing how certain objects change when other objects, taken as clocks, change. At the quantum level this relational description will compare the outcomes of measurements of quantum objects."(http://arxiv.org/abs/gr-qc/0603090)

"...as ordinarily formulated, quantum mechanics involves an idealization. That is, the use of a perfect classical clock to measure times. Such a device clearly does not exist in nature, since all measuring devices are subject to some level of quantum fluctuations. The equations of quantum mechanics, when cast in terms of the variable that is really measured by a clock in the laboratory, differ from the traditional Schroedinger description. Although this is an idea that arises naturally in ordinary quantum mechanics, it is of paramount importance when one is discussing quantum gravity. This is due to the fact that general relativity is a generally covariant theory where one needs to describe the evolution in a relational way..."(http://arxiv.org/abs/quant-ph/0608243)
But, while "unitarity" has a usual meaning in quantum mechanics to the effect of, all the calculated probabilities of different possibilities add up to 100%, it isn't totally clear that this is real what the authors mean when they talk about unitarity in this context. As Physics Forum moderator Marcus explains:
So then the question comes back: what does "unitary" mean, in this field theory context, where we no longer have a "wave function" telling simply the amplitude of a particle to be at some particular place at some moment. I think now unitary means more something like preserving information, or preserving coherence, predictability. It is not as clear what the intuitive meaning is.
It is also not clear how much of the "fundamental decoherence" that arises when you try to migrate ordinary quantum mechanics from 4 dimensional Minkowski space where only special relativity holds to the space of general relativity is actually a physical effect, and how much of this is really a function of not being clever enough in how quantum mechanical equations are generalized into a general relativistic context.

Obviously, to the extent that a loss of unitarity takes its conventional meaning of the probability of all possibilities adding up to 100%, we know that in the real world (whose "equations" are properly formulated), the probabilities of all possibilities still do add up to 100%.  We don't get "blue screens of death" where the numbers simply don't add up and "nothing" happens, or more than one outcome happens simultaneously, in the same universe, in the real world.

But, it could be that the lack of a fixed time scale in general relativity and the probabilistic nature of quantum mechanics do conspire in the real world to erase "information" that would otherwise be preserved in either of those theories acting alone, although I suspect this might be an ambiguity between two or more possible sets of information, rather than infinitely many possibilities, in the real world absent highly contrived circumstances that would never actually occur.

Thursday, June 12, 2014

Quick Physics Notes

* The seven standard deviation difference between the ordinary proton charge radius and the muonic hydrogen charge radius is greatly exaggerated because the margin of error of the proton charge radius measurement was greatly underestimated.  An accurate MOE for the proton charge radius reduces the discrepancy to four standard deviations.

* Another study finds that dark photons cannot explain the muon anomalous magnetic moment (whose measured value shows a 3.6 sigma discrepancy from the Standard Model prediction).  The same data set from 1980-1982 whose implications were not appreciated at the time, also significantly constrains the parameter space of sub-GeV dark matter generally.

* A review of the parameter space finds that a large class of singlet fermion dark matter models (with CDM particle masses up to 1 TeV) which are mediated by an extra Higgs boson is ruled out experimentally.

Wednesday, June 11, 2014

Skull Shapes Differed In The Pre-LGM and Post-LGM Paleolithic Era

Modern humans arrived in Europe around 40,000 years ago (in very round numbers), replacing Neanderthals by 30,000 years ago.  Then, they were expelled from all but a few refugia in Southern Europe, by a catastrophic ice age which peaked at the Last Glacial Maximum (LGM) ca. 20,000 years ago, when most of Northern Europe was covered by glaciers because the average global temperature was about 4.5 degrees colder than it is today.  As the glaciers receded, over a period of several thousand years after the LGM, modern humans repopulated Europe from the refugia and from outside of Europe entirely.

An analysis of skull shapes shows that pre-LGM Cro-Magnon modern humans had significantly different skull shapes than post-LGM Paleolithic modern humans, than early Neolithic era modern humans, and than modern humans in approximately their modern form a few thousand years after that.

A new study of these skull shapes concludes that about two-thirds of the differences between pre-LGM and modern modern human skull shapes comes from the gap between pre-LGM modern humans and immediately post-LGM Paleolithic modern humans.  The next biggest gap was between post-LGM Paleolithic modern humans and early Neolithic modern humans.  The smallest gap was between early and late Neolithic and later era modern humans.

The study offers no opinion concerning whether this reflects a different genetic makeup or other environmental or epigenetic factors.

The study is notable, because it is the first physical anthropology evidence of which I am aware that demonstrates substantial physical anthropology differences between modern humans in Europe before and after the LGM that must have had some cause, even if the exact cause is not known.

The data from mtDNA samples, in contrast, have tended to show more continuity between pre-LGM and post-LGM hunter-gatherer populations, which is a bit surprising itself given that we know for sure that there was a total replacement of the human population of most of Europe as a result of the LGM.  Autosomal genetic data on pre-LGM modern humans in Europe is so scarce and so preliminary that it is too early to know if it differed materially from post-LGM Paleolithic populations unless you are actually in the labs doing the cutting edge work and have access to inside information as a result.

While analysis of skull shape is largely out of favor in anthropological and social scientific circles, for the very good reason that this data was grossly misapplied and misused in the 19th century heyday of this kind of scientific investigation in physical anthropology, that doesn't mean that modern studies taking this approach, in the far more restrained context of modern scientific knowledge, are worthless.  They tell us something, just not as much as 19th century proto-scientists thought that they did.

Neanderthal Admixture and Racial Phenotypes In Modern Humans

Analysis of the recovered ancient Neanderthal genome strongly suggests that they had very light pigmentation in hair, skin and eyes.  Put simply, Neanderthals were white.  Ancient DNA also suggests that the earliest modern humans in Europe, called Cro-Magnons, were dark pigmented relative to modern Europeans.  Simply put, Cro-Magnons were "brown" (but did not have the modern sub-Saharan African phenotype, associated with the folk racial classification "black" in the United States, either).

The question is whether light pigmentation in some modern humans in Eurasia arose from admixture with Neanderthals.  The Neanderthal light pigmentation genes aren't identical to the modern European light pigmentation genes, but a new paper makes a tentative case that most of the light pigmentation genes in aboriginal Taiwanese people, and to a much lesser extent in some other Eurasians (especially East Asians), could have a Neanderthal source.

I'm agnostic regarding how solid this evidence is at this time.  Maybe the paper makes a really strong case for this, but I'm not convinced by what I have read summarizing the paper so far.

Tuesday, June 10, 2014

Thoughts On The Matter-Antimatter Imbalance In The Universe

Background

Every Standard Model process, except an extremely high energy, exotic, and so far only theoretically possible process called a sphaleron, separately conserves baryon number (i.e. quarks minus anti-quarks, divided by three) and lepton number (i.e. leptons minus anti-leptons).  Even sphalerons conserve B-L.
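
As a minimal Python sketch of the bookkeeping being described (the quantum number definitions are standard, but the "sphaleron-like" transition at the end is purely schematic and is only meant to show B and L changing while B-L does not):

# Baryon number: (quarks - antiquarks) / 3.  Lepton number: leptons - antileptons.
def baryon_number(quarks, antiquarks):
    return (quarks - antiquarks) / 3.0

def lepton_number(leptons, antileptons):
    return leptons - antileptons

# A proton (three quarks) plus an electron: B = 1, L = 1, B - L = 0.
B, L = baryon_number(3, 0), lepton_number(1, 0)
print(B, L, B - L)
# Schematic sphaleron-like transition: nine quarks turn into three antileptons.
delta_B = baryon_number(0, 0) - baryon_number(9, 0)   # -3
delta_L = lepton_number(0, 3) - lepton_number(0, 0)   # -3
print(delta_B, delta_L, delta_B - delta_L)            # B and L each change by -3; B - L does not change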

But, the baryon number of the universe (essentially equal to the combined number of protons and neutrons in the universe) is a high positive number since anti-matter baryons are extremely rare.  The same is true of charged leptons which are almost exactly equal in number to the number of protons in the universe; positrons, anti-muons and anti-tau leptons are all extremely rare.

There are far more neutrinos in the universe than there are baryons and charged leptons combined.  But, it isn't obvious what the ratio of neutrinos to anti-neutrinos is in the universe, although there are hints that the number of anti-neutrinos greatly exceeds the number of neutrinos.

If the number of anti-neutrinos measurably exceeds the number of neutrinos in the universe, the excess of anti-neutrinos over neutrinos creates a negative lepton number for the universe that swamps the total positive baryon number of the universe and the positive lepton number from charged leptons in the universe.  Thus, B is a very large positive number, L is probably an even larger negative number, and B-L is a very large positive number.

Fundamental dark matter particles that were antimatter and had baryon number, or were matter and had lepton number, could help even the scales a bit.  But, given the known total amount of dark matter in the universe and reasonable assumptions about the mass of dark matter particles, no fundamental dark matter particle with either of those properties could resolve the matter-antimatter asymmetry in the universe.

Careful analysis has determined that Standard Model sphaleron processes could not account for the current matter-antimatter asymmetries even over the entire history of the universe from a B=0, L=0, and B-L=0 initial condition, which is often unquestioningly assumed at the initial moment of the Big Bang.

Analysis

So, why does our universe have such a matter-antimatter imbalance for different classes of fundamental particles (i.e. B>>0, probably L<<0, and probably B-L>>0)?

There are basically four possible solutions:

(1) The Universe Did Not Begin As Pure Energy.  The starting point of the universe is not B=0, L=0 and B-L=0.  Instead, the universe started with B>>0, L<<0 and B-L>>0 and has preserved those numbers ever since.  They are arbitrary laws of the universe just like other physical constants in the Standard Model and General Relativity like the speed of light, or Planck's constant.

(2) We Can't Observe Important Balancing Parts Of The Universe.  The universe includes places we cannot see due to General Relativistic singularities and balancing baryons and leptons to create B=0, L=0, B-L=0 are on the other side of singularities between us in the observable universe and the relevant event horizons.  This has two subcomponents:

(a) Before The Big Bang.  There was one anti-baryon created for every baryon, but the anti-baryons are overwhelmingly located before the Big Bang and are moving backward in time.  Likewise, leptons and/or anti-leptons necessary to balance lepton number are overwhelmingly located before the Big Bang.  Antimatter moving forward in time towards the Big Bang, and matter moving backward in time towards the Big Bang fuel this massive energetic event which created particle-antiparticle pairs in large numbers which unequally sorted antimatter particles "before" and matter particles "after".  This is very similar in principle to possibility (1).

(b) Inside Black Holes.  Anti-baryons, positrons, anti-muons, anti-tau leptons, and ordinary neutrinos are disproportionately sucked into black holes relative to baryons, charged leptons, and anti-neutrinos.  This sorting could arise from general relativity acting on pair production at the event horizon, or could arise from some other process.  For example, if black holes tend to have negative electric charge, they will tend to suck up anti-protons in preference to protons.  One fruitful way to investigate this possibility would be to look at the relative matter-antimatter composition of Hawking radiation.

[UPDATE June 16, 2014:] Given that the black hole at the center of the Milky Way galaxy has a strong magnetic field, and appears to be quite typical of black holes at the center of ordinary galaxies, the possibility that black holes are usually charged in a way that biases what does and does not enter a black hole based upon its electric charge, which is correlated strongly with matter-antimatter character, is worth taking seriously. [End UPDATE.]

(3) The Dark Sector Balances The Books.  There are particles in the dark sector (i.e. those that account for dark matter and dark energy phenomena) that have baryon number and lepton number that balance out the matter-antimatter imbalance in observed matter.  Note that for the dark sector to contain the correct number of anti-baryons and leptons, and for the total mass of dark matter particles to coincide with the measured value, it is probably necessary for dark matter to be made out of composite particles rather than fundamental particles if the mass of each dark matter particle is in line with estimates inferred from astronomy observations.

The inferences one draws in general about the dark sector, if it is to resolve these imbalances (or at least to bring B-L to zero), provide an interesting exercise.  Depending on the neutrino-antineutrino ratio, they may imply composite dark matter; dark matter dominated by heavy dark matter particles together with a wealth of very light (perhaps neutrino or axion mass scale) particles that have only a minimal effect on cosmic structure because they are so light relative to other kinds of matter in the universe; or even a dark sector in which dark matter phenomena are primarily a function of modified gravity laws rather than dark matter particles (if the neutrino-antineutrino ratio is only ever so slightly balanced in favor of antineutrinos, leading to L=0 or L=-B).

(4) Beyond The Standard Model Processes Do Not Conserve B and L and B-L.  There are additional new physics processes beyond the Standard Model that violate baryon number conservation and lepton number conservation in ways that are almost impossible to observe now, but would have led to baryogenesis and leptogenesis in the highly energetic very early universe immediately after the Big Bang.

This is the predominant view among theoretical physicists, but this really shortchanges equally plausible options (1), (2)(a), (2)(b), and (3) without a good reason for doing so.

I would note that Option 2(a) is a particularly elegant solution that is also parsimonious, and could be supplemented in part by 2(b) and (3) to some extent that would not have to be complete.

It would also be an interesting exercise to determine the total mass-energy of the universe and compare it to the total number of particles in the universe and the total B+L of the universe.



Monday, June 9, 2014

Interesting LHCP 2013 (Barcelona) Higgs Conference Abstracts

* Francesco Riva, "The Higgs: supersymmetric partner of the neutrino"
Recent LHC searches have provided strong evidence for the Higgs, a boson whose gauge quantum numbers coincide with those of a SM fermion, the neutrino. This raises the question of whether Higgs and neutrino may be related by supersymmetry. I will show explicitly the implications of models where the Higgs is the sneutrino: from a theoretical point of view an R-symmetry, acting as lepton number is necessary; on the experimental side, squarks exhibit novel decays into quarks and leptons, allowing to differentiate these scenarios from the ordinary MSSM.
The idea that the Higgs boson itself could be a superpartner of the neutrino is novel.  A related pre-print discusses the issue.  This model is quite interesting, effectively proposing a SUSY model that is even more minimal than the MSSM:

We have shown that the phenomenology of this model is quite different from that of the MSSM. In the Higgs sector, a sizable (∼ 10%) invisible branching ratio for Higgs decays into neutrinos and gravitinos is possible, together with small deviations in the Higgs couplings to gluons and photons, due to loop effects if the stop t˜R is light. These effects are not yet favored nor disfavored by the present LHC Higgs data, but could be seen in the near future by measuring a reduction of the visible Higgs BRs. Higgsinos are absent in this model, and gauginos must get Dirac masses above the TeV. Only third-generation squarks are required, by naturalness, to be below the TeV. We have shown that the R-symmetry implies that squarks decay mainly into quarks and either leptons or gravitinos. Therefore, evidence for models with the Higgs as a neutrino superpartner can be sought through the ongoing searches for events with third-generation quarks and missing energy (tailored for the MSSM with a massless neutralino) or through leptoquark searches for final states with heavy quarks and leptons. In the stop decays into tops and neutrinos, the determination of the top helicity will be crucial to unravel these scenarios.

Existing LHC searches rule out stops of less than 530 GeV and sbottoms of less than 500 GeV in this model.

* Juan Rojo Chacon, "Parton Distributions in the Higgs Boson Era"
With the recent discovery of the Higgs boson at the LHC, particle physics has entered a new era, where it is of utmost importance to provide accurate theoretical predictions for all relevant high energy processes for signal, bacground and New Physics production. Crucial ingredients of these predictions are the Parton Distribution Functions, which encode the non-perturbative dynamics determining how the proton’s energy is split among its constituents, quarks and gluons.

To bypass the drawbacks of traditional analyses, a novel approach to PDF determination has recently been proposed, based on artificial neural networks, machine learning techniques and genetic algorithms. In this talk we motivate their relevant of PDFs for LHC phenomenology and describe the latest developements of PDFs with LHC data.
The clear description of what the PDF is and why it is important provides useful background context.

* Daniele Barducci, Alexander Belyaev, Stefano Moretti and Stefania De Curtis, "The 4D Composite Higgs model and the 125 GeV Higgs like signal at the LHC"
General Composite Higgs models provide an elegant solution to the hierarchy problem present in the Standard Model (SM) and give an alternative pattern leading to the mechanism of electroweak symmetry breaking (EWSB). We present a recently proposed realistic realization of this general idea analyzing in detail the Higgs production and decay modes. Comparing them with the latest Large Hadron Collider (LHC) data we show that the 4D Composite Higgs Model (4DCHM) could provide a better explanation than the SM to the LHC results pointing to the discovery of a Higgs like particle at 125 GeV.
It is hard to tell from the abstract what the gist of the 4D Composite Higgs Model is, but it is worth looking at preprints to find out. Alas, the larger picture looks abysmally complex and hence less plausible: "Besides the SM particles the 4DCHM present in its spectrum 8 extra gauge bosons, 5 neutral called Z' and 3 charged called W', and 20 new quarks: 8 with charge +2/3, 8 with charge −1/3 and 2 respectively with charge 5/3 and −4/3; these coloured fermions are called t', b', T˜ and B˜, respectively."

* Jernej Kamenik, "On lepton flavor universality in B decays"
Present measurements of b->c tau nu and b->u tau nu transitions differ from the standard model predictions of lepton flavor universality by almost 4 sigma. We examine new physics interpretations of this anomaly. An effective field theory analysis shows that minimal flavor violating models are not preferred as an explanation, but are also not yet excluded. Allowing for general flavor violation, right-right vector and right-left scalar quark currents are identified as viable candidates. We discuss explicit examples of two Higgs doublet models, leptoquarks as well as quark and lepton compositeness. Finally, implications for LHC searches and future measurements at the (super)B- factories are presented.
Some of the strongest evidence for beyond the Standard Model behavior involves lepton flavor universality violations, where decays to electrons are about 25% more common than decays to muons, contrary to a Standard Model expectation of equal frequencies. This is a promising place to look for new physics. Motl discusses the results in a post here.

* Jorge de Blas Mateo, "Electroweak constraints on new physics"
We briefly review the global Standard Model fit to electroweak precision data. After that we analyze the electroweak constraints on new interactions, following a model-independent approach based on a general dimension-six effective Lagrangian. Finally, we also discuss the limits on several common new physics additions.
The abstract again says little, but this methodology is a window into higher energies than direct searches can reveal so it is always interesting to follow up upon at some point in the pre-print. Alas, however, the paper has so little analysis and so many undefined quantities (since it basically updates prior papers on the same topic with new numbers) that it is virtually impossible to make any sense of them.

* Carlos Lourenco, "Quarkonium production and polarization"
All the three frame-dependent polarization parameters (lambda_theta, lambda_phi and lambda_thetaphi), plus the frame-invariant parameter lambda_tilde, are measured in three different polarization frames, in five transverse momentum bins and two rapidity ranges, significantly extending the pT and rapidity ranges probed by previous experiments. The observations are in disagreement with the available theoretical expectations.
The excerpt of the abstract above captures the key point. Once again, the observed properties of quarkonium have failed to conform to theoretical expectations.

From here.

* The Conference also, according to Matt Strassler, showed convergence in the ATLAS mass measurements of the Higgs boson, but the exact new Higgs boson mass numbers from ATLAS aren't available in the abstracts or his commentary at his blog:

[Y]ou may recall a tempest in a teapot that erupted in late 2012, when ATLAS’s two measurements of the Higgs particle’s mass disagreed with each other by more than one would normally expect. This generated some discussion over coffee breaks, and some silly articles in on-line science magazines, even in Scientific American. But many reasonable scientists presumed that this was likely a combination of a statistical fluke and a small measurement problem of some kind at ATLAS. The new results support this interpretation. ATLAS, through some hard work that will be described in papers that will appear within the next couple of days, has greatly improved their measurements, with the effect that now the discrepancy between the two measurements, now dominated by statistical uncertainties, has gone down from nearly 3 standard deviations to 2 standard deviations, which certainly isn’t anything to get excited about. Experts will be very impressed at the reduction in the ATLAS systematic uncertainties, arrived at through significantly improved energy calibrations for electrons, photons and muons.

Experts: More specifically, the measured mass of the Higgs in its decay to two photons decreased by 0.8 GeV/c², and the systematic uncertainty on the measurement dropped from 0.7 GeV/c2 to 0.28 GeV/c2. And by the way, the rate for this process is now only 1 standard deviation higher than predicted for the simplest possible type of Higgs (a “Standard Model Higgs“); it was once 2 standard deviations high, which got a lot of attention, but was apparently just a fluke.

Meanwhile, for the decays to two lepton/anti-lepton pairs, the systematic error has dropped by a factor of ten — truly remarkable — from 0.5 GeV/c2 to 0.05 GeV/c2. The Higgs mass measurement itself has increased by 0.2 GeV/c2.

UPDATE: Analysis of these facts in a comment. My previous analysis overlooked statistical error.  The report is here at page 15.
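
For what it's worth, the usual way to express the tension between two measurements of the same quantity is to divide their difference by their combined statistical and systematic uncertainty. The Python sketch below shows only the mechanics; the mass values and statistical errors in the example call are hypothetical placeholders, and only the quoted systematic uncertainties (0.28 and 0.05 GeV) come from the excerpt above:

import math

# Tension, in standard deviations, between two measurements of the same mass.
def tension(m1, stat1, sys1, m2, stat2, sys2):
    sigma1 = math.hypot(stat1, sys1)   # combine statistical and systematic errors in quadrature
    sigma2 = math.hypot(stat2, sys2)
    return abs(m1 - m2) / math.hypot(sigma1, sigma2)

# Hypothetical example (GeV); only the 0.28 and 0.05 systematic errors come from the text above.
print(tension(125.9, 0.4, 0.28, 124.9, 0.5, 0.05))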

* Beyond the scope of the Conference, I also note that two recent preprints have suggested that the combined ATLAS and CMS data reveal a potential signal of a 200 GeV mass stop sparticle and that there could be several other light sparticles that were overlooked in earlier searches, a possibility which Motl discusses in an upbeat way at his blog.

Given the number of searches made by ATLAS and CMS combined and the fact that joint data from the two studies is necessary to produce even a 3 sigma effect, my money is on the possibility that this is really just a statistical fluke.  I find it highly unlikely all previous searches would have overlooked multiple supersymmetric (i.e. SUSY) particles and that these particles would only show up in this single isolated channel in such a weak way, despite the high energies of the two experiments.  It looks to me like a rifle shot hole in the general exclusion of such particles for almost all other parameters.

New Archaic Y-DNA R1b In Southern Siberia?

A recent comment at the Eurogenes blog states:
"Two out of three afanasievo remains and one okunevo remains tested R1b1 (M269) and one afanasievan – R1b."

Source (in russian) : http://pereformat.ru/2014/05/arbins-2/
The Afanasevo culture, which appears to be the one referred to in the post above:
is the earliest Eneolithic archaeological culture found until now in south Siberia, occupying the Minusinsk Basin, Altay and Eastern Kazakhstan.

Conventional archaeological understanding tended to date at around 2000–2500 BC. However radiocarbon gave dates as early as 3705 BC on wooden tools and 2874 BC on human remains. The earliest of these dates have now been rejected, giving a date of around 3300 BC for the start of the culture.

The culture is mainly known from its inhumations, with the deceased buried in conic or rectangular enclosures, often in a supine position, reminiscent of burials of the Yamna culture, believed to be Indo-European. Settlements have also been discovered. The Afanasevo people became the first food-producers in the area by breeding cattle, horses, and sheep. Metal objects and the presence of wheeled vehicles are documented. These resemblances to the Yamna culture make the Afanasevo culture a strong candidate to represent the earliest cultural form of a people later called the Tocharians.

The culture became known from excavations in the Minusinsk area of the Krasnoyarsk Krai, southern Siberia, but the culture was also widespread in western Mongolia, northern Xinjiang, and eastern and central Kazakhstan, with connections or extensions in Tajikistan and the Aral area.

The Afanasevo culture was succeeded by the Andronovo culture as it spread eastwards, and later the Karasuk culture.
The link of the Afanasevo culture to the Tocharians is made, for example, in J.P. Mallory and Victor H. Mair, The Tarim Mummies: Ancient China and the Mystery of the Earliest Peoples from the West (2000).

There are more than a dozen rural Russian villages or settlements called "Okunevo", and presumably the one referenced above is a Southern Siberian site near the Afanasevo finds in a reasonably similar time frame.

One way to read the reference is that three out of three Afanasevo remains and one set of nearby in time and place Okunevo remains all had Y-DNA haplogroup R1b, with all but one having enough preservation of the ancient DNA to subtype it as R1b (M269), the predominant Western European subclade of R1b, and the other insufficiently preserved to be classified more specifically than as R1b generally.

Mallory made the case in a 2011 talk that R1b was a Tocharian genetic signature based upon West Eurasian Y-DNA haplogroups found in Uyghur populations that were direct successors to and brought about the fall of the Tocharians during a period of Turkic expansion. But, ancient Tarim mummy DNA from ca. 1800 BCE, analyzed in 2009 showed uniformly R1a1a Y-DNA haplogroups (citing Li, Chunxiang. "Evidence that a West-East admixed population lived in the Tarim Basin as early as the early Bronze Age". BMC Biology (February 17, 2010)), so any Y-DNA R1b in that population would have to have entered that gene pool sometime in the following 2400 years or so, and was apparently not present initially.

This would be very notable as there are no other instances of ancient Y-DNA R1b so far East, particularly in a population strongly presumed to be Indo-European ca. 3300 BCE.

There are many ancient Y-DNA samples from Siberia and other parts of the Russian Steppe around that time or a bit later that are overwhelmingly R1a (including ancient Y-DNA from individuals who were part of the Andronovo culture that followed the Afanasevo culture, suggesting that the later, specifically Indo-Iranian, culture may have replaced Afanasevo, rather than evolving from it) or have Y-DNA Q, a sister Y-DNA clade to R.  But, there are no ancient Y-DNA samples to my knowledge that are consistently R1b, or even R1b at all, in that region.

Today, R1b-M269 is predominant in large swaths of Western and Northern Europe where it expanded fairly recently (see, e.g. haplogroup and subhaplogroup mutation rate based age estimates here). It is also found in Armenians, Turks, north Iranians, and Lezgins among others (basically, West Asians).

The oldest known ancient Y-DNA R1b sample is from a late Copper Age Bell Beaker culture individual in Germany ca. 2800 BCE to 2000 BCE.

Circumstantial evidence and the phylogeny of Y-DNA haplogroup R1 strongly point to an ultimate origin of the haplogroup well to the east of the places where Y-DNA R1b is most common today, but just where has never been pinned down very definitively. The Mal'ta boy with Y-DNA R* from ca. 24,000 years ago around the Altai pushes the potential Paleolithic spread of R* far to the East, but triangulation from R2 in South Asia, R1a focused in Central and Eastern Europe into Siberia, and R1b in Western Europe (possibly as a late arrival from what is now Czech territory) has favored an origin around the Caucasus mountains or West Asia or Central Asia, with no hint of an Eastward expansion for R1b in recent prehistory.

Of course, a mention in a Russian language blog post, which I am relying on a second hand translation of, and whose investigators' workmanship another Eurogenes commentator doubts, is not, in and of itself, exactly authoritative and smacks of mere rumor. But, the rumor is specific and plausible enough (and my cynicism regarding the quality of work done by ancient DNA researchers is not so great) that I think it deserves a mention.

I've previously stated, repeatedly, that I believe that R1b in Europe was spread by a linguistically non-Indo-European culture (probably part of the same language family as the modern Basque language) whose speakers only later converted linguistically to Indo-European languages, mostly at the hands of proto-Celtic and proto-Germanic populations around and after Bronze Age Collapse ca. 1,200 BCE. Other investigators have argued that R1b was spread by Indo-Europeans, probably quite a bit earlier.

If Y-DNA haplogroup R1b was common or even predominant in the Afanasevo culture, however, it becomes more plausible both that Afanasevo was not Indo-European, and that the peoples who spread R1b in Europe were linguistically Indo-European peoples descended from or historically related to the Afanasevo people, although none of this would be definitive.

UPDATE:  A blog post here from July 13, 2013, analyzing a recent paper on modern Y-DNA distributions in Central Asia, notes R1b-M269 in Turkmen and Uzbek populations of Central Asia (but not in other Central Asian and Siberian populations) at levels that are probably not the product of recent historical flukes, and it adds useful data points for evaluating these questions.  The R1b-M269 in those areas could very plausibly have origins in Afanasevo populations.  Of course, it is still more common in the Caucasus (Armenians, Azeris, Georgians, Ossetians).

New Ancient DNA Results Galore

Ancient DNA from the Altai region, often a boundary zone between West Eurasian and East Eurasian populations, shows a mix of West Eurasian and East Eurasian genetics in a number of individuals from ca. 2742 BCE to ca. 914 BCE.  Uniparental DNA isn't quite so definitive on the existence of admixture, but the autosomal data, in every case where it is available, indicate a significant minority of Asian admixture.

Another large sample of ancient DNA from the Russian Steppe, spanning 7000 BCE to 1000 BCE, shows a major transition: samples older than about 4000 BCE represent typical European hunter-gatherers, while later samples come from a population with significant "Ancestral Northern European" ancestry.

Another conference paper used maize genetics (some from ancient maize samples) to document the path of diffusion of this New World domesticated crop through the Americas.  (Gambler's House, meanwhile, has a nice post on the light shed by oral history on the ancestry and migration history of the Pueblo people of the American Southwest, addressing the difficult issue of how to deal with "legendary history."  The migration purportedly starts with a volcanic eruption, tsunami, and earthquake to the southeast of the Four Corners area, providing an archaeologically recognizable event to search for in order to corroborate the story.)  A conference paper using a similar methodology uses the genetics of TB, which, contrary to earlier conventional wisdom, appears to have pre-Neolithic origins, to track human origins and migrations; its results tend to corroborate important outlines of models of human migration from other sources.

Another new ancient DNA study favors a maritime route via Cyprus and the Aegean over an Anatolian route for the source of the Neolithic colonists who gave rise to the LBK and CP (Cardial/Epicardial) first wave archaeological cultures:
Sixty-three skeletons from the Pre Pottery Neolithic B (PPNB) sites of Tell Halula, Tell Ramad and Dja'de El Mughara dating between 8,700–6,600 cal. B.C. were analyzed, and 15 validated mitochondrial DNA profiles were recovered. In order to estimate the demographic contribution of the first farmers to both Central European and Western Mediterranean Neolithic cultures, haplotype and haplogroup diversities in the PPNB sample were compared using phylogeographic and population genetic analyses to available ancient DNA data from human remains belonging to the Linearbandkeramik-Alföldi Vonaldiszes Kerámia and Cardial/Epicardial cultures. We also searched for possible signatures of the original Neolithic expansion over the modern Near Eastern and South European genetic pools, and tried to infer possible routes of expansion by comparing the obtained results to a database of 60 modern populations from both regions. Comparisons performed among the 3 ancient datasets allowed us to identify K and N-derived mitochondrial DNA haplogroups as potential markers of the Neolithic expansion, whose genetic signature would have reached both the Iberian coasts and the Central European plain. Moreover, the observed genetic affinities between the PPNB samples and the modern populations of Cyprus and Crete seem to suggest that the Neolithic was first introduced into Europe through pioneer seafaring colonization.
The study also documents significant shifts between the Pre-Pottery Neolithic population of the northwestern Fertile Crescent and the region's current population genetics.  On the other hand, one should not read too much into conclusions based on a sample of just 15 ancient mtDNA haplogroup assignments.

Sunday, June 8, 2014

Neutrino 2014 Offers Nothing New Re Neutrinoless Double Beta Decay Or Cosmology

The recap of the cosmology-based neutrino data and the neutrinoless double beta decay experiments is unchanged from news already reported at this blog.

The Cosmology Data

On the cosmology front, the bottom line, state of the art figure after considering all of the outstanding data is an Neff of 3.32 +/- 0.27 (where Neff would equal 3.04 with just the three Standard Model neutrinos, and neutrinos with masses of 10 eV or more do not count in the calculation), and the sum of the masses of the neutrino mass eigenstates is less than 0.28 eV (at the 95% confidence level, and less than 0.2 eV at the 68% confidence level).

While the Neff value is not inconsistent with a fourth neutrino species, given that the sum of the three Standard Model neutrino mass eigenstates is not less than about 0.06 eV, this would require that a fourth effective neutrino species have a mass of less than 0.22 eV at the two sigma level and less than 0.14 eV at the one sigma level.
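
A minimal back-of-the-envelope sketch of that arithmetic, in Python.  This is my own illustration, not a calculation from the talks, and it assumes that the cosmological mass bound applies to the total over all light species, so that whatever is left over after the minimum Standard Model contribution of about 0.06 eV is the room available for a fourth species:

```python
# Rough arithmetic behind the fourth-species mass window quoted above.
# Assumption (mine, for illustration): the cosmological bound on the sum of
# neutrino masses applies to all light species combined.

sum_limit_95 = 0.28   # eV, cosmological bound on the mass sum at the 95% CL
sum_limit_68 = 0.20   # eV, cosmological bound at the 68% CL
sm_minimum   = 0.06   # eV, approximate minimum sum of the three SM eigenstates

print(f"Room for a 4th species at 2 sigma: {sum_limit_95 - sm_minimum:.2f} eV")  # ~0.22 eV
print(f"Room for a 4th species at 1 sigma: {sum_limit_68 - sm_minimum:.2f} eV")  # ~0.14 eV

# How far the measured Neff sits above the Standard Model expectation:
neff, neff_err, neff_sm = 3.32, 0.27, 3.04
print(f"Neff excess over the SM value: {(neff - neff_sm) / neff_err:.1f} sigma")  # ~1.0 sigma
```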

Yet, as noted previously, an additional sterile neutrino with a mass of 0.001 to 0.1 eV has been largely ruled out by two separate sets of reactor experiments, leaving a narrow window of 0.1 eV to 0.22 eV for an additional neutrino, despite reactor anomaly data fitting best to a 1 eV additional neutrino species.

In short, the cosmology data, taken as a whole and in light of everything else we know about neutrinos, disfavor the existence of a light reactor anomaly sterile neutrino that oscillates with the ordinary Standard Model neutrinos.  A closing talk at Neutrino 2014 summarized the state of affairs by stating that the dominant 3+1 paradigm is already almost ruled out by current experimental data.  An earlier talk explored the tensions in that data in greater depth.

Neutrinoless Double Beta Decay

Nothing has changed since my last post on neutrinoless double beta decay experiment results in December of 2013.  But, the Neutrino 2014 conference in Boston last week did recap the data from the half dozen or so current experiments and from a similar number of experiments that will come online before long.

EXO and Kamland report results for the rate of 2vBB decays (which the Standard Model permits) that are consistent with each other at the 1.4 sigma level.

The EXO half-life result is (2.165 +/- 0.0573) * 10^21 years.
The Kamland half-life result is (2.32 +/- 0.094) * 10^21 years.
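
As a quick cross-check of the quoted 1.4 sigma figure, here is a minimal Python sketch (my own illustration) that quantifies the tension between the two measurements in the usual naive way, by dividing the difference of the central values by the quadrature sum of the uncertainties:

```python
from math import sqrt

# 2vBB half-life measurements quoted above, in units of 10^21 years.
exo, exo_err = 2.165, 0.0573
kamland, kamland_err = 2.32, 0.094

# Naive tension between two independent, roughly Gaussian measurements.
tension_sigma = abs(exo - kamland) / sqrt(exo_err**2 + kamland_err**2)
print(f"EXO vs. Kamland tension: {tension_sigma:.1f} sigma")  # ~1.4 sigma
```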

These confirmations give us some confidence that the methodology of these experiments is sound, and they provide a firm experimental data point against which BSM theories that allow the relative rates of 2vBB and 0vBB decays to be calculated can be evaluated.  We now know, as the data points below reveal, that neutrinoless double beta decay, if it happens at all, is at least 10,000 times less common than 2vBB decay.

The GERDA experiment, which rules out neutrinoless double beta decay with a half-life shorter than 2.1*10^25 years at the 90% confidence level, is the strongest individual exclusion, followed closely by the EXO-200 and Kamland results.  The coefficient in front of 10^25 years at the 90% confidence level is 1.9 for Kamland, 0.28 for CUORICINO, and 0.11 for NEMO-3.  The combined exclusion from all of the data is a bit stronger, but still on the same order of magnitude.
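
The order of magnitude behind the "at least 10,000 times less common" statement above can be read off directly from these numbers.  A rough Python sketch (my own illustration; it glosses over the fact that the experiments use different isotopes, with GERDA using germanium-76 while EXO-200 and Kamland use xenon-136, so this is an order-of-magnitude comparison only):

```python
# 0vBB half-life lower limits (90% CL), in units of 10^25 years, as quoted above.
limits_0vbb = {"GERDA": 2.1, "Kamland": 1.9, "CUORICINO": 0.28, "NEMO-3": 0.11}

# Measured 2vBB half-life (the EXO value quoted above), in units of 10^21 years.
halflife_2vbb = 2.165

for name, limit in limits_0vbb.items():
    # A longer half-life means a rarer decay, so the ratio of half-lives is the
    # minimum factor by which 0vBB is rarer than 2vBB (roughly 10^4 for the
    # strongest limits).
    ratio = (limit * 1e25) / (halflife_2vbb * 1e21)
    print(f"{name}: 0vBB is at least ~{ratio:,.0f} times rarer than 2vBB")
```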

These correspond to upper limits on the Majorana neutrino mass of 0.14 eV to 0.34 eV from the strictest results, and 0.33 eV to 0.87 eV from the least strict bounds (ranges that presumably reflect nuclear matrix element uncertainties).  Of course, oscillation data put the minimum absolute neutrino mass scales for both the inverted and the normal hierarchies well below those levels.

Ruling out a purely Majorana mass for neutrinos, if there is an inverted hierarchy, is something that is "just around the corner" over the next several years with currently planned experiments.  Ruling out a purely Majorana mass for neutrinos if there is a normal hierarchy is beyond the scope of neutrinoless double beta decay experiments currently on the drawing board and will take quite a while.  Current experiments have detector material masses on the order of 1 ton; an experiment that could rule out a purely Majorana mass in a normal neutrino mass hierarchy would require on the order of 1,000 tons of material.

Also, in many BSM theories, including many SUSY theories, there are sources of neutrinoless double beta decay in addition to those associated with Majorana mass neutrinos.  So, as the exclusions of neutrinoless double beta decay grow stronger over time, the limits on these BSM theories are tightened, even before Majorana mass neutrinos themselves can be ruled out.

Glass-half-full Majorana neutrino advocates would argue that the results so far say little, because current experiments aren't precise enough to detect neutrinoless double beta decay at the frequencies their theories predict, even if it exists.  Glass-half-empty skeptics would argue that the absence of any positive evidence of lepton number violation is yet another instance of BSM theories failing to deliver any observable evidence in their favor.

More neutrally, in any BSM theory which has both Majorana neutrinos and other sources of neutrinoless double beta decay, the alternative sources can't be more than twice as great as the Majorana sources in an inverted hierarchy and can't be more than about five times as great as the Majorana sources in a normal hierarchy.  This cutoff is sufficiently strong to rule out a fair amount of BSM parameter space.

Also, it isn't entirely clear that lepton number violation from Majorana mass neutrinos alone is actually sufficient to account for the actual lepton number of the universe, which is strongly suspected to have far more antileptons than leptons (the lepton number of the universe is largely a function of the ratio of neutrinos to antineutrinos in the universe, and would be extremely close to 1:1 if the lepton number of the universe started out at 0 and there were no lepton number violating processes).  If other BSM contributions to lepton number violation are not substantial and neutrinoless double beta decay is not observed, the question of how the universe acquired a non-zero lepton number (if it has one) remains unsolved.

It is also worth noting that there were no announcements at Neutrino 2014 of evidence for lepton number violating processes from any other source.  Neutrinoless double beta decay remains the most promising place, given current experimental limitations, to look for lepton number violation.

Thursday, June 5, 2014

Most Northern Hemisphere Men Have Patrilineal Ancestry In SE Asia

[W]e genotype 13 new highly informative single-nucleotide polymorphisms in a worldwide sample of 4413 males that carry the derived allele at M526, and reconstruct an NRY haplogroup tree with significantly higher resolution for the major clade within haplogroup K, K-M526.

Although K-M526 was previously characterized by a single polytomy of eight major branches, the phylogenetic structure of haplogroup K-M526 is now resolved into four major subclades (K2a–d). The largest of these subclades, K2b, is divided into two clusters: K2b1 and K2b2. K2b1 combines the previously known haplogroups M, S, K-P60 and K-P79, whereas K2b2 comprises haplogroups P and its subhaplogroups Q and R.

Interestingly, the monophyletic group formed by haplogroups R and Q, which make up the majority of paternal lineages in Europe, Central Asia and the Americas, represents the only subclade with K2b that is not geographically restricted to Southeast Asia and Oceania. Estimates of the interval times for the branching events between M9 and P295 point to an initial rapid diversification process of K-M526 that likely occurred in Southeast Asia, with subsequent westward expansions of the ancestors of haplogroups R and Q.
From Tatiana M. Karafet, Fernando L. Mendez, Herawati Sudoyo, J. Stephen Lansing and Michael F. Hammer, "Improved phylogenetic resolution and rapid diversification of Y-chromosome haplogroup K-M526 in Southeast Asia", European Journal of Human Genetics (June 4, 2014).

Y-DNA R and Q Have Origins In SE Asia, Probably Between 45 kya and 70 kya

The most common Y-DNA haplogroup type in Europe is R (specifically, R1a and R1b), although this distribution probably didn't arise for most of Europe until the second wave of farmer/herder migrations to Europe, a thousand to two thousand years or so after the first farmers brought the Neolithic Revolution to Europe. 

R1a is associated with Indo-European settlement starting as far back as the Corded Ware culture, or earlier, in Central and Eastern Europe, with ancient DNA evidence of the population's genotype extending all of the way to the Tarim Basin in far western China at its greatest extent, for essentially all of the Bronze Age and into the 7th century CE or so, when Uyghur populations emerge in the region.

R1b is the predominant Y-DNA haplogroup of Western Europe, and a very basal branch of it (R1b-V88) is common among the mostly pastoralist speakers of the Chadic languages in the African Sahel, a branch that probably arrived there around 5,200 years ago based on archaeological evidence.  A migration probably brought this Y-DNA haplogroup into Western Europe at high frequencies sometime after that, although the precise archaeological culture associated with the predominance of R1b in Western Europe isn't entirely clear.  I've advanced the hypothesis that the Bell Beaker culture (the earliest to have R1b ancient DNA discovered so far) was responsible, for lack of any better candidates.  Other evidence points to the prior Megalithic culture, whose members were the first farmers in much of Western Europe, but the evidence supporting that hypothesis is very fragile, resting mostly on archaeologically inferred population booms and busts, with no ancient DNA at all.

Y-DNA haplogroup R2 has a largely South Asian distribution, with the highest concentrations in the Indus River Valley. Ancient DNA reveals a Y-DNA R* individual, the Mal'ta boy from ca. 24,000 years ago near the Altai region of Southern Siberia, with significant autosomal genetic affinity to modern Native Americans.

Y-DNA haplogroup Q is the most common Y-DNA haplogroup of Native Americans and is also found in Siberia.

Thus, the Y-DNA P clade within the newly redesignated Y-DNA K2b is the leading Y-DNA clade of the northern part of the Northern Hemisphere.  The latest study cited above indicates that the Y-DNA P clade that includes R and Q probably arose in Southeast Asia, coincident with the expansion of Y-DNA K2 in general, and that R and Q then migrated west together before expanding into their current ranges from an intermediate source somewhere in the region spanning India, Iran, and Central Asia.

More On The Expansion of K2 into SE Asia

Both Y-DNA haplogroup C (whose Asia-specific break from other Eurasian Y-DNA haplogroups is more basal than K) and Y-DNA haplogroup P (which is part of K2) are found in Australian Aboriginal men and indigenous Papuan peoples. So, the rapid expansion of K2 that gave rise to its sub-haplogroups had to have taken place before about 45,000 years ago, the date of the earliest securely dated archaeological evidence of a modern human presence in those places.

It isn't clear if C and K arrived in Southeast Asia in a single migration of a genetically mixed population, or if they arrived in two separate waves of migration that only began to produce a mixed Y-DNA haplogroup population in SE Asia prior to the migrations to Australia and Papua New Guinea.

Suggestive evidence, but no smoking gun, indicates that modern humans most likely reached Southeast Asia from South Asia around the time of the Toba volcanic eruption ca. 74,000 years ago.  A date of 74,000 years ago would also fit a mutation-rate-estimated expansion of mtDNA haplogroups N and R out of India.

This time span (from 45,000 to 74,000 years ago) is long enough to accommodate either a one-wave or a two-wave scenario.  Dates ranging from 100,000 years ago to about 55,000 years ago could be squared with interpretations of the archaeological evidence marshaled to prove one hypothesis or another.

My weak personal bias is to expect that these waves of migration were separate, with C expanding first (probably around 74,000 years ago), based mostly upon the phylogeny of the Y-DNA C haplogroup as it expanded in Asia, which seems to show a rapid "race across the coastal route" path from India that differs quite a bit from the pattern seen in haplogroup K.  Haplogroup K2 would then have migrated to SE Asia in a separate wave perhaps 5,000 to 10,000 years later.

Also, sometime between 45,000 and 74,000 years ago, somewhere between India and the island of Flores, these early modern human migrants to Asia admixed with archaic hominins whose Denisovan-like genetic traces are present at significant levels in modern Australian Aborigines, Melanesians and Filipino Negrito populations.  The peak admixture percentage in the source population for these groups is estimated to have been around 8%.  My weak personal bias is that this admixture took place mostly on the island of Flores with H. floresiensis, which would have carried Denisovan DNA in that scenario, but I'm less confident of any particular scenario for Denisovan admixture than I once was, mostly because of (1) possible evidence of a low but non-zero level of mainland Asian Denisovan admixture, highly diluted but not eliminated, that is unlikely to be due to Melanesian back-migration, and (2) suggestions from physical anthropology and archaeology that H. floresiensis may have been a Homo erectus population that evolved to a smaller size due to island dwarfism.  But, I have not yet seen really solid evidence for either point.

Homo erectus was the first hominin species to leave Africa, and arrived in Asia, including Southeast Asia, around 1,800,000 years ago.  It isn't clear if Homo erectus was still in SE Asia at the time and went extinct as part of the same Toba/migration event, or if this hominin species went extinct in the region at an earlier time, because the archaeological record of it more recently than 100,000 years ago is pretty much non-existent.  The genetic evidence related to Denisovan admixture strongly supports the conclusion that the archaic Denisovan admixture seen in modern humans is from a hominin species that probably was not a direct descendant of Asian Homo erectus.

The species with Denisovan genetics might have replaced Homo erectus before modern humans reached Southeast Asia from South Asia, or might have co-existed with Homo erectus in Asia until modern humans brought about the extinction of Homo erectus.  In either case, the extinction of Homo erectus could have happened either directly, via genocide, or indirectly, for example, through impacts on food supply and habitat.  A complete extinction of Homo erectus, without a strong role in that process for another hominin species, seems unlikely for an archaic hominin species that had already managed to persist in Asia for 1,700,000 years or so.

A Toba/modern human migration model of Homo erectus extinction seems more parsimonious.

There is no meaningful archaeological evidence of an intermediate archaic hominin species other than Homo floresiensis (which seems like a poor candidate to wipe out a continent-wide Homo erectus population), apart from the Denisovan admixture in the populations mentioned above, but we really don't have much solid evidence one way or the other to discriminate between different models of Homo erectus extinction.

Footnote on Y-DNA Haplogroup D

Y-DNA haplogroup D is the other main Y-DNA haplogroup, in addition to C and K2, that is found in East Eurasia.  It is very common in Japan and is also found in a swath of land from the Andaman Islands, across South Asia, to Tibet and into the steppe to the north of Tibet.  The Japan v. non-Japan split in the phylogeny is more basal than the splits within each category and is almost complete (i.e., only Japanese D is found in Japan, and only non-Japanese D is found in South Asia).

The details of the DE split are controversial and beyond the scope of this post, but the question with Y-DNA haplogroup D is whether its island-like distribution is a result of a much wider range that was split up by subsequent migration, or whether it migrated to Asia after Y-DNA C and K2, perhaps seeking out niches not already occupied by modern humans in Asia ca. 30-40 kya.  I tend to favor the latter scenario, because there is so little evidence of Y-DNA D anywhere else in Asia, and because the latter scenario is still consistent with the evidence on the earliest migrations of modern humans to Japan.  I also tend to favor a scenario in which the Japanese Y-DNA D carriers arrived via a northern route from somewhere north of Tibet, rather than a southern coastal route.

On the other hand, mutation rate data tend to favor a split between Y-DNA D and Y-DNA E around the same time as the split between Y-DNA K1 and Y-DNA K2, for which the archaeologically favored date would be around 70,000 years ago, or perhaps a few thousand years earlier.  It could be that Y-DNA D expanded along a northern route from Central Asia to Korea and then trickled down into India and the Andaman Islands from Tibet, prior to the LGM ca. 20,000 years ago, while Y-DNA C and K2 expanded along a southern route from India into Southeast Asia, but that the LGM ice age wiped out all but a few relict populations with Y-DNA D, thus breaking up its range.

Another factor favoring Y-DNA haplogroup D as a post-Y-DNA C and K2 wave is that none of the Y-DNA haplogroup D populations has any significant non-Neanderthal archaic hominin admixture.

The New Y-DNA Haplogroup K1

The Y-DNA haplogroups known as L (mostly South Asian, especially in the Indus River Valley) and T (a geographically broad early Neolithic and/or Epipaleolithic expansion from around Mesopotamia into Europe and parts of Africa, since diluted in many places) have been included in a newly designated Y-DNA K1 haplogroup.

It would appear that K1 stayed in West Eurasia (probably in Iran or South Asia) after it split from K2, which continued on to SE Asia around 70 kya, if not earlier.

K1 could also have backmigrated from Southeast Asia around the same time as Y-DNA haplogroups Q and R, but that seems much less likely as there are few traces of Y-DNA haplogroup L or T in Southeast Asia that can't be much more easily explained by later migrations.





Wednesday, June 4, 2014

Forward Backward Top Quark Production Asymmetry Gone

Another once notable potential signal of beyond the Standard Model physics, the forward-backward top quark production asymmetry, has disappeared as experimental precision has increased, more data has reduced statistical error, and the theoretical prediction has been refined.  The data from both Tevatron and the LHC now match the Standard Model prediction.

Discovering The Z Boson

In the early 1970s, a few years after my father graduated from Stanford with an engineering PhD, while he was working as an associate professor at Georgia Tech in Atlanta, Georgia and was just starting his family, physicists discovered "weak neutral currents".

We now understand these phenomena to be mediated by the Z boson of the Standard Model of Particle Physics.  At the time, this had been predicted by the Glashow-Salam-Weinberg model that would eventually become the Standard Model, but had not yet been confirmed experimentally.  The 2012 discovery of the Higgs boson provided the finishing touch of experimental confirmation for the direct descendant of that model.

A nice introductory talk at the Neutrino 2014 conference in Boston this week tells the story of what it was like to be a scientist doing the work that led to that discovery back then.  The drama, played out with slide rules, electronic punch card mainframe computers, handwritten transparencies, and small collaborations, recalls stories that I heard from my father when I was growing up about academic life in the STEM fields in the early 1970s.

It is remarkable how sophisticated those scientists could be without so many of the fundamental tools used by modern investigators.

Degenerate Sterile Neutrino Ruled Out

Luboš Motl of The Reference Frame blog reports that the U.S.-based MINOS experiment has largely ruled out the existence of a sterile neutrino flavor, in addition to the three "fertile" neutrino flavors (i.e., neutrinos that interact via the weak nuclear force, which are either left-handed neutrinos or right-handed anti-neutrinos), with a mass nearly degenerate with the fertile neutrino species (a mass difference squared of 0.01 eV^2 or less, which translates into a mass difference of 0.1 eV or less).
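
As a quick check on that translation, here is a one-line sketch of the bound involved, assuming only that both masses are non-negative (here m_4 denotes the hypothetical sterile state and m_i one of the fertile states; these labels are mine):

```latex
\lvert m_4 - m_i \rvert
  \;=\; \frac{\lvert m_4^2 - m_i^2 \rvert}{m_4 + m_i}
  \;\le\; \sqrt{\lvert m_4^2 - m_i^2 \rvert}
  \;=\; \sqrt{\lvert \Delta m^2 \rvert}
  \;\le\; \sqrt{0.01\ \mathrm{eV}^2}
  \;=\; 0.1\ \mathrm{eV}
```

The middle inequality holds because the mass splitting can never exceed the sum of the two masses, so the square root of the mass-squared difference is always an upper bound on the splitting.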

The MINOS results presented at the Neutrino 2014 conference in Boston have been replicated almost exactly by the Chinese Daya Bay neutrino experiment, which uses similar methods (i.e., a set of several neutrino detectors at different distances from each other and from the neutrino source) and which reported its results at the same conference.  The fact that the conclusions have been simultaneously and independently confirmed gives us considerable confidence that they are correct.

Fortunately, in the MINOS and Daya Bay experiments, the data needed to measure the mass differences and mixing angles of a fourth or fifth sterile neutrino relative to the three Standard Model neutrinos do not impact the determination of Theta 13, the squared mass difference between states one and three, or some of the other key parameters of the model.  If those parameters were strongly interdependent in fits of the experimental data to neutrino oscillation models with sterile neutrinos, the analysis would be considerably more complicated.

According to the new data from MINOS and Daya Bay, any sterile neutrino would have to be either at least 100 meV heavier than the heaviest of the three "fertile" neutrinos, or less than 1 meV different in mass from them.

Background

The Three Light "Fertile" Neutrinos of the Standard Model

Multiple experimental sources, both neutrino oscillation measurements and cosmology observations, have confirmed that there are at least the three Standard Model fertile neutrino flavors that have been observed in weak force decays.  Precision electroweak data, for example from the LEP experiment, rule out the existence of additional "fertile" neutrinos with a mass of less than 45,000,000 eV (i.e., 45 GeV).

The absolute value of the mass difference between the first and second neutrino mass eigenstates is about 8 meV. The absolute value of the mass difference between the second and third neutrino mass eigenstates is about 50 meV.

The sum of the three neutrino mass eigenstates is at least 0.058 eV in a "normal" mass hierarchy, and at least 0.108 eV in an inverted mass hierarchy.
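
A minimal Python sketch of where those floor values come from, using the rounded splittings quoted above and treating each quoted figure as the square root of the corresponding mass-squared difference.  This is my own illustration; the exact published minima depend on which precise oscillation parameters are plugged in, so it only reproduces the quoted 0.058 eV and 0.108 eV approximately:

```python
from math import sqrt

# Rounded mass-squared splittings implied by the figures above (in meV^2).
dm21_sq = 8.0**2    # roughly the (8 meV)^2 "solar" splitting
dm32_sq = 50.0**2   # roughly the (50 meV)^2 "atmospheric" splitting

# Normal hierarchy: the lightest state is taken as ~0, so the other two masses
# are fixed by the splittings.
m1, m2 = 0.0, sqrt(dm21_sq)
m3 = sqrt(dm21_sq + dm32_sq)
print(f"Minimum sum, normal hierarchy: ~{(m1 + m2 + m3) / 1000:.3f} eV")    # ~0.059 eV

# Inverted hierarchy: the lightest state is taken as ~0, and the two heavier
# states both sit near the atmospheric splitting.
m3, m1 = 0.0, sqrt(dm32_sq)
m2 = sqrt(dm32_sq + dm21_sq)
print(f"Minimum sum, inverted hierarchy: ~{(m1 + m2 + m3) / 1000:.3f} eV")  # ~0.10 eV
```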

The latest direct experimental measurements reported at Neutrino 2014 merely confirm that the electron neutrino's most favored mass eigenstate has a mass of under about 0.225 eV (which would imply a sum of the three fertile neutrino mass eigenstates no higher than 0.303 eV, given the experimentally measured mass differences between these neutrino mass states).  The highest credible theory-based prediction of this sum of mass states that I've seen, which assumes a nearly degenerate inverted mass hierarchy, is 0.254 eV.  Direct experimental measurements of the other neutrino flavors add no meaningful insight at all, because they are even less precise and absurdly exceed what other well-motivated theoretical and observational considerations permit.

Thus, the highest possible absolute sum of the three masses is about five times the lowest possible value.  The absolute value of the neutrino masses is certainly still an open question, but it has been narrowed to less than a single order of magnitude, when measured as a sum of the fertile neutrino mass states.

The Possibility of Light "Reactor Anomaly" Sterile Neutrinos

The "reactor anomaly" suggests a less than definitive possibility that there is another, "sterile" neutrino that oscillates with the three fertile neutrinos. But there are tensions between the various data points suggesting this possibility, particularly between appearance and disappearance data at some reactors, which should be functions of each other.

Best fits for reactor anomaly sterile neutrino models have pointed to a single sterile neutrino of about 1 eV or more in rest mass that mixes only infrequently with the fertile neutrinos (compared to the rates at which the fertile neutrinos mix with each other; specifically, for sterile neutrinos of more than 0.1 eV, not more than 2.2% of the time for an electron neutrino). As the Daya Bay pre-print linked earlier in this post explains:
At the moment, there are three experimental results from neutrino oscillation experiments, which give hints that sterile neutrinos could exist. These three results, usually referred to as anomalies, are the LSND (and MiniBooNE) anomaly, the Gallium anomaly, and the reactor anomaly, which all point to sterile neutrinos with mass of the order of 1 eV and small mixing. It should be noted that if such sterile neutrinos exist, they could be produced in the early Universe, and have played an important role in the cosmological evolution. Global fits to data from short-baseline neutrino oscillation experiments suggest that the data can be described by either three active and one sterile (3+1) neutrinos or three active and two sterile (3+2) neutrinos. However significant constraints come from experiments which would appear to disfavor these anomalies.
In general, while the reactor anomaly remains one of the more important unsolved questions in physics at this point, the significance of the data supporting it has declined somewhat over time.  One recent study estimated the statistical significance of the reactor anomaly to be just 1.4 standard deviations.  Systematic error and flawed theoretical calculations may account for much of it.

Best fits for 1+3+1 models, with one sterile neutrino lighter than any of the three Standard Model neutrinos, and one heavier, suggest a best fit mass of 3.2 eV for the five neutrino flavors combined.

Cosmology Data Regarding The Number Of Neutrino Flavors And Their Mass

Cosmological estimates of the number of effective neutrino species from cosmic background radiation observations and similar data have also pointed, but less than definitively, to the possibility of a light sterile neutrino in addition to the three Standard Model neutrinos.

Cosmology measurements favor a combined sum of the three neutrino masses that is less than about 0.3 eV, subject to a number of model-dependent considerations (following the final Planck data on that point).  A reactor anomaly sterile neutrino is not forbidden by cosmology data on the number of effective neutrino species (Neff), but at a rest mass of about 1 eV it is disfavored by data on the estimated sum of the masses of the respective neutrino flavors, because 1 eV exceeds the best estimates for the sum of all of the neutrino flavor masses combined.  Best fits for 1+3+1 models are strongly at odds with cosmology data on the sum of the neutrino masses in all possible flavors.

However, it is important to note that, for cosmology purposes, neutrinos are defined to have masses on the order of 1 eV or less, even if heavier particles would be considered neutrinos for other purposes in fundamental physics.  So, for example, a 2500 eV sterile neutrino dark matter candidate would not count as a neutrino for cosmology observation purposes.  Even a 3.1 eV sterile neutrino pushes the boundaries of what would be considered a neutrino for cosmology model purposes.

Cosmology data regarding Neff strongly favor 3+1 models (i.e., models with a single sterile neutrino) relative to 3+2 models (i.e., models with two sterile neutrinos), and the chi-square fits to reactor data, given the respective numbers of degrees of freedom in each model, are not substantially improved by using 3+2 models rather than 3+1 models.  Reactor data tend to point the same way, although 1+3+1 models are disfavored far less strongly than 3+2 models.

Significance of Findings

The exclusion of degenerate sterile neutrinos is not unexpected.  But it largely rules out a possibility that experiments lacking multiple detectors at widely separated distances from each other and from the source could not exclude.  This allows data from other experiments to be fit, without loss of rigor, to less exotic sterile neutrino parameter spaces that those experiments cannot distinguish on their own.

A reactor anomaly sterile neutrino would be too light to be a warm dark matter or cold dark matter candidate if it is a thermal relic (if it were a thermal relic, it would be "hot dark matter," which is experimentally excluded as an explanation for dark matter phenomena).  But, obviously, any experimental evidence for a non-Standard Model fundamental particle, even if it is not definitive, is a big deal.

In the end, my prediction is that the reactor anomaly will evaporate as systematic errors and theoretical calculation issues are clarified, eventually ruling out the possibility of a light sterile neutrino.  There is too much tension among the various pieces of weak evidence in favor of sterile neutrinos for that evidence to be likely to stand the test of time, and there is too little theoretical motivation for their existence.  But it is too early to rule their existence out based upon observational evidence alone at this point.