Monday, April 29, 2013

More Direct Dark Matter Detection Experiment Result Skepticism

Direct dark matter detection experiments appear to have cried wolf so many times that skepticism of their results is (appropriately) mounting.
Apparent effects of dark matter have been “discovered” so many times in the last decade that you may by now feel a bit jaded, or at least dispassionate. Certainly I do. Some day, some year, one of these many hints may turn out to be the real thing. But of the current hints? We’ve got at least six, and they can’t all be real, because they’re not consistent with one another. It’s certain that several of them are false alarms; and once you open that door a crack, you have to consider flinging it wide open, asking: why can’t “several” be “all six”?
Professor Matt Strassler.

The evidence that something is causing the universe to behave at astronomical scales in a manner inconsistent with general relativity acting primarily on luminous matter is overwhelming and largely internally consistent.  Most commonly these effects are attributed to "dark matter" and "dark energy."

But, the evidence that particular experiments have actually discovered new kinds of particles that cause these effects is not currently compelling.  They are seeing "something" that gets tagged as an event, but it is very hard to distinguish those events from experimental error and unknown but pedestrian background effects.

Monday, April 22, 2013

Evidence Of 22,000 Year Old Human Habitation In Brazil Is Weak

In the extraordinary claims require extraordinary evidence department, I am deeply skeptical of the claim that archaeologists have found human-made stone tools in a Brazilian rock-shelter that date to more than 22,000 years ago. (The linked story's source is: C. Lahaye et al., Human occupation in South America by 20,000 BC: The Toca da Tira Peia site, Piaui, Brazil, Journal of Archaeological Science (March 4, 2013)).

While there very likely were pre-Clovis modern humans in the New World, the evidence that there were humans in Brazil nine thousand years pre-Clovis is not strong.  The new evidence at the Toca da Tira Peia rock-shelter comes from the same Brazilian national park as Pedra Furada, the site of a previous claim alleged by its investigators to be 50,000 years old.

Skeptics have argued that the "unearthed burned wood and sharp-edged stones" dated to such ancient time periods, "could have resulted from natural fires and rock slides." 
[The new] site’s location at the base of a steep cliff raises the possibility that crude, sharp-edged stones resulted from falling rocks, not human handiwork, says archaeologist Gary Haynes of the University of Nevada, Reno. Another possibility is that capuchins or other monkeys produced the tools, says archaeologist Stuart Fiedel of Louis Berger Group, an environmental consulting firm in Richmond, Va. 
Other researchers think that the discoveries are human-made implements similar to those found in Chile and Peru: at the Monte Verde site, dated to 14,000 years ago, and at another site possibly as old as 33,000 years ago (the dating method for the older dates is likewise widely questioned).

The dating methods are also suspect. 
Dating the artifacts hinges on calculations of how long ago objects were buried by soil. Various environmental conditions, including fluctuations in soil moisture, could have distorted these age estimates, Haynes says. . . . An absence of burned wood or other finds suitable for radiocarbon dating at Toca da Tira Peia is a problem, because that’s the standard method for estimating the age of sites up to around 40,000 years ago, Dillehay says. But if people reached South America by 20,000 years ago, “this is the type of archaeological record we might expect: ephemeral and lightly scattered material in local shelters.” 

The dates of the 113 putative human artifacts were obtained with a "technique that measures natural radiation damage in excavated quartz grains"; using it, "the scientists estimated that the last exposure of soil to sunlight ranged from about 4,000 years ago in the top layer to 22,000 years ago in the third layer. Lahaye says that 15 human-altered stones from the bottom two soil layers must be older than 22,000 years."

Fundamentally, the dates are questionable because:

* There is no historical precedent for modern humans moving into virgin territory on a sustained basis for thousands of years without expanding exponentially and leaving an obvious sign of their presence.  If the site showed the signs of a marginal community that lasted a few hundred years and collapsed, that might be imaginable.  But, this site purports to show continuous habitation for eighteen thousand years or more.

* There is an absence of intermediate sites between South America and any possible source of humans in the appropriate time frame.  (There is one alleged older site in the American Southeast with similar dating and hominin identification issues).

* There is no skeletal evidence matching the remains definitively to modern humans prior to 14,000 years ago or so.  Even if the dates were indisputably that old and the artifacts were made by hominins, in that time frame a small band of archaic hominins with less of a capacity to dominate their surroundings might be more plausible.

* No radiocarbon dating has been possible and it is not well established that the dating method used is really accurate to the necessary degree of precision.

Friday, April 19, 2013

Local Dark Matter Density

A detailed model of the inferred dark matter halo of the Milky Way galaxy based on the observed motions of stars in our galaxy implies that the density of dark matter in the vicinity of our solar system is 0.49 +0.08/-0.09 GeV/cm^3. 

If a dark matter particle is 2 keV as implied by other studies, then the density in dark matter particles per volume in this vicinity of the galaxy is 245,000 per cm^3, which is 245 dark matter particles per millimeter^3.

At an 8 GeV mass (which is disfavored by other measurements) there would be one particle per sixteen cm^3 (about one per cubic inch).  At a 130 GeV mass (also disfavored by other measurements) there would be one particle per 260 cm^3 (a cube slightly more than 6 cm on each side).
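For readers who want to check the arithmetic, here is a minimal Python sketch of the conversion from mass density to particle number density (the density value and masses are the ones quoted above; the helper name is my own):

GEV_PER_CM3 = 0.49   # local dark matter density from the halo model discussed above

def particles_per_cm3(mass_gev, density=GEV_PER_CM3):
    # Number of dark matter particles per cubic centimeter for a given particle mass in GeV.
    return density / mass_gev

print(particles_per_cm3(2e-6))    # 2 keV particle: ~245,000 per cm^3 (245 per mm^3)
print(particles_per_cm3(8.0))     # 8 GeV particle: ~0.06 per cm^3, i.e. one per ~16 cm^3
print(particles_per_cm3(130.0))   # 130 GeV particle: ~0.004 per cm^3, i.e. one per ~265 cm^3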

Wednesday, April 17, 2013

Two Physics Quick Hits

* Dark Matter
Among the hints of dark matter, I believe that the three apparently decidedly non-background-like events seen by CDMS II represent the strongest hint of a dark matter particle we have seen so far.
However, there are other hints, too. . . . CDMS suggests a WIMP mass of 8.6 GeV; AMS-02 indicates 300 GeV or more; and we also have the Weniger line at Fermi which would imply a WIMP mass around 130-150 GeV. These numbers are apparently inconsistent with each other.
An obvious interpretation is that at most one of these signals is genuine and the other ones – or all of them – are illusions, signs of mundane astrophysics, or flukes. But there's also another possibility: two signals in the list or more may be legitimate and they could imply that the dark sector is complicated and there are several or many new particles in it, waiting to be firmly discovered.
From here.

Lubos Motl greatly understates the experimental difficulties facing potential dark matter particles in these mass ranges, even if these indications are anything more than experimental error or statistical flukes.

There are other contradictory results from other experiments at the same light dark matter scale as CDMS II.  Some purport to exclude the result that CDMS II claims to see, and either see nothing at all or see something else at a different mass or cross-section of interaction.  One other experiment seems to confirm the CDMS II result.

Even if there are dark matter particles, however, none of the hints of possible dark matter particles are consistent with the kind of dark matter particle needed to explain the astronomy data that is the reason we think dark matter exists at all.

Hot Dark Matter (i.e. neutrinos) does not fit the astronomy evidence.

Cosmic microwave background radiation measurements (most recently by the Planck satellite, which are as precise as it is theoretically possible to be in these measurements from the vicinity of Earth) establish that dark matter particles must have a mass significantly in excess of 1 eV (i.e. it can't be "hot dark matter"), and hypothesize that a bit more than a quarter of the mass-energy of the universe (and the lion's share of the matter in the universe) consists of unspecified stable, collisionless dark matter particle relics of a mass greater than 1 eV.

Cold Dark Matter (e.g. heavy WIMPs) does not fit the astronomy evidence.

Simulations of galaxy formation in the presence of dark matter are inconsistent with "cold dark matter" of roughly 8 GeV and up in particle mass.  Cold dark matter models, generically, predict far more dwarf galaxies than we observe, and also predict dark matter halo shapes inconsistent with those inferred from the movement of luminous objects in galaxies.

The conventional wisdom in the dark matter field has not yet fully come to terms with this fact, in part because a whole lot of very expensive experiments which are currently underway and have many years of observations ahead of them were designed under the old dominant WIMP dark matter paradigm that theoretical work now makes clear can't fit the data.

Warm Dark Matter could fit the astronomy evidence.

The best fit collisionless particle to the astronomy data at both the CMB and galaxy level is a single particle of approximately 2000 eV (i.e. 0.000002 GeV) mass, with an uncertainty of not much more than +/- 40%.  This is called "warm dark matter."

There is not yet a consensus, however, on whether this model can actually explain all observed dark matter phenomena.  A particle of this kind can definitely explain many dark matter phenomena, but it isn't at all clear that it can explain all of them.  Most importantly, it isn't clear that this model produces the right shape for dark matter halos in all of the different kinds of galaxies where these are observed (something that MOND models can do with a single parameter that has a strong track record of making accurate predictions in advance of astronomy evidence).

The simulations that prefer warm dark matter models over cold dark matter models also strongly disfavor models with multiple kinds of warm dark matter, although some kind of self-interaction of dark matter particles is not ruled out.  So, more than the aesthetic considerations that Lubos Motl discusses in his post disfavor a complicated dark matter sector.

This simulation based inconsistency with multiple kinds of dark matter disfavors dark matter scenarios in which the Standard Model is naively extended by creating three "right handed neutrinos" corresponding on a one-to-one basis with the weakly interacting left handed neutrinos of the Standard Model but with different and higher masses, with some sort of new interaction governing left handed neutrino to right handed neutrino interactions.

But, as explained below, any warm dark matter would have to be "sterile" with respect to the weak force.

These limitations on dark matter particles make models in which sterile neutrino-like dark matter particles arise in the quantum gravity part of a theory, rather than in the Standard Model, electro-weak plus strong force part of the theory, attractive.

Light WIMPs are ruled out by particle physics, although light "sterile" particles are not.

No weakly interacting dark matter particle of less than 45 GeV is consistent with the experimental evidence.  Any weakly interacting particle with less than half of the Z boson mass would have been produced in either W boson or Z boson decays.  Moreover, if there were some new weakly interacting particle with a mass of between 45 GeV and 63 GeV, it would have shown up in Higgs boson decays; yet the LHC has to date observed Higgs boson decays that are consistent with a Standard Model prediction that does not include such a particle, to within +/- 2% or so on a global basis.

In other words, any dark matter particle of less than 63 GeV would have to be "sterile" which is to say that it does not interact via the weak force.  This theoretical consideration strongly disfavors purported observations of weakly interacting matter in the 8 GeV to 20 GeV range over a wide range of cross-sections of interaction, where direct dark matter detection experiments are producing contradictory results.
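The kinematic thresholds driving this argument are simply half of the Z boson and Higgs boson masses. A trivial Python check, using commonly quoted mass values rather than anything from the experiments discussed above:

M_Z = 91.19   # GeV, Z boson mass
M_H = 125.0   # GeV, Higgs boson mass (approximate LHC value)

print(M_Z / 2)   # ~45.6 GeV: below this, a weakly interacting particle would appear in Z decays
print(M_H / 2)   # ~62.5 GeV: below this, it would also appear in Higgs boson decays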

It is also widely recognized that dark matter cannot fit current models if it interacts via the electromagnetic force (i.e. if it is "charged") or if it has a quantum chromodynamic color charge (as quarks and gluons do). This kind of dark matter would have too strong a cross-section of interaction with ordinary matter to fit the collisionless or solely self-interacting nature it must have to fit the dark matter paradigm for explaining the astronomy data.

So, sterile dark matter can't interact via any of the three Standard Model forces, although it must interact via gravity and could interact via some currently undiscovered fifth force via some currently undiscovered particle (or if dark matter were bosonic, via self-interactions).

Neither heavy nor light WIMPs fit the experimental evidence as dark matter candidates.

Since light WIMPs (weakly interacting massive particles) under 63 GeV are ruled out by particle physics, and all WIMPs in the MeV mass range and up are ruled out by the astronomy data, the WIMP paradigm for dark matter that reigned for most of the last couple of decades is dead except for the funeral.

Even if there are heavy WIMPs out there with masses in excess of 63 GeV which are being detected (e.g. by the Fermi line, by AMS-02, or at the LHC at some future date), these WIMPs can't be the dark matter that accounts for the bulk of the matter in the universe.

Direct detection of WIMP dark matter candidates is hard.

It is possible to estimate with reasonable accuracy the density of dark matter that should be present in the vicinity of Earth if the shape of the Milky Way's dark matter halo is consistent with the gravitational effects attributed to dark matter that we observe in our own galaxy. 

So, for any given dark matter particle mass it is elementary to convert that dark matter density into the number of dark matter particles in a given volume of space.  It takes a few more assumptions, but not many, to predict the number of dark matter particles that should pass through a given area in a given amount of time in the vicinity of the Earth, if dark matter particles have a given mass.
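As a rough illustration (my own back-of-the-envelope sketch, not a figure from any of the experiments discussed here), a naive flux estimate is just the number density times an assumed typical halo velocity of around 230 km/s:

RHO = 0.49      # GeV per cm^3, local dark matter density from the halo model above
V_DM = 230e5    # cm per second (~230 km/s), an assumed mean dark matter speed

def flux_per_cm2_per_s(mass_gev):
    # Approximate number of dark matter particles crossing 1 cm^2 per second.
    return (RHO / mass_gev) * V_DM

print(flux_per_cm2_per_s(8.6))   # ~1.3 million particles per cm^2 per second for an 8.6 GeV WIMP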

If you are looking for weakly interacting particles (WIMPs), you use essentially the same methodologies used to directly detect weakly interacting neutrinos, but calibrated so that the expected mass of the incoming particles is much greater.  The cross-section of weak force interaction, adjusted to fit constraints from astronomy observations such as colliding galactic clusters, further tunes your potential signal range.  You do your best to either shield the detector from background interactions or statistically subtract background interactions from your data, and then you wait for dark matter particles to interact via the weak force (i.e. collide) with your detector in a way that produces an event that your detector measures, which should happen only very infrequently because the cross-section of interaction is so small.

For heavy WIMPs in the GeV to hundreds of GeV mass ranges the signal should be pretty unmistakable, because it would be neutrino-like but much more powerful.  But, this is greatly complicated by a lack of an exhaustive understanding of the background.  We do not, for example, have a comprehensive list of all sources that create high energy leptonic cosmic rays.

Direct detection of sterile dark matter candidates is virtually impossible.

The trouble, however, is that sterile dark matter should have a cross-section of interaction of zero and never (or at least almost never) collide with the particles in your detector, unless there is some new fundamental force that governs interactions between dark matter and non-dark matter, rather than merely governing interactions between two dark matter particles.  Simple sterile dark matter particles, which only interact with non-dark matter via gravity, should be impossible to detect directly following the paradigm used to detect neutrinos directly.

Moreover, while it is possible that sterile dark matter particles might have annihilation interactions with each other, this happens only in models with very non-generic choices about their properties.  If relic dark matter is all matter and not anti-matter, and exists in only a single kind of particle, it might not annihilate at all, and even if it does, the signal of a two particle warm dark matter annihilation, which would have energies on the order of 4 keV (4,000 eV), would be very subtle and hard to distinguish from all sorts of other low energy background effects.

And, measuring individual dark matter particles via their gravitational effects is effectively impossible as well, because everything from dust to photons to cosmic microwave background radiation to uncharted mini-meteoroids to planets to stars near and far contributes to the background, and the gravitational pull of an individual dark matter particle is so slight.  It might be possible to directly measure the collective impact of all local dark matter in the vicinity via gravity with enough precision, but directly measuring individual sterile dark matter particles via gravity is so hard that it verges on being even theoretically impossible.

If sterile dark matter exists, its properties, like those of gluons which have never been directly observed in isolation, would have to be entirely inferred from indirect evidence.

* The Higgs boson self-coupling.

One of the properties of the Standard Model Higgs boson is that it couples not only to other particles, but also to itself, with a self-coupling constant lambda of about 0.13 at the energy scale of its own mass if it has a mass of about 125 GeV, as it has been measured to have at the LHC (see also in accord this paper of January 15, 2013).  The Higgs boson self-coupling constant, like the other Standard Model coupling constants and masses, varies with the energy scale of the interactions involved in a systematic way governed by the "beta function" of that particular coupling constant.
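As a back-of-the-envelope check of that number, the tree level Standard Model relation is lambda = m_H^2/(2*v^2), with v the Higgs field vacuum expectation value.  A short Python sketch (the relation and the input values are standard textbook inputs, not figures from the paper discussed below):

m_H = 125.0   # GeV, Higgs boson mass
v = 246.0     # GeV, Higgs field vacuum expectation value

lam = m_H ** 2 / (2 * v ** 2)
print(round(lam, 3))   # ~0.129, i.e. the "about 0.13" quoted above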

A Difficult To Measure Quantity

Many properties of the Higgs boson have already been measured experimentally with considerable precision, and closely match the Standard Model expectation.  The Higgs boson self-coupling, however, is one of the hardest properties of the Higgs boson to measure precisely in an experimental context.  With the current LHC data set, the January 15, 2013 paper notes that “if we assume or believe that the ‘true’ value of the triple Higgs coupling lambda is lambda_true = 1, then . . . We can conclude that the expected experimental result should lie within lambda in (0.62, 1.52) with 68% confidence (1 sigma), and lambda in (0.31, 3.08) at 95% (2 sigma) confidence.”  (Here lambda is expressed relative to the Standard Model value.)

Thus, if the Standard Model prediction is accurate, the measured value of lambda based on all LHC data to date should be equal to 0.08-0.20 at a one sigma confidence level and to 0.04-0.40 at the two sigma confidence level. 

With an incomplete LHC data set at the time their preprint was prepared that included only the first half of the LHC data to date, the authors were willing only to assert that the Higgs boson self-coupling constant lambda was positive, rather than zero or negative.  But, even a non-zero value of this self-coupling constant rules out many beyond the Standard Model theories. 

After the full LHC run (i.e. 3000/fb, which is about five times as much data as has been collected so far), it should be possible to obtain a +30%, -20% uncertainty on the Higgs boson self-coupling constant lambda.  If the Standard Model prediction is correct, that would be a measured value at the two sigma level between 0.10-0.21.

BSM Models With Different Higgs Boson Self-Coupling Constant Values

There are some quite subtle variations on the Standard Model that are identical in all experimentally measurable respects except that the Higgs boson self-coupling constant is different.  This has the effect of giving rise to an elevated level of Higgs boson pair production while not altering any other observable feature of Higgs boson decays.  So, if decay products derived from Higgs boson pair decays are more common relative to decay products derived from other means of Higgs boson production than expected in the Standard Model, then the Higgs boson self-coupling constant is higher than expected.

A recent post at Marco Frasca's Gauge Connection blog discusses two of these subtle Standard Model variants which are described at greater length in the following papers:

* Steele, T., & Wang, Z. (2013). Is Radiative Electroweak Symmetry Breaking Consistent with a 125 GeV Higgs Mass? Physical Review Letters, 110 (15) DOI: 10.1103/PhysRevLett.110.151601  (open access version available here).

This model, sometimes called the conformal formulation, predicts a different Higgs boson self-coupling constant value of lambda=0.23 at the Higgs boson mass for a Higgs boson of 125 GeV (77% more than the Standard Model prediction) in a scenario in which electroweak symmetry breaking takes place radiatively based on an ultraviolet (i.e. high energy) GUT or Planck scale boundary at which the Higgs boson self-coupling takes on a notable value (for example, zero) and has a value determined at lower energy scales by the beta function of the Higgs boson.

This model, in addition to providing a formula for the Higgs boson mass, thereby reducing the number of experimentally measured Standard Model constants by one, dispenses with the quadratic term in the Standard Model Higgs boson equation that generates the hierarchy problem.  The hierarchy problem, in turn, is a major part of the motivation for supersymmetry.  But, it changes no experimental observables other than the Higgs boson pair production rate, which is hard to measure precisely, even with a full set of LHC data that provides quite accurate measurements of most other Higgs boson properties.

Similar models are explored here.

* Marco Frasca (2013). Revisiting the Higgs sector of the Standard Model arXiv arXiv: 1303.3158v1

This model predicts the existence of excited higher energy "generations" of the Higgs boson at energies much higher than those that can be produced experimentally, giving rise to a predicted Higgs boson self-coupling constant value of lambda=0.36 (which is 177% greater than the Standard Model prediction).

Both of these alternative Higgs boson theories propose Higgs boson self-coupling constant values that are outside the one sigma confidence interval of the Standard Model value based upon the full LHC data to date, but are within the two sigma confidence interval of that value.  The 0.13 v. 0.23 Higgs self-coupling value distinction looks like it won't be possible to resolve at more than a 2.6 sigma level even after much more data collection at the LHC, although it should ultimately be possible to distinguish Frasca's estimate from the SM estimate at the 5.9 sigma level before the LHC is done, and at the 2 sigma level much sooner.
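The quoted separations can be roughly reproduced by treating the +30% side of the projected full-LHC uncertainty on the Standard Model value as a one sigma error; this is my own back-of-the-envelope reconstruction, not the papers' statistical analysis:

lam_sm = 0.13             # Standard Model Higgs self-coupling
sigma = 0.30 * lam_sm     # ~0.039, the +30% side of the projected full-LHC uncertainty

print(round((0.23 - lam_sm) / sigma, 1))   # ~2.6 sigma separation for the radiative/conformal model
print(round((0.36 - lam_sm) / sigma, 1))   # ~5.9 sigma separation for Frasca's model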

SUSY and the Higgs Boson Self-Coupling Constant

Supersymmetry (SUSY) models generically have at least five Higgs bosons, two of which have the neutral electromagnetic charge and even parity of the 125 GeV particle observed at the LHC, many of which might be quite heavy and have only a modest sub-TeV scale impact on experimental results. 

The closer the measured Higgs boson self-coupling is to the Standard Model expectation, and the more precision there is in that measurement, the more constrained the properties of the other four Higgs bosons and other supersymmetric particles must be in those models, since the other four can't be contributing much to the scalar Higgs field in low energy interactions if almost all of the observational data is explained by a Higgs boson that looks almost exactly like the Standard Model one.

The mean value of the Higgs boson contribution to electroweak symmetry breaking is about 96%, with a precision of plus or minus about 14 percentage points.  If the actual value, consistent with experiment, is 100%, then other Higgs bosons either do not exist or are "inert" and do not contribute to electroweak symmetry breaking.



Tuesday, April 16, 2013

Ainu Origins

Razib has flagged a December 2012 study on autosomal DNA in Ainu and other Japanese populations.  The full study is pay per view, but the abstract is as follows:
57 Journal of Human Genetics 787-795 (December 2012)
The history of human populations in the Japanese Archipelago inferred from genome-wide SNP data with a special reference to the Ainu and the Ryukyuan populations
Japanese Archipelago Human Population Genetics Consortium
Abstract
The Japanese Archipelago stretches over 4000km from north to south, and is the homeland of the three human populations; the Ainu, the Mainland Japanese and the Ryukyuan. The archeological evidence of human residence on this Archipelago goes back to >30000 years, and various migration routes and root populations have been proposed. Here, we determined close to one million single-nucleotide polymorphisms (SNPs) for the Ainu and the Ryukyuan, and compared these with existing data sets. This is the first report of these genome-wide SNP data. Major findings are: (1) Recent admixture with the Mainland Japanese was observed for more than one third of the Ainu individuals from principal component analysis and frappe analyses; (2) The Ainu population seems to have experienced admixture with another population, and a combination of two types of admixtures is the unique characteristics of this population; (3) The Ainu and the Ryukyuan are tightly clustered with 100% bootstrap probability followed by the Mainland Japanese in the phylogenetic trees of East Eurasian populations. These results clearly support the dual structure model on the Japanese Archipelago populations, though the origins of the Jomon and the Yayoi people still remain to be solved.
The Yayoi people arrived in Japan from Korea immediately following the Jomon period, around 900 BCE to 800 BCE.  They brought with them the core of what would become the modern Japanese language, cavalry warriors, and the rice farming method of food production used on the mainland.  The precise culture on the then balkanized Korean peninsula that was ancestral to the Yayoi is disputed, but linguistically they were not a Tibeto-Burman people, although they had experienced considerable Chinese cultural influence.

The culture created by the fusion of the Yayoi superstrate and the Jomon substrate upon the arrival of the Yayoi in Japan did not extend to all of Japan's main island of Honshu until about 1000 CE or later.

The genetic evidence shows that while the Jomon language and much of its culture were wiped out on Honshu, a very substantial proportion of the genetic ancestry of the modern Japanese people is Jomon in comparison to other historically or archaeologically attested encounters between hunter-gatherer populations and farmers.  The Jomon had pottery long before they were farmers, contrary to the experience in the Fertile Crescent, where there was a long pre-Pottery Neolithic period, and in most other places.

The Ainu and Ryukuan ethnic minorities in Japan are widely believed to have significantly more indigenous Japanese (i.e. Jomon) ancestry and less Yayoi ancestry than the majority ethnicity in Japan. This autosomal genetic study appears to confirm this conclusion.

But, the genetics of the Ainu come with a twist. The Ainu appear to have another ancestral component not present in the also Jomon derived Ryukuan people. The obvious guess in the absence of the closed access paper, based on the uniparental data available about the Ainu, would be that this other component comes from some existing, or now extinct or moribund, Northeast Asian Paleo-Siberian population.

The Jomon are also very notable for being the apparent source of Y-DNA haplogroup D, a paternal lineage that is virtually absent in mainland East Asia and Southeast Asia, and that outside Japan is essentially confined to Tibet and the Andaman Islands, apart from trace to moderate frequencies in North Asia that are more similar to the Tibetan than to the Japanese variety.  Y-DNA haplogroup D is more closely related to Y-DNA haplogroup E, which is the predominant Y-DNA haplogroup in modern Africa, than to any of the other Eurasian Y-DNA haplogroups.  This suggests that these people may have been part of a migration wave distinct from the main Out of Africa migration that was ancestral to most of the rest of Eurasia.

There are two possible "two wave" scenarios.  In one, the people of Y-DNA haplogroup D came first and were brought to extinction in the remainder of East Eurasia, Australia, Melanesia and Oceania by a later wave of migrants who arrived in Australia and Papua New Guinea not later than 45,000 years ago.  In the other scenario, the people of Y-DNA haplogroup D were a secondary wave of Out of Africa migration to Asia that was left with the territory that the first wave populations didn't occupy, didn't want, or couldn't defend, not later than about 30,000 years ago when Japan and Tibet were first populated, which is still well before the last glacial maximum ca. 20,000 years ago.

In the latter scenario, which I think is more likely, the Y-DNA haplogroup D people could have migrated either via a coastal maritime "Southern route" along the southern coast of mainland Asia, or via a "Northern route" travelling to Tibet and Japan via Central Asia and/or Siberia and then migrating from Tibet to the parts of South Asia and the Andaman Islands where Y-DNA haplogroup D is now found.  I am increasingly coming around to the Northern route, rather than the maritime coastal Southern route, as the more plausible of the two possibilities.  Other routes, such as a migration first to South India, then to Tibet, and from Tibet on to Japan, are possible, but not necessarily persuasive since the archaeological evidence points to Tibet being populated from the direction of China, rather than India.

However, there is strong circumstantial evidence to suggest that the original Y-DNA haplogroup D people overwhelmingly had mtDNA haplogroup M.  Neither Y-DNA haplogroup D nor mtDNA haplogroup M (or its descendants) is associated with any West Eurasian populations.  So, if this D/M population did migrate via a Northern route, it is not easy to explain why they left no West Eurasian relic populations.





An Atom Drawn To Scale

Fig. 2: A more accurate depiction of an atom, showing it is mostly empty space (grey area) traversed by rapidly moving electrons (blue dots, drawn much larger than to scale) with the nucleus (red and white dot, drawn much larger than to scale) at center.  This is somewhat analogous to a rural community, with expanses of uninhabited land, a few scattered farm houses, and a small village with closely packed houses at its center.

From here.

Wednesday, April 10, 2013

When Should Cosmology Begin?

Cosmology is, roughly speaking, the scientific study of the history of the universe.  This is a worthy pursuit, but only to a point.

Right now, the universe has certain laws that it obeys.  Nothing moves faster than the speed of light.  The universe is expanding in a manner consistent with a simple cosmological constant.  General relativity governs gravity and describes the nature of space-time.  Baryon number and lepton number are almost perfectly preserved as separate quantities.  Mass-energy is conserved.  The quantum physical laws of the universe obey CPT symmetry, even though they are neither CP symmetric nor T symmetric.  Entropy increases over time.  Baryon number conservation and lepton number conservation severely limit the creation of antimatter.  The universe is predominantly made of matter and not antimatter.  The physical laws and physical constants of the Standard Model and General Relativity are invariant and do not change.

Extrapolating these rules of physics back in time can take you a very long way.  It can carry you through the formation of all of the atoms in the universe.  It can take you back to before the "radiation era" more than thirteen billion years ago.  It can take you back to a point in time where the mass-energy in the universe was extremely smoothly distributed in a universe that fills a far smaller volume than it does today and the ambient temperature in the universe was close to the GUT (grand unified theory) scale where all of the forces of nature start to look very similar to each other.

There are questions, however, that one cannot answer by simply extrapolating back the rules of physics without making up new ones.  You can't answer the question, "why do we have precisely the amount of mass-energy in the universe that we do?"  You can't answer the question, "why is the universe mostly matter and not antimatter?"  You can't come up with a principled answer to the question of how the current baryon number and lepton number of the universe came to be what they are today.  You can't answer the question of why the physical constants are what they are today.  You have to violate laws of physics like the speed of light limitation to get the universe to be sufficiently smooth in its first second or two.

Rolling back the clock can, at most, give you a set of initial conditions.  At proper time T, when the universe was X meters across, the laws of physics and physical constants were what they are today, there was this much mass-energy in the universe, the baryon number and lepton number of the universe respectively were Y and Z, the universe was A% antimatter and O% ordinary matter, and so on and so forth.

It is conceivable that this extrapolation backwards in time may even make it possible to get back to the first few minutes, or even seconds, of the universe.  But, from decades of trying, we have learned that there are questions that can't be answered simply by extrapolating back in time with the existing laws of physics; there are limits beyond which nothing can be explained without new physics.

My own bias and prejudice is to stop when we reach those limits.  Cosmology should legitimately take us back as far as possible using the existing laws of nature to a set of initial conditions that had to exist that far back in time.  This is a very sensible place to call "the beginning" from the point of view of scientific cosmology.  Indeed, the initial conditions themselves may be suggestive of possible new physics that could bring them about.  But, beyond that point, we start to engage in the process of scientific mythmaking, and stop engaging in the process of science itself.

Given that we have a thirteen billion plus year Big Bang cosmology that can't take us back before a singularity at t=0 in any case, who cares if we choose to start counting at t=0 or t=two seconds or t=ten minutes or t=one week or t=100,000 years?  As long as we go back as far as we can with the existing laws of physics and set initial conditions for that point in time, any earlier initial conditions that require new physics are just question begging.  If you have to begin somewhere, why not choose a point of beginning that goes back as far as your expertise can support, but no further?

This may mean that we never get a satisfactory answer to some of these questions, but so what?  We will know what is important and will have a conclusion around which a scientific consensus can be built.  If that means leaving the source of those initial conditions unknown and unnatural, then so be it.  Life doesn't promise us answers to every question.

Why Three Generations Of Fermions?

Why are there three and only three generations of fermions?  Here is a conjecture.

One heuristic way to think about it is that the mass of a fundamental fermion beyond the stable first generation and its rate of decay via the weak force are strongly intertwined.  The heavier something is, the faster it decays.  The lighter it is, the less rapidly it decays.

But, nothing can decay via the weak force any faster than the W boson, which facilitates those decays in the Standard Model.

The top quark decays almost, but not quite, as quickly as the W boson does, and any particle much heavier would have to decay faster than the W boson.  But, because the W boson is what makes such decays possible, this can't happen.  Therefore, there can be no fundamental particles significantly heavier than the top quark.

Also, there is something to Koide's formula which seems to apply quite accurately to the heavier quark masses and the charged leptons.  If one extends the formula based upon recent data on the mass of the bottom and top quarks and presumes that there is a b', t, b triple, and uses masses of 173.4 GeV for the top quark and 4.190 GeV for the bottom quark, then the predicted b' mass would be 3.563 TeV (i.e. 3,563 GeV) and the predicted t' mass would be about 83.75 TeV (i.e. 83,750 GeV).  If the relationship between decay time for fundamental fermions and mass were extrapolated in any reasonable way to these masses, they would have decay times far shorter than that of the W boson that facilitates this process.  Thus, the bar to fourth generation quarks is similar to the physics that prevents top quarks from hadronizing.
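These extrapolations are easy to reproduce.  The sketch below (Python, with a helper name of my own choosing) solves the Koide relation (m1 + m2 + m3) = (2/3)*(sqrt(m1) + sqrt(m2) + sqrt(m3))^2 for the third, heavier member of a triple given the other two:

from math import sqrt

def koide_next_mass(m1, m2):
    # Larger root for the third mass of a Koide triple; masses in GeV.
    # Writing x = sqrt(m3) and s = sqrt(m1) + sqrt(m2), the Koide relation
    # reduces to the quadratic x**2 - 4*s*x + (3*(m1 + m2) - 2*s**2) = 0.
    s = sqrt(m1) + sqrt(m2)
    x = 2 * s + sqrt(6 * s ** 2 - 3 * (m1 + m2))
    return x ** 2

m_b, m_t = 4.190, 173.4                       # bottom and top quark masses used above (GeV)
m_b_prime = koide_next_mass(m_b, m_t)         # ~3,563 GeV (3.563 TeV)
m_t_prime = koide_next_mass(m_t, m_b_prime)   # ~83,750 GeV (83.75 TeV)

m_mu, m_tau = 0.10566, 1.77682                # muon and tau masses (GeV)
m_tau_prime = koide_next_mass(m_mu, m_tau)    # ~43.7 GeV, the charged lepton case discussed below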

Of course, even if Koide's formula is not correct in this domain, it is suggestive of the kinds of masses for fourth generation quarks that one would expect and the estimated masses need not be very precise to give rise to the same conclusion.

This reasoning also disfavors SUSY scenarios with superpartners that are universally heavier than the top quark, as increasingly seems to be the case for the currently experimentally allowed part of the SUSY parameter space, to the extent that SUSY particle decays and ordinary particle decays both take place via the weak force, which, to some extent, is the whole point of SUSY in the first place.  A SUSY theory whose particles decay by means other than the force described in electroweak unification doesn't solve the hierarchy problem, which is its raison d'etre.

This reasoning also almost rules out annihilations of fundamental dark matter particles in the 300 GeV to 400 GeV+ mass range, suggested as one possible although quite implausible reading of AMS-02 observations of positron proportions in high energy cosmic rays.  If no fundamental particle can be much heavier than a top quark, then this scenario is ruled out and pair production via gamma-rays interacting with electromagnetic fields is all that remains.

The extension of a Koide triple to the charged leptons (a muon, tau, tau prime triple), however, would imply a 43.7 GeV tau prime, which has been excluded at the 95% confidence level for masses of less than 100.8 GeV, and with far greater confidence at 43.7 GeV (where it would be produced at a significant and easy to measure frequency in Z boson decays).  This is far from the mass level at which W boson decay rates would impose a boundary on charged lepton mass.  So, one has to infer that fundamental fermion generations, by virtue of some symmetry, are all or nothing affairs and that one cannot have just three generations of quarks, while having four generations of leptons.

This kind of symmetry, if it exists, suggests that the more common sterile neutrino theories are misguided.  Even if there is a massive particle that accounts for dark matter and that doesn't interact weakly or electromagnetically or via the strong force, there is no place for it dangling from the neutrinos of the Standard Model at different fermion masses.  Neutrino mass and the source of a dark matter particle very likely are not two birds that can be killed with one unified theoretical solution.

Graviweak unification models, which create a singlet sterile neutrino whose mass is not very tightly bound theoretically within the gravitational sector, rather than the electroweak sector, thus seem more attractive from this perspective.  These models have only left handed neutrinos and only right handed antineutrinos as a fundamental part of the theory, embracing rather than fighting what observation has told us, and the neutrinos, therefore, must acquire mass via the same essential mechanism as all of the other Standard Model fundamental fermions do.

Rather than filling the right handed neutrino gap with mere right handed sterile neutrinos, graviweak unification models fill the right handed neutrino gap with the entire gravitational sector operating in parallel to the electroweak sector, with the graviton and gravitational fields, a sterile neutrino, an intrasterile neutrino U(1) force, and a gravity sector Higgs boson-like scalar (perhaps the very same Higgs boson extending across both the electroweak and gravitational sectors) that could be attributed to dark energy, the inflaton, inertia, or all of the above.

About Hadrons

This post recaps a few basic facts about Standard Model particle physics that are neither particularly controversial nor at the cutting edge of experimental discovery, in order to provide context when discoveries are made in the future. This analysis has consulted standard reference sources, my notes from a number of QCD journal articles, and a spreadsheet created for comparison purposes.

Quarks and other fundamental particles

There are six kinds of quarks (up, down, strange, charm, bottom (formerly also known as "beauty") and top), and all of them except the top quark are always observed confined to two quark (meson) or three quark (baryon) composite structures bound by gluons.  The top quark, which is the most massive of all of the fundamental particles with a mass of about 173.3 +/- 1.4 GeV based on the most recent LHC measurements, decays via the nuclear weak force too quickly to form observable hadrons, although in principle a vanishingly small share of top quarks produced might last long enough to allow hadronization, since particle lifetimes are probabilities of decay per time period and not certainties.  (The central value of the Tevatron measurement of the top quark mass was 173.2 +/- 0.8 GeV, with a combined estimate a bit closer to 173.2 GeV than 173.3 GeV and a two sigma confidence interval of about 172.8 GeV to 174 GeV, which probably overestimates the true error since the independent measurements are much closer to each other than we would expect if the error bars were as great as they are stated to be.)

There are eight basic kinds of gluons, which are defined by the combinations of color charges that they carry, since they are otherwise identical. Gluons have a zero rest mass, but can acquire mass dynamically as they interact with each other and quarks.

Leptons (electrons, muons, tauons, and neutrinos) can interact with particles made of quarks via the exchange of five particles that are associated with the electromagnetic and weak forces, including the Higgs boson (although they don't interact via the strong nuclear force), but don't form composite particles with them in the way that gluons bind quarks together. The heaviest quark found in hadrons (the bottom quark) has a rest mass of about 4.2 GeV. Up and down quarks are believed to have rest masses in the single digit MeV range (about a thousand times lighter).

Each kind of quark comes in three color charges, has an electric charge of either +/- 2/3 (for up, charm and top quarks and their antiquarks) or +/- 1/3 (for down, strange and bottom quarks and their antiquarks), can have a left or right parity (sometimes called even or odd), and can come in matter or antimatter varieties. A particular quark has a particular rest mass associated with it which is the same for both the particle and its antiparticle (which also have opposite electric charges). Apart from these properties, quarks are entirely identical except for their current momentum and location (both of which can't be determined at the same time beyond a certain level of precision as a fundamental principle of physics) and their history of entanglement with other quantum particles.

Quark "color," which is neutral for every confined hadron, is, like the five lightest quark masses, something that is never directly observed. We know what observable results would flow from a different number of color charges, and those predictions are inconsistent with what we see in experiments, but there is no device that exists to directly tell you if a particular quark has a red, green or blue QCD charge.

Most importantly, a three color charge system (with three corresponding anticolors) constrains all hadrons to have integer electromagnetic charges and to have particular combinations of matter and antimatter in baryons and mesons, while forbidding all other combinations.

Hadrons

Only A Finite Number Of Hadrons Are Theoretically Possible

There are roughly one hundred theoretically possible kinds of hadrons and their quantum numbers (charge, spin, etc.), which can be set down from first principles by any graduate student in physics in an afternoon from the basic rules of quantum chromodynamics, although a handful of observed states which are combinations of different electromagnetically neutral hadron states, or are excited states, are not obvious from a mere rudimentary understanding of the laws of quantum chromodynamics.  With the more sophisticated nuances like excited states and a high but not utterly unlimited bound on energy levels, maybe you can get to twice that number.

In many practical applications, an approximation of reality that ignores the masses of the lightest three kinds of quarks, and the existence of some or all of the heaviest three kinds of quarks, is adequate to provide results that are as accurate as can be calculated, because the ephemeral particles made of heavier quarks are hard or impossible to form due to matter-energy conservation, and often have only a minor impact on lower energy physical systems. These models exclude the vast majority of these exotic hadrons.

Several other kinds of composite quark and gluon particles are not obviously forbidden by the Standard Model, but have not been observed and definitively identified. These include quarkless "glueballs" made entirely of gluons, tetraquarks and pentaquarks.

Potential tetraquark resonances seen to date have turned out to be, in fact, "molecules" of discrete mesons rather than single coherent four quark composite particles. Numerous theoretical papers have described the properties that glueballs ought to have, but in the absence of experimental evidence (which is hard to amass since glueballs would be similar in many ways to hadrons with neutral electrical charge), we can't be certain that some law of nature not currently known to us forbids their formation or makes them so absurdly rare that we will never see one.

Mean Hadron and Fundamental Particle Lifetimes

The only hadrons that are stable are the proton and the bound neutron. The proton is stable, with a mean lifetime at least as long as the age of the universe, and the neutron, which is not stable when not confined within an atom of a stable isotope, has a mean lifetime of about 886 seconds (about fourteen minutes and 46 seconds). The runner up, the charged pion, has a mean lifetime of about 2.6*10^-8 seconds, followed closely by the charged kaon with a mean lifetime of about 1.2*10^-8 seconds, followed by others with mean lifetimes hundreds to trillions of times shorter. Protons, neutrons and pions are comprised only of up quarks, down quarks and their antiparticles. Kaons also incorporate strange quarks. The longest lived hadrons that contain charm or bottom quarks have mean lifetimes on the order of 10^-12 seconds (one thousand billionth of a second).

By way of comparison, electrons are stable, second generation heavy electrons which are called muons have a mean lifetime on the order of 10^-6 seconds, and third generation even heavier electrons called tauons have a mean lifetime on the order of 10^-13 seconds. The massive bosons of the electroweak force (the W+, W-, and Z bosons and the Higgs boson) are likewise ephemeral, as are solitary top quarks, which essentially always decay before they can form hadrons.

Hadron Volume

While quarks and other fundamental particles in the Standard Model are conceived of as being point-like, hadrons have a radius (that can be defined in several ways) on the order of 0.85*10^-15 meters (a bit less than one femtometer), the experimentally measured size of a proton, which is small, but is roughly twenty orders of magnitude larger than the hypothetical minimum Planck length favored in many quantum gravity theories.

This scale is set largely by the form of the equations of the strong nuclear force.  At distances much shorter than this scale, the quarks bound by it are "asymptotically free" and interact only weakly.  At longer distances, the force grows incredibly strong, confining quarks within roughly this radius.

While exotic hadron volume is rarely directly measured, it can be expected to vary to a similar extent to the strong force energy field binding energy of hadrons, which is pretty much all within an order of magnitude.

Electron orbits around atomic nuclei are much tighter than the gravitationally bound orbits of objects in our solar system, which are often used as an analogy to them.  But, like our solar system, the vast majority of an atom, and of matter made of atoms and molecules, is empty space.

Hadron Masses

The lightest of the hadrons (the "neutral pion") has a mass of about 0.1349766 (6) GeV. Both the proton and neutron have masses that are almost, but not quite, identical: about 0.93827013 (23) GeV for the proton and 0.939565346 (23) GeV for the neutron, a difference of about 0.14%, with each mass measured to a precision of roughly one part in four million or better. Approximately thirty meson masses and forty-two baryon masses have been measured to date. Several dozen more combinations are theoretically possible but belong to the mountain of experimental data for which some basic properties and approximate mass resonances have been observed, but which have not been susceptible to a definitive identification with a particular predicted composite particle of QCD.

The heaviest observed three quark particle (i.e. baryon) whose mass has been precisely measured, called a "bottom omega" and made of two strange quarks and a bottom quark bound by gluons, has a mass of about 6.165 (23) GeV (and is the least precisely measured mass of the lot, at an accuracy of about a third of a percent). The heaviest observed two quark particle (i.e. meson) whose mass has been precisely measured, called an upsilon, is made of a bottom quark and an anti-bottom quark bound by gluons and has a mass of about 9.46030 (26) GeV. The heaviest theoretically possible meson or baryon (that does not have a top quark as a component), which has not yet been observed, called the triple bottom omega baryon, should have a mass of about 15 GeV.

The heaviest theoretically possible hadron is about 100 times as heavy as the lightest possible hadron, a much narrower range of masses than the range of masses for the fundamental particles of the Standard Model (which range over about 21 orders of magnitude from the top quark to the lightest neutrino), or even the quarks themselves (which have a range of masses of about 100,000 to 1). The range of hadron masses is bounded in part because the heaviest quark, the top quark, does not form hadrons. The range of rest masses of the five quarks that form hadrons is about 3,000 to one.

Equally or more importantly, this is because the color charge interactions of any kind of quarks in two and three quark particles, respectively, are almost (but not exactly) the same. The amount of strong nuclear force field energy necessary to bind an exotic spin-3/2 baryon made of the heaviest quarks is only about 30% greater than the amount of energy necessary to bind an ordinary proton or neutron, and is basically the same for all spin-3/2 baryons (the ones with the most binding energy have only about 3% more binding energy than the ones with the least, a difference still about ten to one hundred times larger than the uncertainty in the experimental measurements themselves).

There is more variation in the amount of strong nuclear force field energy that binds together spin-1/2 baryons, but none have a binding energy that is more than about 40% greater than that of an ordinary proton or neutron (which have the least).

Moreover, this range is greatly inflated by a handful of the heaviest and most rare varieties of hadrons.

The stability of the amount of hadron mass attributable to this binding energy matters a great deal because in a proton or neutron, the sum of the three fundamental up and down quark rest masses is equal to roughly 1% of the total mass of the nucleon. In contrast, in a bottom omega baryon, the sum of the rest masses of the constituent quarks is about 71% of the whole particle's mass, and in the heaviest experimentally measured hadron, the upsilon, the sum of the masses of the constituent quarks is about 89% of the whole particle's mass.  In the heaviest theoretically possible hadrons, the ratio of fundamental particle mass to hadron mass would be even greater.
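To illustrate these percentages, here is a minimal sketch using approximate light quark masses (my own rounded inputs, not values from this post) together with the hadron masses quoted above:

quark_mass = {"u": 0.0022, "d": 0.0047, "s": 0.095, "b": 4.18}   # GeV, approximate quark rest masses

def quark_mass_fraction(constituents, hadron_mass_gev):
    # Fraction of a hadron's mass carried by the rest masses of its constituent quarks.
    return sum(quark_mass[q] for q in constituents) / hadron_mass_gev

print(quark_mass_fraction("uud", 0.93827))   # proton: ~0.01 (about 1%)
print(quark_mass_fraction("ssb", 6.165))     # bottom omega baryon: ~0.71
print(quark_mass_fraction("bb", 9.4603))     # upsilon meson: ~0.88, close to the ~89% quoted above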

Of course, hadrons in turn bind themselves into one of about 120 different kinds of atoms, in a wide variety of isotopes, i.e. numbers of neutrons in an atom of N protons (only a small portion of which are stable), whose nuclei are made entirely of protons and neutrons.

Hadron Density

Atomic nuclei, in general, have approximately the same density as neutron stars, which are the most dense known objects in the universe outside of black holes.  Indeed, large black holes have less mass per volume defined by their event horizons than neutron stars do.  Black holes have a declining mass per event horizon volume as they acquire more mass.  Atomic nuclei are significantly below the density needed to form a black hole at their scale according to General Relativity, although it isn't obvious that General Relativity applies at such small scales without modification, since it is a classical rather than a quantum theory that is applied to a quantum scale in this context.
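A crude check of these claims, assuming a proton radius of about 0.85 fm and treating the proton as a uniform sphere (both simplifications are mine):

from math import pi

M_P = 1.67e-27   # kg, proton mass
R_P = 0.85e-15   # m, approximate proton radius
G = 6.674e-11    # m^3 kg^-1 s^-2, Newton's constant
C = 3.0e8        # m/s, speed of light

density = M_P / ((4.0 / 3.0) * pi * R_P ** 3)   # ~6e17 kg/m^3, comparable to neutron star densities
r_s = 2 * G * M_P / C ** 2                      # ~2.5e-54 m Schwarzschild radius, vastly smaller than R_P

print(density, r_s)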

It is possible that higher generation quarks like strange quarks or hadrons made of them may be stable in extreme circumstances like extremely dense neutron stars (or perhaps utterly beyond our observation inside black holes), but there is no solid evidence that such quark stars actually exist.

Theory Lags Behind Experiment In Hadron Physics

First principles theoretical calculations of proton and neutron masses are accurate to about 1% in absolute terms, although more precise theoretical predictions can be made for heavier exotic hadrons, and it is also possible to calculate from first principles, to an order of magnitude accuracy, the much smaller mass difference between the proton and neutron masses, even though this is only about 0.1% of the mass of the proton. Even the more precise theoretical determination of the difference of the two masses is about 4,000 times less precise than experimental measurements of this mass difference.

Quantum chromodynamics is the virtually unchallenged contender for this part of the Standard Model, mostly because none of its theoretical predictions have been contradicted and because no one else has come up with any really credible alternatives that make more precise predictions. The only particles it predicts that we haven't seen are particles that are hard to observe and classify, which we may actually have seen already. Every particle that we have been able to observe carefully enough to classify has been susceptible to being fit into QCD's built in taxonomy with a modicum of ingenuity.

An important reason for this difficulty in making accurate calculations is that it is quite difficult to determine the quark masses precisely from the roughly seventy-two available hadron data points, some less direct data points (like measurements of the strong force coupling constant, which are accurate to at least about four significant digits), and the known values of the Standard Model coupling constants. The lighter the quark, the less precisely its mass is known, because the mass attributable to gluon interactions so profoundly overwhelms the fundamental quark masses. Turning those data points into theoretical constant values involves great computational difficulties, so mostly physicists resort to methods only moderately more sophisticated than a basic spreadsheet comparing the seventy-two data points with the known composition of the particles in question (determined based upon their other properties like charge and spin).

These percentages are a bit slippery because fundamental particle masses "run" with the energy scale of the circumstances in which they are observed, so any single mass value for them necessarily includes contextual assumptions about the measurement that may be inconsistent with the context in which the hadron that contains them is observed. Often an energy scale of 2 GeV, roughly the mass of two protons at rest, is used as a standard. Likewise, the conventional view that the additional mass associated with quarks within hadrons is localized in the dynamically generated gluon masses within the confined quark system is to a great extent a model dependent feature; one could imagine a coherent model in which that additional mass was apportioned to the constituent quarks instead, which would be hard (although not necessarily impossible) to distinguish experimentally.

In a calculation like the calculation of the proton-neutron mass difference, uncertainty regarding the values of the fundamental constants gives rise to something on the order of two-thirds of the uncertainty in the theoretical prediction. The remaining third or so of the uncertainty is due to truncation of the infinitely long series of terms that the QCD equations tell us give the exact value; that series can be approximately solved numerically, but cannot be calculated precisely in all but the simplest cases.

Tuesday, April 9, 2013

The Origins Of Modern Myanmar

Razib Khan offers a stunningly cogent account of the cultural history of Myanmar (Burma) and neighboring regions, from the point of view of the religious, racial, linguistic, and historical origins and affiliations of its ethnic groups.  In the course of doing so, he also neatly traces out important trends in the history of Western Civilization after the fall of Rome.

Summaries or brief snippets can't do his piece justice.  Read the whole thing.

Monday, April 8, 2013

Of Fish And Ships

Sometime in the last few years, we reached a point where there are, by weight, more ships in the ocean than fish.
 
From here.

Friday, April 5, 2013

Positrons In The Sky. Why?

Astronomers have long known that the Earth and everything in its vicinity are constantly bombarded with "cosmic rays," a rather misleading term that refers to highly energetic particles, including electrons and positrons (the antimatter counterparts of electrons) as well as protons and atomic nuclei. The Alpha Magnetic Spectrometer (AMS-02) experiment, on the International Space Station, has been carefully measuring this bombardment of electrons and positrons before they hit the atmosphere, where the picture gets complicated by interactions with the gases in it.

Their findings confirm those of several prior experiments, but with far greater precision. Those experiments showed that the ratio of positrons to electrons in cosmic rays depends on the energy of the particles (although positrons are always far more rare than electrons).

Positrons make up about 5% of the lowest energy cosmic rays.  Positrons grow relatively less common up to particle energies of a bit less than 10 GeV, where they make up about 1% of cosmic rays of that energy, and then grow more common again (although at a declining rate as energies increase), reaching a proportion of about 11% in the first bin starting at 100 GeV and about 15% in each of the two bins covering 206 GeV to 350 GeV.

There were 465 electrons and 72 positrons observed in this first round of AMS-02 data in the highest bin, covering 260 GeV to 350 GeV. The trend is smooth, as illustrated on the graph below, and notably neither electrons nor positrons show any favored direction of origin on the sky at the time they reach the AMS in Earth orbit. We also know that there was at least one 982 GeV electron and at least one 636 GeV positron observed.
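As a crude sanity check on how these fractions relate to raw counts (ignoring detector acceptance and efficiency corrections, which the collaboration accounts for in its published fractions):

# Positron fraction implied by the raw counts in the highest published bin.
electrons, positrons = 465, 72
fraction = positrons / (electrons + positrons)
print(round(fraction, 3))   # about 0.134, in the same ballpark as the ~15% figure quoted above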

Fig. 1: AMS results (red dots) compared to previous results, of which the most important are PAMELA (blue squares) and FERMI (green triangles).  Within the stated errors, PAMELA, FERMI and AMS are essentially consistent; all show an increasing positron fraction above 10 GeV and as far up as 300 GeV or so.

Nobody is surprised that there are more electrons than positrons.

Lots of processes can rip electrons from atoms and send them hurtling into space at high energies, where they become cosmic rays. Few processes emit high energy positrons, and when they do, they usually generate equal numbers of high energy electrons.

The natural assumption is that we have two data sets that need to be disentangled from each other: one set of phenomena that produces roughly equal numbers of electrons and positrons at particular energy levels, and another set of phenomena that produces only electrons. This is complicated a bit by the fact that positrons can annihilate with stray electrons en route to the detector, and that electromagnetic attraction draws stray cosmic ray electrons and positrons toward each other.
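A minimal sketch of that disentangling idea, assuming (purely for illustration) that the pair-producing processes yield exactly equal numbers of electrons and positrons and that the remainder is an electron-only component; the fluxes below are made up numbers in arbitrary units.

# Toy decomposition of cosmic ray lepton fluxes into a charge-symmetric
# (pair-produced) component and an electron-only component.
def decompose(electron_flux, positron_flux):
    # Assume every positron comes with one electron from the same process.
    pair_component = 2 * positron_flux            # electrons + positrons from pairs
    electron_only = electron_flux - positron_flux
    return pair_component, electron_only

# Illustrative (made up) fluxes at one energy:
print(decompose(electron_flux=100.0, positron_flux=13.0))   # (26.0, 87.0)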

Also, many candidate processes that would create positron-electron pairs would also create pairs of quarks, whose decay products are not reflected in the data, and so those processes can't be the source of these positrons.

Just about everybody agrees that we're not certain why this positron excess is observed. It could be emissions from pulsars or active galaxies. It could be some astrophysical process that nobody's ever seriously considered. It may or may not be "new physics" beyond the scope of the Standard Model and General Relativity, but at the very least, this is a fairly new and now much better documented unexplained phenomenon in astronomy, and hence is a big deal.

Keep in mind also that, given what we know so far, one doesn't have to interpret the data as a high and low energy "positron excess" at all.  The money chart that invites the conclusions and analysis merely shows an electron-positron ratio. It is equally possible, given the existing and publicly available data, that we are mostly seeing an excess of middling energy electrons in cosmic rays, because there is some unknown process that generates electrons but not positrons in cosmic rays of that energy range and that tapers off at higher energies.  However, if the rumors discussed below are true, that can't be the whole story.

Matt Strassler and Resonaances confirm the summary above and point out that, while the AMS-02 data is much more precise, it generally confirms facts that we already knew or strongly suspected about electron-positron ratios.

Are results being censored and why should we care?

The AMS team did not release data from bins with higher energies, despite the fact that they did have higher energy events.  The project's spokesperson stated that those findings were too scattered to be statistically significant, but Lubos Motl argues that this is a bit of a conspiracy to sustain interest in an experiment which has years of activity ahead of it that must continue to be funded.

There are rumors that the data beyond 350 GeV show a dramatic decline in the ratio of positrons to electrons, which would be quite pertinent to determining the possible source of the positron proportions that are observed. Motl goes so far in the post linked above as to sketch out some fake data estimates of what he thinks has probably been held back.

Motl cares mostly because an analysis of the data from the earlier PAMELA experiment by SUSY theorists demonstrated that the positron excess that is observed could be caused by the annihilation of a SUSY dark matter candidate called a neutralino with a mass on the order of 300 GeV to 350 GeV, a range which, with the right assumptions, hasn't fully been ruled out by Large Hadron Collider data so far.

The discovery of a SUSY particle that makes up dark matter in the universe would be a huge deal that would instantly falsify the Standard Model, and even if the discovery of a new particle could not be confirmed from this data alone, the AMS-02 positron-electron data would strongly point LHC researchers towards the precise mass and particle properties to look for in order to discover a new SUSY particle, if one is out there producing this effect.  The dramatically narrowed search parameters for the suspected 300-400 GeV SUSY neutralino could greatly improve the power of the LHC to prove or disprove that particular hypothesis in future runs.  And, a 300 GeV-400 GeV SUSY particle ought to be something that the LHC has the capacity to observe, without any major modifications, sometime in the next few years.  String theory would be saved, and the Standard Model would be relegated to a mere low energy effective theory of a supersymmetric theory that manifests below the TeV electroweak scale.

But, if I were you, I wouldn't invest in a stash of party balloons and confetti yet.

What could produce the results allegedly being censored?

For what it is worth, I don't rule out the possibility that positrons in high energy cosmic rays may have origins in some sort of exotic particle or process, although it isn't entirely clear what kind of exotic particle would produce positrons over so wide an energy range in so smooth a manner.

In the case of a fermion or a short-lived boson like a W or Z boson or a Higgs boson, you would expect a distinct bump at a particular, discrete energy threshold and not a long slope.

Decays into positrons and electrons over such a wide range of energy levels would be more characteristic of a relatively stable boson that can have a range of energies before decaying predominantly to positron-electron pairs.

Do we know of any such particles?

It turns out that we do. They are called photons, and at high energies (in the gamma-ray range, which conventionally begins around 100 keV) they can produce high energy positron-electron pairs in an exceedingly well understood way, while being relatively disinclined to create protons (just the leptophilic pattern one would like to see given the data from AMS-02).

One would have to figure out the source of the photons that convert into the positrons, and some of the electrons, that we observe in cosmic rays in order to explain the data, but many processes produce gamma-rays and surely we don't have a full accounting of all of them.

Ordinary beta decay of stationary heavy atoms typically produces gamma-rays of 10 MeV or less.

Pair production starts to become possible for photons with energies a bit more than 1 MeV that encounter electric fields (typically the field near an atomic nucleus, which absorbs the recoil momentum).

Obviously, pairs of the kind seen by AMS-02, with hundreds of GeV of mass-energy, take correspondingly more energetic photons to produce. Pair production is the predominant form of absorption for photons in the GeV energy range or above that encounter electric fields.
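The "bit more than 1 MeV" threshold quoted above is just twice the electron rest mass-energy; a quick check with the PDG value of the electron mass:

# Minimum photon energy for e+ e- pair production (in the field of a nucleus,
# which absorbs the recoil momentum).
m_e = 0.511          # electron rest mass-energy in MeV
threshold = 2 * m_e
print(threshold, "MeV")   # about 1.022 MeV, i.e. "a bit more than 1 MeV"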

But, there is nothing terribly exotic about photons in particle physics. If the positrons observed are a result of pair production by gamma rays, then the interesting question is what is producing such a spatially uniform, but energetically non-uniform distribution of them.

On the theory that extraordinary claims require extraordinary proof, I'd hope that one would try very hard to exclude positrons created by gamma-rays perhaps from new previously unknown sources, before resorting to explanations requiring 300-400 GeV neutralinos that also happen to make up dark matter.

Dark matter generally shouldn't decay to positron-electron pairs

We wouldn't expect dark matter that lacked an electromagnetic charge to couple to positron-electron pairs at all.

For example, Standard Model neutrinos do not couple to positron-electron pairs directly, although, if energetic enough, a neutrino and antineutrino that collide can give rise to a Z boson that can decay to a positron-electron pair, as well as into pairs of particles and antiparticles of every other possible type, with more or less equal probabilities for all energetically permitted pairs. Likewise, Z bosons do not decay directly into photons, which don't couple to the weak force and which would violate conservation of angular momentum if produced in pairs by a single vector boson like a decaying Z boson.

Astronomy data disfavors heavy dark matter particles that easily decay to positrons and electrons

But, whatever is generating the high energy positrons observed in cosmic rays, the notion that this is an annihilation of exotic, non-baryonic dark matter particles such as neutralinos is exceedingly implausible.

As I've observed in other posts, the astronomy data increasingly strongly favor dark matter particles with masses in a narrow 1 keV to 2 keV range (called "warm dark matter") and increasingly strongly disfavor, as an important component of dark matter, particles with masses in the hundreds of GeV range (called "cold dark matter") that would annihilate or decay to particles in the energy range observed by AMS-02, followed by a dramatic drop off in the predicted positron ratio at higher energies.

SUSY does not have any plausible dark matter candidates in the keV mass range. Even the low GeV mass range is a challenge for a SUSY theory to assign a particle to without creating observable phenomena that haven't been detected so far in laboratories.

This suggests that SUSY does not have a viable exotic dark matter candidate, which deprives that theory of one of its important justifications for being studied.

Even if some SUSY theory compatible with existing particle collider experiments was correct, given that we know that cold dark matter is too heavy to be consistent with current astronomy observations, none of the SUSY particles could be stable, not even a lightest supersymmetric particle (i.e. LSP), because even an LSP would be too heavy to be dark matter. So any large quantity of stable LSPs would be contrary to astronomical observations.

A hitherto unknown long range force operating between dark matter particles (probably with a U(1) gauge group similar to that of electromagnetism that affects only dark matter or has highly suppressed interactions with other kinds of matter) might mitigate the problems with a cold dark matter model for cold dark matter that is almost light enough to be warm dark matter, but not quite.  But, when you are talking about particles with masses of 300 GeV and up, it is virtually impossible to include those particles in any dark matter model that can fit the astronomy data.

The astronomy data are a much better fit to a 1 keV to 2 keV sterile neutrino that interacts only via gravity and some force that operates only between sterile neutrinos with a massless or very light carrier boson (as suggested, for example, by graviweak unification theories) than to a hundreds of GeV WIMP candidate.  So, even if annihilations of dark matter particles and anti-dark matter particles of the favored masses could produce photons at all (perhaps indirectly through a chain of decay reactions), they would tend to produce X-rays, rather than gamma-rays with sufficient energy to produce high energy positrons of the kind observed at AMS-02.
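A crude order-of-magnitude check of that last point, assuming the simplest possible case of a particle-antiparticle pair annihilating at rest directly into two photons, each carrying away one particle's rest mass-energy (real annihilation chains would be messier, but the energy scale is the point):

# Photon energy scale from annihilation at rest: roughly the particle's rest energy per photon.
def annihilation_photon_energy_keV(particle_mass_keV):
    return particle_mass_keV   # two photons, each with about the particle's rest energy

print(annihilation_photon_energy_keV(1.0), "keV")      # 1 keV warm dark matter: soft X-ray band
print(annihilation_photon_energy_keV(2.0), "keV")      # 2 keV warm dark matter: soft X-ray band
print(annihilation_photon_energy_keV(300e6), "keV")    # 300 GeV WIMP: extremely hard gamma-ray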

Is the positron energy range related to nuclear binding energy curves and atomic mass distributions?

I would also note that the cutoff in positron frequency that Motl supposes exists coincides pretty closely with the mass-energy of the most massive atoms in the periodic table, and that the slope of the electron-positron ratio curve fairly closely mirrors the inverse of the first derivative of the curve of nuclear binding energy per nucleon versus number of nucleons (that curve hits its peak at about 50 GeV of nuclear mass-energy, but the peak of its curvature comes much earlier), although probably with a bit of amplification (perhaps squared or cubed).
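For readers who want to see the shape of that curve, here is a sketch using the semi-empirical (Weizsäcker) mass formula with one common set of coefficients. The coefficients vary a bit between textbooks and the formula is least reliable for the lightest nuclei, so the numbers are illustrative only.

# Binding energy per nucleon from the semi-empirical mass formula (MeV),
# ignoring the pairing term for simplicity.
def binding_per_nucleon(Z, A):
    a_v, a_s, a_c, a_a = 15.8, 18.3, 0.714, 23.2   # one common coefficient set
    B = (a_v * A
         - a_s * A ** (2.0 / 3.0)
         - a_c * Z * (Z - 1) / A ** (1.0 / 3.0)
         - a_a * (A - 2 * Z) ** 2 / A)
    return B / A

# The curve rises steeply for light nuclei, peaks near iron (A ~ 56-62, i.e. roughly
# 50-60 GeV of total nuclear mass-energy), then declines slowly toward uranium.
for Z, A in [(2, 4), (8, 16), (26, 56), (50, 120), (92, 238)]:
    print(A, round(binding_per_nucleon(Z, A), 2), "MeV per nucleon")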

Putting two and two together, the positron v. electron ratio in cosmic rays could reflect something as benign as the natural distribution of high energy photons released from some subset of nuclear reactions in stars or supernovas. The abundance of positrons at low energies could reflect the relative abundance of hydrogen and other light atoms in the universe (and maybe even the surprisingly low levels of lithium in the universe), while the abundance of positrons at high energies could reflect interactions involving the heaviest atoms.

Thursday, April 4, 2013

Neutrino CP Violating Phase Numerology

The PMNS matrix, which governs the likelihood that neutrinos will oscillate into other kinds of neutrinos, has four parameters: three mixing angles, each of which has been measured with some accuracy, and one CP violating phase (the analogous matrix for quarks is called the CKM matrix).  The values considered state of the art in an April 3, 2013 paper from the T2K collaboration, which compared its latest measurement of Ɵ13 to prior measurements (and concluded that the prior value was confirmed by the latest experiment), were as follows (a quick sketch of the matrix that these parameters imply appears just after the list):

Ɵ12=34° (per the Particle Data Group, the central value is 34.1 and the one sigma CI is 33.2 to 35.2)
Ɵ23=45° (per the Particle Data Group, the central value is greater than 36.8° at a 90% CI)
Ɵ13=9.1±0.6°
ƍCP=Unknown
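To make these numbers concrete, here is a small sketch using the standard PDG-style parameterization of the (Dirac) PMNS matrix, built from the angles above with a trial value of the unknown phase; the trial value of 0° is purely a placeholder, not a prediction. The final line checks one of the sums considered in the list of possibilities further below.

import numpy as np

def pmns(theta12_deg, theta23_deg, theta13_deg, delta_cp_deg):
    # Standard PDG-style parameterization of the 3x3 (Dirac) PMNS matrix.
    t12, t23, t13, d = np.radians([theta12_deg, theta23_deg, theta13_deg, delta_cp_deg])
    s12, c12 = np.sin(t12), np.cos(t12)
    s23, c23 = np.sin(t23), np.cos(t23)
    s13, c13 = np.sin(t13), np.cos(t13)
    e = np.exp(-1j * d)
    return np.array([
        [c12 * c13,                        s12 * c13,                       s13 * e],
        [-s12 * c23 - c12 * s23 * s13 / e,  c12 * c23 - s12 * s23 * s13 / e, s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 / e, -c12 * s23 - s12 * c23 * s13 / e, c23 * c13],
    ])

U = pmns(34.0, 45.0, 9.1, 0.0)                  # delta_cp = 0 is a placeholder
print(np.round(np.abs(U), 3))                    # magnitudes of the mixing elements
print(np.allclose(U @ U.conj().T, np.eye(3)))    # unitarity check: True

# One of the sums discussed below: with these angles, a 90 degree total of all
# four parameters would leave delta_cp = 90 - (34 + 45 + 9.1) = 1.9 degrees.
print(90 - (34.0 + 45.0 + 9.1))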

A crude but fairly solid estimate for the CP violating phase of the PMNS matrix should be available within a few years for most of the possible values of this parameter (experiments are more sensitive to some values than to others) given the nature of the neutrino experiments that are currently in progress. In principle, any value between zero and 360° (i.e. 2π radians), with the endpoints being indistinguishable, is possible. A value close to zero would be hard to distinguish from zero itself with sufficient statistical power until considerably later.

Possible CP violating phase values in the four parameter PMNS matrix considered

In this post, I review a few of the interesting hints, some experimental and some little more than numerology, regarding the possible values of the CP violating phase of the PMNS matrix, from smallest to greatest.

Keep in mind that error margins of one or two sigma (i.e. standard deviations from the experimentally measured mean value) are routine in physics, and that three sigma deviations "go away" with improved experimental measurements about half of the time (which is proof positive that error bars are almost always underestimated in physics).  Experimentally measured values that differ from each other by less than two sigma are commonly said to be "consistent" with each other.

It is also worth noting that according to a recent paper:
The recent measurement of the third lepton mixing angle, θ13, has shown that, although small compared to θ12 and θ23, it is much larger than anticipated in schemes that generate Tri-Bi-Maximal (TBM) or Golden Ratio (GR) mixing. . . . For comparison we determine the predictions for Bi-Maximal mixing corrected by charged lepton mixing and we discuss the accuracy that will be needed to distinguish between the various schemes.
Likewise, simple versions of "quark-lepton complementarity," in which the corresponding 12 and 23 mixing angles of the CKM matrix and PMNS matrix each sum to forty-five degrees, are also disfavored by current experimental data at more than a two sigma significance in the case of the theta 12 angles, although the fact that there are multiple ways to parameterize the respective matrixes somewhat mutes the importance of this observation.

* A zero value would mean no CP violation in neutrino oscillations and fewer than the full four possible degrees of freedom in the PMNS matrix. While this is in some sense the null hypothesis and default assumption, I would honestly be quite surprised if CP violation was truly zero for leptons, while being non-zero for quarks. Note that the data would be consistent with the three mixing angles having a combined value of 90° and a zero CP violating phase as well at the one sigma level.

* If the three mixing angles had their current best fit values and the sum of all four PMNS parameters were equal to 90°, then the value would be 1.9° (0.033 radians) and might not be distinguishable from zero to within the margin of error for some time.

* A sum of all eight PMNS and CKM matrix parameters equal to 180° would imply a ƍCP PMNS = 7.5±7.8°

* If ƍCP PMNS + ƍCP CKM = 90°, then ƍCP PMNS is approximately 21.2°, but could be as low as 15.8° if the sum of the four CKM parameters were actually 90° rather than their current best fit values. The latter would be a 1.2 sigma difference from the measured value of the CP violating parameter in the CKM matrix, which would appear mostly via a CP violating phase closer to 74.2° (1.295 radians), rather than the current best fit value discussed below.

* This paper predicts a value of 33° (0.57 radians) or less.

* This paper thinks that the Daya Bay data predict a CP violation phase of approximately 45° (π/4 radians).

* A value of 68.8±4.6°(1.20±0.08 radians) which would be identical to the CP violation phase in the quark sector is plausible. There is no particularly good a priori reason for CP violation to be different for leptons than it is for quarks even though there is no particularly compelling reason for them to be the same either. It isn't obvious that the mechanism through which the non-CP violating mixing matrix angles are generated which appears to be intimately related to the mass matrix values for fermions, has any relationship to the CP violating phase of either the CKM or PMNS matrix.

* A value of 90° (π/2 radians) would be halfway between minimal and maximal CP violation in neutrino oscillations in phase space.

* Slightly less than 180° (i.e. slightly less than π radians). This would be a scenario with almost, but not quite, maximal CP violation in neutrino oscillations. There are weak hints, referenced below, of high levels of CP violation in neutrinos; none of the already known Standard Model mixing angles or CP violating phases in the CKM or PMNS matrixes (with the possible exception of Ɵ23) are exact fits to any "round" number, and there are also weak hints that the estimate of that phase is a bit high.

* A value of 180° (π radians) would be maximal CP violation in neutrino oscillations.  There have been some weak hints of near maximal CP violation, but nothing at all solid. The best fit value at one sigma is consistent with maximal CP violation, but at even slightly less than two sigma from the best fit value, all possible values of the CP violating phase are permitted.

* If one wanted a fit in which the sum of the two CP violation phases was equal to 90° modulo 180° (π/2 modulo π radians), and also wanted the sum of the three CKM mixing angles and its CP violation phase to equal 90° (π/2 radians), then the PMNS CP violation phase would have to be about 195.8° (1.088π radians), which is very close to the best global fit value of the PMNS CP violation phase with a normal hierarchy. The experimentally allowed one sigma range (without adding in quadrature) for the sum of the three PMNS matrix mixing angles is 90° +6.4°/-9.8°, neither end of which is easily squared with a 15.8° deviation from 180° in the CP violation phase without a further adjustment from two Majorana phases, not quite equal to a neat multiple of 180°, of either 9.0° (0.05π radians) or 22.6° (0.126π radians), depending on the signs of the terms.

* A more recent effort to make a global fit found a best fit value of 306° (not a typo for 360°, and equivalent to 1.70π radians) with a one sigma range of 162° to 365.4° ("0.90π to 2.03π radians"; I put this in quotes because the range of permitted values for this parameter is [0, 2π], so a 2.03π radian value isn't allowed). All possible values of the CP violating phase of the PMNS matrix, however, are consistent with that global fit of the other data at something less than the three sigma level, which is to say that we really don't know anything with any confidence about the CP violating phase of the PMNS matrix at this point.

* UPDATE (April 5, 2013): One of the more interesting models, by Barr and Chen at the University of Delaware (September 28, 2012) and inspired by SU(5) GUT concepts, develops a formula for the neutrino masses and mixings based on the mass and mixing matrixes of the quarks and charged leptons, with one parameter for the overall mass scale of the neutrinos and two more conceptualized as being related to Majorana CP violation phases but realized simply as two fitting constants, each of which is a combination of a Majorana CP violation phase and another independent physical constant (they are q*e^(i*beta) and p*e^(i*alpha)).

The allocation of these two fitting constants between the complex exponent and the other fitting constant in each is not explained in the paper. The value they fit for q and beta has beta equal to -2π/18 (with the eighteen, and the choice of fitting constant "q", suggestive of a value chosen based on there being six quark types of three color charges each, a number also relevant in weak force decay frequencies for all quarks combined). The value they fit for alpha is very close to 8/18ths (perhaps reflecting the number of gluon types relative to the number of quark types, or simply making the combined values of the alpha and beta Majorana CP violating phases equal to -π radians).

These three constants (together with the known quark and charged lepton values) are given best fit values based upon the three known neutrino mixing angles and the two known neutrino mass differences, and are then used to predict the remaining unknown components of the entire neutrino mass and mixing matrix (i.e. the electron neutrino mass, and implicitly the neutrino hierarchy, and the PMNS matrix CP violation phase).  The best fit process also massages the experimental data by choosing particular values, other than the isolated measurement best fits, for those experimental inputs.
In the quark matrix, to make the fits work: the ratio of the strange quark mass to the down quark mass, which experimentally is in a range of 17 to 22, is fixed at a slightly low value of 19 rather than the mean value of 20.5; the CKM CP violation phase, for which the experimentally measured value is 1.187 +0.175/-0.192 radians, is set at a value towards the high end of that range, at 74.5° (1.30 radians); PMNS theta12 is set at 34.1° (which is the current mean value, but was higher than the published mean value in their reference at the time); PMNS theta23 is set at 40° rather than 45° +/- 6.5° (which is a decent estimate of what the most recent experimental evidence is trending towards showing); and the squared difference between the first and second neutrino masses, which has an experimentally measured mean of 7.5 +/- 0.2 * 10^-5 eV^2, is tweaked to 7.603 * 10^-5 eV^2.  All of these tweaks are within one sigma of the experimentally measured values.

While it is presented as an output of the model rather than an input, this fitting, which predicts a neutrino CP violation parameter in the four parameter PMNS model of 207° (1.15π radians), was done knowing that the experimentally measured Daya Bay one sigma value for the neutrino CP violation phase was 198° (1.1π +0.3π/-0.4π radians).  Their model also makes a prediction for the electron neutrino mass of 0.0020 eV, a number that is quite in line with what one might expect simply assuming a normal and non-degenerate neutrino mass hierarchy (which, while few published papers come out and say so, is the natural favorite, since it is what we see in the quarks and charged leptons, and it is capable of being consistent with data showing a much larger mass gap between the second and third neutrino masses than between the first and second).

From the point of view of predicting something unexpected, this isn't impressive.  It shows that the authors can read the tea leaves in hints from experimental data to fit six Standard Model constants at particular spots within the one sigma ranges of the measured values of those constants, and can peg the value of one unmeasured constant where it would be expected with a couple of best bet assumptions and a bit of educated guesswork.  The outcome is essentially equivalent to betting on the favorite in a horse race, making conventional assumptions at every step without being unduly rigid about it, although these predictions do leave some room for falsifiability if the data take an unexpected turn.

Indeed, the paper expressly explains precisely which of the experimentally measured values the fit is most sensitive to with charts illustrating the point graphically for several of them.  For all of the fits to be internally consistent, the model puts considerable pressure on PMNS theta23 to be lower than 41°.

Particularly interesting is that their model needs the CP violating phase in the CKM matrix for quarks to be at the high end of the experimentally measured range, at about 74.5° (1.30 radians, aka 0.41π radians), which is almost precisely the value that makes the sum of the three CKM mixing matrix angles and the CP violating phase of the CKM matrix equal to 90° (π/2 radians). From the perspective of their model, this seems to be almost a coincidence, since at this model's best fit values the sum of the three PMNS matrix mixing angles and the CP violating phase of the PMNS matrix is a not particularly numerologically notable 290.22° (1.61π radians). One might argue that this needs to be modified by the sum of the two Majorana phases alpha and beta, both of which are negative and which they fit to a combined -176.7° (-0.98π radians), which is suggestive of some possible relationship that would require the two Majorana phases together to equal -π radians, even though the combined value of the three PMNS matrix mixing angles and all three CP violating phases in this model, 0.61π, isn't very notable (unlike a 0.5π value, which would make the combined value of all of the angles in the model equal to π). But since each of their proposed Majorana phases has a second corresponding constant that always appears with it, it ought to be possible to fit their model such that the sum of all of the PMNS matrix phases did equal π/2 without too much difficulty.

What does make the paper impressive, however, is that it offers up a coherent, formalized set of equations that relates all of the fermionic masses and mixing matrix parameters of the Standard Model to each other in a single analytical framework, one that is capable of fitting all twenty of these (plus two additional Majorana mass CP violation phases) to the data, consistent with all experimental data to date at the one sigma level. As a Within the Standard Model (WSM) effort to look for a deeper relationship between the myriad physical constants that the Standard Model treats as arbitrary, this is a solid accomplishment.

Put another way, this model is asking, and answering, some of the right questions and demonstrating that it is indeed possible to formalize the relationships between these quantities.

Caveats

The four parameter model used here, which is the dominant paradigm for experimenters at this point, implicitly assumes that neutrinos have Dirac mass rather than Majorana mass, which would introduce a couple of additional phases.

Neutrino researchers are also trying to determine the absolute mass of the different kinds of neutrinos, which is irrelevant to determining the PMNS matrix entries. And they are also trying to determine the sign (i.e. positive or negative) of one of the known mass differences between two of the kinds of neutrinos in the mass hierarchy. A positive value is a "normal hierarchy" and a negative one is an "inverted hierarchy". The sign of this mass difference term, for practical purposes, tweaks the estimated value of the mixing angle parameters given a particular set of experimental data by about 1%-2%, which is similar to or smaller than the overall precision of all of the measurements of Standard Model constants being made at this point in the neutrino sector.

UPDATED (April 5, 2013) footnote on Majorana model fits to the PMNS matrix:
In general, the physically meaningful content of a three by three unitary matrix like the PMNS matrix (i.e. what remains once the phases that can be absorbed into redefinitions of the lepton fields are removed) can always be fit with four independent parameters.  To see this informally, suppose that you know the magnitudes of the first two entries in the bottom row and in the next to bottom row of the matrix.  You can in general determine the magnitudes of the other five entries from those four numbers simply by knowing that the matrix is unitary.  Any model with three mixing angle parameters and one CP violation parameter is a parameterization of the matrix with a minimal number of parameters, and there is not a unique four parameter parameterization, or even a unique four parameter parameterization in which each of the parameters is independent of each of the other parameters.  (Note, however, that Majorana neutrino models often have a non-unitary PMNS matrix.)

It follows from this conclusion that it is impossible to determine uniquely, even in any given parameterization scheme for the PMNS matrix, all three mixing angles and all three Majorana neutrino CP violation phases, even if you can measure all nine entries in the PMNS matrix experimentally.  A unique set of neutrino parameters in the Majorana neutrino mass scenario also requires information in addition to the three left handed neutrino mass states and a full understanding of the oscillations of these three classes of neutrinos.  Those measurements inherently conflate the three CP violation phases in a Majorana mass model into a single number.  To decompose those three phases in a Majorana model from each other, you also need two more parameters related in some way to the see-saw mechanism of these mass models.  As some researchers have acknowledged, discerning these parameters experimentally could be very difficult.