
Monday, December 31, 2012

Gravito-Weak Unification?

A new preprint submitted earlier this month by a group of authors including one of the leading loop quantum gravity scholars suggests a deep link between gravity and the fact that the weak force acts only on left-handed particles.
 
http://arxiv.org/abs/1212.5246  Gravitational origin of the weak interaction's chirality Stephon Alexander, Antonino Marciano, Lee Smolin (Submitted on 20 Dec 2012)
We present a new unification of the electro-weak and gravitational interactions based on the joining the weak SU(2) gauge fields with the left handed part of the space-time connection, into a single gauge field valued in the complexification of the local Lorentz group. Hence, the weak interactions emerge as the right handed chiral half of the space-time connection, which explains the chirality of the weak interaction. This is possible, because, as shown by Plebanski, Ashtekar, and others, the other chiral half of the space-time connection is enough to code the dynamics of the gravitational degrees of freedom.  
This unification is achieved within an extension of the Plebanski action previously proposed by one of us. The theory has two phases. A parity symmetric phase yields, as shown by Speziale, a bi-metric theory with eight degrees of freedom: the massless graviton, a massive spin two field and a scalar ghost. Because of the latter this phase is unstable. Parity is broken in a stable phase where the eight degrees of freedom arrange themselves as the massless graviton coupled to an SU(2) triplet of chirally coupled Yang-Mills fields. It is also shown that under this breaking a Dirac fermion expresses itself as a chiral neutrino paired with a scalar field with the quantum numbers of the Higgs.
21 pages 
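Schematically (my gloss, not a formula from the paper), the decomposition that the abstract leans on is the chiral splitting of the complexified Lorentz algebra into two copies of complexified su(2),

$$\mathfrak{so}(3,1)_{\mathbb{C}} \;\cong\; \mathfrak{su}(2)_{\mathbb{C}} \oplus \mathfrak{su}(2)_{\mathbb{C}}, \qquad \omega = A^{+} \oplus A^{-},$$

where, following Plebanski and Ashtekar, one chiral (self-dual) half of the space-time connection is enough to code the gravitational dynamics, which frees the other chiral half to be identified with the weak SU(2) gauge field.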
The conceptual connection is particularly attractive because fundamental particle mass is deeply linked to weak force interactions.  All massive fundamental particles interact weakly, all fundamental particles that lack rest mass do not interact weakly, and the Higgs boson, often described as the source of fundamental particle mass, is at the heart of electro-weak unification.  It would make sense if inertial mass via the Higgs field and gravitational mass, which general relativity states is equivalent to inertial mass, have a deep common source.

The unification apparently gives rise to a sterile neutrino which is an analog to the Higgs boson in the gravitational sector, and a U(1) field that acts on the dark matter sector.

Now, to be clear, this is just a theoretical Beyond the Standard Model (BSM) notion that someone is putting out there, like hundreds (if not thousands) of other papers by theoretical physicists each year.  No one is claiming that it is the truth, just that the truth about Nature could conceivably look something like this.  What makes it notable is that I've never seen a BSM theory taking this approach before - it is exploring relatively virgin theoretical territory at a time when the more well trodden paths of BSM theories are increasingly looking like dead ends in light of new experimental data.


 
 

Thursday, December 27, 2012

Is There A Common Origin For Don't Eat It Myths?

Two separate legendary traditions, the Greek myth of Hades and Persephone, and the European notion of the land of the Fae, share a common notion. Those who eat or drink anything in the Underworld or the Faerie world, respectively, may never return from it. Are there other legendary traditions that share this mythological feature? Do they have common origins?

There are indeed many traditions that share elements of these myths, suggesting that they do have a common origin that is pre-Indo-European and has its oldest attested roots in Sumerian legends tied to some of the most ancient Sumerian kings.  And, the element of not eating or drinking anything in the Underworld or Otherworld may be a reinterpretation of the way in which the Sumerian god's rites were ritually observed (with a taboo on eating ground food during the ritual period, in honor of the fact that the god's very bones were ground in the Sumerian myth).

Both of these legendary traditions are pre-Christian, and both were parts of the cultures of linguistically Indo-European people at some point. But, they seem somewhat remote from each other. While the story of Hades and Persephone is deeply rooted in a polytheistic pantheon, the notion of Faerie can be almost animistic. 

Animistic religions are sometimes seen as a stage of religious development associated with a tribe or band society, before it reaches the kind of chiefdom society (and in particular, the somewhat federal late chiefdom phase of "complex chiefdoms") whose organization parallels the ruling band of polytheistic deities in early Greco-Roman and Norse mythology and which prevailed when those polytheistic schemes originated.  Monotheism, in turn, can be associated with formative eras in some of the earlier centralized bureaucratic states: the Jewish state in the Iron Age Levant, the Egyptian state during the reign of Akhenaten, the emergence of Christianity during a dominant Roman Empire, and the emergence of Islam as the tribal peoples of Arabia expanded by conquest to form a bureaucratized, urban empire.

The legendary culture of Faerie seems to live mostly in places that were once Celtic (Ireland, Britain, Normandy) and is sometimes described as Celtic folklore, although the word "Fairy" comes to English via Old French. It has its root in the Latin word for one of the Fates, but derivatives in other Romance languages apparently do not have the same connotations that they do in French and English. There also seems to be some sort of faerie tradition in Germanic Northern Europe. Wikipedia (linked above) notes that:

Folklorists have suggested that their actual origin lies in a conquered race living in hiding,[4] or in religious beliefs that lost currency with the advent of Christianity.[5] These explanations are not necessarily incompatible, and they may be traceable to multiple sources. Much of the folklore about fairies revolves around protection from their malice, by such means as cold iron (iron is like poison to fairies, and they will not go near it) . . .
[Citing [4] Silver, Carole B. (1999) Strange and Secret Peoples: Fairies and Victorian Consciousness. Oxford University Press. p. 47. and [5] Yeats, W. B. (1988) "Fairy and Folk Tales of the Irish Peasantry", in A Treasury of Irish Myth, Legend, and Folklore. Gramercy. p.1.]
 
The pivotal role of iron in the fairy myths suggests an origin of the myths in their current form not earlier than the Bronze Age to Iron Age transition, which roughly coincides with the point in time at which Indo-European Celts and Germanic people emerge and expand in Western and Northern Europe, perhaps carried by these cultures, but perhaps as a legacy of a substrate pre-Celtic Bell Beaker culture.  The notion of fairies as a conquered race living in hiding likewise fits with the notion of this lore being the legacy of a conquered substrate people whose storytellers and holy priests, who preserved these legends, might have had to keep hidden and practice their Old European beliefs in private.
 
Basque mythology, which is the most solid continuous cultural link to the pre-Indo-European culture of Western Europe, is arguably closer to the model of the Faerie legends than to the Classical Greco-Roman and Norse pantheons of Gods.  For example, Basque mythology, unlike the Greco-Roman and Norse pantheons, is a chthonic one: "all its characters dwell on earth or below it, with the sky seen mostly as an empty corridor through which the divinities pass," in contrast to the Greco-Roman tradition situating its Gods on a sky oriented Mount Olympus and a comparable arrangement in Norse mythology.  The mythological figure Mari in Basque mythology likewise seems more like a faerie queen than a Hera to a male Zeus.  This argues (weakly) for Faerie as a pre-Indo-European substrate absorbed into Celtic culture, and perhaps also into Germanic culture in a parallel cultural transmission.
 
On the other hand, folktale accounts of faerie in the English speaking world, at least, are generally situated in the era of conversion to Christianity in the early Middle Ages.  However, to the extent that faerie represents an early iron age Indo-European Celtic and Germanic incorporation of a Western and Northern European substrate, perhaps with origins in Bell Beaker traditions, a placement of key tales at the point when Celtic and Germanic pagan cultures were superseded by Christianity is not necessarily inconsistent with this hypothesis.
 
Persephone's myth was incorporated into the polytheistic Indo-European pagan tradition and has been compared to similar myths "in the Orient, in the cults of male gods like Attis, Adonis and Osiris, and in Minoan Crete."  But, this particular myth's origins seem to have roots that predate the arrival of Indo-Europeans in the Aegean.
 
The cult of Attis, the consort of Cybele, is sourced in Phrygian mythology ca. 1250 BCE and was later adopted from it into Greek mythology.  Osiris has origins in Egyptian legend.  He "was at times considered the oldest son of the Earth god Geb,[1] and the sky goddess Nut, as well as being brother and husband of Isis."  Both of these cases parallel the Basque pairing of the chthonic Mari and her consort Sugaar, and the story of Persephone and Hades could be seen as a gender reversed version of the tale for a patriarchal society displacing one in which women played a more dominant role.  The origins of the Adonis myth are hotly disputed and there are contradictory accounts.
 
But, the existence of something similar to the Persephone myth in Minoan Crete also points to an origin of the core of this myth in a pre-Greek substrate rather than as a shared part of the Indo-European tradition.  Its presence in the myth of the Egyptian Osiris likewise suggests its place in a pre-Indo-European tradition that spanned the Mediterranean basin.  Adonis and the other gods in this cluster of thematically similar deities likewise show strong similarities with the Sumerian god Tammuz:




In Babylonia . . . Tammuz, who originated as a Sumerian shepherd-god, Dumuzid or Dumuzi, the consort of Inanna and, in his Akkadian form, the parallel consort of Ishtar. The Levantine Adonis ("lord"), who was drawn into the Greek pantheon, was considered by Joseph Campbell among others to be another counterpart of Tammuz, son and consort. The Aramaic name "Tammuz" seems to have been derived from the Akkadian form Tammuzi, based on early Sumerian Damu-zid. The later standard Sumerian form, Dumu-zid, in turn became Dumuzi in Akkadian.
Beginning with the summer solstice came a time of mourning in the Ancient Near East, as in the Aegean: the Babylonians marked the decline in daylight hours and the onset of killing summer heat and drought with a six-day "funeral" for the god. Recent discoveries reconfirm him as an annual life-death-rebirth deity: tablets discovered in 1963 show that Dumuzi was in fact consigned to the Underworld himself, in order to secure Inanna's release, though the recovered final line reveals that he is to revive for six months of each year . . .  Locations associated in antiquity with the site of his death include both Harran and Byblos, among others. A Sumerian tablet from Nippur (Ni 4486) reads:
She can make the lament for you, my Dumuzid, the lament for you, the lament, the lamentation, reach the desert — she can make it reach the house Arali; she can make it reach Bad-tibira; she can make it reach Dul-šuba; she can make it reach the shepherding country, the sheepfold of Dumuzid
"O Dumuzid of the fair-spoken mouth, of the ever kind eyes," she sobs tearfully, "O you of the fair-spoken mouth, of the ever kind eyes," she sobs tearfully. "Lad, husband, lord, sweet as the date, [...] O Dumuzid!" she sobs, she sobs tearfully.
The cult of Tammuz is referenced in the Hebrew Bible as part of pre-Jewish pagan practice at the door of the Temple in Jerusalem.  Ezekiel 8:14-15.  Dumuzi, in turn, can be associated with some of the earliest kings in Sumerian king lists, suggesting that legends around real historical figures in Sumeria may have spawned these myths.

A Sumerian source in the Semitic tradition is a good fit to the fact that much of the Books of Genesis and Exodus in the Hebrew Bible borrows from Sumerian myths of people who adopted Semitic languages.  The centrality of Tammuz in Sumerian/early Semitic mythology may have facilitated the incorporation of this myth into neighboring non-Semitic forms of paganism, a religious structure well suited to borrowing myths from other peoples.

Thus, the myth of Persephone may have made its way into the pre-Jewish Semitic and ancient Egyptian traditions by way of Sumeria, whose copper age civilization gave rise to the oldest written documents and some of the earliest city-states before it underwent language shift to the Semitic Akkadian language.  This same source myth could also have been a source for the Mari-Sugaar consort pair of Basque mythology, which in turn could have spread throughout much of Western and Northern Europe as part of a Bell Beaker expansion, if one accepts my own admittedly disputable identification of the Basque culture and language's ethnogenesis with the Bell Beaker culture.

The legendary element of not partaking of food or drink in the underworld does not appear to be part of the original Sumerian myth.  But, it may be related to the observed practice, as late as the 10th century CE in Mesopotamia, in which "Women bewailed the death of Tammuz at the hands of his master who was said to have 'ground his bones in a mill and scattered them to the wind.' Consequently, women would forgo the eating of ground foods during the festival time."

A prohibition on eating ground foods in Mesopotamia may have morphed into a prohibition on eating any foods in the underworld as the story spread around the Mediterranean in a common form (probably one close to the older Basque legends of Mari and Sugaar) that was absorbed independently from substrate cultures in Greece, in Germanic Europe, and by the Celts, in the latter two cases as part of a not quite polytheistic faerie tradition.

 

Monday, December 24, 2012

A Review of Fundamental Physics in 2012

LHC Discovers Standard Model Higgs Boson

The biggest story in physics in 2012 was the official discovery of a Higgs-like boson which over the course of the year has increasingly been confirmed to have the properties of the Standard Model Higgs boson with a mass of about 126 GeV.  Experimental data in 2012 has confirmed that it has even parity and an intrinsic spin of zero (as predicted), and that the decays it produces are so far within the range of reasonable statistical and experimental error of the Standard Model Higgs boson (almost all within two sigma, properly calculated, and many closer), although not all of the data are precisely on the money of the Standard Model prediction (nor should they be, unless the data have been faked).

A year or two of additional LHC data should be able to much more definitively confirm that the observed Higgs boson decays match those predicted by the Standard Model and should pin down the mass of the Higgs boson to within about 0.1 GeV, making it a settled matter.  An additional year or two of LHC data should also rule out (or find) any additional Higgs bosons over a very broad range of masses (i.e. for all masses up to perhaps 600 GeV to more than 1 TeV, perhaps as much as ten times the Higgs boson mass discovered so far).

The observed mass of the Higgs boson is consistent with a universe that is at least "meta-stable" (i.e. has a predicted lifetime arising from quantum instability at least as long as its actual lifetime), and allows Standard Model calculations to remain "unitary" (i.e. compute probabilities that always add up to one for any given computation) to arbitrarily high energy levels, something that would not have been true for all possible Standard Model Higgs boson masses.

Thus, while the Standard Model Higgs boson discovery doesn't solve many unsolved problems in fundamental physics with a "why is nature this way?" character, it does solve most of the unsolved "how can we do physics calculations at extreme energies in a rigorous way?" problems of the Standard Model.

SUSY Looking Less Likely

The two experiments at the Large Hadron Collider have found no evidence of beyond the Standard Model physics, even as the high energies they are probing exclude many theories that had predicted new particles or new behavior of particles near the TeV scale.

Ongoing exclusions from collider physics, together with the tightening bounds of dark matter searches and neutrino physics, discussed below, in particular, are discouraging for proponents of Supersymmetry (SUSY) and string theory. 

While these theories have a variety of parameters and other "moving parts" that can be adjusted to put the new physics predicted by these theories beyond the range of experimental evidence, any supersymmetry theory that fits the LHC data must have a very high characteristic energy scale (e.g. in some high energy scale SUSY theories most superpartners of ordinary particles predicted under the theory might have masses of 8-20 TeV, with the lightest superpartners having masses of 1 TeV or more, and the theory may not conserve "R-parity" and hence lack the stable superpartner ground state which would otherwise have been a strong dark matter candidate).

SUSY theories with high characteristic energy scales should have theoretical consequences outside collider experiments (like a very heavy dark matter candidate and high rates of neutrinoless double beta decay) that don't seem to be supported by the new experimental evidence.  The non-collider consequences of high energy scale SUSY may ultimately falsify the theory entirely even though colliders themselves will never be large enough to rule out SUSY directly at all energy scales.

One can, of course, devise SUSY theories with moving parts that evade these theoretical consequences, but the less "natural" a SUSY theory is, the less well motivated it is as a true theory of nature supported by experimental evidence.  When SUSY was first formulated, essentially everyone who devised it expected that it would be experimentally visible at the energies the LHC has probed so far, even if they have since revised their opinions.

The problems which motivated SUSY - the hierarchy problem, the nature of symmetry breaking, and the question of whether the coupling constants converge to form a single common force at a "grand unification" energy level - are problems that nature doesn't seem too concerned to answer anytime soon.

Technicolor Dead

SUSY, of course, is not alone.  "Technicolor" models, for example, which were invented to create a Higgsless version of the Standard Model, are pretty much dead now due to the LHC discovery of a Higgs boson.  Technicolor was a theoretical Plan B that turned out not to be necessary.

Dark Matter And Modified Gravity

Dark Matter Effects Are Real, Whatever Their Source, And Unexplained

The universe we observe in our telescopes does not behave the way that the gravitational effects of General Relativity (which are mostly equivalent to Newtonian gravity at the level of precision we can observe with our telescopes) predict that it should.  Galaxies don't fling stars and gas away as they should if their mass were close to the sum of the stars we can observe plus the central black holes and planetary stuff that we know is there but can't see.  The disparity between masses as measured via relativistic lensing effects and masses estimated from observed luminous material is even greater for galactic clusters.

There are only two possible solutions to this problem, one or both of which must be true.  Either there is a lot of exotic non-baryonic dark matter out there of a type never seen in particle colliders, or the laws of gravity must be different than they are in general relativity, particularly in weak gravitational fields.

New, more powerful telescopes and computational capacity are making it possible to precisely quantify the discrepancy between the laws of general relativity applied to luminous matter and what we observe in our telescopes.  These observations, together with direct searches for particles that have the right properties, have ruled out the easiest dark matter theories.  These observations, particularly in galactic clusters, have also ruled out the simplest theories in which all of these effects come from modified gravitational laws.

Direct Searches Find No Dark Matter Particles And Colliders Exclude What Can't Be Seen Directly

Direct searches for dark matter have had contradictory results.  A couple have claimed to see something, but the somethings that they have seen have had different properties.  Other searches have seen nothing at all, effectively ruling out the existence of weakly interacting massive particles of dark matter in the 10 GeV and up mass range.

Yet, collider tests such as the LHC and LEP have ruled out anything like a weakly interacting neutrino at masses of less than 45 GeV.  SUSY dark matter candidates have been ruled out by the LHC and other collider experiments at masses of 100 GeV and less, as a general matter, and at masses of 600 GeV and less for specific candidates in specific versions of SUSY (such as minimal SUSY). The LHC has also ruled out additional Higgs bosons of the types predicted by SUSY theories up to relatively high masses.  No particle discovered so far in particle colliders like the LHC and its predecessors is a good fit to any dark matter particle that could fit the astronomy evidence.

Dark matter theories suffer the curse of being overconstrained. They need particles with properties that aren't found in any kind of matter we have ever observed despite considering extreme situations that have produced all sorts of exotic particles that don't exist in nature. 

Essentially all possible non-baryonic dark matter candidates (other than ordinary neutrinos or perhaps neutrino condensates) are strongly disfavored by some experimental evidence, and there aren't enough neutrinos in the universe to give rise to all of the effects attributed to dark matter.

CDM Model Simulations Don't Reproduce Observed Large Scale Structure In The Universe

Meanwhile, a variety of experimental results, most notably the large scale structure of the universe, appear to be inconsistent with a "cold dark matter" scenario in which dark matter effects observed in nature are due to heavy WIMPS.  Detailed simulations have established that if cold dark matter existed, the large scale structure of the universe would be far more fine grained with far more dwarf galaxies, for example.  The clarity with which this data proved that CDM is a false hypothesis reached critical mass in 2012, although it will take a number of years for this development to be widely assimilated by researchers in the field.

Hot neutrino dark matter seems inconsistent with the data as well.  Similar simulations show that hot dark matter would virtually eliminate the large scale structure of the universe and reduce it to a homogeneous, amorphous goo.

But "warm dark matter" in the KeV mass range remains consistent with the observed large scale structure of the universe.  If you simulate the formation processs of the universe after its initial moments in high powered computers and add the special sauce of warm dark matter (defined more by speed than the mass assumed to move at that speed in the model), then you get a level of large scale structure in the universe similar to what we actually see, not the goo you see with hot dark matter, or the excessive levels of fine scaled structure you see with cold dark matter.

But, while "warm dark matter" in the KeV range is a hypothesis that fits with the evidence from the large scale structure of the universe, collider experiments and neutrino mass experiments have come close to ruling out the existence of fundamental particles (or composite particles made up of fundamental particles) with masses in that range, or even remotely close.

There are also no good dynamical theories that explain the very consistent shapes of dark matter halos observed in galaxies with dark matter.  Why do galaxies of particular shapes always have the same shaped dark matter halos?  Dark matter theories that struggle to explain halo shapes even with multiple parameters still perform worse than single parameter modified gravity models in predicting the behavior of observed galaxies.

In particular, cold dark matter theories do not, as a rule, predict that dark matter will be observed with the distribution that must be inferred from how visible matter in galaxies acts.

Even if dark matter and not modified gravity models are correct, any successful dark matter theory needs to be able to explain the observed data with no more parameters than the modified gravity models, and no dark matter theory has successfully managed this so far.

Sterile Neutrinos?

The Standard Model would admit, without great injustice, neutrinos that do not interact via the weak interaction because they have right handed parity, which are called "sterile neutrinos."

Since particles decay via the weak force, there would be no missing matter attributable to sterile neutrinos in collider experiments or radioactive decay experiments. Direct dark matter detection experiments aren't designed to see particles with masses far less than 1 GeV and so couldn't see any form of warm or hot dark matter. Neutrino detection experiments would either ignore sterile neutrinos entirely, to the extent that they rely on weak force interactions, or would be unable to distinguish "fertile neutrinos" from "sterile neutrinos" to the extent that they rely on contact interactions. So there are reasons why sterile neutrinos would not have been detected directly so far.

But, there is also no positive experimental evidence for the existence of sterile neutrinos (as distinct from dark matter generally). And, there is no precedent for a fermion (or any massive particle, for that matter) that interacts via gravity but not via the weak force, the electromagnetic force, or the strong force. Also, all other Standard Model particles have the same mass regardless of their parity (left handed or right handed intrinsic spins). If the same were true of sterile neutrinos, they would be too light to be the main source of dark matter, which is the only experimental motivation for sterile neutrinos to exist.

Non-interacting massive sterile neutrinos in the keV range might help solve warm dark matter problems, but only if one could determine the nature of sterile neutrino leptogenesis and discern how they come to be arranged in the halos in which dark matter seems to arrange itself via gravity alone.

If sterile neutrino leptogenesis took place only at unification scale energies in the early universe, or perhaps also in extreme high energy interactions of the kinds found in galactic clusters, this could explain a relative absence of this kind of dark matter in our local solar system vicinity. And, perhaps the mechanisms that form them, or some analog to the weak force applicable only to right handed particles and much rarer than the observed weak force, could explain why these much lighter particles have keV sized momenta. Still, on balance, 2012 ended with less support for sterile neutrinos than there was at the beginning of the year.

Controversial Observations And Calculations Claim Local Dark Matter Is Ruled Out.

One study looking for the gravitational impact of dark matter in the vicinity of the solar system claimed to rule it out, although another study cast doubt on those conclusions.

Controversial Papers Argue That General Relativity Effects Are Larger Than Usually Assumed.

There are also mixed opinions on whether the effects of general relativity are adequately reflected in common models of galactic rotation curves.  Errors in these calculations could significantly overestimate the amount of dark matter that the universe must have to fit astronomy observations.

Dim Matter Discoveries Continue

A steady trickle of results continues to show that material parts of what was previously assumed to be non-baryonic dark matter are, in fact, merely "dim" ordinary matter such as interstellar gasses, very dim stars, and very heavy gas giant planet like objects that aren't quite stars.  Ultra fast objects emitted from central black holes provide a mechanism that could explain some of the distribution of dim matter.

Mainstream dark matter scholarship has failed to catch up with the transfers from the exotic dark matter side of the universe's total mass-energy budget to the ordinary dim matter side that result from these discoveries, thereby dramatically overstating the amount of dark matter present in the universe, which is closer to 50% than to 75% of all matter in the universe.

MOND Theories With Cluster Dark or Dim Matter Remain Viable

Several considerations have kept the alternative to dark matter, a modification to gravity, alive:

* Large quantities of "dim matter" in galactic clusters that are not present in ordinary galaxies reduce the need for larger quantities of dark matter; failure to account for general relativistic effects in galaxies could also reduce the need to find large quantities of dark matter.
* New theoretical motivations for a cutoff scale for modified gravity effects at levels on the order of the Hubble constant and cosmological constant have been proposed, with inspiration from Verlinde's entropic formulation of gravity; essentially, modified gravity effects in weak fields, starting at just the critical point where they are observed, could arise from the absence of gravity waves longer than the size of the universe.
* Cold dark matter theories have failed to come up with anything approaching the parsimony with which modified gravity theories explain galactic rotation curves with a single parameter gravity modification (see the sketch after this list), and cold dark matter theories have made inaccurate predictions about new data that gravity modification theories have accurately predicted.
* There are relativistically consistent formulations of gravity modification theories.
* Forms of baryonic or neutrino dark matter that can't explain rotation curves for a variety of reasons, such as the matter budget of the universe, can explain dark matter in galactic clusters, which make up a small part of the total amount of mass in the universe.
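To make the single parameter point concrete, here is a minimal sketch (my own illustration, not a calculation from any paper alluded to above) of how a MOND-style interpolation with one new constant, the acceleration scale a0, turns a falling Newtonian rotation curve into a flat one.  The galaxy mass, the value of a0, and the "simple" interpolation function mu(x) = x/(1+x) are all assumed, illustrative choices.

from math import sqrt

G = 6.674e-11          # m^3 kg^-1 s^-2
a0 = 1.2e-10           # m s^-2, the single new parameter
M = 1.0e41             # kg, rough baryonic mass of a large spiral galaxy
kpc = 3.086e19         # m

def v_newton(r):
    # Newtonian circular velocity for a point mass M
    return sqrt(G * M / r)

def v_mond(r):
    # circular velocity solving a * mu(a/a0) = a_N with mu(x) = x/(1+x),
    # which has the closed form solution below for the true acceleration a
    aN = G * M / r**2
    a = 0.5 * aN + sqrt(0.25 * aN**2 + aN * a0)
    return sqrt(a * r)

for r_kpc in (5, 10, 20, 40, 80):
    r = r_kpc * kpc
    print(f"r = {r_kpc:3d} kpc: Newtonian {v_newton(r)/1e3:6.1f} km/s, "
          f"MOND-like {v_mond(r)/1e3:6.1f} km/s")

The Newtonian curve falls off as 1/sqrt(r), while the modified curve flattens toward (G*M*a0)^(1/4), roughly 170 km/s for these inputs, which is the sense in which a single new parameter reproduces the generic shape of observed rotation curves.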

Evidence from the "bullet cluster" makes clear that modified gravity theories need dark or dim matter to be present in large quantities in galactic clusters, where they underestimate dark matter effects.  But, given the large amounts of "dim matter" that improved observational techniques are revealing in galactic clusters that are not present in isolated galaxies, this proposition seems like less of a problem than it did in the past, when we thought we understood the composition of galactic clusters better than we actually did.

If non-baryonic dark matter is found principally in galactic clusters, with almost all galactic dark matter and some galactic cluster dark matter explained by gravity modifications in weak fields, non-baryonic dark matter only needs to make up something on the order of 3-4% of all of the matter in the universe, instead of 50%-75%, since galactic clusters make up only about 10% of the mass in the universe and gravity modifications and newly discovered dim matter in galactic clusters account for some of the deficits even there.

Sources of non-baryonic dark matter like ordinary "fertile" neutrinos are far more viable in these quantities, given the known proportion of the universe's matter that is in the form of neutrinos, and given that there are nuclear processes that take place in galactic clusters much more often than elsewhere that could explain why there might be an excess number of neutrinos there.

Dark Energy Is Still A Solved Problem

The conventional way of describing the matter-energy budget of the universe states that the universe is predominantly composed of "dark energy", a uniform distribution of energy throughout all of the universe that leads it to expand at the rate indicated by the Hubble constant.

All observed dark energy effects in the universe are fully described by the cosmological constant called lambda, a single constant of integration in the equations of general relativity that has been measured fairly precisely.  Experimental efforts to distinguish dark energy conceptualized as a uniformly distributed thin haze of energy in the universe from dark energy conceptualized as one more term in the equations of gravity have so far failed to find any observable difference between the two.


Dark energy is nothing more than a well understood and simple feature of the formulas of general relativity.  Reifying "dark energy" as a substance, rather than part of the law of gravity is at best a bit of heuristic subterfuge and at its worst, misleading.  It is only moderately tolerable at all because general relativity to some extent reifies the fabric of space-time itself in one common layman's interpretation of the theory.
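The two readings contrasted above differ only in which side of Einstein's field equations the cosmological constant term is written on (this is standard textbook algebra, not anything specific to a particular paper):

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu} \qquad\Longleftrightarrow\qquad G_{\mu\nu} = \frac{8\pi G}{c^{4}}\left(T_{\mu\nu} + T^{(\Lambda)}_{\mu\nu}\right), \quad T^{(\Lambda)}_{\mu\nu} = -\frac{\Lambda c^{4}}{8\pi G}\,g_{\mu\nu}.$$

Moving the lambda term to the right hand side and calling it an energy-momentum tensor is what turns "one more term in the law of gravity" into "a uniform haze of energy"; the observable consequences are identical.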

Neutrino Physics

Mixing Matrixes

Neutrino physics experiments have now put positive non-zero values on all three of the neutrino mixing matrix angles (theta 12, theta 23 and theta 13), although they have not yet determined if there is a non-zero CP-violating phase in the PMNS matrix that governs neutrino oscillation.  These values are known to precisions on the order of 1% to 10%.
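For readers who want to see what is actually being measured, here is a minimal sketch of the standard parameterization of the PMNS matrix in terms of those three angles.  The numerical values below are rough, assumed circa-2012 best fits used purely for illustration, and the unmeasured CP-violating phase is set to zero.

import numpy as np

def pmns(theta12, theta23, theta13, delta=0.0):
    # standard parameterization U = R23 * U13(delta) * R12, angles in radians
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    return R23 @ U13 @ R12

# Illustrative, assumed angle values (degrees): solar, atmospheric, reactor.
U = pmns(np.radians(33.6), np.radians(42.0), np.radians(8.9))
print(np.round(np.abs(U), 3))                # the magnitudes that oscillation data constrain
print(np.round(np.abs(U @ U.conj().T), 3))   # unitarity sanity check: identity matrix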

Evidence for more than three generations of neutrinos has been quashed by experimental evidence.

Absolute and Relative Neutrino Mass

Evidence regarding the relative and absolute masses of the three neutrino mass eigenstates has also been determined with considerable precision.  The difference in mass between the lightest and next lightest neutrino mass eigenstate is about 0.008 eV.  The difference in mass between the second and third neutrino mass eigenstates is about 0.052 eV.

One study puts the sum of the neutrino mass eigenstates at 0.28 eV or less, implying an electron-neutrino mass of about 0.073 eV or less in an "ordinary hierarchy" (or slightly more in an "inverted hierarchy" of neutrino masses), and improved cosmological observations may be able to pin this number to 0.2 eV or less in the near future (unless the total is between 0.2 and 0.28 eV).  This would imply a muon neutrino mass of about 0.081 eV and a tau neutrino mass of about 0.133 eV.

But, the relative masses would be far less close to each other (i.e. less "degenerate") if the absolute mass of the electron neutrino were lower, which the experimental data does not rule out.  For example, if the electron-neutrino's mass were 0.001 eV, the muon neutrino mass would be about 0.009 eV, and the tau neutrino mass would be about 0.061 eV, for a sum of the three mass eigenstates of 0.071 eV.  The sum of the three neutrino masses can't be less than about 0.07 eV, so the maximum value of the sum and the minimum value differ by only about a factor of four and experimental evidence could narrow this to a factor of three within just a few years.
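The arithmetic in the two preceding paragraphs is simple enough to check directly.  This minimal sketch assumes a normal hierarchy and takes the quoted splittings at face value; the lightest-neutrino masses tried are just the cases discussed above.

d21 = 0.008   # eV, quoted gap between the lightest and next lightest state
d32 = 0.052   # eV, quoted gap between the second and third states

def masses(m1):
    # masses of the three eigenstates for a given lightest mass m1
    m2 = m1 + d21
    m3 = m2 + d32
    return m1, m2, m3

for m1 in (0.073, 0.001, 0.0):
    m = masses(m1)
    print(f"m1 = {m[0]:.3f} eV, m2 = {m[1]:.3f} eV, m3 = {m[2]:.3f} eV, "
          f"sum = {sum(m):.3f} eV")

A lightest mass of 0.073 eV roughly saturates the 0.28 eV bound on the sum, while a lightest mass near zero gives a sum close to the roughly 0.07 eV floor set by the splittings alone.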

Neutrinoless Double Beta Decay Searches And Their Implications

Neutrinoless double beta decay experiments continue to fail to detect any such decays, placing an upper limit on the frequency of such decays (which aren't allowed by the Standard Model), and hence bounding the potential that the neutrino could be a Majorana particle with Majorana mass (in addition to "Dirac mass" of the type found for all other fermions). 

Experimental limits on the Majorana mass of a neutrino, from searches for neutrinoless double beta decays, are 0.140 eV to 0.380 eV.

Neutrinoless double beta decay will either be discovered, or will have an upper limit an order of magnitude or two lower, when the current round of experiments searching for it are completed within a decade or so.

An important consequence of these bounds on neutrino mass is that isolated neutrinos, having masses on the order of a fraction of an electron-volt, cannot be a source of warm dark matter.  Warm dark matter is hypothesized to have a mass on the order of a kiloelectron-volt, about 10,000 times heavier than a tau neutrino and 100,000 to 1,000,000 or more times as heavy as the presumably most common electron neutrino.  Yet, warm dark matter is on the order of 100 times lighter than individual electrons.  No Standard Model particles or known composite Standard Model particles have masses anywhere close to the hypothetical warm dark matter mass (a proton or neutron is about 1 GeV), and this mass range is not well constrained by state of the art particle colliders, which explore masses in the hundreds of GeV or less.

The failure of credible experimental evidence of neutrinoless double beta decay also disfavors a wide variety of beyond the Standard Model theories in which lepton number is not a conserved quantity and instead baryon number minus lepton number is a conserved quantity. Such models generically predict beyond the Standard Model particles as well as lepton number violations.  And, in these models, the higher the energy scale of the beyond the Standard Model particles, the more common neutrinoless double beta decay should be. But, as the LHC increasingly pushes up the minimum masses of any beyond the Standard Model particles, and new neutrinoless double beta decay experiments push down the maximum rate of lepton number violations, lepton number violating models are increasingly disfavored.

The failure of models with strong lepton number violations is a big problem for cosmology, because it is quite a bit harder to devise theories that can explain the imbalance of matter and anti-matter in the universe without them.  But, apparently, cosmologists have been forced by collider physicists to deal with this inconvenient reality.

Neutrinos Don't Break The Speed Of Light

Late 2011 reports of faster than light neutrinos from the OPERA experiment turned out to be a simple case of a loose cable in the experimental set up.  The corrected results show neutrinos moving at a speed indistinguishable from the speed of light, which implies that they have masses in the low tens of GeV or less (something long known to much greater precision by other means).
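The kinematics behind that inference can be sketched as follows (the beam energy and timing precision used below are assumed, illustrative numbers, not the published OPERA figures).  For a particle of energy E and mass m,

$$\frac{v}{c} = \sqrt{1 - \frac{m^{2}c^{4}}{E^{2}}} \approx 1 - \frac{m^{2}c^{4}}{2E^{2}}, \qquad\text{so}\qquad \left|1 - \frac{v}{c}\right| < \epsilon \;\Rightarrow\; mc^{2} \lesssim E\sqrt{2\epsilon}.$$

For a beam energy of roughly 17 GeV and a speed measured to about one part in 100,000, that works out to a bound of order 0.1 GeV, far weaker than direct laboratory limits but far tighter than the trivial bound set by the beam energy itself.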

Emerging Relationships Between Standard Model Constants

Fundamental Constant Measurements

Much of the ongoing work in fundamental physics is the process of measuring, ever more precisely, the constants of the Standard Model of particle physics, and of cosmology.  But, some of these constants are known much more precisely than others.

The weak force boson masses, the charged lepton masses, the speed of light in a vacuum, and the coupling constants of the electromagnetic and weak forces are known to astounding precision (parts per million).

The gravitational constant, the strong force coupling constant, the top quark mass, and the Higgs boson mass are known, or are on the verge of being known, with intermediate precision (perhaps parts per thousand).

The absolute neutrino masses, the PMNS matrix parameters, the masses of the quarks other than the top quark, and the cosmological constant, however, are known only to one or two significant digits of accuracy.  But, we do know the values of all of the fundamental physics constants, with the possible exception of the CP-violating parameter of the PMNS matrix, to at least one significant digit of accuracy.

Implications of Fundamental Constant Measurements

As we come to know these constants with greater precision, it becomes possible to test a variety of possible relationships between them.  Almost everyone in fundamental physics believes that nature does not in fact have dozens of truly fundamental Standard Model constants that lack deeper sources from which they can be, in principle at least, derived.  But, the deeper connections between those constants remain elusive.

If we knew that some of the Standard Model constants had deeper relationships to each other, we might have better clues about a deeper theory than the Standard Model that could elucidate them.  For example, the "coincidental" cancellations of contributions to the Higgs boson mass whose existence has been called the "hierarchy problem" might be transparent if we knew how the fundamental fermion and boson masses were related to each other functionally.

We are close to being able to experimentally test for leading contenders for descriptions of these relationships that could dramatically reduce the number of experimentally measured parameters in the Standard Model and establish that there are deeper relationships between these parameters than the Standard Model itself makes evident.

Koide's Formula

Koide's formula, in its original 1982 version by Yoshio Koide, states a precision relationship among the rest masses of the charged leptons that is still consistent with experimental measurements three decades later.  A simple extension of this formula has been proposed to derive, from the charged lepton masses, the masses of the top, bottom, charm and strange quarks (also here), although the extended formula seems to imply a higher down quark mass than experimental evidence supports and a near zero up quark mass in some formulations.
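The original relation is easy to check numerically.  In the minimal sketch below, the charged lepton masses are the familiar values in MeV (quoted from memory, so treat the last digits loosely), and Q is Koide's ratio, which the formula says should be exactly 2/3.

from math import sqrt

m_e, m_mu, m_tau = 0.511, 105.658, 1776.82   # MeV, approximate values

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2
print(Q, 2/3)   # Q comes out at about 0.6666, matching 2/3 within the tau mass error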

Other extensions of Koide's formula have been proposed for the neutrino masses (one suggests an electron neutrino mass of about 0.0004 eV, a muon neutrino mass of about 0.009 eV and a tau neutrino mass of about 0.0510 eV; see also by the same author here), with a negative square root for the electron neutrino mass rather than a positive one, but these can't be tested due to the lack of precision measurements of absolute neutrino masses.  Carl Brannen's 2006 presentation in the first link in this paragraph builds up this analysis from a model in which leptons are composed of preons, developed in the previous link.  Further analysis of both extensions of the original Koide's formula can be found here.

Recent scholarship by Yukinari Sumino and François Goffinet has also addressed the criticism of Lubos Motl that the Koide relation is formulated in terms of masses that are themselves dependent upon an energy scale rather than in terms of more fundamental quantities.

Extended versions of Koide's formula, at their root, if they work, imply that all twelve of the fermion masses in the Standard Model may be determined exactly from the two most exactly measured fermion masses, thereby eliminating the need for ten of the twelve experimentally measured Standard Model constants.

A Simple Higgs Boson Formula?

The Higgs boson mass continues to be consistent within the bounds of experimental error with a simple formula indeed: 2H=2W+Z (arguably 2H=2W+Z+photon mass, which is equivalent), that almost no one in the theoretical physics community predicted in advance.  The reason for the difference between double the Higgs boson mass (about 252 GeV) and the Higgs field vacuum expectation value (about 246 GeV) remains largely unexplained, but suggestive of a simple formula as well (for example, the difference of 6 GeV is roughly equal to the sum of the quark masses other than the top quark).  The W and Z boson masses, in turn, are related in the Standard Model by the weak mixing angle, and the photon mass in the Standard model is theoretically assumed to be exactly zero.  This relationship, if determined to be valid, would allow the masses of all of the Standard Model bosons to be determined from a single weak force boson mass and a single mixing angle, reducing the number of experimentally measured Standard Model constants by one.
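A quick numerical check of the 2H=2W+Z observation, using roughly the 2012 values of the boson masses (the inputs below are approximate, assumed values for illustration only):

m_W, m_Z, m_H = 80.385, 91.188, 125.7   # GeV, approximate 2012 values

print((2 * m_W + m_Z) / 2)              # about 126.0 GeV, the predicted Higgs mass
print(2 * m_H - (2 * m_W + m_Z))        # discrepancy in GeV, comparable to the 2012 measurement error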

Quark-Lepton Complementarity

A hypothesis known as quark-lepton complementarity (QLC) suggests that the CKM matrix governing quark flavor mixing, and the PMNS matrix governing lepton flavor mixing, when properly parameterized, can be described in terms of angles that sum to 45 degrees or other multiples of that angle.  Since the CKM matrix entries are known with precision, and since there are a finite number of sensible ways to parameterize the two matrixes, it is possible to make firm predictions about the PMNS matrix terms predicted by this theory and to compare them against experimental results.  QLC is contrary to experimental evidence for many possible parameterizations, but has not been ruled out for all of them at this time.  Quark-lepton complementarity, if established to be correct, would allow all eight of the experimentally measured mixing matrix parameters of the Standard Model to be determined from just four of those mixing matrix parameters.
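The simplest version of the QLC relations can likewise be checked on the back of an envelope.  In the sketch below the angles are rough, assumed circa-2012 values (the Cabibbo and V_cb angles for the CKM matrix, the solar and atmospheric angles for the PMNS matrix), not a particular parameterization endorsed by any paper.

ckm_12, pmns_12 = 13.0, 33.6    # degrees: Cabibbo angle, solar mixing angle
ckm_23, pmns_23 = 2.4, 42.0     # degrees: V_cb angle, atmospheric mixing angle

print(ckm_12 + pmns_12)   # about 46.6 degrees, close to 45
print(ckm_23 + pmns_23)   # about 44.4 degrees, close to 45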

Relationships Between Mixing Matrixes and the Square Roots Of Fermion Masses

There have also been suggested relationships between the fermion mass matrixes of the Standard Model (or the matrix of the square roots of Standard Model fermion masses) and the mixing matrixes of the Standard Model that will be possible to test with precision PMNS angle measurements and neutrino masses in hand.

QCD

Quantum chromodynamics, which describes the interactions of quarks and gluons in the Standard Model, makes only low precision and qualitative predictions relative to the other Standard Model forces.  This is because the perturbative mathematical tools used to calculate electroweak force predictions don't work well with QCD at low energies, since gluons have a strong degree of self-interaction.
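One standard way to see why perturbative tools fail in the infrared is the one-loop running of the strong coupling (a textbook formula, included here purely as an illustration):

$$\alpha_{s}(Q^{2}) = \frac{\alpha_{s}(\mu^{2})}{1 + \dfrac{b_{0}}{4\pi}\,\alpha_{s}(\mu^{2})\,\ln\dfrac{Q^{2}}{\mu^{2}}}, \qquad b_{0} = 11 - \tfrac{2}{3}\,n_{f},$$

which grows without bound as Q falls toward the QCD scale of a few hundred MeV, so the expansion in powers of the coupling stops converging in exactly the low energy regime where confinement operates and lattice methods have to take over.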

But, numerical approximations using lattice methods, high powered computers and Monte Carlo methods are increasingly making it possible to make solid QCD predictions even in low energy "infrared" contexts where quark confinement seriously limits direct measurements.

These approximations are increasingly making it possible to explain how gluons give rise to the vast majority of the mass in the universe, to predict a massive state for gluons which are in motion (gluons have no rest mass), to predict the masses of composite particles made of quarks and gluons, and to predict the existence of composite particles made entirely of gluons without any quarks at all which are called glueballs.

While the mathematics involved is hard, and the ability to conduct direct experimental measurements of the predicted behavior beyond the nature of the composite particle spectrum observed in nature is modest, QCD has an advantage not shared by beyond the Standard Model theories.  There is wide consensus on the exact form of the equations of QCD and there are moderately accurate experimental measurements of all of the physical constants in those equations.  The theoretical predictions of QCD have not been contradicted by experiment and there is thus high confidence in the ability of elaborate numerical methods that are based on these equations to accurately reproduce nature even in areas where it is very difficult to observe directly.

Quantum Gravity

The Longstanding Challenge Of Unifying Quantum Mechanics and General Relativity

The Standard Model and General Relativity are inconsistent mathematically.  Yet, both theories of fundamental physics perform admirably to the highest levels of precision to which we can experimentally test them in their own respective domains.  Efforts to reconcile the two have been an ongoing area of theoretical physics research since the 1940s.

Indeed, the Holy Grail of theoretical physics, a "theory of everything", involves finding a way to reconcile some generalization of Standard Model physics that unifies the three forces and the couple of dozen particles in it into a "Grand Unified Theory", with a quantum gravity theory involving a spin-2, massless graviton that carries the gravitational force.  Generalizations of supersymmetry called string theory, in several forms determined to be equivalent descriptions of a larger M theory on a many dimensional brane, were held out for decades to be that TOE.  But, this proved to be a bridge too far.  Neither SUSY nor M theory has worked out so far, and they seem to be on the verge of being contradicted by experiment.

Loop Quantum Gravity

The main contender for a quantum gravity theory other than String Theory has been "loop quantum gravity", although half a dozen other names for areas of research using the same paradigm have been developed.  All of these theories start from the premise that space-time is discrete rather than continuous, in some carefully defined manner at some sufficiently fine level, typically the Planck scale.  These approaches use toy models connecting nodes of space-time according to rules that look like quantum mechanical rules to formulate a space-time that behaves, in the domains where it has been tested, like general relativity and to give rise emergently to a four-dimensional space-time.

Efforts are underway to develop a consensus formulation of LQG, to integrate Standard Model particles and interactions into the model, and to explore phenomenological distinctions between classical general relativity and the LQG formulations that reduce to it, in those circumstances where classical general relativity gives rise to mathematical inconsistencies with the Standard Model.

In some LQG models, the Standard Model particles themselves are emergent excitations of localized areas of space-time.  It is hoped that quantum gravity could allow us to better understand phenomena like black holes, the Big Bang, the point-like nature of Standard Model particles, and perhaps dark matter and dark energy as well.

LQG remains very much a work in progress, but unlike string theory, it is a work in progress that is showing real theoretical progress (in part because these are still early days in the field).  No insurmountable dead ends in the LQG research program have emerged yet.

Ad Hoc Efforts To Address Particular Quantum Gravity Questions

Other avenues of quantum gravity research aren't so ambitious. 

Programs to investigate phenomena around black holes, around the Big Bang, and in high energy settings (asymptotic gravity) have simply come up with ad hoc and incomplete ansatz approaches to analyzing particular quantum gravity problems on a case by case basis, without claiming to have a consistent theory of quantum gravity as a whole.

Notably, one of these approaches, asymptotic gravity, made one of the most accurate of the many dozens of Higgs boson mass predictions.





Thursday, December 20, 2012

Precision Pre-History

A new paper on wooden Neolithic water wells in Germany highlights a trend in research about the prehistoric human condition that has been mostly invisible because it has happened in a gradual and diffuse way.  This trend is towards an increasingly precise chronology of the Holocene era (i.e. from around the time that farming and herding were invented) with an increasingly large number of data points.  This is also true, although less strikingly, for the Upper Paleolithic and the Middle Stone Age.

The linked study, for example, examined 151 oak timbers from four waterlogged Neolithic wells that were dated to between 5469 BCE and 5098 BCE.  The date at which the first farmers appeared in each part of Europe and the Fertile Crescent is known to a precision of about +/- 150 years, which isn't bad for events that are 7,000 to 10,000 years old in most of those places.

Written history starts in Egypt and Sumeria about 3500 BCE, and starts to include Anatolia by about 1700 BCE, although the historical record is still quite patchy until a few centuries after 1000 BCE in the Iron Age.  Significant written history is found in Britain around 0 CE although there are gaps in the record, and isn't well established in much of Northern and Central Europe until the early Middle Ages.

But, despite the limited availability of written history, our ability to match times and places comprehensively to archaeological cultures and subcultures and periods within them is increasingly precise, as is the richness of the data available to describe each of them.  Increasingly, paleoclimate data from tree rings and ice cores and organic remains in layers of archaeological sites can be used to calibrate these dates to each other and to broader climatic influences on human civilization.

Rather than having a prehistoric chronology in which there are a few highlight points and big gaps of the unknown in between them, we increasingly have a chronology of prehistory (particularly in Europe and the Middle East) which provides a comprehensive account from the Neolithic all of the way through to the present.

For Holocene era Europe, the timeline can break the entire period from about 6000 BCE to about 1000 CE (after which written records are much more widely available and are present almost everywhere in Europe) into periods of roughly 200-300 years (about 28 date bins), with meaningful detail about pretty much every one of them, at a level of geographic precision comparable to the size of the smaller European countries, or to the first or second level subdivisions of larger European countries (a few hundred place bins).  If you break this era of European history and prehistory into the roughly 10,000 resulting bins in space and time, you can make meaningful, empirically based statements about what was going on at that time and place in almost all of them.  And, there are many parts of the prehistoric record of Europe where the level of precision in terms of both dates and geography is much finer.

Ancient DNA data, combined with old fashioned but reliable physical anthropology analysis of old skeletal remains whose conclusions are often corroborated by ancient DNA evidence, has given us a solid foundation from which to discuss who the people who practiced these prehistoric cultures were, where they came from, and the extent to which they displaced or were in continuity with prior residents of the same places.

In short, we have reached a point where the breadth and depth of our reliable data on European prehistory since the arrival of the first farmers is almost as good as our ancient history record, where it is available, for the entire period before the Roman era, although, of course, prehistory lacks many of the names and personalities of the historic record, and likewise lacks definitive resolution of historical linguistic questions even though strong suppositions can be supported.

Tuesday, December 18, 2012

Does the SM Require Excited Higgs Bosons?

QCD physicist Marco Frasca argues at his blog that the mathematical structure of the scalar field of the Higgs boson in the Standard Model implies that there cannot be just a single Standard Model Higgs boson. 

He explains that "a massless scalar field with a quartic interaction in [de Sitter] space develops a mass. . . . A self-interacting scalar field has the property to get mass by itself."
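The statement can be sketched schematically (this is my gloss on the mechanism, not Frasca's actual derivation): a quartic scalar field with no mass term,

$$\mathcal{L} = \tfrac{1}{2}\,\partial_{\mu}\phi\,\partial^{\mu}\phi - \tfrac{\lambda}{4}\,\phi^{4},$$

has exact oscillating classical solutions whose frequency is set by the field amplitude and a power of the coupling lambda rather than by any mass parameter in the Lagrangian, and quantizing around such solutions produces not a single massive state but a tower of them.  That is the sense in which a self-interacting scalar field "gets mass by itself," and why excited states above the lowest one would be expected.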

"de Sitter space" is a space-time in which special relativity applies, and is a background upon which the Standard Model of Particle Physics can be formulated that is more general than the usual Minkowski space (where only special relativity applies), but is symmetrical and lacks the mass-energy fields of general relativity; it is a particular vacuum solution of the equations of general relativity.

Instead, Frasca argues, the mathematics imply that there must also exist other, higher energy, excited states of the scalar field in addition to the Higgs boson observed so far. In other words, there must be higher energy versions of the Higgs boson in the kind of scalar field that it generates.  He summarizes his argument by stating that:
[I]f we limit all the analysis to the coupling of the Higgs field with the other fields in the Standard Model, this is not the best way to say we have observed a true Higgs particle as the one postulated in the sixties. It is just curious that no other excitation is seen beyond the (eventually cloned) 126 GeV boson seen so far but we have a big desert to very high energies. Because the very nature of the scalar field is to have massive solutions as soon as the self-interaction is taken to be finite, this also means that other excited states must be seen.
Frasca's observation goes beyond the canonical description of the Higgs boson, but isn't precisely beyond the Standard Model physics either.  A better way to describe his observation would be to say that it is a non-canonical analysis of how Standard Model physics plays out that makes predictions that have not yet been observed and not yet achieved consensus status among physicists.

Is The de Sitter Space Assumption An Important Loophole To Frasca's Conclusion?

There is a loophole in Frasca's analysis, however.  His analysis and that of the other paper he cites in support of his conclusion both assume a background of de Sitter space, as is natural and usually done without comment of any kind in quantum mechanics. 

But, we don't live in de Sitter space.  We live in a universe full of matter and energy, to which the full background independent formulation of Einstein's theory of general relativity applies.

Put another way, our world has stuff in it, while de Sitter space doesn't, and that might be relevant to a mechanism that has a fundamental role in giving rise to mass, the very quantity upon which gravity acts and which de Sitter space assumes away, at least as a first order approximation.  Some of the relevant distinctions are explored here.

It is possible that, while a self-interacting massless scalar field does acquire mass and does imply the existence of excited states of the Higgs boson in de Sitter space, this conclusion does not in fact hold in the space-time of the asymmetrical, mass filled universe version of general relativity in which we actually live.  Indeed, the absence of excited states of the Higgs boson could be a clue that could point physicists in the right direction when developing a coherent theory of quantum gravity. 

I have no particularly good reason to think that Frasca's result shouldn't generalize, other than that we do not observe the Higgs field at its vacuum expectation value acquiring mass in real life (with the possible exception of dark energy, which is many, many orders of magnitude too small for this conclusion to hold).  So, if there is no flaw in the mathematical reasoning that Frasca employs as he reaches his conclusion in de Sitter space, perhaps this counterfactual assumption about the nature of space-time does matter in some way.  Given that the Higgs vev, by definition, permeates all of space-time and plays a fundamental role in giving rise to mass, this isn't such a far-fetched possibility. 

In general, no mathematically tractable theory of quantum gravity that can clear this kind of hurdle has been formulated, so it is impossible to say, in general, how a quantum mechanics based conclusion would play out in a context in which gravity could play an important role. 

In similar situations (e.g. the quantum mechanics that take place at the event horizon of a black hole), physicists develop an ad hoc ansatz to deal with possible quantum gravity issues, and to determine where quantum gravity considerations might be relevant, on a case by case basis without the benefit of a full theory of quantum gravity.

Could Excited Higgs Boson States Be Unattainably Heavy?

It is also worth noting that since Frasca is discussing the qualitative properties of otherwise massless scalar fields that are self-interacting, his post, at least, does not predict any particular mass for an excited state of a Higgs boson. He simply predicts that they exist. 

Thus, the loophole that he affords himself is that an excited state of the Higgs boson might exist, but it might be so heavy that it will never be realized outside of Big Bang conditions. 

Indeed, if this were the case, excited Higgs bosons might play a role in making possible forms of baryogenesis and leptogenesis that otherwise cannot be achieved at rates sufficient to explain the amount of matter present in the universe.

Background On Scalar, Vector and Tensor Fields

Spin-0 particles are either "scalar" or "pseudo-scalar" depending upon their parity and create "scalar fields" (i.e. fields described at any given point in space-time by a single number).  Spin-1 bosons are called "vector bosons" and create "vector fields" (i.e. fields described at any given point in space-time by a directional arrow and the magnitude of that arrow, such as electromagnetic fields).  And, spin-2 particles (such as the hypothetical graviton) are "tensor" bosons that create "tensor" fields (i.e. fields described at any given point in space-time by a matrix of numbers of the kind found in general relativity).
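In symbols, at a given space-time point x, the three cases look like this (a notational illustration only):

\phi(x): one number per point (spin-0, e.g. the Higgs field);
A^\mu(x): four components per point (spin-1, e.g. the electromagnetic four-potential);
g_{\mu\nu}(x): a symmetric 4 \times 4 matrix of components per point (spin-2, e.g. the metric of general relativity).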

Since the Higgs boson is the only spin-0 particle in the Standard Model, as all other Standard Model bosons have spin-1, this issue doesn't crop up anywhere else in the Standard Model.  Further, the electromagnetic field is not self-interacting.  Only the Higgs boson generates a self-interacting scalar field. 

Thus, Frasca's analysis does not imply that there must be a more massive excited version of the W or Z boson, a more massive excited version of the gluon, or a more massive excited version of the photon (although his reasoning is simply silent on these points and doesn't rule them out either).

Physical Constants Stay Constant

A clever astronomy observation has established that the physical constants of the Standard Model related to the electron mass, the Higgs field strength, the quark masses, the strength of the strong force, and Planck's constant have been unchanged to a very high degree of precision for at least 7 billion years, subject to some fairly weak assumptions that are used to put a date on the observation (e.g. the constancy of the speed of light has to be assumed as part of a redshift calculation).
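The generic logic of this kind of measurement (a schematic sketch of the method, not the specific analysis behind this particular result) is that each atomic or molecular transition frequency depends on combinations of the constants, so a redshift-corrected observed frequency can be compared to its laboratory value:

\frac{\nu_{\text{obs}}(1+z) - \nu_{\text{lab}}}{\nu_{\text{lab}}} \approx K\,\frac{\Delta X}{X}

where X stands for the relevant combination of constants (for instance the proton-to-electron mass ratio) and K is a sensitivity coefficient that differs from transition to transition, so lines with different K observed in the same source let the redshift and any drift in X be disentangled.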

Of course, this is what scientists assume anyway.  But, finding a way to confirm the value of so many physical constants more or less directly, so distantly in the past, is a remarkable feat that quashes a variety of beyond the Standard Model theories.

On the other hand, the constants whose values have been observed to be constant are not necessarily those whose values would be expected to change over time, at least on those time scales. 

Many proposals for changing physical constants suppose differing values in the extremely high energy environment of the first 0.5 billion years or less after the Big Bang (or even the first few hours or seconds after it).  These formulas may formally be functions of temperature or entropy.  More than six billion years after the Big Bang, these physical constants would have long since reached equilibrium levels.

Other proposals for physical constants that vary over time involve physical constants related to gravity and the details of the particle composition of the universe (e.g. a dark matter proportion or Hubble's constant), which arguably arise from the structure of all of space-time and may evolve as the composition and dispersal of the universe changes over time.

Monday, December 17, 2012

Srinivasa Ramanujan, India's Mathematical Mystic

Srinivasa Ramanujan was a self-trained mathematical genius and self-styled mystic from Madras, India, who died in 1920 at the age of thirty-two. Contrary to the Western scientific tradition, he didn't reveal his methods, didn't provide proofs, and didn't show his work, other than to say that his mathematical observations came to him in dreams provided by a Hindu goddess.

But, the observations about number theory and abstract algebra that he made from his deathbed in 1920 were just a few of the veritable horde of conjectures, neither proved nor disproved, that he advanced. Some of these have just been proved using modern mathematical methods developed in the last decade.

A televised documentary of his life and accomplishments is coming soon in honor of the 125th anniversary of his birth.

Modern mathematicians are still struggling to figure out how he saw relationships that three generations of mathematicians since him, informed by far more research upon which they build their own work, have not managed to see. What fruitful component was there to his work that has eluded the entire profession in the West for so long?

Few mathematicians believe that Srinivasa Ramanujan was genuinely divinely inspired, but this isn't to say that they don't think he was on to some amazing unstated principle or approach to their trade that they lack, and which makes the relationships more obvious. In the same way, while it was an accomplishment for Wiles to finally prove Fermat's Last Theorem, using methods that were clearly not available centuries earlier when it was formulated, mathematicians still daydream over and ponder what simpler approach (even if not fully rigorous) Fermat could have used to reach his conclusion.

Then again, mathematics is a mature discipline. With a handful of notable exceptions (e.g. fractals, and the simplex method of solving linear programming problems), particularly in applied mathematics, almost all of the material studied by mathematics students in undergraduate and early graduate level courses had been worked out by the deaths of Swiss mathematician Leonhard Euler in 1783 and French mathematician Jean Baptiste Joseph Fourier in 1830, several generations before Srinivasa Ramanujan was born.

There is very little being taught in graduate school mathematics classes today that Srinivasa Ramanujan would not have either been immediately familiar with and able to grasp, or had the foundational knowledge to figure out in a matter of a few days or weeks. The state of mathematics in 1920 was not so very far behind what it is today in a great many of its subfields, including number theory, where he was most renowned.

Number theory remains a subfield of mathematics where many easy to understand problems remain unsolved and where each new advance seems like some sort of miracle not easily inferred by just anyone from the knowledge that came before it in the field.  It has progressed not in some logical and orderly fashion, but with a crazy quilt of zen-like observations whose connection to a larger context and structure of the theorems of the field is obscure.

Language Is Messy

A nice analysis of the origin of the sense of the phrase "sleep in" in American English, with the meaning "to sleep later than usual", nicely illustrates the point that language is messy. The first attested historical use of the phrase in this sense in American English appears in a 1931 work of fiction (involving a Scottish character), apparently borrowed from the older Scottish usage, and from there it somehow migrated from fiction into common usage.

A word that has a particular meaning in one phrase may have a different meaning in another context. Even the same phrase may have different meanings in different contexts in a single living language at a single historical moment.

This matters because, for a great many purposes in historical linguistics, it is counterfactually assumed, for the sake of practical analysis, that words have only one semantic meaning which may be shaded a bit over time, but which doesn't make the leaps of meaning that linguists have observed in lots of common words in real life, relatively modern history.

Remembering that reality is messier than it is assumed to be is worthwhile, even when one does use simplifying assumptions out of necessity. Otherwise, one might fail to take one's own results with the grain of salt that these results absolutely require.

Ancient Cheese Makers

An article in the journal Nature last week (abstract available here) provides definitive evidence for cheese making in Northwest Anatolia, ca. 5000 BCE.  This was based on the detection and identification of milk-derived organic chemicals that had seeped into pottery of that age, in vessels with shapes consistent with cheese strainers, and that match what seeps into similar cheese strainers today.  Until now, other purposes for the strainers (e.g. in beer making) had also been viable possibilities.

This date is particularly notable because it falls early in the Neolithic revolution, at a time when we know, based upon ancient DNA evidence, that adult lactose tolerance was rare or non-existent.  Cheese is digestible by lactose intolerant individuals even when cow's milk is not.  Thus, dairying for cheese was probably an important part of cattle herding long before dairying for milk gained importance in the Neolithic diet.  This gives us a quite intimate and detailed insight into the pattern of daily life and subsistence for some of the first farmers in West Eurasian Neolithic villages.

The location of the ancient cheese strainers that were found (many years ago), and tested for milk chemicals in the research leading to this publication, also suggests that cheese making had been developed before farming and herding spread from the Fertile Crescent into the rest of Europe.  Some researchers had strongly considered the possibility that the dairying aspect of cattle herding might have instead been a European innovation.

Friday, December 14, 2012

LHC: "Higgs Boson Mass Estimates Fuzzy"

The big news in physics in 2012 was the official discovery of the Higgs boson, which was actually all but certain a year ago. Now that the dust has settled, the details are being examined and an interesting nuance has come up.

There are two experiments at the Large Hadron Collider (LHC) that have looked for and found the Higgs boson. One is called CMS, and the other is called ATLAS.

The various means of detecting a Higgs boson at CMS have produced mass estimates for the Higgs boson that are consistent with each other.

But, at ATLAS, the mass determined by some methods is a bit more than two standard deviations different from the mass determined by other methods used at the experiment. The high number at ATLAS (from diphoton measurements) is 126.6 GeV. The low number from ZZ decays is 123.5 GeV. A combined number that is the best fit to the combined data is about 125 GeV. The measured value of the signal strength of the Higgs boson evidence at ATLAS is also about 80% higher (about two standard deviations) than the expected value, although this may be due in part to systematic measurement errors and biases in the mass fitting formula used.

By comparison, at CMS the mass estimate based on ZZ decays is 126.2 +/- 0.6 GeV, and the mass estimate based on gamma gamma decays is around 125 GeV. So the CMS masses are compatible with, and in the middle between, the two extreme ATLAS values, and a best fit to the combined CMS data is between 125 GeV and 126 GeV. In all likelihood, the discrepancy seen at ATLAS is simply a matter of measurement error and, in fact, there is just one Higgs boson with a mass of something like 125 GeV (a crude average of the four measurements would be 125.3 GeV, and while there are good reasons why a more sophisticated combination of the four measurements would be more technically correct, this isn't far from the mark of what makes sense). This would be consistent with both of the ATLAS measurements and the CMS measurement at about the two standard deviation level. (For comparison, estimates from four and a half months ago are summarized here.)
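The crude average quoted above is just the unweighted mean of the four numbers (a naive combination; the experiments' own fits weight each channel by its uncertainty, which I have not done here):

masses = [126.6, 123.5, 126.2, 125.0]   # ATLAS diphoton, ATLAS ZZ, CMS ZZ, CMS diphoton, in GeV
print(sum(masses) / len(masses))        # 125.325, i.e. roughly 125.3 GeV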

But, there is another possibility. There are well motivated beyond the Standard Model theories in which there is more than one electrically neutral, spin-zero Higgs boson, and if there were, there could be two such Higgs bosons similar in mass to each other that would also produce a greater than otherwise expected Higgs boson signal. This is the case in almost all SUSY models.

While signal strengths after new data are mostly migrating towards the Standard Model expected strength, the diphoton data remain stronger than expected at both ATLAS and CMS even as new data come in. So it is looking more and more as if the stronger than expected signal in the diphoton channel could have real physical meaning, rather than simply being a fluke.

I don't think that the LHC is really seeing two different Higgs bosons, and neither do lots of other people who nevertheless have duly noted the possibility. But, it is the most interesting story from the LHC results at the moment, so it deserves a mention. The existence of two neutral Higgs bosons, rather than just one, would revolutionize physics, would be the only beyond the Standard Model experimental result other than neutrino oscillation in the last half a century, and would dramatically tip the balance in the SUSY v. no SUSY determination.

Another interesting new little tidbit is that further analysis of the data has determined that the Higgs boson has even parity and spin-zero, rather than spin-2 or odd parity, at a 90% confidence level, as expected.

Tuesday, December 4, 2012

Ancient DNA Links Justinian Plague To Black Death

Genetic evidence and historical records, taken together, suggest that the Black Plague may have first arrived in West Eurasia from an East Asian source around 541 CE, possibly via Silk Road trade or via the Turkic Huns or Avars.  This caused massive loss of life then, rivaling the damage it did when it resurfaced in the 14th century. 

In particular, the early series of outbreaks, known as the Justinian Plague after the Byzantine Emperor who was reigning when it appeared, may have been instrumental in weakening the Byzantine Empire (i.e. the Eastern Roman Empire) in advance of its collapse at the hands of an expanding Arab Islamic Empire, particularly from 632 CE to 655 CE, when it lost all of its territory except Asia Minor (a.k.a. Turkey).

The New Plague Genetics Study
After the initial reconstruction of the complete medieval genome of Y[ersinia] pestis from a Black Death [(1347 -- 1351)] cemetery in London last year, the researchers from the University of Tuebingen used a published genome wide dataset from more than 300 modern Y.pestis strains to reconstruct the relationship of ancient and modern plague bacteria. Due to the well-established age of the ancient remains they were able to date major radiation events in the history of this pathogen that are likely linked to major pandemics in the human population.
The comparison of modern and ancient genomes revealed that of the 311 Y.pestis strains analyzed, 275 trace their ancestry back to the medieval Black Death pandemic in the mid of the 14th century, confirming a previous analysis of 21 complete plague genomes by the same authors in 2011. In the new larger dataset, however, the authors identified an additional cluster of 11 contemporary bacterial strains that branch in the Y.pestis phylogeny between the 7th and 10th centuries, thus suggesting a radiation event of Y.pestis bacteria during a major outbreak. This time period roughly coincides with the Justinian plague, which historical sources suggest took place between the 6th and 8th centuries AD. 
From here citing Kirsten I. Bos, Philip Stevens, Kay Nieselt, Hendrik N. Poinar, Sharon N. DeWitte, Johannes Krause. Yersinia pestis: New Evidence for an Old Infection. PLoS ONE, 2012; 7 (11): e49803 DOI: 10.1371/journal.pone.0049803.

The new study apparently examines the issue by looking at internal evidence in plague DNA for an earlier outbreak, based on a cluster of modern strains whose divergence from the rest of the phylogeny predates the 14th century C.E. radiation associated with the Black Death.  The illustration from the paper below summarizes this analysis.


As the authors of the new open access paper explain:

Based on similarity in mortality levels, geographic distribution, and recorded symptoms, historians have long suspected that the Plague of Justinian (542–740 AD) might have been caused by the same infectious agent as that responsible for the 14th-century Black Death [5]. Since several publications have implicated Y. pestis as the principal cause of the Black Death by phylogenetic assignment and evaluation of DNA quality [1], [6], [7], the possibility that the Plague of Justinian may have been responsible for the deep cluster we observe here carries some legitimacy. This is further supported by the placement of the cluster approximately half the distance between the Black Death (1283–1342 AD) and the ancestral rodent strain Y. pestis microtus, which is suspected to have diverged from the soil-dwelling Y. pseudotuberculosis root approximately 2000 years ago (41–480 AD). Our cursory dating analysis based on relative branch lengths reveals a divergence time of 733–960 AD for this cluster, thus placing it in a phylogenetic position expected for a Y. pestis radiation event roughly coincident with the Plague of Justinian. We regard this as an important observation since a Y. pestis involvement in the plague of Justinian seemed unlikely from our previous whole genome analysis [1]. Confirmation of their potential for human infection via isolation of one of these strains, or an as yet uncharacterised close relative, from an infected patient would be helpful to understand their possible relationship to the plague of Justinian. . .

We acknowledge that the above conclusions have yet to be confirmed via a more robust full genomic comparison of Y. pestis strains, both contemporary and historical. Specifically, Y. pestis data from human mass burials dating to the Justinian era may hold pertinent information to permit a more thorough evaluation of the evolutionary history of this notorious human pathogen. 
The references in the quoted part of their discussion are:

1.  Bos KI, Schuenemann VJ, Golding GB, Burbano HA, Waglechner N, et al. (2011) A draft genome of Yersinia pestis from victims of the Black Death. Nature 478: 506–510.
5.  Sherman IW (2006) The Power of Plagues. Washington, D.C.: ASM Press. 431 p.
6.  Schuenemann VJ, Bos K, DeWitte S, Schmedes S, Jamieson J, et al. (2011) Targeted enrichment of ancient pathogens yielding the pPCP1 plasmid of Yersinia pestis from victims of the Black Death. Proc Natl Acad Sci USA. doi: 10.1073/pnas.1105107108.
7.  Haensch S, Bianucci R, Signoli M, Rajerison M, Schultz M, et al. (2010) Distinct Clones of Yersinia pestis caused the Black Death. PLoS Pathog 6: e1001134. doi: 10.1371/journal.ppat.1001134.
 
Analysis
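Before turning to my earlier posts, a crude way to see where the paper's "relative branch lengths" date comes from (a deliberately naive back-of-the-envelope calculation of my own, assuming a strict molecular clock and using the midpoints of the date ranges quoted above; the authors' actual analysis is more sophisticated):

# Midpoints of the calibration ranges quoted in the paper.
black_death = (1283 + 1342) / 2        # ~1312 AD
microtus_split = (41 + 480) / 2        # ~260 AD, divergence of the ancestral rodent strain
# The new cluster sits roughly half the distance between the two on the tree.
print((black_death + microtus_split) / 2)  # ~786 AD, inside the paper's 733-960 AD estimate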
 
About two years ago, I did a comprehensive analysis of the Justinian Plague at Wash Park Prophet that reached a similar conclusion, also based on ancient DNA evidence (the article in question, which focused on the East Asian root of the phylogeny and was unreferenced in that post, was: Morelli G, Song Y, Mazzoni CJ, Eppinger M, Roumagnac P, et al. (2010) Yersinia pestis genome sequencing identifies patterns of global phylogenetic diversity. Nat Genet 42: 1140–1143). See also here.  I have also looked at the genetic impact of the plague in Croatia and Southern Italy even in modern populations.

Another post of mine noted a study showing that plague outbreaks appear to have had some connection to climate trends, and more generally to the Migration period of ca. 400 CE to 800 CE, during which "barbarian" tribes relocated en masse across Europe in the face of adverse climate conditions, migrations that may have played a part in carrying the plague to the Eastern Roman Empire.

The summary of the plague itself that I quoted then stated:
The Plague of Justinian was a pandemic that afflicted the Eastern Roman Empire (Byzantine Empire), including its capital Constantinople, in the years 541–542. . . Modern scholars believe that the plague killed up to 5,000 people per day in Constantinople at the peak of the pandemic. It ultimately killed perhaps 40% of the city's inhabitants. The initial plague went on to destroy up to a quarter of the human population of the eastern Mediterranean. New, frequent waves of the plague continued to strike throughout the 6th, 7th and 8th centuries CE, often more localized and less virulent. It is estimated that the Plague of Justinian killed as many as 100 million people across the world. Some historians such as Josiah C. Russell (1958) have suggested a total European population loss of 50% to 60% between 541 and 700.

After 750, major epidemic diseases would not appear again in Europe until the Black Death of the 14th century.
The number of deaths caused by the Justinian Plague may have been one of the greatest in recorded history together with other major epidemics and floods.

My previous analysis noted the possible important role that plague may have played in weakening the Byzantine Empire as it contended with an era of Arab Muslim expansion (the timing of which was reviewed here as well).  The new study adds credibility to this aspect of the historical narrative of the Justinian Plague.

Two Provocative Papers On Gravity

http://arxiv.org/abs/1212.0454
Gravity can be neither classical nor quantized
Sabine Hossenfelder
(Submitted on 3 Dec 2012)
I argue that it is possible for a theory to be neither quantized nor classical. We should therefore give up the assumption that the fundamental theory which describes gravity at shortest distances must either be quantized, or quantization must emerge from a fundamentally classical theory. To illustrate my point I will discuss an example for a theory that is neither classical nor quantized, and argue that it has the potential to resolve the tensions between the quantum field theories of the standard model and general relativity.
7 pages, third prize in the 2012 FQXi essay contest "Which of our basic physical assumptions are wrong?"

Hossenfelder's essay considers the possibility that Planck's constant runs with the energy level of the environment, in an amount only discernible at very high energies, and reaches zero at the temperatures seen in the early days of the Big Bang and inside black holes.  This causes the gravitational constant to run to zero at these energy levels, so rather than reaching infinity and creating singularities, gravity turns itself off when temperatures get great enough.
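A heuristic way to see why a vanishing Planck constant would also switch off gravity (my own dimensional-analysis gloss, not an equation taken from the essay): if the Planck mass m_Pl is treated as the fixed fundamental scale, then

G = \frac{\hbar\, c}{m_{\text{Pl}}^{2}} \;\to\; 0 \quad \text{as} \quad \hbar \to 0,

so the gravitational coupling fades away in precisely the high temperature regime where \hbar is supposed to run to zero.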

http://arxiv.org/abs/1212.0371
Not on but of
Olaf Dreyer
(Submitted on 3 Dec 2012)
In physics we encounter particles in one of two ways. Either as fundamental constituents of the theory or as emergent excitations. These two ways differ by how the particle relates to the background. It either sits on the background, or it is an excitation of the background. We argue that by choosing the former to construct our fundamental theories we have made a costly mistake. Instead we should think of particles as excitations of a background. We show that this point of view sheds new light on the cosmological constant problem and even leads to observable consequences by giving a natural explanation for the appearance of MOND-like behavior. In this context it also becomes clear why there are numerical coincidences between the MOND acceleration parameter, the cosmological constant and the Hubble parameter.
9 pages. This article received a fourth prize in the 2012 FQXi essay contest "Questioning the Foundations".

Dreyer constructs a formalism in which particles are excitations of space-time.  In this construction, ground state energy in the vacuum is zero and the Casimir effect can be explained by means other than vacuum energy. 

The energy of the vacuum in this formulation is described by changes in the gravitational ground state energy, which, in a vacuum at zero temperature and without considering entropy, produces the familiar Newtonian gravitational constant. 

But, when entropy is considered, gravity weakens at non-zero temperatures (in a derivation based on the definition of free energy in a system with entropy).  The entropic effect is composed of contributions from wavelengths of all lengths, but wavelengths longer than the size of the universe (the maximal wavelengths) cannot contribute.  Taking this limitation into account, it is remarkable that the Hubble scale for the size of the universe, and the square root of the inverse of the cosmological constant, both produce cutoffs that change the strength of the gravitational constant at an acceleration scale on the same order of magnitude as the empirically determined scale of Milgrom's MOND regime. 
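The numerical coincidence at stake can be checked with round numbers (a rough sketch using the standard textbook values; these are not figures taken from Dreyer's paper):

c = 3.0e8                        # speed of light in m/s
H0 = 70 * 1000 / 3.086e22        # Hubble constant, ~70 km/s/Mpc converted to 1/s
a0 = 1.2e-10                     # Milgrom's empirical MOND acceleration scale in m/s^2
print(c * H0)                    # ~6.8e-10 m/s^2
print(c * H0 / (2 * 3.14159))    # ~1.1e-10 m/s^2, the same order of magnitude as a0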

Hence, entropy effects on the gravitational force, in a context where particles are seen as excitations of space-time, naturally produce MOND-like gravitational effects that mirror an empirically derived modification of gravity that captures essentially all of the effects attributed to dark matter at the galactic scale and some of the effects attributed to dark matter at the galactic cluster scale (where ordinary but invisible matter could in theory account for the additional dark matter effects observed in galactic clusters, whose composition and structure are still not fully understood).

Dreyer cites many recent papers in the literature at the close of his essay that have combined the ideas of Verlinde and Milgrom in a similar way, arguing that his approach is notable because it does not use a holographic approach and is three dimensional. 

Other essays in the contest can be found here.

As a bonus, recent papers on quark-lepton complementarity that integrate the latest empirical data can be found here and here and here and here, by an overlapping group of Chinese investigators.  They examine different parameterization regimes to determine which ones might be consistent with both QLC and the empirical data, and suggest that measurements of CP violation in neutrino oscillations are critical to discriminating between the possible options.  Another investigator, independent of that group, looks at the issues here.  A third team looks at related problems here.  A related investigation of the CKM matrix itself is here and focuses on a number of interesting relationships between the mass matrix and the mixing matrices there.
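For readers unfamiliar with the idea, the empirical core of quark-lepton complementarity is that corresponding quark (CKM) and lepton (PMNS) mixing angles appear to sum to roughly 45 degrees; with round current values (my own round numbers, not figures taken from these papers):

\theta_{12}^{\text{CKM}} + \theta_{12}^{\text{PMNS}} \approx 13^\circ + 34^\circ \approx 47^\circ \approx \frac{\pi}{4}, \qquad \theta_{23}^{\text{CKM}} + \theta_{23}^{\text{PMNS}} \approx 2^\circ + 43^\circ \approx \frac{\pi}{4}

Whether these sums are exactly 45 degrees, and what corrections they receive, is the sort of question that better measurements, including of CP violation in neutrino oscillations, are supposed to help resolve.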