Tuesday, February 24, 2015

A Theoretical Case For The Higgs Boson As A Composite Particle Made Of Gauge Bosons

The case for the Higgs boson as a scalar combination of electroweak gauge bosons, assembled in a gauge invariant fashion, is made in a preprint by F.J. Himpsel (of the University of Wisconsin at Madison Physics Department, who primarily works in condensed matter physics; another of his interesting ideas about fundamental physics, with additional explanation of the one described in the preprint, is found here). The preprint makes a theoretical case for a Higgs boson as a composite object made of gauge bosons whose mass is half of the Higgs vacuum expectation value (vev) at tree level, before adjustment for the higher order loops that bridge the gap between the tree level estimate and the experimentally observed value.

The paper is notable because it takes these related notions from mere numerology to something with a plausible theoretical foundation, one that can close the gap between rough first order theoretical estimates and reality in a precise, calculable, falsifiable way.  There is a lot of suggestive evidence that he is barking up the right tree.  And, there is much to like about the idea of a Higgs boson as a composite of the Standard Model gauge bosons, as discussed in the final section of this post.

The preprint observes that the Higgs boson mass is close to half of the Higgs vev, i.e., 123 GeV (actually closer to 123.11 GeV), a roughly 2% difference.  This gap is material, on the order of a five sigma difference from the experimentally measured value, but the explanation for the discrepancy is interesting and plausible (some mathematical symbols translated into words):
The resulting value MH = ½v = 123 GeV matches the observed Higgs mass of 126 GeV to about 2%. A comparable agreement exists between the tree-level mass of the W gauge boson MW = ½gv = 78.9 GeV in (8) and its observed mass of 80.4 GeV. Such an accuracy is typical of the tree-level approximation, which neglects loop corrections of the order of the weak force coupling constant g²/4π, which is approximately equal to 3%. It is reassuring to see the Higgs mass emerging directly from the concept of a Higgs boson composed of gauge bosons.
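In symbols, the key tree-level relation and the size of the neglected loop corrections (restated back-of-the-envelope style, using the vev and SU(2) coupling values quoted later in this post, not the paper's own internal inputs) are:

```latex
M_H = \tfrac{1}{2}v \approx \tfrac{1}{2}\,(246.23\ \text{GeV}) \approx 123.11\ \text{GeV},
\qquad
\alpha_W = \frac{g^2}{4\pi} \approx \frac{(0.65293)^2}{4\pi} \approx 0.034
```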
The summary at the end of the paper notes that:
In summary, a new concept is proposed for electroweak symmetry breaking, where the Higgs boson is identified with a scalar combination of gauge bosons in gauge invariant fashion. That explains the mass of the Higgs boson with 2% accuracy. In order to replace the standard Higgs scalar, the Brout-Englert-Higgs mechanism of symmetry breaking is generalized from scalars to vectors. The ad-hoc Higgs potential of the standard model is replaced by self-interactions of the SU(2) gauge bosons which can be calculated without adjustable parameters. This leads to finite VEVs of the transverse gauge bosons, which in turn generate gauge boson masses and self-interactions. Since gauge bosons and their interactions are connected directly to the symmetry group of a theory via the adjoint representation and gauge-invariant derivatives, the proposed mechanism of dynamical symmetry breaking is applicable to any non-abelian gauge theory, including grand unified theories and supersymmetry.

In order to test this model, the gauge boson self-interactions need to be worked out. These are the self-energies [of the W+/- and Z bosons] and the four-fold vertex corrections [of the WW,WZ, and ZZ boson combinations]. The VEV of the standard Higgs boson which generates masses for the gauge bosons and for the Higgs itself is now replaced by the VEVs acquired by the W+/- and Z gauge bosons via dynamical symmetry breaking. Since the standard Higgs boson interacts with most of the fundamental particles, its replacement implies rewriting a large portion of the standard model. Approximate results may be obtained by calculating gauge boson self-interactions within the standard model, assuming that the contribution of the standard Higgs boson is small for low-energy phenomena. The upcoming high-energy run of the LHC offers a great opportunity to test the characteristic couplings of the composite Higgs boson, as well as the new gauge boson couplings introduced by their VEVs. If confirmed, the concept of a Higgs boson composed of gauge bosons would open the door to escape the confine of the standard model and calculate previously inaccessible masses and couplings, such as the Higgs mass and its couplings.
Background: The Standard Model Constants

The error-weighted combined experimental value of the Higgs boson mass, as of the latest updated results from the LHC in September of 2014, is 125.17 GeV, with a one sigma margin of error of roughly 0.3 GeV to 0.5 GeV.  In other words, there is at least a 95% probability that the true Higgs boson mass is between 124.17 GeV and 126.17 GeV, and the 95% probability range is probably closer to 124.37 GeV to 125.97 GeV.

The experimentally measured value of the Higgs vev is 246.2279579 +/- 0.0000010 GeV. Conceptually, this is a function of the SU(2) electroweak force coupling constant g and the W boson mass.  In practice, it is determined using precision measurements of muon decays.
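Concretely, the chain runs from the muon lifetime to the Fermi constant GF to the vev (these are standard textbook relations; the numerical value of GF below is the PDG figure):

```latex
v = \left(\sqrt{2}\,G_F\right)^{-1/2}
  = \left(\sqrt{2}\times 1.1663787\times 10^{-5}\ \text{GeV}^{-2}\right)^{-1/2}
  \approx 246.22\ \text{GeV},
\qquad
v = \frac{2 M_W}{g}\ \ \text{(equivalent tree-level statement)}
```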

The other fundamental particle masses in the Standard Model are as follows:

* The top quark mass is 173.34 +/- 0.76 GeV (determined based upon the data from the CDF and D0 experiments at the now closed Tevatron collider and the ATLAS And CMS experiments at the Large Hadron Collider as of March 20, 2014).
* The bottom quark mass is 4.18 +/- 0.03 GeV (per the Particle Data Group).  A recent QCD study has claimed, however, that the bottom quark mass is actually 4.169 +/- 0.008 GeV.
* The charm quark mass is 1.275 +/- 0.025 GeV (per the Particle Data Group).  A recent QCD study has claimed, however, that the charm quark mass is actually 1.273 +/- 0.006 GeV.
* The strange quark mass is 0.095 +/- 0.005 GeV (per the Particle Data Group).
* The up quark mass and down quark mass are each less than 0.01 GeV with more than 95% confidence, although the up quark and down quark pole masses are ill defined, so these masses are usually reported at an energy scale of 2 GeV instead.
* The tau charged lepton mass is 1.77682 +/- 0.00016 GeV (per the Particle Data Group).
* The muon mass is 0.1056583715 +/- 0.0000000035 GeV (per the Particle Data Group).
* The electron mass is 0.000510998928 +/- 0.000000000011 GeV (per the Particle Data Group).
* Each of the three Standard Model neutrino mass eigenstates (regardless of the neutrino mass hierarchy that proves to be correct) is less than 0.000000001 GeV.
* The W boson mass is 80.365 +/- 0.015 GeV.
* The Z boson mass is 91.1876 +/- 0.0021 GeV.
Photons and gluons have an exactly zero rest mass (as does the hypothetical graviton).

The only other experimentally determined Standard Model constants not set forth above are:

* The strong force coupling constant (about 0.1185 +/- 0.0006 at the Z boson mass energy scale per the Particle Data Group).
* The SU(2) and U(1) electroweak coupling constants, g and g', which are known with exquisite precision.  The value of g is about 0.65293 and the value of g' is about 0.34969 (both are known with much greater precision than the five digits quoted here); they are related to the electromagnetic coupling and the weak mixing angle as sketched just after this list.
* The four parameters of the CKM matrix, which are known with considerable precision.  In the Wolfenstein parameterization, they are λ = 0.22537 ± 0.00061, A = 0.814 +0.023/−0.024, ρ̄ = 0.117 ± 0.021, and η̄ = 0.353 ± 0.013.
* The four parameters of the PMNS matrix, three of which are known with moderate accuracy.  These three parameters are theta12 = 33.36 +0.81/−0.78 degrees, theta23 = 40.0 +2.1/−1.5 degrees or 50.4 +1.3/−1.3 degrees, and theta13 = 8.66 +0.44/−0.46 degrees.
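For reference, the two electroweak couplings are tied to the electromagnetic coupling e and the weak mixing angle θW by standard relations (plugging in the approximate values above as a consistency check, not a new measurement):

```latex
e = \frac{g\,g'}{\sqrt{g^2 + g'^2}},
\qquad
\sin^2\theta_W = \frac{g'^2}{g^2 + g'^2}
 \approx \frac{(0.34969)^2}{(0.65293)^2 + (0.34969)^2} \approx 0.223
```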

None of these are pertinent to the issues discussed in this post. All of these, except the CP violating phase of the PMNS matrix and the quadrant of one of the PMNS matrix parameters, have been measured with reasonable precision.

General relativity involves two constants (Newton's constant, which is 6.67384(80)×10⁻¹¹ m³ kg⁻¹ s⁻², and the cosmological constant, which is approximately 10⁻⁵² m⁻²).

In addition, Planck's constant (6.62606957(29)×10⁻³⁴ J s) and the speed of light (299,792,458 m s⁻¹) are experimentally measured constants (even though the speed of light's value is now part of the definition of the meter) that are known with great precision and which must be known to do fundamental physics in the Standard Model and/or General Relativity.

A small number of additional experimentally determined constants may be necessary to describe dark matter phenomena and cosmological inflation.

A Higgs Boson Mass Numerology Recap

The intermediate equations in the tree-level analysis in the paper suggest how the 125.98 +/- 0.03 GeV/c² value implied by the simple formula 2H = 2W + Z could be close to the actual result.  If this formula were true, the combined electroweak fits of the W boson mass, top quark mass and Higgs boson mass would favor a value at the low end of that range, perhaps 125.95 GeV/c².

* * *

An alternative possibility is that the sum of the squares of the masses of the fundamental bosons of the Standard Model equals exactly one half of the square of the Higgs vev (with the sum of the squares of the masses of the fundamental fermions having the same value as the sum of the squares of the boson masses).  That is, for Higgs vev = V, Higgs boson mass = H, W boson mass = W, and Z boson mass = Z:

H^2=(V^2)/2-W^2-Z^2, which implies a Higgs boson mass of 124.648 +/- 0.010 GeV, with combined electroweak fits favoring a value at the high end of this range (perhaps 124.658 GeV).  This would imply a top quark mass of about 174.1 GeV, which is consistent with the current best estimates of the top quark mass; a slightly lower top quark mass of 173.1 GeV would be implied if the Higgs boson mass were, for instance, 125.95 GeV.  Both of these top quark masses are within one standard deviation of the experimentally measured value of the top quark mass.

This could also have roots in the analysis in the paper, which includes half of the square of the Higgs vev, the W boson mass, and the Z boson mass in its analysis (and has terms for the photon mass that drop out because the photon has a zero mass).

* * *

Thus, both alternative possibilities (2H = 2W + Z, and sum of F² = sum of B² = ½V²) are roughly consistent with the experimental evidence, although they are clearly inconsistent with each other using pole masses for each boson.
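A quick numeric check of the two relations, using the boson masses and vev quoted earlier in this post (a sketch only; the output of the second formula shifts by a couple hundredths of a GeV depending on which W boson mass estimate is used as the input):

```python
from math import sqrt

# Masses and Higgs vev in GeV, as quoted earlier in this post.
V = 246.2279579   # Higgs vacuum expectation value
W = 80.365        # W boson pole mass
Z = 91.1876       # Z boson pole mass

# Candidate relation 1: 2H = 2W + Z
h1 = (2 * W + Z) / 2
print(f"H from 2H = 2W + Z:              {h1:.3f} GeV")  # ~125.96

# Candidate relation 2: H^2 = V^2/2 - W^2 - Z^2
h2 = sqrt(V**2 / 2 - W**2 - Z**2)
print(f"H from H^2 = V^2/2 - W^2 - Z^2:  {h2:.3f} GeV")  # ~124.66
```

The roughly 1.3 GeV spread between the two outputs is the inconsistency referred to above.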

It is also possible, however, that the correct scale at which to evaluate the masses in these formulas might, for example, be closer to the Higgs vev energy scale of about 246 GeV, and that at that scale, both formulas might be true simultaneously.  Renormalization shrinks the fundamental gauge boson masses more rapidly than the fundamental fermion masses as energy scales increase, and the energy scale at which the sum of the squares of the fermion masses equals the sum of the squares of the boson masses is somewhere above the scale of the pole masses themselves but below 1,000 GeV (i.e. 1 TeV).

In a naive calculation using the best known values of the fundamental fermion and boson masses, the sum of the squares of the fermion masses is not quite equal to the sum of the squares of the boson masses (including the Higgs boson mass).  But, the uncertainties in the top quark mass, the Higgs boson mass, and the W boson mass (in that order) are sufficient to make it unclear how the two sums relate, either at the pole masses or at any other energy scale.  The uncertainties in these three squared masses predominate over any uncertainties in the masses of the other eleven fermions, the Z boson (whose mass is known with seven times more precision than the W boson mass despite its higher absolute value), and the Higgs vev.

If the top quark and Higgs boson mass measurements could be made about three times more precise than the current state of the art experimental measurements, most of the current uncertainty regarding the nature of the Higgs boson mass and the relationship of the fundamental particle masses to the Higgs vev could be eliminated.  The remainder of the LHC experiments will almost surely improve the accuracy of both of these measurements, but may be hard pressed to improve them by that much before these experiments are completed.

What Does A Composite Higgs Boson Hypothesis Imply?

It is always a plus to be able to derive any experimentally determined Standard Model parameter from other Standard Model parameters, reducing the number of degrees of freedom in the theory.  This would make electroweak unification even more elegant.

More deeply, if the Higgs boson is merely a composite of the Standard Model electroweak gauge bosons, then:

(1) The hierarchy problem as conventionally posed evaporates, because the Higgs boson mass itself is no longer fine tuned.  The profound fine tuning of the Higgs boson mass in the Standard Model is the gist of the hierarchy problem.  The demise of the hierarchy problem removes an important motivation for SUSY theories.

(2) Some of the issues associated with the hierarchy problem migrate to the question of why the W and Z boson mass scales are what they are, but if the Higgs boson mass works out to be such that the fermionic and bosonic contributions to the Higgs vev are identical, then the W and Z boson masses are a function of the fermion masses and an electroweak mixing angle, or equivalently, the fermion masses are a function of the electroweak boson masses and the texture of the fermion mass matrix.

(3) The spotlight in the mystery of the nature of the fermion mass matrix would cease to be on arbitrary coupling constants to the Higgs boson, and would return squarely to W boson interactions, which are the only means in the Standard Model by which fermions of one flavor can transform into fermions of another flavor.

(4) There are no longer any fundamental Standard Model bosons that are not spin-1 (the hypothetical spin-2 graviton is not part of the Standard Model).  All fundamental Standard Model fermions are spin-1/2.  Eliminating a fundamental spin-0 boson from the Standard Model changes the Lie groups and Lie algebras which can generate the Standard Model fundamental particles without excessive or missing particles in a grand unified theory.

(5) If the Higgs boson couplings are derivative of the W and Z boson couplings, then neutrinos, which couple to the W and Z bosons (although only via the weak force), should derive their masses via the composite Higgs boson mechanism, just as the other fermions in the Standard Model do.  This implies that neutrinos should have Dirac masses, just like the other particles that derive their masses from the same interactions.

(6) A composite Higgs boson makes models with additional Higgs doublets less plausible, except to the extent that different combinations of fundamental Standard Model gauge bosons can generate these exotic Higgs bosons.  If they can, those bosons would have masses and other properties that could be determined from first principles and would be divorced from supersymmetry theories.

(7) A composite Higgs boson might slightly tweak the running of the Standard Model coupling constants, influencing gauge unification at high energies (as could quantum gravity effects).  A slight tweak of 1-2% to the running of one or more of the coupling constants from the electroweak scale to the Planck or GUT scale (more than a dozen orders of magnitude) is all that would be necessary for the Standard Model coupling constants to unify at some extremely high energy.  It may also have implications for whether the vacuum is unstable, metastable or stable.

The Proca Model and Podolsky Generalized Electrodynamics

Background and Motivation 

The Standard Model, and in particular, the quantum electrodynamics (QED) component of the Standard Model, assumes that the photon does not have mass (although photons do, of course, have energy, and hence are subject to gravity in general relativity in which gravity acts upon both matter and energy).

Now, almost nobody seriously thinks that QED's assumption of a massless photon is wrong, because the predictions of QED are more precise, and more precisely experimentally tested, than any other part of the Standard Model, or for that matter almost anything else in experimental physics whatsoever.  There is no meaningful experimental or theoretical impetus to abandon the assumption that the photon is massless.

But, generalizations of Standard Model physics that parameterize deviations from the Standard Model expectation provide a useful tool for devising experimental tests to confirm or contradict the Standard Model, and for quantifying how much deviation from the Standard Model has been experimentally excluded.

Also, any time one can demonstrate that a theoretically rigorous new kind of force with a massive carrier boson can behave in a manner very much like QED, but not exactly like QED, that theory, once its implications are understood, can be considered as a possible explanation for unsolved problems in physics.

For example, many investigators have considered a massive "dark photon", very similar to the models discussed below, as a means by which dark matter fermions could be self-interacting, because self-interacting dark matter models seem to be better at reproducing the dark matter phenomena that astronomers observe than dark matter models in which dark matter fermions interact solely via gravity and Fermi contact forces (i.e. the physical effects of the fact that two fermions can't be in the same place at the same time) with other particles and with each other.

The Proca Model and Podolsky Generalized Electrodynamics

A paper published in 2011, and for some reason just posted to arXiv today, evaluates the experimental limitations on this assumption.

The Proca model of Romanian physicist Alexandru Proca (who became a naturalized French citizen later in life when he married a French woman), developed on the eve of World War II, mostly from 1936-1938, considers a modification of QED in which the photon has a tiny, but non-zero, mass. Proca's equations still have utility because they describe the motion of vector mesons and the weak force bosons, both of which are massive spin-1 particles that operate only at short ranges as a result of their mass and short mean lifetimes.
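For reference, the textbook Proca Lagrangian simply adds an explicit photon mass term to the Maxwell Lagrangian (this is the standard form, not taken from the paper under discussion):

```latex
\mathcal{L}_{\text{Proca}} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
  + \frac{1}{2}m_\gamma^2 A_\mu A^\mu,
\qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
```

The mass term is what breaks the gauge symmetry under A_μ → A_μ + ∂_μχ, a point that comes up again below.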

Experimental evidence excludes the possibility that photons have Proca mass (according to the 2011 paper linked above and cited below) down to masses of about 10⁻³⁹ grams (which is roughly equivalent to 10⁻⁶ eV/c²). This is on the order of 10,000 times lighter than the average of the three neutrino masses (an average which varies by about a factor of ten between the normal, inverted and degenerate mass hierarchies). The exclusion (assuming that no mass is discovered) would be about 100 times more stringent if an experiment proposed in 2007 is carried out. This excluded Proca mass is roughly equal to the energy of a photon with a 3 GHz frequency (the frequency of UHF electromagnetic waves used to broadcast television transmissions); visible light, with a frequency of roughly 300 THz, is about 100,000 times more energetic.
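As a rough sanity check on the unit conversion in that limit (standard constants only, nothing model-specific):

```python
# Convert the ~1e-39 gram Proca mass limit into eV/c^2.
m_grams = 1e-39
c = 2.99792458e8           # speed of light, m/s
eV = 1.602176634e-19       # joules per electron-volt

m_kg = m_grams * 1e-3      # grams -> kilograms
E_eV = m_kg * c**2 / eV    # rest mass energy in eV
print(f"{E_eV:.2e} eV")    # ~5.6e-07 eV, i.e. of order 1e-6 eV/c^2
```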

The Particle Data Group's best estimate of the maximum mass of the photon is much smaller than the limit cited in the 2011 article, with a mass of less than 10⁻¹⁸ eV/c² from a 2007 paper (twelve orders of magnitude more strict). A 2006 study cited, but not relied upon, by the PDG claimed a limit ten times as strong.  A footnote at the PDG entry based on some other 2007 papers notes that a much stronger limit can be imposed if a photon acquires mass at all scales by a means other than the Higgs mechanism (with formatting conventions for small numbers adjusted to be consistent with this post):
When trying to measure m one must distinguish between measurements performed on large and small scales. If the photon acquires mass by the Higgs mechanism, the large-scale behavior of the photon might be effectively Maxwellian. If, on the other hand, one postulates the Proca regime for all scales, the very existence of the galactic field implies m < 10⁻²⁶ eV/c², as correctly calculated by YAMAGUCHI 1959 and CHIBISOV 1976.
Ordinarily a massive photon would break the gauge symmetry of QED, which would be inconsistent with all sorts of experimentally confirmed theoretical predictions that rely upon the fact that the gauge symmetry of QED is unbroken.

But, there is a loophole in the assumption that a massive photon necessarily breaks the gauge symmetry of QED.  Specifically, the Podolsky Generalized Electrodynamics model, proposed in the 1940s by Podolsky, incorporates a massive photon in a manner that does not break gauge symmetry. In this model, the photon has both a massless and a massive mode, with the former interpreted as the photon and the latter tentatively associated with the neutrino by Podolsky when the model was formulated (an interpretation that has since been abandoned for a variety of reasons). In the Podolsky Generalized Electrodynamics model, Coulomb's inverse square law describing the electric force of a point charge is slightly modified. Podolsky Generalized Electrodynamics is equivalent to QED in the limit as Podolsky's constant "a" approaches zero.

Podolsky Generalized Electrodynamics is also notable because it can be derived as an alternative solution from the same set of very basic assumptions that can be used to derive Maxwell's Equations from first principles, and it does so in a manner that prevents the infinities that point sources produce in QED (which Feynman and others solved for practical purposes with the technique of renormalization) from arising.
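In the form usually written in the literature (again, a standard presentation rather than the notation of the paper under discussion), Podolsky's theory adds a higher-derivative term to the Maxwell Lagrangian, and the point-charge potential becomes finite at the origin:

```latex
\mathcal{L}_{\text{Podolsky}} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
  + \frac{a^2}{2}\,\partial_\mu F^{\mu\nu}\,\partial^\lambda F_{\lambda\nu},
\qquad
\phi(r) = \frac{q}{4\pi}\,\frac{1 - e^{-r/a}}{r}
  \;\xrightarrow{\;r \to 0\;}\; \frac{q}{4\pi a}
```

The exponential correction to Coulomb's law is the "slight modification" mentioned above, and the finite value at r = 0 is how the point-source infinity is avoided.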

In the Podolsky Generalized Electrodynamics model, there is a constant "a" with units of length, associated with the massive mode of the photon, whose value is experimentally required to be smaller than the sensitivity of any current experiment.  Specifically, the measured "value of the ground state energy of the Hydrogen atom" requires "a" "to be smaller than 5.6 fm, or in energy scales larger than 35.51 MeV."

In practice (for reasons that are not obvious without reading the full paper), this means that deviations from QED due to a non-zero value of "a" could be observed only at high energy particle accelerators.

It isn't inconceivable, however, that Podolsky's constant has a value on the order of the Planck length (i.e. 1.6 × 10⁻³⁵ meters), which would be manifest only at energies approaching the Planck energy, far beyond the capacity of any man made experiment to create.  Such a value could be correct without violating any current experimental constraints that have been rigorously analyzed to date.

Selected References

* B. Podolsky, 62 Phys. Rev. 68 (1942).
* B. Podolsky, C. Kikuchi, 65 Phys. Rev. 228 (1944).
* B. Podolsky, P. Schwed, 20 Rev. Mod. Phys. 40 (1948).
* R. R. Cuzinatto, C. A. M. de Melo, L. G. Medeiros, P. J. Pompeia, "How can one probe Podolsky Electrodynamics?", 26 International Journal of Modern Physics A 3641-3651 (2011) (arXiv preprint linked to in post).

Monday, February 23, 2015

Occam's Razor v. Clinton's Rule In Physics

[T]he A particle is one of the five physical states arising in the Higgs boson sector if you admit to add to the Standard Model Lagrangian density two doublets of complex scalar fields instead than just one.

Why should we be such perverts ? I.e., if the theory works fine with fewer parameters, why adding more? Well: the answer is the same as that given by Clinton when he was asked why he seduced an intern... Because we could.
From Quantum Diaries Survivor (emphasis in the original) discussing the paper previously blogged here in this post (note that the physicist author's first language is Italian, not English).

In other physics news, a new paper sketches out the main current issues of active investigation in QCD. 

And, a new analysis determines that the latest potential signs of SUSY at the LHC aren't.  A new CMS study of a quite generalized class of high energy proton-proton collisions with missing transverse energy, an opposite sign lepton pair, and jets, likewise fails to detect hints of SUSY and sets new SUSY limits (e.g., excluding gluino masses of less than about 900 GeV for neutralinos of under 200 GeV, excluding gluino masses up to 1100 GeV for neutralino masses of up to about 800 GeV, and excluding bottom squark masses of 250 GeV-350 GeV and 450 GeV to 650 GeV, subject to various assumptions).

Sunday, February 22, 2015

True Statements About Fundamental Physics


Mouse over:

"Of these four forces, there's one we don't fully understand."
"Is it the weak force or the strong --"
"It's gravity."

------------------

Seriously, a hundred years after Einstein published his paper on General Relativity, this is still the most problematic of the fundamental forces.  The strong force (QCD) is still hard to calculate, but we think we understand it at a fundamental level.

Tuesday, February 17, 2015

Grain As Prehistory

[This is another draft post from the year 2010 that is resurrected without major further research.  It too was never posted because of its original, overambitious scope that has been abandoned in this post.]
Although corn was domesticated only 8,000 to 10,000 years ago from the grass teosinte, the genetic diversity between any two strains of corn exceeds that found between humans and chimpanzees, species separated by millions of years of evolution. For instance, the strain B73, the agriculturally important and commonly studied variety decoded by the maize genome project, contains 2.3 billion bases, the chemical units that make up DNA. But the genome of a strain of popcorn decoded by researchers in Mexico is 22 percent smaller than B73’s genome.

“You could fit a whole rice genome in the difference between those two strains of corn,” says Virginia Walbot, a molecular biologist at Stanford University.
From here.

Mostly, this is a sideshow. Different kinds of wheat (which comes in diploid to hexaploid varieties, with the hexaploid monster genome found in the most commonly consumed varieties), for example, also show great genetic diversity as measured by numbers of bases, but some of this is a product of how you choose to measure genetic diversity, because the variations aren't simply random mutations.

In fact, other recent research on grain genetics shows that just a handful of key, independent, simple mutations account for the traits that make the practical differences between the wild plants that were the marginally nourishing ancestors of the world's staple grains (e.g., rice, maize a.k.a. corn, and wheat) and the contemporary domesticated varieties that feed the world.

Far more interesting to me is the extent to which a genetic analysis of foods and domesticated animals makes it possible to localize the epicenters of the Neolithic revolution and date the times when the domestication occurred. Modern humans (i.e. post-Neanderthals) transitioned from many tens of thousands of years as a hunter-gatherer society to a farming society at something like a dozen independent locations around the world at roughly the same time give or take a few thousand years. Each location domesticated different plants and animals.

Seeds from domesticated plants can often be recognized on sight, survive thousands of years better than dead animal matter, and can be carbon dated.

Comparisons of the genomes of domesticated plants and animals to wild species usually make it possible to identify a common wild ancestor quite specifically, often an ancestor with a quite narrow geographic range. For example, genetic evidence provides strong support for the notion that all maize has a single common domesticated ancestor from about 9,000 years ago in a fairly specific highland part of Southern Mexico.

Similarly, the story of wheat is also a high definition bit of genetic history.
Einkorn wheat was one of the earliest cultivated forms of wheat, alongside emmer wheat (T. dicoccon). Grains of wild einkorn have been found in Epi-Paleolithic sites of the Fertile Crescent. It was first domesticated approximately 9000 BP (9000 BP ≈ 7050 BCE), in the Pre-Pottery Neolithic A or B periods. Evidence from DNA finger-printing suggests einkorn was domesticated near Karacadag in southeast Turkey, an area in which a number of PPNB farming villages have been found.
The origins of maize and wheat are known in time to a period of plus or minus a few hundred years, and in place to regions the size of a few contiguous counties. We can trace the origins of the greater Mexican squash-maize-bean triad of staples to separate but adjacent areas, resolve the order in which they were domesticated (squash came first), and determine how quickly this pattern of domestication spread from carbon dated archeological evidence.

The Hittites

[The following post is extracted from a post written on April 14, 2010, but got lost in the shuffle in my drafts folder when its overambitious original scope got out of hand.  I am posting it now, retroactively, without modification to reflect new information, so that the sources and facts noted at the time are available for reference purposes.  Further edited for style and grammar on February 18, 2015.]

The Pre-Hittites

The capital of the Hittite empire, where the first documents written in an Indo-European language are found, was Hattusa. Archaeological evidence shows that it was founded sometime between 5000 BCE and 6000 BCE, and Hittite history records the fact that it, and the city of Nerik to the North of it, were founded by the non-Indo-European language speaking Hattic people who preceded them.

Both Hattusa and Nerik had been founded by speakers of the non-Indo-European, non-Semitic language called Hattic. Hattic shows similarities to both Northwest Caucasian (e.g., Abkhaz) and South Caucasian (Kartvelian) languages, and was spoken sometime around the 3rd to 2nd millennia BCE. The pre-Indo-European language spoken in Eastern Anatolia and the Zagros mountains, which also shows similarities to the languages of the Caucasus, was Hurrian.

"Sacred and magical texts from Hattusa were often written in Hattic [and] Hurrian . . .even after Hittite became the norm for other writings." This is similar to the survival of Sumerian for religious purposes until around the 1st century BCE, despite the fact that it was replaced by the Semitic language Akkadian in general use roughly 1800 years earlier.

The Early Hittites

The first historical record of an Indo-European language is of Hittite in eastern Anatolia. An Indo-European Hittite language speaking dynasty dates back to at least 1740 BCE in a central Anatolian city (see generally here).

The Hittites called their own language the "language of Nesa," which is the name in Hittite of the ancient city of Kanesh, about 14 miles Northwest of the modern city of Kayseri in central Anatolia. This was an ancient Anatolian city, of pre-literate non-Indo-European language speaking farmers to which an Akkadian language speaking trading colony attached itself as a suburb for about two hundred years until around 1740 BCE.

Around 1740 BCE, the Assyrian culture ends and a Hittite culture appears in the archeological record, when the city was taken by Pithana, the first known Hittite king.

The Hittites who took Nesa came from Kussara, which is believed to have been located between the ancient cities of Nesa and Aleppo (Aleppo was first occupied as a city around 5000 BCE and continues to be a major city in Northern Syria, also known as Halab). Their first king was described as the king of this one city-state.

His son, Anitta, sacked Hattusa around 1700 BCE and left the earliest known written Hittite inscription. Kanesh is closer to the source of the Red River than Hattusa, which would later become the Hittite capital, and which was ruled at the time of the sack by the Hattic king Piyusti, whom Anitta defeated. The text of Anitta's inscription translates to:

Anitta, Son of Pithana, King of Kussara, speak! He was dear to the Stormgod of Heaven, and when he was dear to the Stormgod of Heaven, the king of Nesa [verb broken off] to the king of Kussara. The king of Kussara, Pithana, came down out of the city in force, and he took the city of Nesa in the night by force. He took the King of Nesa captive, but he did not do any evil to the inhabitants of Nesa; instead, he made them mothers and fathers. After my father, Pithana, I suppressed a revolt in the same year. Whatever lands rose up in the direction of the sunrise, I defeated each of the aforementioned.

Previously, Uhna, the king of Zalpuwas, had removed our Sius from the city of Nesa to the city of Zalpuwas. But subsequently, I, Anittas, the Great King, brought our Sius back from Zalpuwas to Nesa. But Huzziyas, the king of Zalpuwas, I brought back alive to Nesa. The city of Hattusas [tablet broken] contrived. And I abandoned it. But afterwards, when it suffered famine, my goddess, Halmasuwiz, handed it over to me. And in the night I took it by force; and in its place, I sowed weeds. Whoever becomes king after me and settles Hattusas again, may the Stormgod of Heaven smite him!
The capital of the Hittites moved to Hattusa within a century or two. The Hittites sacked Babylon around 1595 BCE.

The Hittites did not inhabit the Black Sea coastal plain of Anatolia to the Northeast, however. In this region, they were blocked by the Kaskians, who make their first appearance three hundred years into the Hittite written record, around 1450 BCE, when they took the Hittite holy city of Nerik, to the North of the then Hittite capital of Hattusa. Less than a century before the Kaskians sacked the city of Nerik and moved into Anatolia, the Kaskians had conquered the Indo-European Palaic language speaking people of Northwest Anatolia.

The Kaskians probably hailed from the Eastern shores of the Sea of Marmara, which is the small sea between the Black Sea and the Mediterranean.  The Kaskians continued to harry the Hittites for centuries, sacking Hattusa ca. 1280 BCE, although Hattusa was later retaken, as was the city of Nerik, which they had again lost to the Kaskians. The Kaskians, in an alliance with the Mushki people, toppled the Hittite empire around 1200 BCE, were then repulsed by the Assyrians, and appear to have migrated to the West Caucasus after being defeated by the Assyrians.

The Mushki were a people of Eastern Anatolia or the Caucasus, associated with the earliest history of state formation for Caucasian Georgia and Armenia. This suggests that they were likely non-Indo-European speakers, at least originally, although they may have adopted the local Luwian language of their subjects in the Neo-Hittite kingdoms that arose in East Anatolia after the fall of the Hittite empire.

From about 1800 BCE to 1600 BCE, the city of Aleppo was the seat of the Kingdom of Yamhad, ruled by an Amorite dynasty.  The Amorites were a linguistically North Semitic people.  There had been Amorite dynasties in the same general region for two hundred years before then (i.e. since at least around 2000 BCE). Yamhad fell to the Hittites sometime in the following century (i.e. sometime between 1600 BCE and 1500 BCE) and was subsequently close to the boundary between the Egyptians to the Southwest, the Mesopotamian empires to the Southeast, and the Hittites to the North, for hundreds of years. There was a non-Indo-European Hurrian minority in Yamhad that exerted a cultural influence on the Kingdom and on its ruling Amorites, a Semitic people whose language was probably ancestral to all of the Semitic languages (including Aramaic, Hebrew and Arabic) except Akkadian and the language of Ebla, which is midway between Akkadian and the North Semitic language of the Amorites.

The Amorites, a.k.a. the MAR.TU, were described in Sumerian records from sometime around the 21st century BCE as follows (citing E. Chiera, Sumerian Epics and Myths, Chicago, 1934, Nos. 58 and 112; E. Chiera, Sumerian Texts of Varied Contents, Chicago, 1934, No. 3):
The MAR.TU who know no grain.... The MAR.TU who know no house nor town, the boors of the mountains.... The MAR.TU who digs up truffles... who does not bend his knees (to cultivate the land), who eats raw meat, who has no house during his lifetime, who is not buried after death...

They have prepared wheat and gú-nunuz (grain) as a confection, but an Amorite will eat it without even recognizing what it contains!
In other words, the early pre-dynastic Amorites were probably nomadic herders.

The ancient Semitic city of Ebla was 34 miles southwest of Aleppo and was destroyed between 2334 BCE and 2154 BCE by an Akkadian king. It had a written language between Akkadian and North Semitic, written from around 2500 BCE to 2240 BCE, and was a merchant run town trading in wood and textiles whose residents also had a couple hundred thousand herd animals. The early Amorites were known to the people of Ebla as "a rural group living in the narrow basin of the middle and upper Euphrates" (original source: Giorgio Bucellati, "Ebla and the Amorites", Eblaitica 3 (New York University) 1992:83-104), although Ebla would later become a subject kingdom of Yamhad.  The Akkadian kings campaigned against the Amorites following the fall of Ebla and recognized them as the main people to the West of their empire, whose other neighbors were the Subartu (probably Hurrians) to the Northeast, Sumer (in South Mesopotamia) and Elam (in the Eastern mountains).

The oldest Akkadian writing is found around 2600 BCE.

The Hittites, Mittani and Egyptians At 1400 BCE

By around 1400 BCE, the Hittites ruled an area corresponding to the Red River (a.k.a. Kizilirmak basin) (map here citing Cambridge Ancient History Vol II Middle East & Aegean Region 1800-1300. I. E. S. Edwards (Ed) et al. as its source).

Adjacent to the Hittites to the Southeast, the Mittani empire, with commoners who spoke a non-Indo-European language called Hurrian and a ruling class that spoke an Indo-European language very close to Sanskrit, ruled the upper Euphrates and Tigris river valley about as far South as modern day Hadithah and Tikrit in Iraq. The Mittani also had a small piece of the Levant, extending Southwest to roughly the modern boundary between Turkey and Syria.

To the South of the Mittani in the East, the Kassite empire ruled the lower Euphrates and Tigris river valleys. The Kassites were a non-Indo-European Hurrian speaking people from the neighboring Zagros mountains to the East of the Mittani. The Kassites wrested power from the Akkadian empire.  The Akkadian empire, and its Semitic Akkadian language, in turn, had replaced the non-Indo-European, non-Semitic Sumerian language used before the Akkadian empire emerged.

To the Southwest of the Mittani were the Egyptians.  The Egyptians ruled from the Mittani boundary in the Levant to the greater Nile Delta in Northeast Africa.

The Hittites At Their Peak

Half a century later, at its greatest extent, under Kings Suppiluliuma I (c. 1350–1322 BCE) and Mursili II (c. 1321–1295 BCE), the Hittite Empire included all of Anatolia (including Troy) except the immediate vicinity of modern Istanbul, the Levant from modern Turkey to a little bit North of Beirut on the coast and as far South as what is today Damascus further inland, and the ancient city of Mari, which is situated very close to where the Euphrates river crosses the modern Syrian border. (Map here).

The Hittites had absorbed all of the Mittani empire except some of its lands in the upper Tigris, and extended to the South the Mittani border with Egypt.

The End of the Hittite Empire and the Anatolian languages

A civil war, followed by a series of regional events roughly contemporaneous with the Trojan War of the Greek epics and various historical accounts lumped together as part of the "Bronze Age collapse", destroyed the Hittite empire around 1200 BCE. The Hittite language was replaced by successor languages after the Hittite empire fell.

One of the Anatolian languages that followed this peak was the Luwian language, which may have been the language of the Trojans. Luwian may actually have been a sister language of Hittite and equally old, as attested by its early use as a liturgical language along with pre-Indo-European languages of the area. Luwian may also have been an evolutionary linguistic predecessor of Hittite proper.

Luwian, in turn, evolved into Lycian. See Bryce, Trevor R., "The Lycians - Volume I: The Lycians in Literary and Epigraphic Sources" (1986). Lycian and other successor Anatolian languages to Hittite were spoken in Anatolia through the first century BCE.  Alexander the Great had conquered an area including Anatolia in the fourth century BCE and made Greek, which is a neighboring Indo-European language, the language of his kingdom; Greek ultimately displaced the remaining Anatolian languages by about the first century BCE.

Recap of Anatolian History 

All of the Indo-European Anatolian languages (with the possible exception of Luwian) spoken from around 1740 BCE until their ultimate replacement by Greek around 100 BCE in the aftermath of Alexander the Great's conquests, trace their roots to the city-state that the Hittites established by conquering the pre-existing city of Kanesh in central Anatolia.

There is no evidence for the presence of any Indo-European languages in Anatolia prior to about 2000-1800 BCE, and the available historical record seems to indicate that early Anatolian populations of Indo-Europeans were mere pockets of people at the time who may very likely have been recent arrivals.

Armenian is not an Anatolian language and is most closely related linguistically to Greek, but with many non-Indo-European and Indo-Iranian areal influences.  Armenian may have arrived in its current location shortly after the fall of the Hittite empire in a folk migration from Western Anatolia, the Aegean, or the Balkans.  It is sometimes associated with the Phrygians.

Post-Script: The Tocharians

[This fragment not about the Hittites is also salvaged from this old post.]

The oldest mummies in the Tarim basin of what is now Uyghur China (i.e. in the Xinjiang Uyghur Autonomous Region), in the far Northwest of modern China, also date to 1800 BCE.

Pliny the Elder, in Rome, recounts a first century CE report from an ambassador to China from Ceylon, who later served as an ambassador to the Roman empire, that corroborates the existence of people with this appearance and a language unlike those known locally.

Monday, February 16, 2015

Quark Masses Matter

Most of the mass in hadrons formed only by up, down and strange quarks comes from the binding energy of their gluon fields, and not from the rest mass of the quarks themselves.

But, that doesn't mean that the quark masses don't matter.  Lattice QCD studies of a model in which pions have a mass of 300 MeV rather than 135-139 MeV, which implies heavier quark masses than those present in reality, lead to QCD behavior very different from what is observed in real life.

For example, if up and down quarks were heavier than they are, particles made of two neutrons and no protons would be stable, something that isn't the case in the real world.

Thus, binding energy in hadrons depends upon quark masses in a quite sensitive and non-linear way.

ATLAS Excludes CP-Odd Higgs At 220-1000 GeV

In models with two (or more) Higgs doublets, there are five or more Higgs bosons, rather than the one in the Standard Model.  The four extra Higgs bosons are customarily called H+, H-, A and h (or H), with H and h being CP-even Higgs bosons, one heavier and one lighter, and A being a CP-odd Higgs boson.

The A is excluded with 95% confidence at masses from 220 GeV to 1000 GeV by existing LHC measurements by the ATLAS experiment.  Two Higgs doublets are generically present in all SUSY models and in many other non-SUSY models beyond the Standard Model as well.

The 125 GeV Higgs boson is CP-Even as expected.

Meanwhile, the CMS experiment has produced more SUSY exclusions, because there is still not any sign of supersymmetry at the LHC.

Previous discussions of the theoretical and experimental barriers to two Higgs doublet models also disfavor the model, but a two Higgs doublet model has been proposed to explain multiple modest anomalies in LHC data.

Wednesday, February 11, 2015

Reich Paper Offers Wealth Of European Ancient DNA

A pre-print of a new paper by Reich, et al., offers a wealth of new ancient DNA information for Europe, including Y-DNA, mtDNA and autosomal DNA from the Mesolithic era through the late Bronze Age.

Most eagerly awaited is the new Y-DNA data (internal citations omitted):
We determined that 34 of the 69 newly analyzed individuals were male and used 2,258 Y chromosome SNPs targets included in the capture to obtain high resolution Y chromosome haplogroup calls. Outside Russia, and before the Late Neolithic period, only a single R1b individual was found (early Neolithic Spain) in the combined literature (n=70). By contrast, haplogroups R1a and R1b were found in 60% of Late Neolithic/Bronze Age Europeans outside Russia (n=10), and in 100% of the samples from European Russia from all periods (7,500-2,700 BCE; n=9). R1a and R1b are the most common haplogroups in many European populations today and our results suggest that they spread into Europe from the East after 3,000 BCE. Two hunter-gatherers from Russia included in our study belonged to R1a (Karelia) and R1b (Samara), the earliest documented ancient samples of either haplogroup discovered to date. These two hunter-gatherers did not belong to the derived lineages M417 within R1a and M269 within R1b that are predominant in Europeans today, but all 7 Yamnaya males did belong to the M269 subclade of haplogroup R1b.
The big surprise is that all of the Yamnaya Y-DNA was R1b-M269, which is now typical of Western Europe and the Northern European coast, in addition to the R1b of Samara which was a Mesolithic (i.e. hunter-gatherer) culture that preceded the Yamnaya culture in essentially the same geographic location. R1b-M269 is predominant in the Basque people, despite the widely held belief that their ancestors were not linguistically Indo-European, and the only Bell Beaker individual for whom Y-DNA data has been obtained (from Germany) is also R1b-M269.

Conventional wisdom had expected that the Yamnaya people were R1a-M417 bearing men who gave rise to the Corded Ware culture that produced the Y-DNA R1a predominance seen in Central and Eastern Europe today.  The genetic evidence tends to favor a NE European rather than SE European proximate source of R1a in Central Europe.

Karelia, where the Mesolithic R1a sample was found, is in modern day Russia, just to the east of Finland.

The autosomal DNA data also provides new insights but is not so easily summarized.  Some notable observations:

* Eight of the nine Bell Beaker individuals for whom ancient autosomal DNA is available (all from one of two sites in Germany) are women.  Bell Beaker individuals have a considerably smaller Ancient North Eurasian component than contemporaneous Corded Ware culture individuals.

* There are eight ancient autosomal DNA samples from the Unetice culture from sites in Germany. They have very similar autosomal DNA profiles to the Bell Beaker individuals.  But despite this similarity, all three of the men in the Unetice sample have Y-DNA I2.

* Y-DNA I2 is also found in five Swedish Mesolithic men, one early Neolithic man from Spain, two Middle Neolithic men from Spain, one Middle Neolithic man from Germany, one late Neolithic man from Germany, and two early Bronze Age men from Germany.

* The branching trees created using autosomal DNA similarities do not match the ones that would be inferred from Y-DNA data.

* The population discontinuity between the first farmers of Europe (LBK, etc.) and the linguistically Indo-European R1a/R1b populations that followed disfavors the Anatolian hypothesis of Indo-European language origins.

Monday, February 9, 2015

Standard Model Predictions For Tau Decays Still Match Experiments

In conclusion, we have made a measurement of the branching fractions of the radiative leptonic τ decays τ → eγνν̄ and τ → μγνν̄, for a minimum photon energy of 10 MeV in the τ rest frame, using the full dataset of e⁺e⁻ collisions collected by BABAR at the center-of-mass energy of the Υ(4S) resonance. We find B(τ → μγνν̄) = (3.69 ± 0.03 ± 0.10) × 10⁻³, and B(τ → eγνν̄) = (1.847 ± 0.015 ± 0.052) × 10⁻², where the first error is statistical and the second is systematic. These results are more precise by a factor of three compared to previous experimental measurements. Our results are in agreement with the Standard Model values at tree level, B(τ → μγνν̄) = 3.67 × 10⁻³, and B(τ → eγνν̄) = 1.84 × 10⁻² [3], and with current experimental bounds.
From here.

The pertinent language in the cited source for the Standard Model prediction, published October 23, 2013, states:

For radiative τ⁻ decays, with the same threshold E_γ^min = 10 MeV, we obtain 1.84 × 10⁻² (l = e) and 3.67 × 10⁻³ (l = μ), to be compared with the values measured by the CLEO Collaboration, (1.75 ± 0.06 ± 0.17) × 10⁻² and (3.61 ± 0.16 ± 0.35) × 10⁻³, respectively, where the first error is statistical and the second one is systematic [41].

The experimental results from CLEO cited at [41] in the October 23, 2013 paper were published in the year 2000. The BABAR result was obviously much more accurate, and much closer to the theoretical prediction as well.  Indeed, the BABAR result is consistent with a true systematic error of zero, rather than the conservative estimate given, with all of the error seen actually being simply a function of statistical sample size.  I noted similar instances of extremely accurate confirmations of theoretical predictions in another post a year ago.
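For what it's worth, the agreement can be quantified by comparing each BABAR measurement to the tree-level prediction in units of the combined (statistical plus systematic, added in quadrature) uncertainty:

```python
from math import hypot

# (measured value, statistical error, systematic error, SM tree-level prediction)
channels = {
    "tau -> mu gamma nu nubar (units of 1e-3)": (3.69, 0.03, 0.10, 3.67),
    "tau -> e gamma nu nubar  (units of 1e-2)": (1.847, 0.015, 0.052, 1.84),
}

for name, (meas, stat, syst, sm) in channels.items():
    sigma = hypot(stat, syst)      # combined uncertainty in quadrature
    pull = (meas - sm) / sigma     # deviation in standard deviations
    print(f"{name}: pull = {pull:+.2f} sigma")
# Both pulls come out below 0.2 sigma, i.e. essentially perfect agreement.
```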

Thursday, February 5, 2015

Blog Roll Shuffle

I have recently added several new blogs to the blog roll which I encourage you to try out:

Bernard's Blog
bioRxiv
4 gravitons
Sean Carroll's Blog
West Hunter

Going out, due to inactivity for a prolonged period of time are:

Minoan Language Blog (last post September 30, 2012),
A Very Remote Period Indeed (last post February 27, 2013) (reluctantly since this has a local author), and
The World According To Kevin (last post August 15, 2012).

I am being quite lenient in my culling.  Several other blogs on the blog roll are quite inactive, but none of the others appear to be truly abandoned.

Science On A Green Venus

Introducing Green Venus

Imagine a world that I will call "Green Venus" (I was tempted to call it "Forks", after the very cloudy town in Washington State, but decided against that).

Like our planet Venus, it has a nearly circular orbit around its star with an orbital period in the same range as those of Earth and Venus, it doesn't have a moon, it has no detectable magnetic field, it is geologically active, it is located in the "habitable zone" of its star, it is similar in gravity and size to Earth, it is a terrestrial planet with a continent sized solid surface, and it has clouds that trap heat.

While on most days enough light makes it through the clouds to reveal the star's location and to approximate the light of a cloudy day on Earth, it is rarely clear enough to allow someone on the surface to see the stars or planets.  The sunniest mountain top on Green Venus gets a few hours at a time when the clouds do not conceal the sky a couple of times a year, and briefer glimpses of the sky free of cloud cover, lasting just a few minutes, every two or three weeks.  Most places on Green Venus go decades between several minute long glimpses of a sky not obscured by clouds.

Like Earth, and unlike the real planet Venus, it does not have retrograde rotation, and it rotates around its axis roughly once every twenty-four hours.  It is mountainous and geologically active to a similar degree to Earth.  None of the elements that do not naturally occur on Earth, like Technetium and elements with atomic numbers of 93 or greater, are found in nature on Green Venus.  It has surface temperatures that are similar to, but slightly less extreme than, those on Earth.  On the surface of Green Venus it is never more than about 40 degrees centigrade and never less than about 20 below zero centigrade (without considering windchill).  Green Venus has large and deep oceans of salt water that cover most of its surface, it has many fresh water lakes and rivers, and it has an atmosphere essentially identical to Earth's in chemical composition and surface air pressure.  The greenhouse effect created by its clouds is balanced by a combination of its distance from its star and the size of its star.  Green Venus is not as tilted on its axis relative to the plane in which it orbits as Earth is, has a more circular orbit, and has few tidal influences, so it experiences considerably less seasonal variation than Earth.  Like Earth, large swaths of Green Venus are thick with vegetation and animal life, and even the most hostile environments on the planet support life of some kind.

Critically, millions of years of intense struggles to survive in the face of natural conditions and fierce predators have produced a species of highly intelligent life on Green Venus that has become the dominant species on the planet with a vast, flourishing and sophisticated civilization.

In short, Green Venus is a lot like humans imagined that it would be until we sent some probes there and discovered just how ugly a place to live it was under its toxic clouds.

Unlike our solar system, there are no gas giant planets that orbit the same star as Green Venus, and it is the only terrestrial planet orbiting its star in an inner orbit.  There are asteroids, comets, and dwarf planets that orbit at a distance comparable to Pluto, but even the largest of them happen to be not very reflective.  The outer region of this solar system is full of interstellar dust and gas, as is a large swath of the interstellar space between this solar system and the rest of its galaxy, around whose center Green Venus's star orbits at a much greater distance than Earth's sun does.  The closest star to Earth is about 4.3 light years away.  The closest star to Green Venus is about 43 light years away and has an absolute magnitude of about 2.6.  All stars are obscured from the perspective of Green Venus by a large region of dust and interstellar gas that is relatively clear only in the inner solar system around Green Venus.  As a result, even from orbit above the clouds of Green Venus, there are far fewer bright stars and planets visible to the naked eye than there are from the surface of Earth on a clear night.

While Green Venus has a very similar atmosphere and set of oceans to Earth's, the minerals on Green Venus that are accessible on the surface of the planet or through mining are different.  Green Venus has been home to life only about as long as Earth had been as of the Carboniferous period, so it has far fewer hydrocarbon deposits (e.g. peat, coal, oil and natural gas) than Earth.  Green Venus also has far fewer heavy elements, such as the platinum group metals, as well as far less gold, silver, mercury, lead, and bismuth.  Natural diamonds are several hundred times more rare on Green Venus than they are on Earth, with gem sized diamonds being disproportionately more rare, relative to deposits on Earth, than tiny ones.

Green Venus has no deposits of heavy radioactive elements like Polonium, Astatine, Francium, Radon, Thorium, Protactinium, or Uranium in its surface crust where they can be accessed by mining.  No element with an atomic number of 84 or greater is found in nature in any accessible part of Green Venus, and elements with atomic numbers higher than 74 (Tungsten) are profoundly more rare on Green Venus than they are on Earth.

I pose this as a hypothetical, but such planets probably exist.  Stars in voids between filaments of dark matter have very low heavy metal content and very sparse stellar density.

What Kind of Science Would Green Venusians Develop?

The highly intelligent Venusians, forged in a fiercely competitive environment, would clearly have reached a quite sophisticated level of scientific advancement.  In all areas of inquiry available to them, from mathematics, to the social sciences, to organic chemistry, and so on, they would excel.  But, this environment would still produce a very different kind of scientific advancement than our actual history on Earth has produced.

Obviously, Green Venus would have much less sophisticated astronomy.  It would not have general relativity.  Newtonian gravity, modified to allow it to bend light, would be a cutting edge and controversial development comparable to quantum mechanics and general relativity for us, while most scientists would describe the phenomena of gravity with a formula relating gravitational pull to an object's weight and its altitude relative to sea level, and would cling to a Venus centric view of space rooted in ancient myth.
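Such a phenomenological gravity formula might look something like the familiar Newtonian altitude dependence (a hypothetical illustration of the kind of rule Venusian scientists might settle upon, not anything from a source):

```latex
g(h) = g_0 \left(\frac{R}{R + h}\right)^{2}
```

where g₀ is the measured surface gravity, R is the planet's radius, and h is the altitude above sea level.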

Nuclear fission power plants would be impossible without sufficient deposits of radioactive fuel, and the study of weak force decays would be much less advanced and much less a part of the scientific consciousness.  The lack of access to naturally radioactive isotopes would also probably impair the development of nuclear physics generally, although QED would still probably be known and used daily by scientists and engineers.  Beta decay wouldn't be entirely unknown to scientists, but particle physics would be many decades behind.

The Venusians would probably have a proton, neutron, electron model of the atom and something closely approximating the periodic table, although with far fewer elements.

Why Did The First Farmers Farm?

Numbers Make Smart People Stupid

An open access paper in PNAS by Samuel Bowles (of the interdisciplinary Santa Fe Institute) and Jung-Kyoo Choi (a Korean academic economist), "Coevolution of farming and private property during the early Holocene," is a classic example of smart people doing lousy analysis with quantitative methods.  The focus on private property institutions in the context of the development of farming technology seems driven more by the investigators' political agenda than by the data.

If you read academic journal articles long enough, you will notice the pattern.  Someone outside a field uses a simple mathematical model to solve a problem that has confounded experts in the field for decades or centuries, and publishes it in a general interest journal, providing a clear, precise answer to the problem.  This creates a huge controversy as experts in the field attempt to explain the huge conceptual or methodological errors made by the outsiders, usually with well-founded objections.  The wider public, however, long remembers the flawed study, but not the apt criticisms made in response to it.

One of the authors at the Language Log blog rants about some similar cases involving historical linguistics at Science magazine, where a study with weak methodology used mathematical "entropy analysis" to argue that the Indus River Valley civilization's symbols were part of a written language.  Science wasn't interested in publishing an attempted replication using better data and methods that contradicted this famous result, concluding that the Harappan symbols, like the symbols used by the Vinca civilization of the Balkans and the symbols used by the Picts, were a mere proto-linguistic symbol system and not a full-fledged written language.

Another PNAS computational linguistics paper by repeat offender and New Zealand academic Quentin D. Atkinson and others was also recently ripped apart by the linguistics experts at Language Log in a takedown that I discussed in a recent post.  Atkinson released another notoriously flawed computational linguistics paper on serial founder effects and phonemic diversity in 2011, which I blogged at the time.  He co-authored a flawed computational linguistics paper on the age of the Indo-European language family in Nature in 2003, reproduced by others with improved but still flawed methods in 2011 (blogged here).

And, there was also the flawed foray into computational linguistics analyzing the case for an Altaic language family, also sponsored by the Santa Fe Institute, with retired eminent physicist Murray Gell-Mann as a co-author.  This study was actually better than some of Atkinson's efforts, but it was bad enough that Gell-Mann was ridiculed for it, and it included some glaring overstatements about what had been shown.  For example, the study deceptively and prominently discussed the Korean language as part of the Altaic language family in its analysis, despite the fact that the study didn't include any Korean words in the data sets used for the paper's computational analysis.

The Bowles and Choi Paper On The Neolithic Revolution

Bowles and Choi are trying to answer the question of why people started farming when they did (an event called the Neolithic Revolution), despite the fact that the first farmers were generally less well fed than contemporaneous foragers.  They pose the question, somewhat misleadingly, as follows (emphasis added):
[A]s a number of archaeologists have pointed out (11–13), farming was probably not economically advantageous in many places where it was first introduced. Indeed, recent estimates suggest that the productivity of the first farmers (calories per hour of labor including processing and storage) was probably less than that of the foragers they eventually replaced, perhaps by a considerable amount (14) (SI Appendix). In many parts of the world, stature and health status appear to have declined with cultivation (15). Farming did raise the productivity of land and animals, and this, we will see, was critical to its success. However, why an erstwhile hunter–gatherer would adopt a new technology that increased the labor necessary to obtain a livelihood remains a puzzle.
They are correct that calories per hour of labor, including processing and storage, were probably lower for early farmers than for foragers, and that stature and health status declined among early adopters of farming.  My quarrel is with their use of the term "economically advantageous," when what they really mean is "nutritionally advantageous."  Their failure to distinguish these two concepts is a critical flaw in the study.

There is no good reason at all to believe that early farmers were acting in anything but their best economic interests taken as a whole, even though this may have involved nutritional deficits.  The question, instead, is what benefits early adopters of farming received that made up for the nutritional deficits they experienced.

When People Seem Irrational, You Are Ignorant Of One Or More Key Facts 

One of the most powerful lessons of economics is that people, even ill-educated people whom their betters don't give much credit for acting rationally, do indeed very consistently, at least on average, respond to incentives and take actions that enhance their well-being.  Almost always, when people do something that looks irrational, it is because the observer isn't aware of the factors driving them to take the action in question.

For example, renting rather than owning a home when interest rates are low and a small down payment can be secured, making mortgage payments cheaper than rent payments, may seem like an irrational choice for a working class individual.  But when one looks closer, it frequently isn't.  If that individual has bad credit, the interest rates he would have to pay may be much higher than those paid by his middle class peers.  And, if he is likely to be unemployed for a few months at a time over the course of the next decade and has little savings, owning a home exposes him to the loss of his down payment in a foreclosure when he can no longer make his monthly payment, while his downside loss from a few months of unemployment may be smaller if he rents (particularly if he keeps his limited assets in a liquid savings account, rather than a down payment, to buffer him at those times).  A choice that looks irrational under a simplistic analysis may make more sense when more factors are considered.
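
A toy expected-cost comparison, all numbers hypothetical and the model deliberately crude, shows how renting can come out ahead once foreclosure risk is priced in:

```python
def expected_annual_cost(monthly_payment, expected_downside_loss):
    """Annual housing cost plus the expected loss from an unemployment
    spell.  Toy model with hypothetical numbers throughout."""
    return 12 * monthly_payment + expected_downside_loss

p_spell = 0.25          # chance of a three-month jobless spell in a year
down_payment = 20_000   # lost in a foreclosure if payments stop
mortgage, rent = 1_100, 1_300

own_cost = expected_annual_cost(mortgage, p_spell * down_payment)
rent_cost = expected_annual_cost(rent, p_spell * 3 * rent)

print(f"own:  ${own_cost:,.0f} expected per year")   # $18,200
print(f"rent: ${rent_cost:,.0f} expected per year")  # $16,575
# The cheaper monthly mortgage loses once the foreclosure downside counts.
```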

Key Factual Observations About the Neolithic Revolution

This Bowles and Choi paper is better than most at presenting some of the key historical facts from archaeology and paleoclimate studies, even though its model is too simple to capture many of these insights and adds little to what can be concluded more reliably without a mathematical simulation, which here is really just a gimmick for presenting the foregone conclusions built into the model's assumptions.

Paleoclimate

The key paleoclimate data are presented in the chart below.


Ignoring for the moment their simulated data, shown in light gray bar graph form:
Estimated dates of some well-studied cases of the initial emergence of cultivation are on the horizontal axis (8, 54, 55). Climate variability (Left) is an indicator of the 100-y maximum difference in surface temperature measured by levels of δ18O from Greenland ice cores (SI Appendix). A value of 4 on the vertical axis indicates a difference in average temperature over a 100-y period equal to about 5 °C.
As the chart indicates, intermittent periods of wild temperature variation over the span of just a few generations were the norm for the entire Upper Paleolithic era (which began roughly 40,000-50,000 years ago), after which temperatures became much more stable starting at the beginning of the Holocene era about 10,000 years ago, when farming first emerged in the Fertile Crescent, China, and the middle latitudes of the Americas (farming arose independently at later times in the New Guinea Highlands, Sub-Saharan Africa, and the Eastern United States).

Thus, a key part of the answer to the question of why farming emerged when it did is that the climate was too unpredictable for farming to be a viable means of food production during almost the entire Upper Paleolithic era.  Farming didn't emerge before the Holocene because it couldn't in the climate conditions at the time.

It is very plausible to think that property rights were an effect of the fundamental economics of farming and herding, rather than an important cause of this shift, since reduced climate variability provides a much more plausible proximate cause of the Neolithic revolution.  The authors certainly don't suggest any plausible way that the Neolithic revolution could have happened earlier if Upper Paleolithic humans had adopted different property regimes, something that ought to have been possible if property rights were as important to this transition as they suggest.  And, Coase's theorem, a jewel of modern economics, which basically says that economic incentives will generally drive people to find workarounds that secure economically efficient arrangements in the face of bad legal rules, tends to favor that direction of causation.

Prior to the Upper Paleolithic era, modern human presence was largely restricted to Africa and mainland Asia (probably excluding North Asia).  Modern humans prior to that era were absent from the Americas, Oceania, Australia, the islands east of the Wallace line, Japan, Taiwan, Tibet, the Andaman Islands, and Crete. 

There is no solid positive evidence for any modern human presence in Asia beyond South Asia until the Toba eruption ca. 75,000 years ago, although there is evidence of a modern human presence in Southwest Asia more than 100,000 years ago, some of which was part of the same archaeological culture as a contemporaneous group of modern humans in Upper Egypt and the Sudan.  Before then, modern humans were confined to Africa where they evolved in the first place as foragers.

The Archaeological Record in the Fertile Crescent Neolithic

Bowles and Choi do nicely summarize some key elements of the archaeological record describing the Neolithic transition:
Kuijt and Finlayson very plausibly write that a “transition from extramural to intramural storage system may reflect evolving systems of ownership and property … with later food storage systems becoming part of household or individual based systems” (2). . .
Southwestern Asia provides the best-documented cases providing evidence of the gradual adoption of food production along with evidence suggesting the emergence of private property in stores in the Levant between 14,500 and 8,700 B.P. (SI Appendix). At the beginning of this period, Natufians hunted and collected wild species and possibly practiced limited wild-species cultivation along with limited storage (41, 42). Somewhat before 11,000 B.P. there is direct evidence of storage of limited amounts of wild plants outside of dwellings, consistent with the hypothesis that access to stored goods was not limited to the members of a residential unit (2, 43). A millennium later, goats and sheep had been domesticated (constituting a substantial investment), and we find large-scale dedicated storage located inside dwellings, suggesting more restricted access (4, 44).
Over none of this period could one describe these communities as either simply foragers or farmers. Their livelihoods were mixed; in many cases, their residential patterns varied over time between sedentary and mobile, and it seems their property rights, too, varied among the types of objects concerned, with elements of both private and common property in evidence. Bogaard (4) and her coauthors found that at Catalhoyuk in central Anatolia (10,500–10,100 B.P.), “families stored their own produce of grain, fruit, nuts and condiments in special bins deep inside the house.” This restricted-access storage coexisted with the prominent display of the horns and heads of hunted wild cattle. The authors concluded that “plant storage and animal sharing” was a common juxtaposition for “the negotiation of domestic [the authors elsewhere call it “private”] storage and interhouse sharing.” The process of change was neither simple, nor monotonic, nor rapid. However, in both its institutions and its technology, Levantine people were living in a very different world in 8,700 B.P. from the world of the early Natufians almost six millennia earlier.
Thus, a second key fact is that people didn't just "adopt farming."  They became proto-farmers of wild crops at a time when hunting was still good enough for them to remain in one location.

Another Economic Consideration

Yet another key consideration that never properly gets posed is that in economics you need to look at the margins, not at averages.  Those who adopted farming first, no doubt, had the greatest comparative advantage; that is to say, they were best at farming relative to their success at hunting.  Perhaps they had mobility problems due to injured family members or pregnant women in the family.  Perhaps they were in places that had been over-hunted, where storage of hunted and gathered products reflected this scarcity (people hoard things that have value), but that had especially fertile soils.

Anyway, the key point is that even if the first farmers were less well fed than the average hunter-gatherer, that doesn't imply that it was true for the particular subset of individuals who actually made the switch.

Trade Offs

Why else would someone sacrifice calories to farm?

Farmers can live sedentary lives, which means that they can invest resources in building homes, rather than rebuilding camps every few days, providing better quality shelter from the elements, hungry animals, and unfriendly tribes of fellow humans for less of an investment in labor.  The time saved may also have freed them to make other things, like clothing, that protect people from the elements and thereby increase their ability to survive.  And, it allows them to accumulate goods, like pottery, baskets, and food stores, that can buffer short term shortfalls in food production.

A little more hunger may be worth fewer deaths from hypothermia, wild animals, brief periods of unsuccessful hunting and gathering, and attacks by hostile foreign tribes.  These benefits may be particularly valuable for the survival of children, the temporarily injured, and pregnant women, whose well-being enhances selective fitness, and for the elderly, whose knowledge is valuable and who can free up the time of community members able to hunt, gather, farm, and herd by caring for children and doing sedentary tasks like making flour, ceramics, weapons, and clothing.

Proto-farmers may also have traded lower average calories for a steadier food supply less subject to feast and famine cycles.  Those cycles culled weaker members of the community, leaving the survivors better fed, but meant that many died of hunger when hunting and gathering were less abundant.  Farming wrested a measure of control over the food supply from Nature.
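
A toy simulation, all numbers hypothetical, makes the point concrete: a lower-mean but steadier food supply can produce fewer famine years than a richer but more volatile one.

```python
import random

def famine_years(mean_calories, sd_calories, subsistence=1_800,
                 years=200, seed=1):
    """Count simulated years in which daily calorie intake falls below
    subsistence.  All figures are hypothetical illustrations."""
    rng = random.Random(seed)
    return sum(1 for _ in range(years)
               if rng.gauss(mean_calories, sd_calories) < subsistence)

# Foragers: higher average intake, but wide feast-and-famine swings.
# Proto-farmers: lower average intake, but a much steadier supply.
print("forager famine years:", famine_years(2_300, 500))  # ~1 year in 6
print("farmer famine years: ", famine_years(2_000, 150))  # ~1 year in 11
```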

Stray Physics Ideas

Dark Matter

About a year ago, Matt Strassler noted some recent reports of high energy photons from galactic sources that have been pitched as possible direct evidence of dark matter:
For the moment let me just note that there are two completely different excesses —

* one in X-ray photons (specifically photons with energies of about 3500 eV) noticed by two groups of scientists in a number of different galaxies, and

* one in gamma ray photons (specifically photons with energies of 1 – 10 GeV [GeV = 1,000,000,000 eV]), extracted with care by one group of scientists from a complex set of astrophysical gamma ray sources, coming from a spherical region around, and extending well beyond, the center of our own galaxy.

These seem to the experts I’ve spoken with to be real excesses, signs of real phenomena — that is, they do not appear to be artifacts of measurement problems or to be pure statistical flukes. This is in contrast to yet another bright hint of dark matter — an excess of photons with energy of about 130 GeV measured by the Fermi satellite — which currently is suspected by some experts, though not all, to be due to a measurement problem.
The 3.5 keV photons are at an energy level that would be of the right order of magnitude to be produced by the decay or annihilation of warm dark matter of some sort. A paper describing these two observations is here. Sterile neutrinos and axions are among the dark matter candidates that have been proposed as sources of this signal.
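
A minimal sketch of the rest-frame kinematics that link a monochromatic photon line to a candidate particle's mass (the kinematic relationships are standard, while the specific interpretations remain conjectural):

```python
def implied_mass_keV(line_energy_keV, process):
    """For dark matter nearly at rest: annihilation of two particles into
    two photons gives E_photon = m, while a two-body decay into a photon
    plus a (nearly) massless particle gives E_photon = m / 2."""
    if process == "annihilation":
        return line_energy_keV
    if process == "decay":
        return 2 * line_energy_keV
    raise ValueError(process)

# A 3.5 keV X-ray line could point to ~3.5 keV annihilating particles, or
# to a ~7 keV particle (e.g. a sterile neutrino) decaying to photon + neutrino.
print(implied_mass_keV(3.5, "annihilation"))  # 3.5 keV
print(implied_mass_keV(3.5, "decay"))         # 7.0 keV
```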

The 1-10 GeV photons observed by the Fermi satellite have been pitched as having a distribution and scale consistent with the annihilation of dark matter at the light end of the WIMP range (e.g. 25 GeV to 30 GeV) and have been dubbed by some "hooperons," after one of the lead investigators. One of the earliest announcements of this excess, from 2009, is here. More than four years later, a more firmly argued case for this as a signal of a dark matter candidate has been advanced.

Neither of the photon excesses has a source that is well established among astronomers, and the "hooperons" have an apparently cuspy halo distribution very much like that expected for cold dark matter at that mass scale.

The debate about both possible sources is, so far as I know, ongoing.

Deep Theory

So far as we know, fundamental physics conserves the combined quantity CPT (charge-parity-time).  Wherever there is CP violation in one direction of time, an inverse amount of CP violation takes place in the other direction of time.  Antiparticles, for example, have the opposite charge and parity of the corresponding particles.

In charged particles, there is a correspondence between charge and status as matter or antimatter.  Positively charged up-type quarks, negatively charged down-type quarks, and negatively charged leptons are matter.  Negatively charged up-type quarks, positively charged down-type quarks, and positively charged leptons are antimatter.  In each of these cases, particles of matter can be either left handed or right handed, and so can particles of antimatter.

Baryon number and lepton number are both concepts tied to CPT conservation, because both concepts distinguish between particles and antiparticles.  Quarks can be created so long as corresponding antiquarks are created.  Leptons can be created so long as corresponding antileptons are created.  The Standard Model places no limit on the number of particles in the universe; it just fixes the number of quarks minus antiquarks, and the number of leptons minus antileptons.
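
As a concrete illustration of this bookkeeping, here is a minimal sketch (the particle table covers only the handful of particles needed for the two examples; the second process is the hypothetical lepton-number-violating decay discussed below):

```python
# (baryon number, lepton number) per particle; antiparticles flip both signs.
B_L = {
    "n": (1, 0), "p": (1, 0),
    "e-": (0, 1), "e+": (0, -1),
    "nu_e": (0, 1), "anti-nu_e": (0, -1),
}

def delta_B_L(initial, final):
    """Net change in (baryon number, lepton number) across a process."""
    def totals(particles):
        pairs = [B_L[p] for p in particles]
        return sum(b for b, _ in pairs), sum(l for _, l in pairs)
    (b0, l0), (b1, l1) = totals(initial), totals(final)
    return b1 - b0, l1 - l0

# Ordinary beta decay conserves both numbers:
print(delta_B_L(["n"], ["p", "e-", "anti-nu_e"]))    # (0, 0)
# Hypothetical neutrinoless double beta decay changes L by two:
print(delta_B_L(["n", "n"], ["p", "p", "e-", "e-"])) # (0, 2)
```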

The Standard Model also theoretically allows for sphaleron processes that don't separately conserve baryon number (B) or lepton number (L), but do conserve B−L; no one has ever observed such a process in real life.

For neutrinos, which have no electric charge, parity corresponds to matter and antimatter status: left handed neutrinos are matter and right handed neutrinos are antimatter.  This property is central to the concept of a neutrino and was pivotal in the hypothesis that they existed in the first place.  A direct oscillation of a neutrino from a left handed parity to a right handed parity would be a lepton number violating event, changing lepton number by an increment of two.  Yet, lepton number violation has never been observed experimentally, outside one set of neutrinoless double beta decay studies whose results have been discredited in replication efforts.

This analysis seems to rule out the possibility of right handed or Majorana neutrinos.

F Mesons

There are a variety of light unflavored mesons whose internal structure is not well understood.  This section recaps some of the relevant raw data:

A meson with zero isospin (I), zero electric charge, and quantum numbers G = +, C = +, and P = + is denoted with the symbol "f" and an integer subscript for total angular momentum J, with values of 0, 1, 2, 4, and 6 observed.

The lightest of them, the f0(500), is also known as the "sigma meson."

Scientists think they have observed ten different kinds of f mesons with J=0 that differ only in mass:

f0(500), f0(980), f0(1370), f0(1500), f0(1710), f0(2020), f0(2100), f0(2200), f0(2330) and f0(2510).

There are three that have been observed with J=1:

f1(1285), f1(1420), and f1(1510).

Eleven f mesons have been observed with J=2:

f2(1270), f2(1430), f′2(1525), f2(1565), f2(1640), f2(1810), f2(1910), f2(1950), f2(2010), f2(2150), and f2(2340).

Two f mesons have been observed with J=4:

f4(2050) and f4(2300).

One f meson has been observed with J=6:

f6(2510).

There is also one observed f meson, the fJ(2200), whose total angular momentum could be J = 0, 2, or 4.

All ground state mesons have excited states with higher spin.  But there are more observed f mesons than simple two-quark combinations can account for, which suggests that some of these states are not ordinary quark-antiquark mesons.
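
A minimal sketch of the standard quark-model bookkeeping shows why: a quark-antiquark pair with orbital angular momentum L and total spin S has P = (−1)^(L+1) and C = (−1)^(L+S), so each (L, S) combination supplies only a single nonet's worth of states, far fewer than the list above.

```python
def qqbar_states(L_max=2):
    """Enumerate the J^PC values available to a quark-antiquark pair:
    P = (-1)**(L+1), C = (-1)**(L+S), with |L-S| <= J <= L+S."""
    states = []
    for L in range(L_max + 1):
        for S in (0, 1):
            P = "+" if (L + 1) % 2 == 0 else "-"
            C = "+" if (L + S) % 2 == 0 else "-"
            for J in range(abs(L - S), L + S + 1):
                states.append((L, S, f"{J}^{P}{C}"))
    return states

for L, S, jpc in qqbar_states():
    print(f"L={L} S={S}: {jpc}")
# L=1, S=1 yields 0^++, 1^++, and 2^++, the quantum numbers of the f0, f1,
# and f2 mesons, but only one nonet per (L, S) combination, which is why
# the long list of observed f states is hard to accommodate.
```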

Vector Meson Dominance

One approach to doing QCD calculations that is quite old school is called "vector meson dominance" (VMD) (full disclosure: I've contributed to this Wikipedia page).  It is notable because it sometimes outperforms better theoretically grounded QCD calculation methods, for example in [1] and [2].

[1] The COMPASS Collaboration, Measurement of radiative widths of a2(1320) and π2(1670) (March 2014) (citing J.L. Rosner, Phys. Rev. D 23 (1981), and finding VMD superior to three other alternatives).  See also here.
[2] Susan Coito, Unquenched Meson Spectroscopy (December 2013).

The implication would be that modern models may be missing something important that VMD captured.  VMD was dropped because it didn't work well with heavy mesons, but workarounds and fixes for those problems have been developed since they were identified.

Long Delayed Medical Developments

* The technology necessary to manufacture safe and effective intrauterine "copper T" contraceptive devices has existed since ca. 3000 BCE in multiple parts of the world.  And, while we have a decent understanding of how they work now, at the time these devices came into widespread use in the 1970s, medical professionals and scientists had only a dim and conjectural understanding of how they worked.  So far as I can discern, however, this innovation was never made in any pre-modern society (or at least never entered widespread use).  I thought I once saw an argument that this had been done in ancient Egypt, but have been unable to locate a confirming source for that assertion.

* The germ theory of disease was highly effective in dramatically reducing infectious disease deaths decades before any antibiotics or vaccines were in widespread use, and well before we had a comprehensive understanding of the leading infectious disease agents (i.e. bacteria, viruses, parasites and prions) and vectors.  While treatments and a more particularized understanding of infectious diseases required access to advanced laboratory sciences, a highly distilled germ theory framework, one that could be summed up in a couple of pages of text or a few minutes of oral tradition and would be sufficient to reduce infectious disease mortality by perhaps 80% or more, could have been developed with pre-metal age ceramic tool making technologies in the Neolithic era.  If it ever had been, it would have conferred immense selective fitness benefits.

We know this because the decline in infectious diseases in the modern era coincided with the widespread acceptance of the germ theory of disease and significantly preceded the development of vaccines and antibiotics, which also helped but appeared only well after the germ theory was developed.  Simple sanitation and quarantine concepts were more important than either of these medical treatments.

In point of fact, religious purity concepts and taboos, and crude quarantine practices, were kludges that conferred similar benefits.  For example, in areas with high infectious disease risks, great religious diversity marked by purity taboos that varied from village to village and district to district evolved to prevent outbreaks from killing off everyone, because some of the many religions in any region would have the right taboos to survive the infectious disease threat du jour.  Meanwhile, religious purity taboos were less central to religious practice and had greater geographic scope in areas where infectious disease risks were milder.

If Higgs Couplings Substantially Deviate From Standard Model, The Universe Explodes

If the couplings of the Higgs boson to particles detectable at the Large Hadron Collider (i.e. the relative proportions of Higgs boson decays into different kinds of particles) differ from the Standard Model expected values by more than about 20%, then the vacuum is unstable, and the universe should have exploded by now, according to a recent analysis.

Fortunately for us, there is no indication whatsoever that the Higgs boson detected at the LHC differs materially from the Standard Model prediction of its properties.  Indeed, the fit of the Higgs boson data from the LHC and Fermilab to the Standard Model predictions is growing better as more data accumulate.