Sunday, March 29, 2015

Few European Hunter-Gatherers Survived Last Ice Age

The last ice age peaked about 20,000 years ago.  The people who survived in pockets of habitable territory started to repopulate Europe in the period from 14,000 to 7,000 years ago, called the Mesolithic era.  Several ancient genomes are now available from that era.  They suggest that the effective male population size prior to the repopulation of Europe was just 30 men.

Thus, modern humans only barely hung on through the Ice Age in Europe, and ultimately, their ancestry makes up a fairly modest share of modern European ancestry.

Friday, March 27, 2015

The Latest Combined Higgs Boson Mass Measurement From The LHC

The most up to date measurement of the Higgs boson mass, combining ATLAS and CMS experiment data from two different channels each at the end of the first LHC run into a single number, is:

125.09 +/- 0.237 GeV/c^2.

Analysis

The two sigma range for the Higgs boson mass is now:  124.62 GeV to 125.56 GeV.

This is a material improvement in the margin of error, which had previously hovered around 0.4 GeV.  Some further improvement in the margin of error should come from the second run of the LHC.

This value disfavors the 2H=2W+Z mass formula by 3.7 standard deviations.
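
For readers who want to check the arithmetic, the tension can be computed in a few lines (a quick sketch using the rounded masses quoted in this post):

```python
# Quick check of the 2H = 2W + Z conjecture against the new combination,
# using the rounded masses quoted in this post.
m_H, sigma_H = 125.09, 0.24   # combined ATLAS+CMS Higgs boson mass (GeV)
m_W, m_Z = 80.385, 91.1876    # PDG W and Z boson masses (GeV)

predicted_H = (2 * m_W + m_Z) / 2   # 2H = 2W + Z implies H = W + Z/2
tension = (predicted_H - m_H) / sigma_H
print(f"predicted m_H = {predicted_H:.2f} GeV, off by {tension:.1f} sigma")
# predicted m_H = 125.98 GeV, off by 3.7 sigma
```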

There is an argument that the "tree-level" mass of the Higgs boson is 123.114 GeV (half the Higgs vev), but that higher order loop corrections raise it to its experimental value.  The "tree-level" estimate of the mass of the W boson is 78.9 GeV.  If the percentage increase from the tree-level value to the experimental value were the same for the Higgs boson as it is for the W boson, then the implied Higgs boson mass would be 125.43 GeV, which is consistent with the latest combined mass measurement at the 1.4 sigma level.  No published source actually calculates these higher order loop adjustments, however.  While the actual higher order loop correction is probably of that order of magnitude, it could easily be higher or lower.  The claim is plausible, but requires further investigation.  If the higher order loop corrections produced a value consistent with 124.65 GeV, that would be remarkable indeed, as discussed below.
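
The proportional scaling argument is easy to reproduce; note that the tree-level inputs below are the values claimed in the argument itself, not published calculations:

```python
# Sketch of the proportional loop-correction argument; the "tree-level"
# masses are the values claimed in the argument, not published results.
m_H_tree = 123.114   # claimed tree-level Higgs mass, half the Higgs vev (GeV)
m_W_tree = 78.9      # claimed tree-level W boson mass (GeV)
m_W_exp = 80.385     # experimental W boson mass (GeV)

scale = m_W_exp / m_W_tree        # fractional loop correction for the W boson
m_H_implied = m_H_tree * scale    # apply the same fraction to the Higgs boson
tension = (m_H_implied - 125.09) / 0.24
print(f"implied m_H = {m_H_implied:.2f} GeV, tension = {tension:.1f} sigma")
# implied m_H = 125.43 GeV, tension = 1.4 sigma
```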

The hypothesis that the sum of the squares of the Higgs boson mass, the W boson mass, and the Z boson mass equals half of the square of the Higgs vev (using a global fit value of 80.376 GeV for the W boson mass) implies a Higgs boson mass of 124.65 GeV, which is within two sigma of the current measurement.

Using the 80.385 GeV PDG value of the W boson mass instead, while assuming the same relation between the sum of the squares of the boson masses and one half of the square of the Higgs vev, also implies a Higgs boson mass of 124.65 GeV, so the difference created by that choice is too small to matter.
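
The implied Higgs boson mass follows directly from the conjectured relation; a minimal sketch, assuming the usual v = 246.22 GeV:

```python
from math import sqrt

# Minimal sketch of the conjecture m_H^2 + m_W^2 + m_Z^2 = v^2 / 2,
# where v = 246.22 GeV is the Higgs vacuum expectation value.
v, m_Z = 246.22, 91.1876
for m_W in (80.376, 80.385):  # global fit value vs. PDG value
    m_H = sqrt(v**2 / 2 - m_W**2 - m_Z**2)
    print(f"m_W = {m_W} GeV -> implied m_H = {m_H:.2f} GeV")
# both choices imply an m_H of about 124.64-124.65 GeV
```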

This suggests that the quantum corrections to the Higgs boson mass may indeed be very highly fine-tuned, making supersymmetry unnecessary to explain that seemingly unlikely reality.

As discussed below, there is some tension between the best fit Higgs boson mass measurement and the best fit top quark mass measurement (under the assumption that the sum of the squares of all of the fundamental particle masses equals the square of the Higgs vev), with the Higgs boson measurement implying a higher top quark mass than the one actually measured.  But, these tensions are within the margins of error of the measurements.  The latest combined best fit value of the top quark mass (i.e. 173.34 GeV) would imply a Higgs boson mass of 125.60 GeV, which is just outside the two sigma band of the most recent Higgs boson mass measurement.

Implications for Top Quark Mass

This also significantly tightens the expected value of the top quark mass from the formula that the sum of the squares of each of the fundamental particle masses equals the square of the Higgs vacuum expectation value.  The uncertainty in the Higgs boson mass had been the second greatest source of uncertainty in that calculation.  The best fit for the top quark mass on that basis (using a global fit value of 80.376 GeV for the W boson mass rather than the PDG value) is 173.73 GeV (173.39 to 174.07 GeV within the plus or minus one sigma band of the current Higgs boson measurement).  If the sum of the squares of the boson masses must also equal the sum of the squares of the fermion masses, the implied top quark mass is 174.03 GeV if pole masses of the quarks are used, and 174.05 GeV if MS masses at typical scales are used.
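
A hedged sketch of this leftover-top-quark arithmetic (the light fermion masses below are rough PDG central values of my own choosing; everything lighter than the bottom quark contributes almost nothing at this precision):

```python
from math import sqrt

# Hedged sketch: if the sum of the squares of all fundamental particle
# masses equals v^2, the top quark mass is whatever is left over after
# the bosons and the lighter fermions are subtracted.
v = 246.22                               # Higgs vev (GeV)
m_H, m_W, m_Z = 125.09, 80.376, 91.1876  # boson masses (GeV)
light_fermions = [4.18, 1.275, 1.77682, 0.095, 0.106]  # b, c, tau, s, mu (GeV)

bosons = m_H**2 + m_W**2 + m_Z**2
others = sum(m**2 for m in light_fermions)
m_top = sqrt(v**2 - bosons - others)
print(f"implied m_top = {m_top:.2f} GeV")  # ~173.7 GeV
```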

That compares to the latest top quark mass estimate from ATLAS of 172.99 +/- 0.91 GeV.  The latest combined mass estimate of the top quark (which does not include this latest ATLAS measurement) is 173.34 +/- 0.76 GeV.

How big are the gaps?

Consistent with current particle mass data alone, the fermion side of the balance sheet could fit a new particle as massive as 19 GeV if the fermion and boson sides must be equal, and about 16 GeV if they need not be equal (see the sketch below).
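
These headroom figures can be reproduced, although they are quite sensitive to which top quark mass is used as an input; in this sketch (using the same conjectured relations and rough inputs as above), figures in the neighborhood of 19 GeV and 16 GeV emerge under different plausible input choices:

```python
from math import sqrt

# Rough headroom for a hypothetical new fermion under the sum-of-squares
# conjectures; the answer depends noticeably on the top quark mass input.
v, m_H, m_W, m_Z = 246.22, 125.09, 80.376, 91.1876
light = sum(m**2 for m in (4.18, 1.275, 1.77682, 0.095, 0.106))  # b, c, tau, s, mu

bosons = m_H**2 + m_W**2 + m_Z**2
for m_top in (173.34, 172.99):  # combined fit value vs. latest ATLAS value
    fermions = m_top**2 + light
    equal_sides = sqrt(max(bosons - fermions, 0))        # boson side = fermion side
    total_only = sqrt(max(v**2 - bosons - fermions, 0))  # only the grand total fixed
    print(f"m_top = {m_top}: {equal_sides:.0f} GeV (equal sides), "
          f"{total_only:.0f} GeV (total = v^2 only)")
```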

But, particles in this mass range would greatly distort the expected cross-sections of Higgs boson decays in ways that would probably already be detectable.  Any such particle has been ruled out by W and Z boson decays to the extent that it can be produced in decays of those particles.  And, if present in Higgs boson decays, it would dramatically reduce, for example, the expected cross-section of bottom quark pairs from Higgs boson decays, which is the largest single decay channel, making up about two-thirds of Higgs boson decays (although it is hard to measure due to significant backgrounds that also produce bottom quark pairs).  Particles with masses of 10 MeV or less, in contrast, would have only a modest impact on the decay patterns observed in Higgs boson decays, but would still have to be sterile as to W and Z boson interactions.

Another interesting possibility is that baryons could contribute to the fermion side, and that mesons could contribute to the boson side, rather than just the fundamental particles.  My intuition is that this would not work, but I haven't run the numbers.  Light baryons wouldn't add much, but the heaviest baryons with b quarks would make a significant contribution.  Still, at the order of magnitude level, it isn't impossible.

Another issue is which masses we should be using: pole masses, or masses at a single consistent energy scale.  Quark and lepton masses get slightly lower at higher energy scales.  The Higgs boson mass declines more rapidly with higher energy scales.  I think, but don't know, that the W and Z boson masses also decrease faster than the quark and lepton masses at higher energy scales.

Since the top quark is the predominant contribution to the fermion side of the equation, only the decline from the top pole mass to some energy scale above the top pole mass is relevant.  But, since the fermion side is already "light" relative to the 50-50 expectation, any decline in the top mass hurts the balance (and would probably be less than 1% in addition to being less than the boson side reduction).  On the boson side, a reduction of 0.185% from the current best fit values would bring the sum of the square of the masses to one half of the Higgs vev.  This may understate the amount of actual renormalization reduction at plausible targets like the top quark mass and the Higgs vev.

The Higgs boson mass runs from 125 GeV at an energy scale of 125 GeV to zero at about 10^15 GeV, on a curve that is concave relative to a log-linear relationship (i.e. masses are lower at every point except the end points than a log-linear relationship of Higgs boson mass and energy scale would imply).  This seems to suggest that the Higgs boson mass at 246 GeV should be more than 13% lower than the pole mass (i.e. about 108.8 GeV), which is far too much of a reduction to fit the formula and would favor using pole masses across the board, as would the Higgs boson mass at 173.35 GeV, which should be more than 11% lower than the pole mass, if I have the calculations right.  The 0.185% shift required would imply an energy scale somewhere between 125.09 GeV and 133.34 GeV, which doesn't make much sense under any theory.

Given how close the experimental masses are to the preferred values using pole masses, however, it isn't obvious that renormalized values are necessary.

But, if the apparent relationship does involve pole masses, then there is very little wiggle room indeed in the predicted values of the Higgs boson mass and top quark mass, although this can be relaxed a little if the sum of the square of the fundamental fermion masses need not be exactly equal to the sum of the square of the fundamental boson masses.

Tuesday, March 24, 2015

Did Dogs Drive The UP?

A new book entitled "The Invaders," by Pat Shipman, argues that the domestication of dogs was key to the Middle Paleolithic-Upper Paleolithic (UP) transition and to the demise of the Neanderthals.  The theory is reasonably plausible.

Thursday, March 19, 2015

The Population Genetics Of The British Isles

A major new study of the whole genomes of the people of the British Isles has been published in the journal Nature.

A few of the "forest" level conclusions:

* The sample includes 2,039 people in the British Isles compared to 6,209 continental European individuals.  This is pretty much as big a sample as you get in historical population genetics.  The significance of these sample sizes is magnified by the fact that these are whole genomes, and not just Y-DNA or mtDNA haplogroup data.  Even very small samples of whole genomes can be highly informative, and these samples aren't small.

* All of the peoples of the British Isles are very homogeneous genetically, and the people of Central and Southern Britain (as well as Cornwall), are extremely homogeneous genetically.

This is particularly notable, given that the United Kingdom has more than four cultural/linguistic units that are recognized politically as different enough to require some degree of autonomy (England, Scotland, Wales, and Northern Ireland, plus some minor dependencies on nearby islands).

There is far more dialect variation in the British Isles than there is in the United States, so much so that one could make a very well informed estimate of someone's origins, even finer grained than the genetic clusters within the British Isles identified by this study, based upon that person's dialect and accent in the English language.

The British Isles are perhaps one of the clearest examples of strong cultural and linguistic substructure in a population that is genetically very homogeneous.  In most areas of the world, the kind of cultural and linguistic substructure seen in the British Isles corresponds to fairly dramatic population genetic differences between the various groups.  In the British Isles, in contrast, populations that are genetically almost identical and not all that far from each other geographically have very distinct cultural identities.

* The fine grained differences between British subpopulations correspond to influences from different parts of Continental Europe and Scandinavia, and largely correspond to population genetic events in the historic era and in archaeologically well documented late prehistoric eras, more or less as one would expect, with only minor surprises (such as Cornwall, which one might have expected to be more like Wales).

For example, as expected from our historical knowledge, the population genetics of Northern Ireland overlap heavily with the population genetics of the area around the western Scottish-English borderlands.

* The Danish Vikings who imposed the Danelaw on Britain late in the 1st millennium, while they had significant cultural and linguistic impact, had almost no genetic impact outside of the Orkney Islands.  This is analogous to the situation in Hungary, where the people who are the source of Hungary's current language have left almost no genetic trace in the country.

* The Anglo-Saxon contribution to the British gene pool in the 1st millennium is about 10% to 40% of the total (the spread is disappointingly large for such an impressive data set).  This is a significant minority contribution, but not population replacement.  This is reflected mostly in the Central-Southern British cluster, which, out of 17 clusters, makes up about half of the total sample.  Many of the other clusters are in areas that have some level of political autonomy.  Wales, for example, appears to have five different small regional genetic clusters.

* The pre-Anglo-Saxon Celtic substrate in Britain was not uniform; it varied by region.

* There was substantial migration to Britain from Continental Europe in the Neolithic and later eras prior to the arrival of the Romans (and therefore also prior to the arrival of the Danes, the Anglo-Saxons, the Normans, and recent immigration from the 19th century onwards).

* I suspect, but don't know for a fact, that the sample was limited to people who are "ancestrally" British and hence excludes individuals with known recent immigrant ancestry.  Thus, Britain today is probably much more genetically diverse than this sample, which was probably selected to be as informative as possible about the ancient and prehistoric genetic history of Britain, would indicate.  For example, Britain has a significant South Asian minority population, largest in greater London but found throughout the British Isles, that is not reflected in this data.  These huge genetic chasms, however, are not reflected in Britain's system of regional autonomy; those communities are instead blended into preexisting communities across the British Isles.

Latest Top Quark Mass Measurement From ATLAS

The latest top quark mass measurement from ATLAS (at the LHC) is 172.99 +/- 0.91 GeV.

By comparison the PDG value is 173.21 +/- 0.87 GeV.

A somewhat more recent combined value (because it considers pre-prints and not just published papers) is 173.34 +/- 0.76 GeV.

An extended Koide's rule estimate of the top quark mass, using only the electron and muon masses as inputs, predicts a top quark mass of 173.263947 ± 0.000006 GeV.
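
For the curious, the original Koide relation for the charged leptons, (m_e + m_mu + m_tau)/(sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2 = 2/3, can be checked in a few lines: given two of the masses, the third is a root of a quadratic in sqrt(m).  This sketch reproduces only the first rung (the tau prediction); the extended chain up to the top quark uses further triplets that I won't reproduce here:

```python
from math import sqrt

# Koide's relation for the charged leptons:
# (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2 = 2/3.
# Given two masses, the third solves a quadratic in x = sqrt(m3):
# x^2 - 4*a*x + (3*s - 2*a^2) = 0, with a = sqrt(m1) + sqrt(m2), s = m1 + m2.
def koide_third_mass(m1, m2):
    a = sqrt(m1) + sqrt(m2)
    s = m1 + m2
    x = 2 * a + sqrt(6 * a**2 - 3 * s)  # larger root of the quadratic
    return x**2

m_e, m_mu = 0.000510999, 0.1056584  # GeV
print(f"predicted m_tau = {koide_third_mass(m_e, m_mu):.5f} GeV")  # ~1.77697 GeV
```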

A prediction that I made in March 2014, which assumed a Higgs boson mass of about 125.96 GeV and some other assumptions (some of which are just conjectures themselves), predicted a top quark mass of 173.1125 ± 0.0025 GeV.

Of course, the latest experimental value is consistent with all of the other values due to a lack of experimental precision in the top quark mass measurement, which is improving over time, but ever so slowly.

Thursday, March 12, 2015

Has ATLAS Seen SUSY?

The ATLAS experiment at the LHC has reported a three sigma excess of events beyond the Standard Model expectation in a particular kind of search for squarks and gluinos, types of particles predicted by supersymmetry (SUSY) models.

This is one of the strongest experimental indicators of SUSY phenomena to date amidst an ocean of searches, and it may simply be an overstated statistical fluke due to look elsewhere effects (i.e. the notion that if you do enough searches, some will come up positive by random chance, undermining the significance of any particular result that is not replicated).  If CMS sees the same thing, it could very well be real (although, notably, this ATLAS study failed to reproduce a similar, slightly weaker excess previously reported in the CMS data).  If CMS does not see it, it is probably just a fluke.
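
To see why the look elsewhere effect matters, here is a toy illustration (my own sketch, not the collaborations' actual trials-factor calculation) of how often at least one 3 sigma fluke shows up among many independent searches:

```python
from math import erfc, sqrt

# Toy illustration of the look elsewhere effect: the chance that at least
# one of N independent searches throws a 3 sigma fluctuation by accident.
p_local = 0.5 * erfc(3 / sqrt(2))  # one-sided 3 sigma p-value, ~0.00135
for n in (1, 10, 100, 1000):
    p_global = 1 - (1 - p_local) ** n  # probability of >= 1 false excess
    print(f"{n:4d} searches -> {p_global:.1%} chance of at least one 3 sigma excess")
```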

A five sigma effect is considered necessary to call a finding a "discovery" of a particle.

A summary at the conclusion of the paper states:
This paper presents results of two searches for supersymmetric particles in events with two same-flavour opposite-sign leptons, jets, and E_T^miss, using 20.3 fb⁻¹ of 8 TeV pp collisions recorded by the ATLAS detector at the LHC. 
The first search targets events with a lepton pair with invariant mass consistent with that of the Z boson and hence probes models in which the lepton pair is produced from the decay Z → ℓℓ. In this search 6.4 ± 2.2 (4.2 ± 1.6) events from SM processes are expected in the µµ (ee) SR-Z, as predicted using almost exclusively data-driven methods. The background estimates for the major and most difficult-to-model backgrounds are cross-checked using MC simulation normalised in data control regions, providing further confidence in the SR prediction. Following this assessment of the expected background contribution to the SR, the number of events in data is higher than anticipated, with 13 observed in SR-Z µµ and 16 in SR-Z ee. The corresponding significances are 1.7 standard deviations in the muon channel and 3.0 standard deviations in the electron channel. These results are interpreted in a supersymmetric model of general gauge mediation, and probe gluino masses up to 900 GeV. 
The second search targets events with a lepton pair with invariant mass inconsistent with Z boson decay, and probes models with the decay chain χ̃⁰₂ → ℓ⁺ℓ⁻ χ̃⁰₁. In this case the data are found to be consistent with the expected SM backgrounds. 
No evidence for an excess is observed in the region in which CMS reported a 2.6σ excess [24]. 
The results are interpreted in simplified models with squark- and gluino-pair production, and probe squark (gluino) masses up to about 780 (1170) GeV. 
Lubos Motl offers a cautious but hopeful assessment that the result really is due to SUSY.

UPDATED March 22, 2015: More SUSY exclusions here.

Wednesday, March 11, 2015

Prehistoric European Quick Hits

* The prehistoric record of tsunamis in Southwest Iberia helps to explain the archaeological record and suggests a possible Iberian tsunami as the source of the Atlantis myth.

* Maju notes the availability of a new collection of papers regarding the early Balkan Neolithic.  This was a launching pad from which much of the process of bringing farming and herding to Europe originated, and it in turn provides a way to discern the sources of that process.

* Bell Beaker blogger notes a polemic arguing that the Iberian origin theory of Bell Beaker expansion isn't necessarily as strong, vis-a-vis a central European origin and dispersal, in light of the archaeological evidence, as has frequently been asserted.  The paper is thin on evidence, looking mostly to dates of Central European cemeteries without detailed discussion and to reinterpretations of existing evidence, neither of which is powerful when going up against a prevailing paradigm in the field.  But, it does not mention evidence from the European Y-DNA R1b phylogeny that does tend to support a central European origin, which makes the arguments there worth examining more closely.

* Bell Beaker blogger also continues to explore the links between ancient beer brewing and mystical or magic lore in prehistoric Europe, with a linguistic slant.

* Another intriguing Bell Beaker blogger post explores the potential roots of European pottery traditions in the far East and Jomon pottery traditions and links it to Y-DNA R expansion.

* And, Bell Beaker blogger also has a nice post on trade across the Strait of Gibraltar, before and after the Bell Beaker period, in goods like ivory and ostrich egg shells.

* Dienekes' Anthropology blog has picked up on a paper also discussed at Marginal Revolution (where I commented, noting various adjustments that could be made to obtain a more accurate measurement) making back of napkin estimates of the potential genetic impact of capital punishment from 1500-1750 CE on the murder rate in Britain.  The murder rate fell tenfold in that period, which also saw many executions.

* Another paper noted at the same blog discusses the arrival of wheat in Britain thousands of years before farming commenced there.

* And, Dienekes comes to some conclusions from his own analysis of Armenian genetics.

* Scandinavian rock art suggests that ancient Swedes may have personally gone all the way to Cyprus to trade copper and tin for amber without a middle man in the Bronze Age.

* Eurogenes notes the important discovery of ancient Y-DNA R1a1 paired with mtDNA H in hunter-gatherer populations from ca. 4000 BCE in NW Russia, towards Finland.  This is one of several new ancient data points (another being the paper discussed here) that really reinforce the theory that Y-DNA R1a1 paired with mtDNA H arrived in Europe with Indo-Europeans from NW Russia as part of the Corded Ware culture in the Copper Age, and that R1b paired with mtDNA H in Europe might derive in the same period from further south, around the Pontic-Caspian Steppe.  The autosomal data from the Pontic-Caspian steppe is a good fit for a major component (perhaps a 75% replacement in Central Europe) of Europe's DNA.

* Eurogenes also discusses a new linguistics paper on a European steppe origin for the Indo-European languages.

Prospects For General Relativity A Century Later

Background

A hundred years ago, Albert Einstein came up with the theory of General Relativity that was first presented publicly at a conference in November of 1915, and was published early in 1916 in a series of three papers.

General Relativity basically describes gravity in a way that is subtly different from Newton's simple F = GMm/r^2 law of the 1600s, which is perfectly sufficient for most purposes.  But, its formulation allows for a variety of phenomena that Newtonian gravity does not.

One of the most important distinctions is that energy, not just matter, generates and is subject to gravitational fields, and that energy is equivalent to matter for purposes of the conservation of matter-energy, and for gravitational purposes, according to the formula E = mc^2, where m is mass, E is energy, and c is the speed of light in a vacuum.  For example, since light has energy, it gravitates and is affected by gravity, giving rise to the phenomenon of gravitational lensing.

Another critical distinction between General Relativity and Newtonian gravity arises in strong gravitational fields, where singularities such as Black Holes and the Big Bang can arise.  Both phenomena are observed.

There are other distinctions: frame dragging, gravitomagnetic effects, and more.  But, they are beyond the scope of this post.

One integration constant in Einstein's formulation of general relativity, known as the cosmological constant, when set to the appropriate value, fully describes, to the limits of astronomy data such as the Planck satellite observations, the phenomenon known today as "dark energy."

Einstein's insights come to us virtually unchanged in the leading textbook on the subject, "Gravitation", written by Charles W. Misner, Kip S. Thorne and John Archibald Wheeler in 1973 (called MTW by advanced physics students everywhere).

It is widely asserted that the behavior of a massless spin-2 boson that couples to Standard Model particles, and to itself, with a strength equal in magnitude to the mass-energy of the particle, reproduces general relativity.  There is good reason to believe that this is wrong, and I discuss one of the reasons below.  But, I think that the spin-2 massless graviton model discussed in some of Feynman's lectures may be a more accurate description of gravity itself than it is of General Relativity.

The Problem of Dark Matter Phenomena

But, neither General Relativity nor the Standard Model of Particle Physics describes a set of phenomena known as "dark matter," which is necessary to model the cosmology of the universe from the Big Bang onward in a way that matches Planck data, and which is also necessary to describe, for example, the disconnect between the observed rotation curves of galaxies and the naive predictions of simplified versions of General Relativity applied to the observed luminous matter in those galaxies.

There are two ways to reconcile these observations with General Relativity and the Standard Model.

Dark Matter

One is to hypothesize the existence of "dark matter" that is massive, nearly collisionless with ordinary matter, made up of something other than the protons and neutrons that make up ordinary "baryonic" matter, and makes up the lion's share of matter in the universe.

The simplest model has just a single dark matter fermion, but many dark matter theories imagine the existence of dark forces that lead to self-interactions of dark matter particles with each other by means other than gravity, or additional kinds of dark matter particles in a complex "dark sector" similar to the sector of ordinary matter described by the Standard Model.

At first it was hoped that this problem would be solved by Supersymmetry (SUSY) or string theory models, which would provide dark matter candidates.

Astronomers are no slouches and have worked hard on several fronts to infer the properties of the dark sector from observation.

One approach has been to use analytical and numerical models to determine what the universe should look like if it has a particular kind and quantity of dark matter, and then to compare the predictions of those models to what we actually observe.  The dark matter hypothesis, set forth vaguely, does a very good job of fitting the very precisely observed cosmic background radiation patterns (which give rise to radio static, among other things), but has proved less successful at predicting the amount of structure observed in the universe (e.g. how many dwarf galaxies surround the Milky Way galaxy and other galaxies of similar size) and the distribution of halo shapes.

Another approach has been to infer the distribution and mass of dark matter halos around galaxies from their rotation curves and known luminous matter.  Similar inferences have been made in systems like the Bullet Cluster, where a colliding galaxy provides a means by which to discriminate between theories, and in examinations of RAVE stars in the Milky Way galaxy that are outside the galactic plane.  We have some idea what shape dark matter halos must have to fit observations.  Unfortunately, the halo shapes observed are not a great fit to the NFW dark matter halo shapes that analytical simulations would lead us to expect if the universe were made entirely of a single kind of "cold" (i.e. GeV to TeV mass) dark matter particle.  Simulations match observation better when gravitational interactions between baryons and dark matter are considered, but still show far more scatter in dark halo shapes than is observed.  Simulations also tend to favor "warm dark matter" (in the keV particle mass range) over "cold dark matter" in the GeV to TeV particle mass range that lacks self-interactions.
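
For reference, the NFW profile mentioned above is a one-line formula; this sketch (with arbitrary normalization and scale radius, chosen purely for illustration) shows its "cuspy" 1/r inner behavior and steep 1/r^3 outer fall-off:

```python
# The NFW (Navarro-Frenk-White) halo density profile:
# rho(r) = rho_0 / ((r / r_s) * (1 + r / r_s)^2)
# It is "cuspy" (rho ~ 1/r) near the center and falls as 1/r^3 far out;
# rho_0 and r_s below are arbitrary illustrative values, not a fit.
def nfw_density(r, rho_0=1.0, r_s=20.0):
    """NFW density at radius r, in the same length units as r_s."""
    x = r / r_s
    return rho_0 / (x * (1 + x) ** 2)

for r in (1, 10, 20, 100):
    print(f"r = {r:3d} kpc: rho = {nfw_density(r):.4f} (arbitrary units)")
```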

A third approach has been to try to produce dark matter particles at high energy colliders.  This has not revealed any evidence of dark matter.  But, these searches have made clear that dark matter, if it exists, must be made up of one or more types of particles not found in the Standard Model, and they strongly constrain the parameter space of potential dark matter particles.

A fourth approach has been to build direct dark matter detection experiments.  So far, these have not revealed any convincing evidence of dark matter.  There have been a few potential "hits," but those have been contradicted by more accurate measurements, or have not been confirmed by experiments of similar accuracy.

A fifth approach has been to look at cosmic rays to see if any could be produced by a hypothetical dark matter annihilation interaction and have no other known source.  Several candidates have been identified as potential signals of dark matter by these means, although there is not a consensus on how to interpret this data.

Modifications of Gravity

The other is to tweak General Relativity in the weak gravitational fields of large objects such as galaxies and galaxy clusters, in a way that reproduces the phenomena attributed to dark matter.  About half a dozen to a dozen ways of doing this that reproduce the dark matter phenomena seen in galaxies with a fair degree of accuracy have been published and compared to the data, the most famous of which is MOND, a toy model theory proposed in 1983 by Mordehai (Moti) Milgrom.

MOND itself is not the best of those theories.  It is strictly a phenomenological relationship, it underestimates dark matter effects in galactic clusters, and it isn't great at predicting galactic dynamics outside the plane of a galaxy.  But, it also has an impressive record of making firm predictions about unobserved phenomena that were later confirmed by observation, with only one new experimentally measured physical constant, and it has a general relativistic generalization, TeVeS, devised by one of Milgrom's colleagues, Jacob D. Bekenstein.
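
MOND's one new constant is an acceleration scale a0 of about 1.2*10^-10 m/s^2; in the deep-MOND regime it implies flat rotation curves with an asymptotic velocity satisfying v^4 = G*M*a0.  A minimal sketch, assuming a round Milky-Way-like baryonic mass:

```python
# Deep-MOND prediction of a flat rotation curve: v^4 = G * M * a0.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10      # Milgrom's acceleration constant, m/s^2
M_sun = 1.989e30  # solar mass, kg

M_baryonic = 1e11 * M_sun               # assumed Milky-Way-ish baryonic mass
v_flat = (G * M_baryonic * a0) ** 0.25  # asymptotic circular velocity
print(f"v_flat = {v_flat / 1e3:.0f} km/s")  # ~200 km/s, the right ballpark
```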

MOG and similar theories, proposed by John W. Moffat at the University of Toronto, have the distinction of a broader range of applicability, describing phenomena in galactic clusters as well as galaxies.

A number of theories known as f(R) theories, which add a term that is a function of the Ricci scalar to the equations of general relativity, have also had some success in describing dark matter, dark energy, and cosmological inflation.

If General Relativity Is Wrong, In What Way Is It Wrong?

General Relativity is a very tightly formulated set of equations based on a handful of mathematical first principles.  But, perhaps quantum gravity or simply an omitted term in General Relativity could do what modified gravity theories that account for dark matter do.

To go beyond purely phenomenological models like MOND to a full fledged competitor to General Relativity, however, one needs a good theoretical justification for the modifications to the equations of General Relativity.

Deur has been at the forefront of demonstrating that the key problem could be that conventional general relativity theory is wrong about the real world effects of gravitational self-interactions.

Section 20.4 of MTW at 467 is emphatic about this question:
To ask for the amount of electromagnetic energy and momentum in an element of 3-volume makes sense.  First, there is one and only one formula for this quantity.  Second, and more important, this energy-momentum in principle "has weight."  It curves space.  It serves as a source term on the righthand side of Einstein's field equations.  It produces a relative geodesic deviation of two nearby world lines that pass through the region of space in question.  It is observable.  Not one of these properties does "local gravitational energy-momentum" possess.  There is no unique formula for it, but a multitude of quite distinct formulas.  The two cited are only two among an infinity.  Moreover, "local gravitational energy-momentum" has no weight.  It does not curve space.  It does not serve as a source term on the righthand side of Einstein's equations.  It does not produce any relative geodesic deviation of two nearby world lines that pass through the region of space in question.  It is not observable. 
Anybody who looks for a magic formula for "local gravitational energy-momentum" is looking for the right answer to the wrong question.  Unhappily, enormous time and effort were devoted in the past to trying to "answer this question" before investigators realized the futility of the enterprise.  Toward the end, above all mathematical arguments, one came to appreciate the quiet but rock-like strength of Einstein's equivalence principle.  One can always find in any given locality a frame of reference in which all "local gravitational fields" (all Christoffel symbols . . .) disappear.  No [Christoffel symbols] means no "gravitational field" and no local gravitational field means no "local gravitational energy-momentum." 
Nobody can deny or wants to deny that gravitational forces make a contribution to the mass-energy of a gravitationally interacting system.  The mass-energy of the Earth-moon system is less than the mass-energy that system would have if the two objects were at infinite separation.  The mass-energy of a neutron star is less than the mass-energy of the same number of baryons at infinite separation.  Surround a region of empty space where there is a concentration of gravitational waves, and there is a net attraction, betokening a positive net mass-energy in that region of space (see Chapter 35).  At issue is not the existence of gravitational energy, but the localizability of gravitational energy.  It is not localizable.  The equivalence principle forbids it.
Of course, in 1915, and even in 1973, the analogy of QCD, in which a force is carried by particles that are themselves self-interacting (gluons in that case), was not known.  But, QCD without self-interacting gluons would produce a very different effect.

A graviton, of course, is the very epitome of localized gravitational energy, which is why conventional General Relativity as espoused in MTW is fundamentally inconsistent with quantum gravity theories.

Deur argues, by analogy to QCD, that self-interacting gravitons do indeed have observable effects and gravitons curve space just like any other carrier boson would.  To the extent that Einstein's equations do not reflect this fact, they are wrong.  This, he argues with back of napkin estimates, produces dark matter phenomena of approximately the right amount in galaxies and galactic clusters, and accurately reflects the pattern seen in which more spherically symmetric systems have less apparent dark matter than those that have (in the original sense of the word) more pretzelosity.

Gravity is weak, and so the gravitational self-interactions of gravity in low mass systems are modest.  But, gravity is also cumulative, because it is always attractive, so in immense systems gravitational self-interactions have material observable effects that probably give rise to substantially all dark matter phenomena, and, by weakening gravitational fields in the directions from which gravitons are diverted to produce dark matter phenomena elsewhere, also to some or all dark energy phenomena.

Future Prospects

I am quite convinced that the failure of the Einstein equations to reflect a contribution of gravitational self-energy is by far the most likely reason for the dark matter phenomena that we observe and most of the dark energy phenomena that we observe, and that correcting this error will cause theory and observation to match exquisitely without the need for any beyond the Standard Model particles other than a massless spin-2 graviton that couples with a strength equal to the mass-energy of a particle.

I am confident that sooner or later, probably within ten to forty years, such a theory will be well articulated and tested against the data and will become the scientific consensus, and that dark matter theories will be discarded.

Thus, we will be left with the Standard Model and a very simple quantum gravity, with no dark matter or dark energy.  The six quarks, three charged leptons, three neutrinos, photon, three weak force bosons, eight gluons and Higgs boson of the Standard Model, plus the graviton, with their interactions governed by four coupling constants, will prove to be the only particles needed to account for everything in the universe.  All other proposed theories of fundamental physics will end up on the scrap heap of intellectual history.  Maybe somebody will come up with a way to unify these pieces and explain the source of all of their constants, and maybe they won't.  But, for practical purposes, it doesn't really matter one way or the other.  The results will be the same.

Monday, March 9, 2015

Trying 23andMe

I have ordered 23andMe personal genome kits for the whole family.  The price is right and the technology seems to be mature for the near future.  Honestly, as much as anything, the reason for doing so is not so much to learn anything new about my ancestry as it is to confirm my confidence in the testing system.

I know a great deal more about my ancestry than most people.  I've met numerous third and fourth cousins, have met relatives from Finland (whom my parents and brother visited in person), and I am familiar with the time and exact place from which my German ancestors emigrated (my father was finally able to meet some of my relatives there when West Germany and East Germany merged), and I have a fair amount of familiarity with my Irish ancestors (although no one in my family has made contact with my Irish relations).

We also know quite a bit about my wife's Korean ancestors and could probably trace their ancestry for hundreds of years with a visit to Korea if I spoke Korean and the relevant records weren't destroyed or placed beyond reach in North Korea during the Korean War.

We've been close enough to family that we also have a reasonably complete medical history, and genetics is still enough in its infancy that even if 23andMe were allowed to make medical commentary on the raw genome data (for the foreseeable future it is prohibited by the FDA from doing that), the predictions from our medical history would probably be more accurate.

We also know at least some of our relatives are in the system and might pop up as related.  And, of course, we have actual phenotype knowledge about ourselves that can be matched against those genes for which phenotype-genotype relationships have been established (e.g. eye and hair and skin pigmentation, ear wax type, etc.).

Still, it will be fun to see the results, and to see the strong predictions that I can make about them fulfilled (or contradicted) in detail.

Wednesday, March 4, 2015

Neutrino Physics Update

The 16th Neutrino Telescope Conference is underway.

One notable early result puts the sum of the three neutrino masses between the 0.056 eV lower bound from oscillation experiments and an upper bound of 0.14 eV, at a 95% confidence level.  This is almost, but not quite, tight enough to distinguish between a "normal" and an "inverted" neutrino mass hierarchy.  This means that the absolute masses of the neutrinos are now known with a precision rivaling that of the up quark.
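
The 0.056 eV floor is straightforward to reproduce: in a normal hierarchy with a massless lightest neutrino, the other two masses follow from the oscillation mass splittings.  A back-of-envelope sketch using approximate splitting values (my assumed inputs, not necessarily the conference's exact ones):

```python
from math import sqrt

# Minimum sum of neutrino masses in a normal hierarchy with a massless
# lightest neutrino, from approximate oscillation mass splittings.
dm2_21 = 7.5e-5   # "solar" splitting m2^2 - m1^2, in eV^2 (assumed value)
dm2_31 = 2.45e-3  # "atmospheric" splitting m3^2 - m1^2, in eV^2 (assumed value)

m1 = 0.0
m2 = sqrt(m1**2 + dm2_21)
m3 = sqrt(m1**2 + dm2_31)
print(f"minimum sum = {m1 + m2 + m3:.3f} eV")  # ~0.058 eV, near the 0.056 eV floor
```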

State of the art direct measurements of absolute neutrino masses, which are in the process of being carried out, would place only a 0.4 eV cap on the mass of the electron neutrino, which we know from cosmology must actually be less than about 0.05 eV, and which in a normal hierarchy is likely to be about 0.001 eV or less.

UPDATED March 5, 2015:

The latest neutrinoless double beta decay exclusion from GERDA, at a 90% confidence level, is a lower limit on the half-life of neutrinoless double beta decay of 2.1*10^25 years.  This is unchanged since last summer.