Friday, August 30, 2013

The Most Notable "New Physics" Proposals That Probably Aren't True

Some of the "new physics" theories that receive the most academic attention are in my view, looking at the field from a "forest" rather than a "trees" view as an informed layman, very implausible. Here are some of the most notable of them.

1. Cold dark matter and WIMPS.

The dark matter paradigm is alive and kicking, but the "cold dark matter" paradigm, which assumes that dark matter is made up of exotic particles with masses from 8 GeV to hundreds of GeV, particularly "WIMPs" (weakly interacting massive particles) in that mass range, increasingly appears to be at odds with data from astronomy.

2. SUSY, Supergravity, and string theory.

Most mainstream versions of string theory aka M-theory imply supersymmetry aka SUSY as a low energy effective field theory. But, decades of research later, the positive evidence for SUSY is still not there and the motivation for the theory is increasingly weak. Supergravity theories, which extend SUSY by incorporating gravity, are similarly problematic.

3. SM4.

Experiments at the LHC have pretty much ruled out any sensible extension of the Standard Model with four rather than three generations of fermions.

4. Technicolor.

This theory pretty much died when the Higgs boson was discovered. It had been a leading approach to explaining Standard Model data without a Higgs boson.

5. Anthropic Principle Theories and the Multiverse.

Cosmology theories that resort to explaining the current universe with the anthropic principle aren't really science in the ordinary sense.

Wednesday, August 28, 2013

MOND works and is predictive (and why we should care)

Modified Newtonian Dynamics (MOND) is a simple empirical relationship that has been predictive (most recently here) in explaining gravitational dynamics without dark matter at the galactic scale, although it understates "dark matter" effects at galactic cluster scales.  It predicts not just the velocity dispersion of objects in galaxies but subtle effects like the impact of proximity to a host galaxy on dwarf galaxy behavior.  There are good reasons to doubt that its mechanism is correct and to suspect that a more conventional dark matter theory is the right mechanism for causing these effects.

But, the great predictive success of a very simple, one parameter MOND theory over very large data sets, involving new kinds of data not used to generate the theory long in advance, implies that it must be possible to derive the MOND rule at the galaxy scale from any correct dark matter theory.  Likewise, if a simple one parameter formula can explain all of that data, any dark matter theory must itself be very simple.  The simple theory is obviously flawed in some respects (e.g. the original version is not relativistic).  But, it can be generalized without losing its essential features (e.g. in the TeVeS formulation, which is fully relativistic).

It is also possible that MOND, "dim matter," and some kind of "cluster dark matter" that is abundant in galactic clusters but almost absent everywhere else could all be at work.

Another attractive feature of MOND is that the exotic particles that particle physics was supposed to supply as dark matter candidates have not been detected.  But, if MOND is correct, we don't need them.

There are a variety of ways to work MOND effects into modifications of general relativity.  Some flow from the observation that the MOND constant has a strong coincidence with the size of the universe, suggesting that MOND may arise from the suppression of gravity waves with wavelengths larger than the size of the universe itself.

UPDATE August 30, 2013:

The observation that MOND works and is predictive is more than an observation of a mere coincidence. It is, as I noted before, a strong indication that any dark matter mechanism, if there is a dark matter mechanism, is very likely very simple, because the MOND theory itself is simple (although it is possible that the complex bits are simply small contributions to the overall result, in the same way that the general relativity corrections to Newtonian gravity, while very deep and complex, are usually negligible).

But, the fact that MOND works and is predictive implies something else about the correct theory that produces this phenomenological relationship. While correlation does not imply causation, correlation does imply that some cause, direction unknown and possibly indirect, produces that correlation. Robust and predictive correlations happen for a reason, even if that reason is not a direct causal relationship between the two data sets.

What is MOND?

The MOND hypothesis is that there is a functional relationship between the gravitational field that would be generated by the luminous matter in a galaxy and the "dark matter" effects in that galaxy, which are observable only in the parts of the luminous matter gravitational field that are weak, defined as having gravitational acceleration below the MOND acceleration constant a0. MOND argues that gravity weakens according to the conventional 1/r^2 law (where r is the distance between the two objects which are attracted to each other by gravity) in fields stronger than a0, and according to a "new physics" 1/r relationship in fields weaker than a0. An ad hoc interpolation function is used to estimate the force of gravity around the transitional field strength, and the data don't allow meaningful ways to distinguish between the alternative transition formulas.

Because GMm/r^2 << G'Mm/r in the weak field regime, where G' is the constant that produces the MOND gravity prediction at distances much greater than the radius at which the Newtonian acceleration falls to a0, the simplest interpolation is simply to assume that MOND gravity equals Newtonian gravity plus G'Mm/r gravity, where the second term is too small to discern experimentally in gravitational fields that are strong relative to a0 ≈ 1.2*10^-10 m s^-2, and the first term is too small to discern relative to the second in gravitational fields that are weak relative to a0.
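To make the two regimes concrete, here is a minimal numerical sketch (my own illustration, not drawn from any particular MOND paper) using the so-called "simple" interpolation function μ(x) = x/(1+x), one standard choice in the MOND literature, for which the defining relation g·μ(g/a0) = gN has a closed-form solution; the example masses and distances are arbitrary round numbers chosen only to land in the strong-field and weak-field regimes:

```python
import math

G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
A0 = 1.2e-10    # MOND acceleration constant a0, m s^-2

def newtonian_accel(mass_kg, r_m):
    """Ordinary Newtonian acceleration g_N = G*M/r^2."""
    return G * mass_kg / r_m ** 2

def mond_accel(mass_kg, r_m):
    """MOND acceleration with the 'simple' interpolation mu(x) = x/(1+x).
    Solving g * mu(g/a0) = g_N for g gives the closed form below."""
    g_n = newtonian_accel(mass_kg, r_m)
    return g_n / 2 + math.sqrt(g_n ** 2 / 4 + g_n * A0)

# Strong field (Sun's pull at Earth's orbit, g_N >> a0): MOND ~ Newton.
g_strong = mond_accel(1.989e30, 1.496e11)

# Weak field (a galaxy-mass object at a galaxy-scale distance, g_N << a0):
# MOND approaches sqrt(g_N * a0), which falls off as 1/r rather than 1/r^2.
g_weak = mond_accel(1e41, 1e21)
```

In the strong-field limit the MOND correction is of order a0/gN and unobservably small in the solar system; in the weak-field limit g tends to sqrt(gN*a0), a 1/r force law, which is what produces flat rotation curves.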

What does this imply?

One of the most profound implications of the fact that MOND works and is predictive is that there is a direct and reasonably precise functional relationship between the input into MOND's black box formula, the amount and distribution of luminous matter in a galaxy, and the output, which is the "dark matter" effects that are observed empirically in that galaxy.

This means that in any dark matter theory that accurately replicates reality, the distribution of dark matter particles in the dark matter halo of a galaxy must be functionally related to the amount and distribution of luminous matter in that galaxy.

There are several ways that this could be possible. To illustrate this point, here are three broad kinds of scenarios that could cause this to be true. I marginally favor the first, although I don't rule out the second. The third, I consider to be very unlikely, but include it for completeness.

First, it could be that galaxies differ from each other in a very simple, more or less one dimensional way as a result of the way that galaxies evolve. Galaxies of a particular mass may always have one of a small number of characteristic luminous matter distributions, and any factor that impacts how a galaxy of a particular size evolves impacts the distribution of dark matter in that galaxy in a way that corresponds to the distribution of luminous matter in that galaxy. Thus, the MOND relationship between the galaxy's luminous matter distribution and its dark matter halo distribution arises because the evolution of both kinds of matter distributions is a process that in each case is almost entirely gravity dominated and is shared by all of the matter, luminous and dark, in a given galaxy. In this process, Newtonian gravitational effects predominate over additional general relativistic effects, and this very simple gravitational law produces very simple and characteristic distributions of matter that can be summed up in the empirical MOND relationship that is observed. Deriving the MOND relationship from this process may take some pretty clever analytical modeling of the evolution of galaxies, exhibiting a shrewd understanding of how this process can be drastically simplified without significant loss of accuracy.

In particular, there is a fair amount of evidence to suggest that inferred dark matter halo shapes are strongly related to the shape of a galaxy's inner bulge, but are fairly indifferent to the distribution of matter at the fringe of a galaxy. The shape of a galaxy's inner bulge, in turn, is largely a function of the nature of the galaxy's central black hole. If the distribution of the luminous matter in a galaxy and the distribution of the dark matter in a galaxy are both largely a function of the nature of the central black hole of the galaxy, then it would follow that luminous matter distributions and dark matter distributions in a galaxy should be functionally related to each other. Moreover, if a central black hole in a galaxy of a given mass is pretty much like every other central black hole in a galaxy of the same mass, then the distributions of both luminous matter and dark matter in galaxies should be a function of a single number - the mass of the central black hole of the galaxy.

One version of this kind of scenario is one in which apparent "dark matter" effects are actually driven by ordinary "dim matter" emitted by the central black hole, mostly "upward" and "downward" along the axis of rotation of that central black hole and the galaxy that arises around it. A 1/r relationship between force and distance is precisely the relationship one would expect in a simple Newtonian gravitational scenario in which there is a long, narrow, axial distribution of dim matter extending in both directions from the central black hole of a galaxy. If the axial distribution of ordinary "dim matter" is long enough and coherent enough that it generates its own 1/r gravitational field out to a distance at least as great as that of the most distant star for which the galaxy's gravitational influence can be observed by an astronomer, then this would generate apparent dark matter effects that approximately follow the phenomenological MOND law.

The combined distribution of luminous and non-luminous matter in a galaxy in the scenario discussed above would look something like the image above, but with thinner and longer extensions up and down out of its axis containing matter and energy that is in rapid motion away from the galaxy.

It should be fairly elementary, moreover, for anyone with a year or two of calculus based physics under their belt to use the MOND constant a0 to calculate the characteristic ratio of axial dim matter to galactic ordinary matter in such a scenario (I could do it in an hour or two this weekend if I could find the time). With a few additional data points about the most distant stars that have been observed to experience MOND-like effects in the vicinity of a galaxy, one could also fairly easily establish a minimum length for this axial dim matter and the amount of mass per unit length of axial dim matter that would be anticipated in a typical galaxy, although any bound on the width of this axial mass distribution would be fairly weak. Astronomers have observed at least two different processes by which black holes emit matter and energy in a more or less axial direction, much of that matter is "dim," and the speed of the emitted matter and energy and the minimum age of a galaxy can be determined within reasonable bounds. So, the extent to which known processes could account for axial dim matter giving rise to MOND-like effects wouldn't be too hard to estimate, and the amount of axial "dim matter" that would necessarily have a source in some other, unknown form of black hole emissions could also be estimated fairly accurately. It wouldn't be surprising if the total amount of axial dim matter in the universe resolved much of the "missing baryon" problem - the fact that the number of baryons present in all observed luminous matter falls substantially short of the number predicted by widely held baryogenesis hypotheses - without giving rise to any notable cosmological effects that have been attributed to this missing baryonic matter.
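As a rough sketch of the kind of back-of-the-envelope estimate described above (my own illustration, under the idealized assumption of an infinitely long, thin axial line mass), the Newtonian field of a line of linear mass density λ is g = 2Gλ/r, so a flat rotation curve with circular speed v, which requires v^2/r = g, pins down λ directly:

```python
G = 6.674e-11  # Newton's constant, m^3 kg^-1 s^-2

def axial_line_density(v_ms):
    """Linear mass density (kg/m) of an idealized infinite axial line mass
    whose Newtonian field g = 2*G*lambda/r supports a flat rotation curve
    with circular speed v:  v**2 / r = 2*G*lambda / r  =>  lambda = v**2 / (2*G)."""
    return v_ms ** 2 / (2 * G)

# For a Milky-Way-like circular speed of ~200 km/s this gives lambda on the
# order of 3e20 kg/m, i.e. over a galaxy-scale length (~1e21 m) an axial mass
# within an order of magnitude of the galaxy's luminous mass.
```

This is only the crudest version of the estimate (a real one would have to treat a finite line length and the transition to the 1/r^2 regime), but it shows that the required linear density follows from the observed rotation speed alone.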

Of course, given what my weekend looks like - violin supply store, Costco, bank deposits, working on my cases over the weekend, writing course packs, buying groceries, BBQing, getting someone to a tennis lesson, weeding, mulching, fertilizing, laundry, etc. - the efforts of anyone else interested in doing so and posting the results in the commons would be welcome.  Scientific discoveries can't be patented, and I would love to know the answer without any deep need to be the one who finds it.

This black hole emitted matter unaccounted for by known processes could be created by the extreme conditions that exist only in the large central black holes at the centers of galaxies (which would explain why we can't produce this kind of matter in particle accelerators), or it could simply be ordinary matter that does not coalesce into stars or other large objects when emitted from a black hole in this way, because it is emitted in a diffuse spray of fast moving particles whose speed and common direction prevent them from gaining the critical mass necessary to combine into compact objects that astronomers can observe.

Perhaps astronomers looking for this very particular and well defined kind of dim matter signature could find a way to measure it in some natural experiment arrangement of stellar objects somewhere among the billions and billions of stars in the universe that we can observe at all manner of wavelengths.

Any such process would, by definition, produce neither non-baryonic dark matter nor ordinary dim matter that ends up in the galaxy's disk of rotation. So, direct dark matter detection experiments conducted in the vicinity of Earth, which is in the plane of the Milky Way's disk, are doomed to fail if this hypothesis is correct, because in this model there is no dark or dim matter in that part of the galaxy.

This would also explain why estimated cosmological amounts of dark matter are on the same order of magnitude as estimated cosmological amounts of ordinary matter, another great unsolved question in physics.

In any case, if the missing matter, whether in the form of "novel" dark matter particles of an unknown type, or merely "dim matter" has a distribution that is driven by the same central black hole gravitational effects that drive the distribution of luminous matter in galaxies, the key to reconciling MOND theories and dark matter theories would be at hand.

It is not clear, however, that such a theory would adequately fill the role that dark matter plays in the highly predictive six parameter lambda-CDM model of cosmology, or would be consistent with bottom up galaxy formation models that have been highly successful in 2 keV warm dark matter scenarios, which help address problems with the cold dark matter model like "cuspy halos" and the "missing satellites" problem.  This hypothesis could create as many new problems as it solves for the dark matter paradigm.

The warm dark matter literature is surprisingly devoid of simple diagrams like the one above illustrating the inferred shape of the Milky Way's dark matter halo in one recent study.  But, this illustration is closer to the conventionally expected warm dark matter halo shape.  The dark matter paradigm favors blob-like rather than cylindrical structures for dark matter halos, because it is hard to make nearly collisionless particles with significant kinetic energy that interact primarily through gravity form more compact structures.  The differential effects of the central mass of the galaxy prevent the dark matter halos from behaving like an ideal gas, but only modestly.

The non-spherical shape of the halo, however, is critical to generating the apparent 1/r gravitational field strengths that are observed.

(It is also worth noting that the roughly 2 keV sterile neutrino-like warm dark matter particles that seem to be the best fit to the empirical data within the dark matter paradigm are virtually undetectable in existing direct dark matter detection experiments, which are designed to observe weakly interacting dark matter particles with masses in the GeV or heavier range.)

A result like this that involves only ordinary "dim matter" however, would be a huge blow to physicists longing for "new physics." It would explain the biggest unsolved problem in physics when it comes to the fundamental laws of physics and their observable effects using only a deeper understanding of processes that occur entirely according to already well understood "old physics." The biggest empirical arrow pointing towards undiscovered types of stable fundamental particles would turn out to have been a mere mirage.

Without the "crutch" of some sort of theory to explain dynamically the evolution of dark or dim matter halo shapes in galaxies parallel to luminous matter distributions in those galaxies, no dark matter theory can be considered complete.

Second, the MOND law could, for whatever reason, actually constitute the true law of gravity when suitably formulated in a general relativistic form (something that has actually been done successfully in several different varieties of proof of concept efforts). As noted above, this would call for some kind of quantum gravity theory, or perhaps something related to the impact of a bounded universe of finite size on the way that gravity waves behave.

This would be exciting news for quantum gravity researchers and bad news for particle physics theorists. A 1/r relationship would quite plausibly derive from some process that reduced the effective dimensionality of space-time from three spatial dimensions to two. Perhaps, for example, this could happen due to quantum entanglement of distant points between which a particle has traveled, or because gravity models have underestimated the gravitational effect of the angular momentum of a spinning galaxy due to some subtle flaw in the normal formulation of general relativity, or in the way that this formulation is applied in models of complex, massively many bodied systems like galaxies.

Of course, this would also be bad news in particular for direct dark matter detection experiments, because in this scenario there is no dark matter to detect anywhere except possibly in galactic clusters - all of which are a very, very long way from planet Earth, making direct detection of cluster dark matter virtually impossible. Making sense of anomalous gravitational effects that might be due to dark or dim matter in galactic clusters is hard, because the structure and non-luminous ordinary matter content of galactic clusters is far less well understood, and far more complex, than that of ordinary individual spiral, elliptical and dwarf galaxies.

This mechanism for a MOND theory also directly and transparently explains why it doesn't work as well in galactic clusters, whether or not "cluster dark matter" exists. Since the MOND relationship, in any variation of this hypothesis, flows from parallel evolution processes that are more or less the same for any one galactic central black hole of a given size, it makes sense that these relationships might not hold for a system with many galactic central black holes in close proximity to each other and at different typical ages in the galaxy formation process. Galactic clusters may be profoundly more complex, to such an extent that no simple model like MOND can explain them.

Third, there could be a non-gravitational interaction between luminous matter and dark matter that causes dark matter halos to be distributed in a particular way relative to luminous matter. For example, the flux of photons out of a galaxy is roughly proportional to the Newtonian component of the gravitational field of the luminous matter in that galaxy at any given distance from the galaxy. So, if dark matter had very weak electromagnetic interactions with the outgoing flux of photons, this could produce a dark matter distribution that tracks the distribution of luminous matter in a system, while the dark matter still has the character of collisionless, non-self-interacting particles. Of course, since photon flux, like graviton flux, has a 1/r^2 relationship to distance from the luminous matter source, this doesn't easily explain a 1/r MOND effect. Also, the photon flux generated by a star is not all that tightly related to the mass of the star generating the flux, so far as I know (more accurately, I have no idea one way or the other how tight the relationship between photon flux and stellar mass is). Perhaps, at long distances, the geometry of a galaxy impacts this flux somehow in a manner different than for graviton flux.

This kind of explanation would be a field day for particle physicists, because no known fundamental particle has this kind of interaction. I don't see it as a likely option, but one should consider all possibilities for unexplained phenomena for the sake of completeness.

Dark matter of this variety ought to be highly amenable to detection by a model driven direct dark matter detection experiment, although existing direct dark matter detection experiments, which involve a very different paradigm and model, might be useless at detecting it.

Homo Erectus Out Of Africa Wave Happened All At Once

Old Homo Erectus Dates In China Confirmed

Newly refined age estimates for the oldest hominin sites in China establish that Homo Erectus spread at about the same time to Java, Indonesia (1.9 million years ago), to Northern China (1.7 million years ago), to the Southern Caucasus mountains, and to a wider geographic range within Africa, all at around 1.7-1.9 million years ago, from the previous core range within Africa.  The evidence for the oldest H. Erectus anywhere in Africa is a bit older.

Age estimates at this scale are accurate to about +/- 100,000 years, and the thinness of the data in this time frame also suggests a certain amount of statistical variation due to the random sampling of discovered sites from among all sites of the same type, discovered and undiscovered.

These factors combined, informed by data points from the expansions of modern humans and of other species on how long those species took to disperse, make the precise differences between the ages of the various early non-core H. Erectus sites small enough to be insignificant, and suggest a single wave of H. Erectus expansion both Out of Africa and within Africa.


This new data largely refutes the alternative hypothesis that H. Erectus expanded Out of Africa in a staged migration that reached some parts of Eurasia much later than others.

Of course, expansion "all at once" is a relative thing.

It could mean a true single wave of expansion (and honestly, that is what I believe is the most likely scenario), but several successive waves of expansion 10,000-20,000 years apart, of the kind that may have happened in the modern human "Out of Africa" expansion, would be indistinguishable from a single wave of expansion in the Homo Erectus case.

The new data simply show that expansion from an original source territory to the entire ultimate Homo Erectus range probably took place over a period shorter than 200,000 years, contrary to earlier theories, based upon incomplete or less accurate data from China, that had suggested a pause of 400,000 years or more before Homo Erectus spread from SE Asia and the Southern Caucasus mountains to China.

Open Questions

These days, however, the really hot issues in the prehistory of H. Erectus relate to the tail end of the story, rather than the beginning.

When did H. Erectus go extinct, and why?  Was H. Erectus the source of the Denisovan genome or of the H. floresiensis species, and if not, as the Denisovan genome seems to suggest, what hominin species was each of these associated with, how did they end up where they did, and when?  In particular: Does the Denisovan species have a relationship to early archaic hominin evidence in China?  Did the Denisovan species replace or coexist with H. Erectus, and if so, when and where? (The distribution of lithic tools in Asia suggests that there might have been a limited replacement or co-existence of H. Erectus and Denisovans in Zomia, Malaysia and Indonesia, a path that connects most of the dots between the Denisovan cave in Southern Siberia and the island of Flores, but the more modern lithic tools are recent enough to be attributable to Homo Sapiens as well.)  During which time frames, if any, did H. Erectus co-exist with modern humans?  Why isn't there a discernible trace of H. Erectus in most modern humans in Asia?  How much did H. Erectus evolve after the species left Africa?  Did H. Erectus evolve into other hominin species outside of Africa, and if so, which ones?  What impact, if any, did the Toba explosion have on H. Erectus?

The picture is quite unclear for the events of the time frame from sometime after 150,000 years ago (i.e. after the period from which there are no potentially more modern hominin remains) to 50,000 years ago (i.e. the period when modern humans were the undisputed dominant hominin species of Asia, barring some currently unknown relict populations of archaic hominins and H. floresiensis).  Any sub-periods for Asian hominin populations from ca. 1,900,000 years ago to 150,000 years ago are also quite fuzzy.  This was a period more than 1.5 million years long that was quite static relative to the record of Neanderthals or modern humans over similarly long time frames, and even relative to non-African H. Erectus populations.

Background Context

The Basic Story.

The H. Erectus who left Africa about 1.9 million years ago have not been important ancestral genetic contributors to modern humans.  It is likely, however, that Neanderthals and modern humans are among the hominin species derived from H. Erectus.

All modern humans are predominantly descended, genetically, from common modern human African ancestors.  Those modern humans evolved ca. 250,000 years ago, or so.

All non-African modern humans are predominantly descended from one or more groups of modern humans who left Africa much later. There is academic debate over when the first sustained modern human presence Out of Africa arose, with the earliest estimates around 130,000 years ago and the youngest around 50,000-60,000 years ago; earlier Levantine modern human remains are attributed to an "Out of Africa wave that failed" by supporters of the younger dates.  As in the case of H. Erectus, there is evidence that the "Out of Africa" migration by modern humans may have coincided with a range expansion of modern humans within Africa (roughly speaking, around the time that Paleo-African populations like the Khoisan and Pygmy populations broke off from other African populations).

New archaeological data and increasingly refined understandings of population genetic data increasingly favor Out of Africa dates closer to the older end of this range, although the appearance of a younger age in some respects requires a fairly complex narrative of human expansion beyond Africa to fit the data precisely.  For example, a population bottleneck in the Out of Africa population, or a second wave of expansion after a first, less numerous one, could make non-Africans look genetically younger, on average, than the time when their earliest ancestors actually left Africa.

The Tweaks To The Story Associated With Archaic Admixture

* Neanderthal Admixture.

All modern humans who are descended from "Out of Africa" modern humans have significant traces of genetic admixture from Neanderthals (estimates have ranged from 2% to 10% with some individual and group variation).  Apart from this Neanderthal admixture sometime after leaving Africa and before reaching Southeast Asia, modern humans are not directly descended from Neanderthals who were the dominant hominin species in Europe before modern humans arrived there over a period from ca. 50,000 years ago to ca. 42,000 years ago.  Neanderthals were replaced in Europe over thousands of years of co-existence with modern humans in Europe (less in any one place) and were extinct or moribund as a species by 29,000 years ago.  The details of the timing and scale and structure of this admixture process are the subject of ongoing research (e.g. described here).

* Denisovan Admixture.

There are also genetic signs of "Denisovan" admixture in aboriginal Australians, and in indigenous Papuans, Melanesians and Polynesians, in addition to their Neanderthal admixture which they share with other non-African modern humans.  These populations have the highest known percentage of archaic admixture in their genomes of any modern human populations, but are still overwhelmingly genetic descendants of early modern human "Out of Africa" migrants.

No one archaeologically classified species of archaic hominin has been definitively identified with the non-Neanderthal archaic hominin admixture seen in the small number of modern humans that have genetic traces of Denisovan admixture that have been identified.  The Denisovan genome is based on bones found in Siberia that are too fragmentary to make an archaic hominin species identification (although the Denisovans were not modern human and not pure-blooded Neanderthals) at a place far removed from where existing modern humans showing signs of this archaic admixture are found.  It is also likely that there are traces within the Denisovan genome of archaic admixture between them and a previous archaic hominin species.

* Archaic admixture in Africa.

Genetic signs of other kinds of archaic hominin admixture with modern humans are present in a couple of groups of Africans (one a pygmy group, and another found more widely adjacent to pygmy or former pygmy territories in tropical West Africa).  These populations still have less archaic admixture than almost all non-Africans.  No particular archaeologically classified species of archaic hominin has been definitively identified with the African non-Denisovan, non-Neanderthal admixture seen in the small number of modern humans in which it has been identified.  The genetic evidence points to relatively recent admixture with relict populations of archaic hominin species that had previously not been known to survive that late in Africa.

The traces of admixture with archaic hominins in Africa are at much lower levels than for Neanderthal and Denisovan admixture in the relevant populations, although the African case presents the strongest evidence yet for a single Y-DNA haplogroup that introgressed from an archaic hominin into modern humans.

* Mostly, archaic admixture DNA is selectively neutral.

Immune system related HLA complex genes appear to have been the main part of the archaic admixture package outside Africa that conferred selective benefit and has left a non-trace level mark in the region's genomes.  The vast majority of archaic admixture in modern human genomes shows statistical frequencies and patterns consistent with the selective neutrality of those genes.  Any archaic admixture sourced genes that were selective fitness reducing would have vanished from the modern human genome long before the time from which our oldest available ancient DNA samples were left behind.

Regional Evolution Compared

Notably, contrary to the original "regional evolution" hypothesis, the main phenotype distinctions (i.e. visible differences) between regional populations of modern humans described as "race" do not have archaic admixture with different archaic hominins as an important source.  Serial founder effects and selective adaptation effects not related to archaic admixture are the source for most of these differences.

In short, while some of the processes associated with a regional evolution hypothesis did take place, they did not have the impact, or involve the kind of narrative, that the original proponents of the hypothesis suggested.

Thursday, August 22, 2013

More Neutrino Data From Daya Bay

Daya Bay's first results were announced in March 2012 and established the unexpectedly large value of the mixing angle θ₁₃, the last of three long-sought neutrino mixing angles. The new results from Daya Bay put the precise number for that mixing angle at sin²(2θ₁₃) = 0.090 ± 0.009.

From the KamLAND experiment in Japan, they already know that the difference, or "split," between two of the three mass states is small. They believe, based on the MINOS experiment at Fermilab, that the third state is at least five times smaller or five times larger. Daya Bay scientists have now measured the magnitude of that mass splitting, |Δm²ee|, to be (2.59±0.20)×10⁻³ eV². The result establishes that the electron neutrino has all three mass states and is consistent with that from muon neutrinos measured by MINOS. Precision measurement of the energy dependence should further the goal of establishing a "hierarchy," or ranking, of the three mass states for each neutrino flavor.

MINOS, and the Super-K and T2K experiments in Japan, have previously determined the complementary effective mass splitting (Δm²μμ) using muon neutrinos. Precise measurement of these two effective mass splittings would allow calculations of the two mass-squared differences (Δm²₃₂ and Δm²₃₁) among the three mass states. KamLAND and solar neutrino experiments have previously measured the mass-squared difference Δm²₂₁ by observing the disappearance of electron antineutrinos from reactors about 100 miles from the detector and the disappearance of neutrinos from the sun.
From this press release.

Neither of the two numbers is far from previous estimates. As of March 2012, the estimates were sin²(2θ₁₃) = 0.092±0.017 and |Δm²₃₁| ≈ |Δm²₃₂| ≡ Δm²atm = (2.43±0.13)×10⁻³ eV².

The precision in the θ₁₃ number is about twice as great as in the estimate from a year and a half ago, and the central value is slightly lower than previously estimated. But, the results are consistent with each other at the one sigma level. The mass splitting estimate is consistent with prior data and similar in precision on a percentage basis.
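One way to see why this measurement helps with the mass hierarchy question (my own back-of-the-envelope check, with an assumed value for the solar splitting): the two mass-squared differences Δm²₃₁ and Δm²₃₂ differ by exactly the solar splitting Δm²₂₁, which is only about 3% of the effective splitting that Daya Bay measures.

```python
# Rough consistency check of the splittings quoted above (values in eV^2).
dm2_ee = 2.59e-3   # Daya Bay effective splitting |dm2_ee|, from the press release
dm2_21 = 7.5e-5    # solar splitting from KamLAND / solar data (assumed value)

# dm2_31 and dm2_32 differ by exactly dm2_21, so the effective splitting
# pins both down to within about 3 percent of each other.
ratio = dm2_21 / dm2_ee
print(ratio)  # ~0.029
```

Resolving which of the two nearly-equal splittings is larger is precisely the "hierarchy" question the press release refers to.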

Wednesday, August 21, 2013

The Twelve Most Important Experimental Data Points In Physics

What are the most important questions we need experiments to answer in physics right now?

Here is my list:

1. Discover or reduce the experimental bound on the maximum rate of neutrinoless double beta decay (I suspect it will not be found and this rules out many BSM theories).
2. Continue to place experimental bounds on proton decay rates (I suspect that it does not happen and this rules out many BSM theories).
3. Determine the CP violating phase of the PMNS matrix that governs neutrino oscillations (anybody's guess, but probably not zero).
4. Determine the absolute masses of the three neutrino mass eigenstates and whether the neutrinos have a "normal", "inverted" or "degenerate" hierarchy of masses (probably "normal" and very small).
5. Refine the precision with which we know the mass of the top quark (relevant to making relationships between Standard Model experimentally measured masses convincing).
6. Refine the precision with which we know the properties of the Higgs boson, particularly its mass (relevant to making relationships between experimentally measured masses convincing) and any possible other deviation from the Standard Model prediction (something that will probably not be found).
7. Complete the second phase of the analysis of the Planck satellite data (relevant to distinguishing between quintessence and the cosmological constant, with the latter more likely supported, and to ruling out possible cosmological inflation theories with the simplest theories currently favored).
8. Continue the search for glueballs, tetraquarks and pentaquarks - all of which are theoretically possible in QCD but not yet definitively observed (everything that is not forbidden in physics is mandatory, so the absence or presence of these phenomena has great importance by adding new QCD rules).
9. Tighten experimental boundaries on the masses of the five lightest quarks (this would allow for the proof or disproof of extensions of Koide's rule for quarks - the precision of these measurements is currently very low).
10.  Conduct more astronomy observations that constrain the possible parameters of dark matter or any alternative theory that explains phenomena attributed to dark matter (dark matter is the single most glaring missing piece of modern physics), including measurements of ordinary "dim matter".
11.  Experiments to reconcile discrepancies between muonic hydrogen and ordinary hydrogen's properties (probably due to imprecision in ordinary hydrogen measurements).
12.  Improve the precision of QCD calculations that form backgrounds for other experimental measurements giving all other measurements at the LHC and elsewhere more statistical power.

LHC, the current physics show horse, is pertinent only to numbers 5, 6, 8 and 9.  Progress on 5, 6 and 9 is likely to be very incremental after the next couple of years.

Experimental searches that I deem less worthy, because I think they are less likely to be fruitful, include:

1.  Searches for supersymmetric particles or additional Higgs bosons (SUSY is increasingly ill motivated).
2.  Searches for additional compact dimensions.
3.  Searches for W' and Z' particles.
4.  Searches for fourth generation Standard Model particles (basically ruled out already).
5.  Direct dark matter detection experiments (the cross-section of interaction is too small to be likely to find anything with conceivable near term experiments as other data favors something akin to sterile neutrino 2 keV warm dark matter).

Alternative suggestions in the comments (with justifications) are welcome.

Top Quarks Are Short Lived

The top quark is the heaviest fundamental particle and is also heavier than any possible hadron made of two or three, or even theoretically four or five, confined quarks. Therefore, it is also the shortest lived particle that exists (this explains why top quarks do not become confined in hadrons - top quarks decay into other kinds of particles before the strong force has time to form a hadron involving a top quark).
Because top quarks are exceptionally heavy - 173.3 GeV, give or take less than a GeV - they have a large amount of energy to impart to their decay products, and this has several consequences; one of these is their quite ephemeral nature. Theoretical calculations allow us to predict that for such an object the lifetime depends on the inverse of the third power of the mass, yielding a very short existence for top quarks - less than a trillionth of a trillionth of a second!
Even imagining such a short time interval is a challenge. Light quanta do not even manage to travel through a proton in 10^-24 seconds. How to picture it? Let's say that if you could travel from here to the center of the Andromeda galaxy in one second (forgetting the limits of special relativity for a moment), a top quark created when you start that quite fast journey would decay before you move by one millimeter!
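The inverse-cube scaling mentioned above can be made concrete with the standard leading-order formula for the top quark's decay width, Γ ≈ GF·mt³/(8π√2), neglecting order-one phase-space corrections, and the lifetime τ = ħ/Γ. A rough numerical sketch (my own arithmetic, not from the paper):

```python
import math

G_F = 1.16637e-5      # Fermi constant, GeV^-2
m_t = 173.3           # top quark mass, GeV
hbar = 6.582e-25      # reduced Planck constant, GeV * s

# Leading-order top decay width; note the mass-cubed dependence.
# Phase-space corrections of order one reduce this toward ~1.4 GeV.
width = G_F * m_t**3 / (8 * math.pi * math.sqrt(2))   # ~1.7 GeV

# Lifetime is the inverse of the width (in natural units).
tau = hbar / width    # ~4e-25 s, comfortably under 10^-24 s
print(width, tau)
```

Either way the lifetime comes out well under 10^-24 seconds, consistent with the "trillionth of a trillionth of a second" figure quoted above.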
A recent paper directly measures the top quark's decay width, from which this lifetime follows, more accurately than any prior study.

Nima Arkani-Hamed On The Hierarchy Problem

Matt Strassler's blog reports on day one of the SEARCH workshop in Stony Brook, New York and in particular on presentations by Raman Sundrum and Nima Arkani-Hamed on the "hierarchy problem" associated with the discovery of a seemingly Standard Model Higgs boson at 125 GeV but no other new particles or phenomena.

I've never been as impressed with the hierarchy problem as many (most?) theoretical physicists seem to be, but I think that Nima Arkani-Hamed has hit the nail on the head in describing in general terms what is going on: "The solution to the hierarchy problem involves a completely novel mechanism." We have not, however, figured out what that mechanism is yet.  Arkani-Hamed focuses on two approaches, neither of which has worked so far:
One is based on trying to apply notions related to self-organized criticality, but he was never able to make much progress. 
Another is based on an idea of Ed Witten’s that perhaps our world is best understood as one that:

  • has two dimensions of space (not the obvious three);

  • is supersymmetric (which seems impossible, but in three dimensions supersymmetry and gravity together imply that particles and their superpartner particles need not have equal masses); and

  • has extremely strong forces.
I am unimpressed with the second approach (Arkani-Hamed hit a pretty fundamental dead end in trying to square this with the Standard Model reality too), and don't know enough about the first to comment (the link above is suggestive of the idea but doesn't apply it to quantum physics). But, I do think that the bottom line - that there is a mechanism or perspective that makes the seeming unnaturalness associated with the hierarchy problem seem natural - is correct.

    In my view, the hierarchy problem is an issue of defective framing, category error, or failure to appreciate a key relationship between parts of the Standard Model, rather than anything that should surprise a physicist who knew the whole story.

    Some Generic Quantum Gravity Predictions

Jean-Philippe Bruneton has made some interesting model-independent predictions in a pre-print regarding the phenomenological laws of quantum gravity.  He argues that:

    (1) There exists a (theoretically) maximal energy density and pressure.

(2) There exists a mass-dependent (theoretically) maximal acceleration given by mc³/ħ if m < mp and by c⁴/(Gm) if m > mp. This is of the order of Milgrom's acceleration a0 for ultra-light particles (m approximately the Hubble scale H₀, in natural units) that could be associated to the Dark Energy fluid. This suggests models in which modified gravity in galaxies is driven by the Dark Energy field, via the maximal acceleration principle. It follows trivially from the existence of a maximal acceleration that there also exists a mass dependent maximal force and power.

    (3) Any system must have a size greater than the Planck length, in the sense that there exists a minimal area (but without implying for quanta a minimal Planckian wavelength in large enough boxes).

(4) Physical systems must obey the Holographic Principle. Holographic bounds can only be saturated by systems with m > mp; systems lying on the "Compton line" l ≈ 1/m are fundamental objects without substructures.
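The ultra-light end of the maximal acceleration bound in (2) can be checked with back-of-the-envelope numbers (my own arithmetic in SI units; the value of H0 is an assumption): for a particle whose mass is of order ħH₀/c², the bound mc³/ħ reduces to roughly cH₀, which is the same order of magnitude as Milgrom's a0.

```python
hbar = 1.0546e-34   # J*s
c = 2.998e8         # m/s
H0 = 2.27e-18       # Hubble constant, 1/s (about 70 km/s/Mpc, assumed)
a0 = 1.2e-10        # Milgrom's acceleration, m/s^2

m = hbar * H0 / c**2       # "Hubble-scale" particle mass, ~3e-69 kg
a_max = m * c**3 / hbar    # the maximal acceleration bound for m < m_Planck

# a_max reduces algebraically to c*H0, within an order of magnitude of a0.
print(a_max, c * H0, a0)
```

The factor-of-a-few gap between cH₀ and a0 is the same numerical coincidence that motivates the Dark-Energy-driven MOND models mentioned in the text.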

Bruneton's conjectures are driven by observations about the relationships of the Planck length, mass, and time which are derived from the speed of light c, the gravitational force constant G, and the reduced Planck's constant h bar, the Schwarzschild solution for the event horizon of a Black Hole in General Relativity reformulated in a generalized and manifestly covariant way, observations about the Kerr-Newman family of black holes, an alternate derivation of the Heisenberg uncertainty principle, the notion of a Compton length, and a few other established relationships.

Bruneton presents his conclusions as heuristic conjectures for any quantum gravity theory that displays a minimum set of commonly hypothesized features, rather than as rigorously proven scientific laws.

    I have omitted some of his more technical observations and consolidated others.

Bruneton acknowledges that these observations may fail in the case of certain theoretically possible exotic "hairy" black holes (while implying that they probably don't exist for some non-obvious reason).  He equivocates on the question of whether Lorentz symmetry violations near the Planck scale are possible, reasoning that an absence of a minimal Planckian wavelength could rescue Lorentz symmetry from quantum gravity effects.

I find his suggestion that there is a maximal energy density and pressure particularly notable because of the remarkable coincidence between the maximum density observed by astronomers in black holes and neutron stars on one hand, and the maximum observed density of an atomic nucleus on the other.

His suggestion that the Planck scale may denote the line between systems that are "fundamental objects without substructures" and "physical systems" is also shrewd.

    Thursday, August 15, 2013

    Climate Link To Bronze Age Collapse Substantiated

A new study (open access) has convincingly linked the dramatic events around 1200 BCE in the Eastern Mediterranean called the "Bronze Age Collapse" - which included the end of the Hittite Empire, the arrival of the Philistines in the Southern Levant, and the fall of many cities at the hands of the "Sea People" - to a severe and sudden three hundred year long drought.

    While other historians have cast doubt on the connection, the paper's rigorous analysis of radiocarbon dates in a consistent manner explains that:
    [T]he [Late Bronze Age] crisis, the Sea People raids, and the onset of the drought period are the same event. Because climatic proxies from Cyprus and coastal Syria are numerically correlated, as the LBA crisis shows an identical calibration range in the island and the mainland, and because this narrative was confirmed by written evidence (correspondences, cuneiform tablets), we can say that the LBA crisis was a complex but single event where political struggle, socioeconomic decline, climatically-induced food-shortage, famines and flows of migrants definitely intermingled. . . . the LBA crisis coincided with the onset of a ca. 300-year drought event 3200 years ago. This climate shift caused crop failures, dearth [sic] and famine, which precipitated or hastened socio-economic crises and forced regional human migrations at the end of the LBA in the Eastern Mediterranean and southwest Asia. The integration of environmental and archaeological data along the Cypriot and Syrian coasts offers a first comprehensive insight into how and why things may have happened during this chaotic period. The 3.2 ka BP event underlines the agro-productive sensitivity of ancient Mediterranean societies to climate and demystifies the crisis at the Late Bronze Age-Iron Age transition.
    The study's fairly narrow geographic range doesn't permit a determination of the extent to which this climate change extended further than the Eastern Mediterranean, although we do know that droughts typically affect very large geographic areas all at once. They are the natural disaster opposites of a tornado that can ravage one house while leaving the one next door virtually untouched.

    Wednesday, August 14, 2013

    Central Black Hole Of Milky Way Has Strong Magnetic Field

    There is a pulsar near the black hole at the center of the Milky Way galaxy in which we live.
    As radio waves travel from the pulsar toward Earth, they encounter magnetic fields generated by clouds of gas getting pulled in by the Milky Way’s central supermassive black hole, called Sagittarius A*. The fields twist the radio waves, which initially oscillated in one direction, into corkscrews.
    By measuring this twisting effect, the researchers determined that the magnetic field around Sagittarius A* is relatively strong. Roughly 150 light-years from the black hole’s core, the field is only one-hundredth of the strength of the magnetic field around Earth. But the researchers estimate that the field likely strengthens by five orders of magnitude just outside Sagittarius A*’s core.
    Reference: R. P. Eatough et al. A strong magnetic field around the supermassive black hole at the center of the galaxy. Nature. August 15, 2013. doi:10.1038/nature12499.

    Tuesday, August 13, 2013

    Follow up on Dravidian origins

I want to pluck one key conclusion out of the previous post on the Dravidian language and the timing of ANI-ASI population genetic admixture in India using LD methods (based on an article that is also widely discussed elsewhere):

    While the Dravidian languages likely have a source that pre-dates Indo-Aryan (i.e. Sanskrit derived) linguistic expansion in India, all of the modern varieties of that language likely derived from an expansion that followed the first wave of the Indo-Aryan invasion of India (at least linguistically).  Essentially all other non-Austro-Asiatic, non-Tibeto-Burman languages of India were wiped out at that time.

    This lack of basal diversity in the Dravidian languages is one of the factors that makes this language family hard to assign to any macro-family of languages.  The Dravidian language family was basically one proto-language around the time of the golden age of the classical Greeks.

Of course, it also doesn't help that the Harappan language is lost and that there are no attestations, even snippets of phrases, from any languages of the Deccan Peninsula prior to about the 7th century BCE.  All of the intermediate steps that gave rise to Dravidian from something else have been erased from the annals of history.

    Monday, August 12, 2013

    Higgs numerology from the LC & P paper and a new blogroll addition

    A Blog Roll Shake Up

I have added the theory blog of Mitchell, who frequently comments on physics posts at this blog, to the blog roll and removed a blog from a leading physics lab.  This is because Mitchell, while making only a few posts a month at his blog, points out interesting theoretical speculations and conjectures that other sources miss, while the lab blog's posts frequently came long after news had broken elsewhere and added little or nothing to the discussion relative to true blogger-run blogs. I run through all the links on this blog at least once a month and never found anything worth posting about at the link that I deleted.

    The LC & P Paper Relationship

In particular, I'd like to note a recent post that culminates a series he has been making about empirical relationships among the fundamental particle masses and the Fermi constant observed in a simple four page paper by Lopez Castro and Pestieau, which he calls the LC & P paper.

    As background, "in the standard model, as set out e.g. in equations 1.4 and 1.14 in this thesis

mW = ½ g v

mZ = ½ √(g′² + g²) v

    where g is the SU(2)L coupling, and g' is the U(1)Y coupling."
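These two relations are easy to check numerically (a sketch using approximate measured values of g, g′ and v, which are inputs I've assumed rather than quoted from the thesis):

```python
import math

v = 246.22     # Higgs vacuum expectation value, GeV
g = 0.6529     # SU(2)_L coupling (approximate measured value, assumed)
gp = 0.3497    # U(1)_Y coupling (approximate measured value, assumed)

m_W = 0.5 * g * v                          # ~80.4 GeV
m_Z = 0.5 * math.sqrt(gp**2 + g**2) * v    # ~91.2 GeV
print(m_W, m_Z)
```

Both outputs land on the measured W and Z masses to within a fraction of a percent.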

The LC & P paper observes that the sum of the squares of the masses of the fundamental bosons in the Standard Model is equal to the sum of the squares of the masses of the fundamental fermions, and that, combined, the two sums are equal to the square of the Higgs vacuum expectation value.  As LC & P explain:
Using mH = (125.6±0.4) GeV and mt = (173.5±0.9) GeV and other masses from the PDG compilation, the left-hand side of Eq. (1) is in full agreement with the right-hand side because v² = 1/(√2 GF) = (246.2 GeV)².
    As Wikipedia explains, the Fermi coupling constant is itself of function of the W boson mass and the weak force coupling constant:
    The strength of Fermi's interaction is given by the Fermi coupling constant GF. The most precise experimental determination of the Fermi constant comes from measurements of the muon lifetime, which is inversely proportional to the square of GF.[8] In modern terms:[4]
GF/(ħc)³ = (√2/8)(g²/mW²) = 1.16637(1)×10⁻⁵ GeV⁻².
    Here g is the coupling constant of the weak interaction, and mW is the mass of the W boson which mediates the decay in question.
In the Standard Model, Fermi's constant is related to the Higgs vacuum expectation value by v = (√2 GF)^(−1/2) ≈ 246.22 GeV.[9]
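Both quoted relations can be verified numerically (my own check, inverting the formulas above with the measured Fermi constant and W mass):

```python
import math

G_F = 1.16637e-5   # Fermi constant, GeV^-2
m_W = 80.385       # W boson mass, GeV

# Invert G_F = (sqrt(2)/8) * g^2 / m_W^2 to recover the weak coupling:
g = math.sqrt(8 * G_F * m_W**2 / math.sqrt(2))   # ~0.65

# Higgs vacuum expectation value from the Fermi constant:
v = (math.sqrt(2) * G_F) ** -0.5                 # ~246.22 GeV
print(g, v)
```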
This coincidence is somewhat less impressive when one recognizes that only three of the twelve fundamental bosons have any rest mass; the photon and the eight varieties of gluons are massless.  Similarly, the sum of all of the fundamental fermion terms in this equation other than the top quark term is smaller than the margin of error in the currently measured square of the top quark mass.

    Disregarding the terms that make only a negligible contribution to the total, the sum of the square of the W boson mass, plus the square of the Z boson mass, plus the square of the Higgs boson mass, plus the square of the top quark mass equals the square of the vacuum expectation value of the Higgs field to within current mass measurement experimental limits.

The masses of the W boson and the Z boson, the numerical value of the weak force coupling constant, and Fermi's constant are all known to precisions ranging from parts per million to a couple of parts in ten thousand, while the Higgs boson and top quark masses are known far less precisely (to roughly half a percent).  If this relationship turns out to be only approximate when greater precision is available, it will be less interesting.

Note also that the Z boson mass is a function of the W boson mass and the Weinberg angle, and that, as noted previously at this blog, empirically the Higgs boson mass comes very close to satisfying 2H = 2W + Z.

Thus, the collective mass scale of the Standard Model fermions (with their internal relative masses following Koide triple relationships with each other in many cases), and the collective mass scale of the Standard Model bosons, can each be understood as a function of the W boson mass, the weak force coupling constant, and the Weinberg angle, to the extent that the empirical 2W + Z = 2H mass relationship is exactly true.
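The 2W + Z = 2H coincidence can be checked numerically (my own arithmetic with current mass values):

```python
m_W, m_Z, m_H = 80.385, 91.1876, 125.6   # GeV

lhs = 2 * m_H         # 251.2 GeV
rhs = 2 * m_W + m_Z   # ~252.0 GeV
print(lhs, rhs, (rhs - lhs) / lhs)   # agreement to about 0.3%
```

The gap is only a few times the current Higgs mass uncertainty, so the relation is plausible but far from established.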

If the sum of squared masses coincidence is also exactly true, then it is also true that:

    "If one divides Eq. (1) by v^2 it yields:

2λ + g²/4 + (g² + g′²)/4 + Σf (yf²/2) = 1, (2)

where the sum runs over all of the fundamental fermion Yukawa couplings,

    λ, g, g′ and yf being, respectively, the effective and renormalized scalar, gauge and Yukawa couplings."

The λ and yf terms are trivially functions of mass via the Higgs mechanism, in which the mass of the Higgs boson is mostly a function of its self-interaction term, and the masses of the fundamental fermions are a function of how strongly they interact with the Higgs field.  But, described in this form, the masses of the fundamental particles come across almost as decay products of the Higgs vev, allocated by a unitary vector whose elements are related trivially to the coupling constants of the electroweak sector.

    Implications for the functional relationship λ(g, g').

If the bosonic and fermionic contributions to the Higgs vev are identical, then 2λ + g²/4 + (g² + g′²)/4 = Σf (yf²/2) exactly or very nearly.

If 2λ + g²/4 + (g² + g′²)/4 = 1/2 (which it would if the contributions were equal), it follows trivially that λ is a function of g and g'.  Specifically, λ = 1/4 - g²/4 - g′²/8. (3)

Since g is approximately 0.65 (note that 0.0072973525376, roughly 1/137, is the electromagnetic fine structure constant, not the SU(2) coupling) and g' is approximately 0.35, (g²)/4 is about 0.107 and (g′²)/8 is about 0.015, so this would imply λ approximately equal to 0.128.  This is very close to 0.13, the Standard Model value implied by the measured Higgs boson mass via λ = mH²/2v².  This is a strong, specific and falsifiable prediction of the conjecture articulated by LC & P, assuming that I have done the algebra correctly, which is similar to the prediction made by other means in this paper.
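Equation (3) can be evaluated numerically using couplings derived from the relations quoted earlier in this post - g = 2mW/v and g′ = g·tan(Weinberg angle), with the on-shell Weinberg angle taken from the W and Z masses - and compared to the tree-level Standard Model value λ = mH²/(2v²). This is my own arithmetic, so treat it with caution:

```python
import math

m_W, m_Z, m_H, v = 80.385, 91.1876, 125.6, 246.22   # GeV

g = 2 * m_W / v                          # SU(2) coupling, ~0.65
sin2_thetaW = 1 - (m_W / m_Z) ** 2       # on-shell Weinberg angle, ~0.22
gp = g * math.sqrt(sin2_thetaW / (1 - sin2_thetaW))   # U(1)_Y coupling, ~0.35

lam_eq3 = 0.25 - g**2 / 4 - gp**2 / 8    # equation (3)
lam_sm = m_H**2 / (2 * v**2)             # tree-level Standard Model value
print(lam_eq3, lam_sm)                   # ~0.128 vs ~0.130
```

On these inputs, equation (3) lands within about 2% of the tree-level Standard Model value of the Higgs self-coupling.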

    If the Standard Model prediction is accurate, the measured value of λ based on all LHC data to date should be equal to 0.08 to 0.20 at one sigma confidence level and 0.04 to 0.40 at the two sigma confidence level, so this value would be considerably less than two sigma from the current best estimate of the measured value and is certainly not ruled out experimentally in these early days.

Under the weaker assumption that (2), but not (3), is accurate, and treating g and g' as fixed, λ is a function of the proportion of the Higgs vev that is attributable to bosonic rather than fermionic contributions; the ratio sum(mB²)/sum(mF²) would have to be quite different from one for λ to depart materially from the value derived above.

While sum(mB²)/sum(mF²) may not equal precisely 1 as hypothesized, given that the error in the boson masses is less than +/-1% and that the error in the fermion masses is less than +/-1%, the possibility that sum(mB²)/sum(mF²) is far off seems implausible, as does the possibility that sum(mB²)+sum(mF²) = v² is off by more than a few percent.

Thus, even if the relations in (1) and (2) are merely approximate rather than exact, and even if this reading is an oversimplification, the implication would seem to be that λ falls within the range of experimentally allowed values, given the masses of the various fundamental particles.  I'm not a professional physicist, of course, and I may be missing a moving part or an obvious problem with this analysis somewhere.  As always, comments and corrections are welcome.

    [Corrected mathematical mistake on August 12, 2013; corrected error in name on August 13, 2013.]

    Other observations

    These quite simple relationships manage to coexist with the fact that the conventional Standard Model relationship between the W boson mass, the Higgs boson mass and the top quark mass also holds to within current experimental limitations.

The symmetry of masses between the bosons and the fermions supports, and perhaps sheds light on, the inference that the symmetry that SUSY puts in explicitly and by hand between bosons and fermions may already exist, for some far less obvious fundamental reason, in the Standard Model (thus rendering SUSY superfluous).

    This analysis also reinforces the notion that strong force interactions have nothing to do with fundamental particle masses, which seem to be entirely a function of electro-weak interactions, even though strong force interactions account for most of the masses of composite quark matter like protons and neutrons.

    UPDATE August 13, 2013:

Torrente-Lujan noted in a September 10, 2012 pre-print that mZ·mt/mH² was 1.0022 +/- 0.007 +/- 0.009. In other words, it was well within the margins of experimental error of unity. He used the following data points:

    mZ = 91.1876 +/- 0.0021 GeV/c2;
    mt = 173.5 +/- 0.6 +/- 0.8 GeV/c2; and
    mH = 125.6 +/- 0.4 +/- 0.5 GeV/c2.

    This estimate of mt is a bit higher than the current error weighted world average but not enough, given the uncertainty, to rule out the relationship.

If the ratio were exactly one and the other values were as used in that preprint, mH would need to be about 125.8 GeV/c2.

It is also interesting, or at least equally amusing, to consider an alternative way to express the closeness of the ratio mZ·mt/mH² to one. If we consider the individual mass ratios mZ/mH and mH/mt, their current experimental values are mZ/mH = 0.726 +/- 0.003; mH/mt = 0.724 +/- 0.005. . . . Somehow the mass of the "lowest" scalar particle is the geometric mean of the highest spin 1 and spin 1/2 masses. 
    Again, a hint of the Higgs boson and field's possible role in balancing fermionic and bosonic Standard Model particles, giving rise to a fermion-boson superishsymmetry without the extra particles.
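The geometric mean observation is easy to verify (my own check, using the mass values quoted above from the preprint):

```python
import math

m_Z, m_t, m_H = 91.1876, 173.5, 125.6   # GeV

geo_mean = math.sqrt(m_Z * m_t)   # ~125.8 GeV, vs. the measured m_H of 125.6
ratio = m_Z * m_t / m_H**2        # within a fraction of a percent of one
print(geo_mean, ratio)
```

With these inputs the geometric mean comes out a couple of tenths of a GeV above the measured Higgs mass, comfortably inside current uncertainties.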

    The rest of the discussion is also interesting.  Don't forget, for example, that g'=g*tan(Weinberg Angle) in the Standard Model.

    Thursday, August 8, 2013

    Explaining South Asian Genetics And Dravidian Linguistic Unity

    Dienekes' Anthropology blog notes a recent paper by Moorjani, et al., estimating the date of admixture of genetically Ancestral North Indian (ANI) people and genetically Ancestral South Indian (ASI) people in South Asia.  This post makes a conjecture about what kind of prehistoric narrative could have given rise to their data that makes more sense than the one provided by the authors.

ANI genes (which by definition tend to be more common in North India than in South India) are closer to those of other West Eurasians (e.g., in Iran, the Caucasus, and Central Asia) than ASI genes are.  This makes total geographic sense.  The easiest way for large numbers of people to migrate from the rest of West Eurasia to India is via Northwestern India.  If people migrate overland from West Eurasia to India, they will get to Northern India before they get to Southern India.  And, you would expect people who are geographically close to each other to be more genetically similar than people who are more distant from each other, unless some geographic circumstance or remarkable and atypical folk migration led to a different result.

But, surprisingly, North Indian populations with higher levels of ANI admixture tend to show more recent dates of ANI-ASI admixture than populations in South India.  This doesn't make a lot of sense without an explanation.  The first substantial ANI-ASI admixture almost certainly had to take place in North India before it did in South India, within the time period (ca. 2200 BCE to 100 CE) estimated by the authors if the data are all lumped together and a single admixture event is assumed.

    What could give rise to this data?

    A sensible explanation requires two things.  First, an understanding of a quirk associated with the methodology they use in cases where there are multiple episodes of admixture between the same two populations that are separated greatly in time.  Second, a historical narrative that could account for the data observed.  This post provides each in turn below the break.

    Tuesday, August 6, 2013

    Mature Harappan Fortified Factory Complex Dispels Peaceful Society Myth

    A newly excavated mature Harappan city is the most heavily fortified ever discovered and appears to have been home to a citadel and multiple internal secured compounds including a factory.  This seems to contradict long standing interpretations of Harappan society as a peaceful one that didn't need fortification outside its frontier trading posts.

    Thursday, August 1, 2013

    Standard Model Still Works

The chart below illustrates how the Standard Model of Particle Physics has survived yet another precision test of its predictions, in this case relating to the relationship between the W boson mass (measured with a precision of 0.03%), the top quark mass (measured with a precision of about 0.6%), and the Higgs boson mass (measured with a precision of about 1%). Any model that predicts a relationship between these three masses that differs from these results in a statistically significant way is wrong.

    From here.

    The graph above shows the mass of the W boson and the mass of the top quark according to the recently determined world average of the most precise measurements to date, with the red ellipse showing the one standard deviation range of those masses given current uncertainty in the world average measurements. This is about a factor of four to five more precise than measurements as of the year 2000.

    For complicated reasons related mostly to interactions between the Higgs boson, the W boson and the top quark in the Standard Model of Particle Physics, there is a linear relationship between the W boson mass and the top quark mass that depends upon the mass of the Higgs boson. 

The blue line cutting diagonally across the chart shows the expected linear relationship of the W boson mass to the top quark mass at the currently estimated Higgs boson mass.  The diagonal line shifts to the right at higher Higgs boson masses, and to the left at lower Higgs boson masses.  But, the location of the blue line isn't terribly sensitive to the Higgs boson mass.  The blue line shown is based on the currently estimated Higgs boson mass, which is known with about 1% precision.  This allows for considerable accuracy because a roughly 20 GeV error in the Higgs boson mass translates to only a roughly 1 GeV shift on the top quark mass axis, so the uncertainty in the measured Higgs boson mass shifts the blue line by only about +/- 0.05 GeV on the top quark axis in the chart above.
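The sensitivity arithmetic in the preceding paragraph, as a sketch (the roughly 1 GeV shift per 20 GeV of Higgs mass error is the scaling stated in the text):

```python
m_H = 125.6                  # GeV, measured Higgs boson mass
m_H_err = 0.01 * m_H         # ~1% precision, about 1.3 GeV
shift_per_GeV = 1.0 / 20.0   # ~1 GeV shift on the top axis per 20 GeV in m_H

line_shift = m_H_err * shift_per_GeV
print(line_shift)            # ~0.06 GeV, i.e. roughly +/- 0.05 GeV as stated
```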

    Since the one standard deviation red ellipse around the current best estimates of the W boson and top quark masses touches the blue line, this means that the relationship between these three experimentally measured physical constants is within one standard deviation of the theoretically expected relationship, confirming the Standard Model.

This relationship also suggests that we should expect more precise future W boson mass measurements to favor the low end of the current error bars (a difference from the central world average value of about 0.03%), while the future measured top quark mass may be just a tad higher than the current best estimate (which is known with a precision of +/- 0.6%), although probably not quite as close to its error bar boundary.  This is because, with greater precision, I expect that the data will more precisely confirm, rather than contradict, the Standard Model prediction.

But, if the world average value of the W boson mass doesn't shift a bit lower as measurements of this value become more precise, this may be a signal of beyond the Standard Model physics.  However, if the average value remained unchanged, the error margins would have to shrink by another factor of five (as much as they did in the last twelve years or so) in order for this discrepancy to meet the threshold for considering an experimental result a new discovery, and this is likely to take more than twelve years given the kinds of experiments that are on the drawing board today.

Note that the top quark mass estimate is actually much less precise on a percentage basis than the W boson mass determination, by a factor of about twenty or so.  It doesn't intuitively look that way on the chart because the X and Y axes have different scales, even though they are both in the same units.