Friday, June 22, 2018

Lensing And Rotation Curve Data Consistent At One Sigma In Distant Galaxy

A new study has compared the amount of gravitational lensing observed in a galaxy 500 million light years away with an estimate of its mass (including dark matter, in a dark matter hypothesis) based upon the velocities at which stars orbit within the galaxy, and found the two measurements of galactic gravitational mass to be consistent within a one sigma margin of error (i.e. one standard deviation).

This is not inconsistent with a general relativity plus dark matter model if the distribution of the dark matter particles is not significantly constrained. But, it is also consistent with any modified gravity model in which the modification to gravity affects photons and ordinary matter in the same way (most such models do, although "massive gravity", which was already ruled out with other data, does not, even in the limit as the graviton mass approaches zero). The paper states the restriction on modified gravity theories as follows:
Our result implies that significant deviations from γ = 1 can only occur on scales greater than ∼2 kiloparsecs, thereby excluding alternative gravity models that produce the observed accelerated expansion of the Universe but predict γ not equal to 1 on galactic scales.
So, it doesn't actually prove that general relativity is correct at the galactic scale relative to all gravity modifications, as the press release about the study claims.
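For concreteness, γ here is the weak-field parameter that measures how much spatial curvature a unit of mass produces. In the usual parametrization (which, as I read it, is the one the paper uses), the weak-field metric takes roughly the form:

ds^2 = -\left(1 + \frac{2\Phi}{c^2}\right)c^2\,dt^2 + \left(1 - \frac{2\gamma\Phi}{c^2}\right)d\ell^2

with γ = 1 in General Relativity. Light bending depends on both terms, while non-relativistic stellar dynamics depends almost entirely on the first, which is why comparing lensing to stellar kinematics measures γ.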

Notably, this paper also contradicts a prior study from July of 2017 by Wang, et al., which concluded that rotation curve and lensing data for galaxies are inconsistent, and which I recap below the fold. The contradictory paper, however, relies upon the NFW dark matter halo shape model, which many prior observations have determined is a poor description of inferred dark matter distributions actually measured (the inferred distributions are closer to "isothermal" instead; see, e.g., sources cited here), even though the NFW halo shape is what a collisionless dark matter particle model naively predicts. Indeed, reaffirming Wang (2017), the paper in Science states in the body text that:
Our current data cannot distinguish between highly concentrated dark matter, a steep stellar mass-to-light gradient or an intermediate solution, but E325 is definitely not consistent with an NFW dark matter halo and constant stellar mass-to-light ratio.
This important finding is unfortunately not mentioned in the abstract to the paper.

The editorially supplied significance statement and abstract from the new article from the journal Science are as follows:
Testing General Relativity on galaxy scales 
Einstein's theory of gravity, General Relativity (GR), has been tested precisely within the Solar System. However, it has been difficult to test GR on the scale of an individual galaxy. Collett et al. exploited a nearby gravitational lens system, in which light from a distant galaxy (the source) is bent by a foreground galaxy (the lens). Mass distribution in the lens was compared with the curvature of space-time around the lens, independently determined from the distorted image of the source. The result supports GR and eliminates some alternative theories of gravity. 
Abstract 
Einstein’s theory of gravity, General Relativity, has been precisely tested on Solar System scales, but the long-range nature of gravity is still poorly constrained. The nearby strong gravitational lens ESO 325-G004 provides a laboratory to probe the weak-field regime of gravity and measure the spatial curvature generated per unit mass, γ. By reconstructing the observed light profile of the lensed arcs and the observed spatially resolved stellar kinematics with a single self-consistent model, we conclude that γ = 0.97 ± 0.09 at 68% confidence. Our result is consistent with the prediction of 1 from General Relativity and provides a strong extragalactic constraint on the weak-field metric of gravity.
Thomas E. Collett, et al., "A precise extragalactic test of General Relativity." 360 (6395) Science 1342-1346 (2018) DOI: 10.1126/science.aao2469 (pay per view). Preprint available here.

Wednesday, June 20, 2018

Measuring The Electromagnetic Force Coupling Constant

Jester at Resonaances has a new post on a new ultraprecision measurement of the electromagnetic force coupling constant, based upon a two-month-old paper that missed headlines when it came out because of the way that it was published and tagged. He notes:
What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving the relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). . . . the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27). One place where precise knowledge of α is essential is in calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have:

[equation displayed in the original post: the perturbative QED expansion of the electron g-factor in powers of α]
Experimentally, ge is one of the most precisely determined quantities in physics, with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! . . . the main theoretical uncertainty for the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb: ae = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 the previous prediction based on the α measurement with rubidium atoms. . . .  
it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model predictions. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to eye carefully the experimental and theoretical values of ae beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! 
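For readers who want the form of the series behind those dots, the equation in Jester's post (which did not survive the copy and paste above) is, schematically and in my own notation, the standard QED expansion of the electron anomalous magnetic moment:

a_e \equiv \frac{g_e - 2}{2} = \frac{\alpha}{2\pi} + C_2\left(\frac{\alpha}{\pi}\right)^2 + C_3\left(\frac{\alpha}{\pi}\right)^3 + C_4\left(\frac{\alpha}{\pi}\right)^4 + C_5\left(\frac{\alpha}{\pi}\right)^5 + \text{(small hadronic and electroweak terms)}

where the first term is Schwinger's one-loop result and the C_n are the higher-loop QED coefficients, now known through the five-loop order mentioned in the quote.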
FWIW, I calculate the discrepancy to be 2.43 sigma, and not 2.5.
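For anyone who wants to check that arithmetic, here is a minimal sketch in Python, using the experimental and theoretical values of ae quoted above and treating their errors as independent:

# Tension between the measured electron g-2 and the Standard Model prediction quoted above.
ae_exp, err_exp = 0.00115965218073, 0.00000000000028  # experiment
ae_th, err_th = 0.00115965218161, 0.00000000000023    # SM prediction using the Berkeley alpha

combined_err = (err_exp**2 + err_th**2) ** 0.5         # independent errors add in quadrature
print((ae_exp - ae_th) / combined_err)                 # about -2.43, i.e. a 2.4 sigma tension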

Jester has a pretty chart that illustrates the discrepancies, but to the uninitiated it does more to obscure than to reveal what is going on. In this case, words, which I will paraphrase below for even greater clarity, work better.

As Jester explains, the direction of the discrepancy is important. 

New physics fixes that treat electrons and muons the same, in general, don't work, because the electron g-2 calls for a negative contribution to the theoretically calculated value, while the muon g-2 needs a positive contribution to the theoretically calculated value.

So, new physics can't solve both discrepancies without violating lepton universality. Lepton universality is tightly constrained by other measurements, which seem to contradict the evidence that it is violated in B meson decays, so a common fix isn't possible without some sort of elaborate theoretical structure that causes lepton universality to be violated sometimes, but not others.

On the other hand, discrepancies in opposite directions, and of different magnitudes, in measurements of two quantities that are extremely analogous to each other in the Standard Model are exactly what you would expect to see if there is theoretical or experimental error in either of the measurements. If you assume that lepton universality is not violated and pool the results for electron g-2 and muon g-2 in a statistically sound way, the discrepancies tend to cancel each other out, producing a global average that is closer to the Standard Model prediction.

More experimental data regarding these measurements is coming soon.
The muon g-2 experiment in Fermilab should soon deliver first results which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six loop QED corrections...
QED v. QCD

It is also worth pausing for just a moment to compare the state of QED (the Standard Model theory of the electromagnetic force) with QCD (the Standard Model theory of the strong force).

The strong force coupling constant discussed in my previous post at this blog is known with a precision of 7 parts per 1000, which may be overestimated and actually be closer to 4 parts per 1000. This is based on NNLO calculations (i.e. three loops).

The electromagnetic force coupling constant, which is proportional to the fine structure constant, α, is known with a precision of 0.2 parts per billion, and the electron g-2 is calculated to five loops. So, we know the electromagnetic coupling constant to a precision roughly 20 to 35 million times greater than we know the strong force coupling constant.

For the sake of completeness, we know the weak force coupling constant (which is proportional to the Fermi coupling constant) to a precision of about 2 parts per million. This is about 10,000 times less precise than the electromagnetic coupling constant, but about 2,000 to 3,500 times more precise than the strong force coupling constant.

We know the gravitational coupling constant (i.e. Newton's constant G), which isn't strictly analogous to the three Standard Model coupling constants since it doesn't run with energy scale in General Relativity and isn't dimensionless, to a precision of about 2 parts per 10,000. This is about 20-35 times more precise than the precision with which we have measured the strong force coupling constant (even incorporating my conjecture that the uncertainty in the strong force coupling constant's global average value is significantly overestimated), about 100 times less precise than our best measurement of the weak force coupling constant, and about a million times less precise than our best measurement of the electromagnetic coupling constant.
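To make the comparisons in the last few paragraphs concrete, here is a quick Python sketch that simply divides the relative precisions quoted in this post (my own rounding, not an authoritative compilation):

# Approximate relative precisions of the coupling constants, as quoted in this post.
strong = 7e-3    # alpha_s: ~7 parts per 1,000 (perhaps as good as 4 parts per 1,000)
weak = 2e-6      # Fermi coupling: ~2 parts per million
em = 2e-10       # fine structure constant: ~0.2 parts per billion
newton_g = 2e-4  # Newton's constant G: ~2 parts per 10,000

print(strong / em)        # ~3.5e7: EM is tens of millions of times more precise than strong
print(weak / em)          # ~1e4: EM is ~10,000 times more precise than weak
print(strong / weak)      # ~3.5e3: weak is thousands of times more precise than strong
print(newton_g / em)      # ~1e6: EM is ~a million times more precise than G
print(strong / newton_g)  # ~35: G is tens of times more precise than strong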

Tuesday, June 19, 2018

Measuring The Strong Force Coupling Constant

The Latest Measurements Of The Strong Force Coupling Constant

The strong force coupling constant has a global average best fit value with a precision of a little less than 0.7%, which is slowly but steadily improving over time, according to the linked preprint released today. After considering the latest available data, the strength of the coupling constant at the Z boson mass momentum transfer scale is as follows:

αs(mZ²) = 0.1183 ± 0.0008.


This means, roughly, that there is a 95% chance that the true value of the strong force coupling constant is between 0.1167 and 0.1199.

Why Does This Matter?

This roughly 27% reduction in the uncertainty of the global average (from plus or minus 0.0011 before the latest measurements to plus or minus 0.0008 now) matters because the strong force coupling constant is a key bottleneck limiting the accuracy of calculations made throughout the Standard Model in phenomena that have significant QCD (i.e. strong force physics) contributions.

The Example Of Muon g-2

For example, 98.5% of the uncertainty involved in calculating the theoretically expected value of the muon magnetic moment, i.e. muon g-2, in the Standard Model comes from uncertainties in the strong force physics part of that calculation, even though the strong force contribution to muon g-2 accounts for only about one part in 16,760 of the final value of the muon g-2 calculation.

The remaining 1.5% of the uncertainty comes from weak force and electromagnetic force physics uncertainties, with 92.6% of that uncertainty, in turn, coming from weak force physics as opposed to electromagnetic force physics uncertainties, even though the weak force component of the calculation has only about a one part per 759,000 impact on the final value of the muon g-2 calculation.

One part of the strong force contribution to the muon g-2 calculation (hadronic light-by-light) has a 25% margin of error. The other part of the strong force contribution to the muon g-2 calculation (hadronic vacuum polarization) has a 0.6% margin of error. The weak force component of the calculation has a 0.7% margin of error. The electromagnetic component of the calculation, in contrast, has a mere 1 part in 1.46 billion margin of error.
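To see why the hadronic pieces dominate the error budget even though they are tiny contributions to the value itself, here is a back-of-the-envelope Python sketch. The component values below are round numbers of roughly the right size that I have plugged in for illustration (my assumptions, not the precise inputs behind the percentages above); the point is simply that independent uncertainties add in quadrature, so the terms with the largest absolute errors dominate:

# Illustrative muon g-2 error budget. Component values are in units of 1e-11 and are
# approximate, for illustration only; the second number is each component's relative error.
components = {
    "QED (electromagnetic)": (116_584_719.0, 7e-10),
    "electroweak": (154.0, 0.007),
    "hadronic vacuum polarization": (6_850.0, 0.006),
    "hadronic light-by-light": (100.0, 0.25),
}

variances = {name: (value * rel_err) ** 2 for name, (value, rel_err) in components.items()}
total_variance = sum(variances.values())

for name, var in variances.items():
    print(f"{name}: {100 * var / total_variance:.2f}% of the total variance")
# With these inputs the two hadronic terms carry essentially all of the uncertainty, even
# though together they are only about 0.006% of the total value of the calculation.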

The overall discrepancy between a 2004 measurement of muon g-2 and the Standard Model prediction was one part in about 443,000. So, a 0.7% imprecision in a Standard Model constant that enters every single QCD calculation seriously impedes the ability of the Standard Model to make more precise predictions.

Implications For Beyond The Standard Model Physics Searches

Imprecision makes it hard to prove or disprove beyond the Standard Model physics theories, because with a low level of precision, both possibilities are often consistent with the Standard Model.

Even "numerology" hypothesizing ways to calculate the strong force coupling constant from first principles is pretty much useless due to this imprecision, because coming up with first principles combinations of numbers that can match a quantity with a margin of error of plus or minus 0.7% is trivially easy to do in myriad ways that are naively sensible, and hence not very meaningful.

Is The Margin Of Error In The Strong Force Coupling Constant Overstated?

Given the relative stability of the global average over the last couple of decades, during which many experiments would have been expected to shift this result more dramatically than they actually have if the stated margin of error were accurate, my intuition is that the stated margin of error is probably greater than the actual difference between the global average value and the true value of this Standard Model constant.

I suspect that the actual precision is closer to plus or minus 0.0004 and is overstated due to conservative estimates of systematic error by experimental high energy physicists. This would put the true "two sigma" error bars, which naively mean that there is a 95% chance that the true value is within them, at 0.1175 to 0.1191.

A review of the relevant experimental data can be found in Section 9.4 of a June 5, 2018 review for the Particle Data Group.

Why Is The Strong Force Coupling Constant So Hard To Measure?

The strong force coupling constant can't be measured directly.

It has to be inferred from physics results involving hadrons (composite particles made up of quarks) and from top quark physics measurements that infer the properties of top quarks from their decay products.

So, once you have your raw experimental data, you then have to fit that data to a set of very long and difficult-to-calculate equations that include the strong force coupling constant as one of the variables, and that contain multiple quantities which have to be approximated using infinite series truncated at a manageable number of terms, in order to convert the experimental data into an estimated value of the strong force coupling constant.

The main holdup on getting a more precise measurement of the strength of the strong force coupling constant is the difficulty involved in calculating what value for the constant is implied by an experimental result, not, for the most part, the precision of the experimental data itself.

For example, the masses and properties of the various hadrons that are observed (i.e. of composite particles made of quarks) have been measured experimentally to vastly greater precision than they can be calculated from first principles, even though a first principles calculation is, in principle, possible with enough computing power and enough time, and almost nobody in the experimental or theoretical physics community thinks that the strong force part of the Standard Model of particle physics is incorrect at a fundamental level.

This math is hard mostly because the infinite series involved in QCD calculations converge much more slowly than those in other parts of quantum physics, so far more terms must be calculated to get comparable levels of precision.
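A crude way to see the difference is to compare the natural size of the n-th order term in each expansion, (α/π)^n for QED versus (αs/π)^n for QCD, ignoring the coefficients (which in real calculations also grow, so this is only a sketch, not an error estimate):

import math

alpha = 1 / 137.036  # electromagnetic fine structure constant
alpha_s = 0.1183     # strong coupling at the Z mass, from the result quoted above

for n in range(1, 6):
    qed_term = (alpha / math.pi) ** n
    qcd_term = (alpha_s / math.pi) ** n
    print(f"order {n}: QED term ~ {qed_term:.1e}, QCD term ~ {qcd_term:.1e}")

# Each additional QED order buys roughly two and a half more decimal digits of precision,
# while each additional QCD order buys less than a digit and a half, so far more orders
# (and vastly more computational work) are needed in QCD for comparable precision.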

The Running Of The Strong Force Coupling Constant

Another probe of beyond the Standard Model physics is to look at how the strength of the strong force coupling constant varies with momentum transfer scale. 

Like all of the Standard Model's empirically determined coupling constants, the strength of the strong force coupling constant varies with energy scale (something called the "running" of the strong force coupling constant), which is why the global average has to be reported in a manner normalized for energy scale.

At low energies in the Standard Model (illustrated in the chart below, from a source linked in the linked blog post), as confirmed experimentally, the strong force coupling constant is close to zero at zero energy, peaks at about 216 MeV, and then gradually decreases as the energy scale increases beyond that point. There is considerable debate over whether it goes to zero, or instead to a finite value close to zero, at zero energy, which is important for a variety of theoretical reasons and has not been definitively resolved.

[chart of the low energy running of the strong force coupling constant from the original post omitted here]
The running of the strong force coupling constant in beyond the Standard Model theories like Supersymmetry (a.k.a. SUSY) is materially different at high energies than it is in the Standard Model (as shown in the chart below, from the linked post, in which the SU(3) line is the inverse of the strength of the strong force coupling constant at increasing energies, shown on a logarithmic scale on the X-axis). The differences might be possible to distinguish with maximal amounts of high energy data from the LHC, if progress can be made in the precision of those measurements, as I explained in the linked blog post from January 28, 2014:

[chart comparing the high energy running of the Standard Model and SUSY coupling constants from the original post omitted here]
The strong force coupling constant, which is 0.1184(7) at the Z boson mass, would be about 0.0969 at 730 GeV and about 0.0872 at 1460 GeV, in the Standard Model and the highest energies at which the strong force coupling constant could be measured at the LHC is probably in this vicinity. 
In contrast, in the MSSM [minimal supersymmetric standard model], we would expect a strong force coupling constant of about 0.1024 at 730 GeV (about 5.7% stronger) and about 0.0952 at 1460 GeV (about 9% stronger). 
Current individual measurements of the strong force coupling constant at energies of about 40 GeV and up (i.e. without global fitting or averaging over multiple experimental measurements at a variety of energy scales), have error bars of plus or minus 5% to 10% of the measured values. But, even a two sigma distinction between the SM prediction and SUSY prediction would require a measurement precision of about twice the percentage difference between the predicted strength under the two models, and a five sigma discovery confidence would require the measurement to be made with 1%-2% precision (with somewhat less precision being tolerable at higher energy scales).
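For readers who want to play with the numbers, here is a minimal one-loop sketch of the Standard Model running in Python (no flavor thresholds and no higher-loop terms, so it lands a few percent below the values quoted above, which come from a more complete calculation; it is meant only to show the logarithmic fall-off):

import math

alpha_s_mz = 0.1183  # strong coupling at the Z boson mass (global average quoted above)
m_z = 91.19          # Z boson mass in GeV
n_f = 5              # active quark flavors, crudely held fixed (real running crosses the top threshold)

def alpha_s(q_gev):
    """One-loop running of the strong coupling from the Z mass up to the scale q_gev."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return 1.0 / (1.0 / alpha_s_mz + b0 * math.log(q_gev**2 / m_z**2))

for q in (730, 1460):
    print(q, round(alpha_s(q), 4))  # roughly 0.091 at 730 GeV and 0.085 at 1460 GeV in this crude approximation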
The same high energy running, plotted without a logarithmic scale and without taking the inverse, looks like this in the range where it has been experimentally measured:

[chart of the measured running of the strong force coupling constant from the original post omitted here]
In a version of SUSY where supersymmetric particles are very heavy (in tens of TeV mass range, for example), however, the discrepancies in the running of the strong force coupling constant between the Standard Model and SUSY crop up sufficiently to be distinguished only at significantly higher energy scales than those predicted for the MSSM version of SUSY.

The paper linked above doesn't discuss the latest measurements of the running of the strong force coupling constant, however. 

So far, the running of the strong force coupling constant is indistinguishable from the Standard Model prediction in all currently available data that I have seen, while monitoring new experimental results regarding this matter fairly closely since my last comprehensive review of it four and a half years ago. Of course, as always, I welcome comments reporting any new data that I have missed regarding this issue.

Friday, June 15, 2018

There Was A Neolithic Revolution In The Amazon


I've discussed this in a post from six and a half years ago (citing some sources from a decade ago), so this isn't exactly breaking news, but a new study (in Spanish or Portuguese) adds to the body of evidence demonstrating that there were once farmers in the pre-Columbian era in the Amazon region. 

For reasons unknown, this civilization collapsed in pre-Columbian times.

Thursday, June 14, 2018

Linguistic Exogamy Is A Thing

Who knew that there were cultures in which it was mandatory to marry someone who didn't speak the same language that you did?

I guess it provides an excuse for the marital miscommunications which are inevitable in any case, and encourages understanding of them. It could also promote language learning necessary for effective regional ties.
Human populations often exhibit contrasting patterns of genetic diversity in the mtDNA and the non-recombining portion of the Y-chromosome (NRY), which reflect sex-specific cultural behaviors and population histories. 
Here, we sequenced 2.3 Mb of the NRY from 284 individuals representing more than 30 Native-American groups from Northwestern Amazonia (NWA) and compared these data to previously generated mtDNA genomes from the same groups, to investigate the impact of cultural practices on genetic diversity and gain new insights about NWA population history. Relevant cultural practices in NWA include postmarital residential rules and linguistic-exogamy, a marital practice in which men are required to marry women speaking a different language. 
We identified 2,969 SNPs in the NRY sequences; only 925 SNPs were previously described. The NRY and mtDNA data showed that males and females experienced different demographic histories: the female effective population size has been larger than that of males through time, and both markers show an increase in lineage diversification beginning ~5,000 years ago, with a male-specific expansion occurring ~3,500 years ago. 
These dates are too recent to be associated with agriculture, therefore we propose that they reflect technological innovations and the expansion of regional trade networks documented in the archaeological evidence. Furthermore, our study provides evidence of the impact of postmarital residence rules and linguistic exogamy on genetic diversity patterns. Finally, we highlight the importance of analyzing high-resolution mtDNA and NRY sequences to reconstruct demographic history, since this can differ considerably between males and females.
Leonardo Arias, et al., "Cultural Innovations influence patterns of genetic diversity in Northwestern Amazonia" BioRxiv (June 14, 2018) doi: https://doi.org/10.1101/347336

See also Luke Fleming, "Linguistic exogamy and language shift in the northwest Amazon" 240 International Journal of the Sociology of Language (May 5, 2016) https://doi.org/10.1515/ijsl-2016-0013
The sociocultural complex of the northwest Amazon is remarkable for its system of linguistic exogamy in which individuals marry outside their language groups. This article illustrates how linguistic exogamy crucially relies upon the alignment of descent and post-marital residence. Native ideologies apprehend languages as the inalienable possessions of patrilineally reckoned descent groups. At the same time, post-marital residence is traditionally patrilocal. This alignment between descent and post-marital residence means that the language which children are normatively expected to produce – the language of their patrilineal descent group – is also the language most widely spoken in the local community, easing acquisition of the target language. 
Indigenous migration to Catholic mission centers in the twentieth century and ongoing migration to urban areas along the Rio Negro in Brazil are reconfiguring the relationship between multilingualism and marriage. With out-migration from patrilineally-based villages, descent and post-marital residence are no longer aligned. Multilingualism is being rapidly eroded, with language shift from minority Eastern Tukanoan languages to Tukano being widespread. Continued practice of descent group exogamy even under such conditions of widespread language shift reflects how the semiotic relationship between language and descent group membership is conceptualized within the system of linguistic exogamy.
And, this 1983 book:
This book is primarily a study of the Bará or Fish People, one of several Tukanoan groups living in the Colombian Northwest Amazon. These people '...form part of an unusual network of intermarrying local communities scattered along the rivers of the region. Each community belongs to one of sixteen different groups that speak sixteen different languages, and marriages must take place between people not only from different communities but with different primary languages. In a network of this sort, which defies the usual label of 'tribe', social identity assumes a distinct and unusual configuration. In this book, Jean Jackson's incisive discussions of Bará marriage, kinship, spatial organization, and other features of the social and geographic landscape show how Tukanoans (as participants in the network are collectively known) conceptualize and tie together their universe of widely scattered communities, and how an individual's identity emerges in terms of relations with others' (back cover). Also discussed in the text are the effects of the Tukanoan's increasing dependency on the national and global political economy and their decreasing sense of self-worth and cultural autonomy.

Wednesday, June 13, 2018

The Ecology Of An Empire

There are many apocryphal quotes attributed to Genghis Khan. And there’s a reason for that — in a single generation he led an obscure group of Mongolian tribes to conquer most of the known world. His armies, and those of his descendants, ravaged lands as distant as Hungary, Iran and China. After the great wars, though, came great peace — the Pax Mongolica. But the scale of death and destruction were such that in the wake of the Mongol conquests great forests grew back from previously cultivated land, changing the very ecosystem of the planet.
From here.

Monday, June 11, 2018

Bad Sportsmanship At Science (the Magazine).

Sabine Hossenfelder's new book, "Lost in Math" (although I like the German title, "The Ugly Universe," better), will arrive on my porch tomorrow afternoon. The review of that book in the magazine Science is unsporting, in bad taste, and does not adhere to the standards of civility we ought to expect in reputable professional science:
Science magazine has a review. For some reason they seem to have decided it was a good idea to have the book reviewed by a postdoc doing exactly the sort of work the book is most critical of. The review starts off by quoting nasty anonymous criticism of Hossenfelder from someone the reviewer knows on Facebook. Ugh.
Via Not Even Wrong (Bee makes a pointed rebuttal to it here).

Bee also took the high road in writing this book, with an approach to her interviews worthy of a human subjects committee's "best practices." The same post has a delightful anecdote recounting the arrival of the finished product at her door:
The cover looks much better in print than it does in the digital version because it has some glossy and some matte parts and, well, at least two seven-year-old girls agree that it’s a pretty book and also mommy’s name is on the cover and a mommy photo in the back, and that’s about as far as their interest went.
I'll reserve a substantive review until I've read the book itself.

An Archaic Hominin Recap

An Aeon article has a decent recap of the state of research on human dispersals out of Africa and archaic hominins. Nothing in it is new to longtime readers of this blog, so I won't add anything here, but it is a good, reasonably up to date starting place for readers who are newer to this area of research.

Thursday, June 7, 2018

Physicists' Dirty Little Secret

It isn't widely known, but as a post by Lubos Motl reminds us, there are actually some papers coming out of the latest high energy physics experiments in which the results do not neatly match their predictions. The latest involves the production of pairs of top quarks, but it actually happens much more frequently than most people familiar with high energy physics know or admit.

Almost all of these cases involve quantum chromodynamics, the Standard Model theory of the strong force. And, the reason this doesn't make headlines is that for all practical purposes, in anything more than the most idealized highly symmetric situations, it is impossible to do calculations that make actual predictions with the actual Standard Model equations of QCD. Instead, you must use one or more of several tools for numerically approximating what the Standard Model equations predict, each of which has its flaws, and some of which aren't fully compatible with each other.

Since each of these numerical approximations has been well validated in their core domains of applicability, there is nothing deeply wrong with any of them. But, none of them are perfect, even in isolation.

But, towards the edges of their domains of applicability, or in circumstances when you have to apply more than one not fully compatible approximation to the same problem to get an answer, both of which are present in the experiment that Motl describes in his recent post, sometimes the results can be wildly off. Also, lots of key QCD constants are only known to precisions of 1% or less, which also doesn't help produce accurate predictions from first principles.

Yet, this isn't terribly notable, because everyone knows that the relevant sources of theoretical prediction error in these situations often far exceed the combined statistical and systematic experimental error.

Hence, in QCD, we often know that the experimental measurements are sound but have deep doubts about the soundness of our predictions, while the situation is the other way around in the other parts of Standard Model physics. QCD is a whole different ball game in Standard Model physics.

In somewhat related news, it turns out that a method of doing calculations in QCD that was widely assumed in conventional wisdom to be the most efficient is actually much less efficient than an old school approach that takes a little more thought. The old school method approaches the theoretical maximum of calculation efficiency, which makes it possible to calculate the infinite series approximations common in quantum mechanics calculations to far more terms, and thus to achieve much greater precision and accuracy with the same amount of calculation work. So, progress is being made in fits and starts on the theoretical front, even though it can be painful to get there.

Caste Genetics

Razib Khan makes some important points about the genetics of caste at Brown Pundits:
[I]t looks like most Indian jatis have been genetically endogamous for ~2,000 years, and, varna groups exhibit some consistent genetic differences.
The level of this endogamy at the jati level is extreme. I personally wonder if some of that is due to non-endogamous individuals being jettisoned from their caste, as it is stunningly hard to maintain that level of endogamy (on the order of 99.9%+ compliance in each generation) for two thousand years in people who live cheek by jowl in the same cities, villages and regions, and have overlapping appearance phenotypes, without such a purifying mechanism.
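As a back-of-the-envelope illustration of why the compliance rate has to be that extreme, here is a toy Python calculation (my own illustration, assuming roughly 25-year generations and treating each out-marriage as simple admixture):

# Toy model: if a fraction r of each generation marries outside the jati, roughly how much
# outside ancestry accumulates over ~2,000 years (~80 generations)?
generations = 2000 // 25

for r in (0.001, 0.005, 0.01):
    outside_ancestry = 1 - (1 - r) ** generations
    print(f"out-marriage rate {r:.1%} per generation -> ~{outside_ancestry:.0%} outside ancestry")

# Even a 1% per generation out-marriage rate produces a very large (~55%) outside contribution
# over two millennia, which is why compliance on the order of 99.9%+ is needed to fit the data.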

There is lots of structure and diversity in the overall population of Pakistan-India, more or less on a northwest-southern cline, as well as by varna. At the varna level, this is mostly due to differing degrees of steppe ancestry, although there is a deeper Iranian Neolithic farmer v. South Asian hunter-gatherer cline that runs in parallel along very similar geographic clines in South Asia.

Bangladeshi people have essentially the same amount of South Asian ancestry as each other (and that ancestry isn't very diverse), together with considerable variation in Tibeto-Burman ancestry. This probably has something to do with frontier founder effects and the way the frontier destabilized traditional social organization.

Pakistani people, genetically, have a mix very similar to that of people from India, despite the fact that, as Muslims, they do not give religious credence to the caste structure of the Hindu religion.

Thursday, May 31, 2018

The MiniBoone Anomaly

A pre-print from May 30, 2018 from the MiniBooNE neutrino oscillation experiment's collaboration reports a big anomaly. Lubos also discusses the announcement, and he draws some conclusions that the paper does not. The abstract is as follows:
The MiniBooNE experiment at Fermilab reports results from an analysis of νe appearance data from 12.84×10^20 protons on target in neutrino mode, an increase of approximately a factor of two over previously reported results. A νe charged-current quasi-elastic event excess of 381.2±85.2 events (4.5σ) is observed in the energy range 200 < E^QE_ν < 1250 MeV. Combining these data with the ν̄e appearance data from 11.27×10^20 protons on target in antineutrino mode, a total νe plus ν̄e charged-current quasi-elastic event excess of 460.5±95.8 events (4.8σ) is observed. If interpreted in a standard two-neutrino oscillation model, νμ→νe, the best oscillation fit to the excess has a probability of 20.1% while the background-only fit has a χ²-probability of 5×10^-7 relative to the best fit. The MiniBooNE data are consistent in energy and magnitude with the excess of events reported by the Liquid Scintillator Neutrino Detector (LSND), and the significance of the combined LSND and MiniBooNE excesses is 6.1σ. All of the major backgrounds are constrained by in-situ event measurements, so non-oscillation explanations would need to invoke new anomalous background processes. Although the data are fit with a standard oscillation model, other models may provide better fits to the data.
Basically, it is seeing more electron neutrinos and electron anti-neutrinos than expected, in a data set roughly twice the size of the one behind its previously reported results.

A similar anomaly was seen at the LSND experiment, contributing to the "reactor anomaly" in neutrino physics, but that anomaly had almost been ruled out by a variety of experiments and by revised estimates of the number of neutrinos that should be produced from a reactor source, when the MiniBooNE result dropped in our lap, arguably supporting the hypothesis behind the reactor anomaly that there is a fourth kind of neutrino out there.

For my money, I expect that the MiniBooNE anomaly will be resolved as well, just like the LSND one was, due to some previously unconsidered source of additional neutrinos, particularly as MiniBooNE seems to be reporting different data than it did earlier in its run.

For example, maybe the beam that is the main source of its neutrinos was operated differently in some manner that the MiniBooNE experiment wasn't told about, causing it to produce more neutrinos (the equivalent, for a reactor-based experiment, of a different fuel mix). Or, perhaps somebody tuned up the neutrino detectors, causing them to capture a larger percentage of events than they had in the past, and this wasn't considered in determining the expected number of events when evaluating this anomaly.

But, for now, this is the hot new mystery in physics land.

UPDATE:

Cherry picking the LSND and MiniBooNE anomalies without also including data from other experiments is statistically unsound, so the 6.1 sigma anomaly quoted isn't meaningful. If you include all available evidence, pro and con, that is known and available, as any valid statistical probability estimate should, the case for a fourth neutrino type that oscillates with the three Standard Model neutrinos is greatly diminished.
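For what it is worth, the quoted 6.1 sigma is roughly what you get by naively pooling only the two positive results. Here is a Python sketch using Stouffer's method of combining independent z-scores and the commonly quoted ~3.8 sigma LSND significance (my inputs and my choice of combination method, not anything taken from the MiniBooNE abstract); it shows both why the combination comes out so high and why folding in null or negative results pulls it back down:

import math

def stouffer(z_scores):
    """Combine independent z-scores (significances in sigma) using Stouffer's method."""
    return sum(z_scores) / math.sqrt(len(z_scores))

print(round(stouffer([4.8, 3.8]), 1))              # ~6.1: the two positive anomalies alone
print(round(stouffer([4.8, 3.8, 0.0]), 1))         # ~5.0: add one hypothetical null result
print(round(stouffer([4.8, 3.8, -2.0, -2.0]), 1))  # ~2.3: add two hypothetical mildly negative results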

A review of the pre-anomaly evidence that there is not a sterile neutrino that oscillates with the three active neutrinos can be found at this prior post at this blog (as can 2014 data from the Daya Bay and JUNO reactor neutrino experiments). Some highlights from that post:
[A]s of 2015, the constraint from Planck data and other data sets was 3.04 ± 0.18 (even in 2014, cosmology ruled out sterile neutrinos). Neff equals 3.046 in the case with the three Standard Model neutrinos, with neutrinos having masses of 10 eV or more not counting in the calculation. So, the four neutrino case is already ruled out at a more than 5.3 sigma level, which meets the threshold for a scientific discovery that there are indeed only three neutrinos with masses of 10 eV or less, ruling out the sterile neutrino hypothesis for a stable sterile neutrino of under 10 eV (a best fit of the potential reactor anomalies predicts a sterile neutrino mass of about 1 eV; see also here). A 2015 pre-print notes that:
The 95% allowed region in parameter space is Neff < 3.7, m_eff,sterile < 0.52 eV from Planck TT + lowP + lensing + BAO. This result has important consequences for the sterile neutrino interpretation of short-baseline anomalies. It has been shown that a sterile neutrino with the large mixing angles required to explain reactor anomalies would thermalize rapidly in the early Universe, yielding ∆Neff = 1. The preferred short-baseline solution then corresponds to ms of about 1 eV and ∆Neff = 1 and is strongly excluded (more than 99% confidence) by the above combination of Planck and BAO data.
The MINOS and MINOS+ experiments rule out a light sterile neutrino, confirming the cosmology result. The abstract of a new pre-print on their results states that:
"A simultaneous fit to the charged-current muon neutrino and neutral-current neutrino energy spectra in the two detectors yields no evidence for sterile neutrino mixing using a 3+1 model. The most stringent limit to date is set on the mixing parameter sin2θ24 for most values of the sterile neutrino mass-splitting Δm241>104 eV2."

The MINOS data explores a range of values for Δm41, the mass difference between the lightest mass state and the sterile neutrino mass state, of 10 meV to 32,000 meV, while the bounds on the sum of the three neutrino masses from cosmology in the currently experimentally preferred normal hierarchy are 60 meV to 110 meV. For example, the MINOS data shows that:
At fixed values of ∆m²41 the data provide limits on the mixing angles θ24 and θ34. At ∆m²41 = 0.5 eV², we find sin²θ24 less than [0.0050 (90% C.L.), 0.0069 (95% C.L.)] and sin²θ34 less than [0.16 (90% C.L.), 0.21 (95% C.L.)].
Weak boson decays long ago ruled out the possibility of a number of weakly interacting neutrinos different than three. The number of weakly interacting neutrinos with masses of less than 45 GeV inferred from Z boson decay is 2.992 ± 0.007 (with a mean value 1.14 sigma from 3), which is consistent with the Standard Model, in a quantity that must have an integer value. The two neutrino and four neutrino hypotheses are ruled out at the 140+ sigma level, when a mere 5 sigma result is considered scientifically definitive.
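Checking those two sigma figures from the numbers just quoted is a trivial Python calculation, but it makes the point about how decisively Z boson decays pin the count at three:

n_nu, err = 2.992, 0.007  # effective number of light, weakly interacting neutrinos from Z boson decays

print(round(abs(3 - n_nu) / err, 2))  # ~1.14 sigma from exactly three neutrinos
print(round(abs(2 - n_nu) / err))     # ~142 sigma from two neutrinos
print(round(abs(4 - n_nu) / err))     # ~144 sigma from four neutrinos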
The Daya Bay and Juno paper abstract states that:
In this work, we show that the high-precision data of the Daya Bay experiment constrain the 3+1 neutrino scenario, imposing upper bounds on the relevant active-sterile mixing angle of sin²2θ14 ≲ 0.06 at 3σ confidence level for the mass-squared difference ∆m²41 in the range (10^-3, 10^-1) eV². The latter bound can be improved by six years of running of the JUNO experiment, sin²2θ14 ≲ 0.016, although in the smaller mass range ∆m²41 ∈ (10^-4, 10^-3) eV². We have also investigated the impact of sterile neutrinos on precision measurements of the standard neutrino oscillation parameters θ13 and ∆m²31 (at Daya Bay and JUNO), θ12 and ∆m²21 (at JUNO), and most importantly, the neutrino mass hierarchy (at JUNO). We find that, except for the obvious situation where ∆m²41 ∼ ∆m²31, sterile states do not affect these measurements substantially.
Further data from Daya Bay in 2017 further disfavors this hypothesis. The abstract of this paper notes that:
The Daya Bay experiment has observed correlations between reactor core fuel evolution and changes in the reactor antineutrino flux and energy spectrum. Four antineutrino detectors in two experimental halls were used to identify 2.2 million inverse beta decays (IBDs) over 1230 days spanning multiple fuel cycles for each of six 2.9 GWth reactor cores at the Daya Bay and Ling Ao nuclear power plants. Using detector data spanning effective 239Pu fission fractions, F239, from 0.25 to 0.35, Daya Bay measures an average IBD yield, σ̄f, of (5.90±0.13)×10^-43 cm²/fission and a fuel-dependent variation in the IBD yield, dσf/dF239, of (1.86±0.18)×10^-43 cm²/fission.
This observation rejects the hypothesis of a constant antineutrino flux as a function of the 239Pu fission fraction at 10 standard deviations. The variation in IBD yield was found to be energy-dependent, rejecting the hypothesis of a constant antineutrino energy spectrum at 5.1 standard deviations. While measurements of the evolution in the IBD spectrum show general agreement with predictions from recent reactor models, the measured evolution in total IBD yield disagrees with recent predictions at 3.1σ. 
This discrepancy indicates that an overall deficit in measured flux with respect to predictions does not result from equal fractional deficits from the primary fission isotopes 235U, 239Pu, 238U, and 241Pu. Based on measured IBD yield variations, yields of (6.17±0.17) and (4.27±0.26)×10^-43 cm²/fission have been determined for the two dominant fission parent isotopes 235U and 239Pu. A 7.8% discrepancy between the observed and predicted 235U yield suggests that this isotope may be the primary contributor to the reactor antineutrino anomaly.