Friday, June 29, 2018

Cosmology Data Almost Rules Out Even Exotic Sterile Neutrino Models

One way to explain the MiniBooNE anomaly (which may or may not be real) is that it is caused by a "sterile neutrino": a neutrino that oscillates with the active neutrino types but otherwise has no Standard Model interactions with other kinds of particles.

This is disfavored by astronomy measurements, including Neff (the effective number of neutrino species). As noted in the MiniBooNE link: "[A]s of 2015, the constraint with Planck data and other data sets was [Neff is equal to] 3.04 ± 0.18." Neff is equal to 3.046 in the case of the three Standard Model neutrinos alone; neutrinos with masses of 10 eV or more do not count in the calculation. A light (under 10 eV) sterile neutrino that oscillated with the active neutrinos would push Neff to 4.05 or so, which is ruled out at the five plus sigma level. Unlike limits on the number of active neutrino types from W and Z boson decays, the strong restrictions from Neff apply even to "right handed" neutrinos that oscillate with other neutrinos but do not interact via the weak force.
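The five-plus sigma figure follows directly from the numbers quoted above (a back-of-the-envelope sketch only; the published constraints use full cosmological likelihoods):

```python
# Back-of-the-envelope tension between the measured Neff and a fully thermalized
# light sterile neutrino, using only the numbers quoted above.
neff_measured = 3.04   # Planck + other data sets (2015)
neff_sigma = 0.18      # quoted 1-sigma uncertainty
neff_sterile = 4.05    # ~3.046 plus one extra fully mixed light species

tension = (neff_sterile - neff_measured) / neff_sigma
print(f"Tension: {tension:.1f} sigma")   # ~5.6 sigma, i.e. "five plus sigma"
```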

Cosmology constraints from Neff measurements can be evaded with a "secret interaction" model, in which sterile neutrinos do not "freeze out" at the time when the astronomical features that are measured to determine Neff come into being. But, new cosmology constraints disfavor even this model, leaving ever more baroque forms of sterile neutrinos as the only way to accommodate the cosmology data.
Several long-standing anomalies from short-baseline neutrino oscillation experiments -- most recently corroborated by new data from MiniBooNE -- have led to the hypothesis that extra, 'sterile', neutrino species might exist. Models of this type face severe cosmological constraints, and several ideas have been proposed to avoid these constraints. Among the most widely discussed ones are models with so-called 'secret interactions' in the neutrino sector. In these models, sterile neutrinos are hypothesized to couple to a new interaction, which dynamically suppresses their production in the early Universe through finite-temperature effects. Recently, it has been argued that the original calculations demonstrating the viability of this scenario need to be refined. Here, we update our earlier results from arXiv:1310.6337 [JCAP 1510 (2015) no.10, 011] accordingly. We confirm that much of the previously open parameter space for secret interactions is in fact ruled out by cosmological constraints on the sum of neutrino masses and on free-streaming of active neutrinos. We then discuss possible modifications of the vanilla scenario that would reconcile sterile neutrinos with cosmology.
Xiaoyong Chu, et al., "Sterile Neutrinos with Secret Interactions -- Cosmological Discord?" (June 27, 2018).

Normal v. Inverted Hierarchy and Absolute Neutrino Masses

Astronomy data can now credibly support a 0.091 eV upper limit on the sum of the three active neutrino masses at a 95% confidence level (i.e. 2 sigma). The "normal" neutrino mass hierarchy is now favored over the "inverted" neutrino mass hierarchy at the 3.5 sigma level by currently available data, according to the linked review article.

A lower bound based upon neutrino oscillation experiments can be put on the sum of the three neutrino masses for each of the two scenarios. For a normal hierarchy, it is 0.0585 ± 0.00048 eV; for an inverted hierarchy, it is 0.0986 ± 0.00085 eV.
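A rough sketch of where those lower bounds come from, using representative oscillation mass splittings (the splitting values below are assumptions typical of global fits, so the output only approximately reproduces the figures quoted above):

```python
from math import sqrt

# Representative mass-squared splittings from oscillation global fits (assumed values):
dm21_sq = 7.5e-5   # eV^2, solar splitting
dm31_sq = 2.5e-3   # eV^2, atmospheric splitting (magnitude)

# Normal hierarchy: lightest state m1 -> 0
m1 = 0.0
m2 = sqrt(m1**2 + dm21_sq)
m3 = sqrt(m1**2 + dm31_sq)
print(f"NH minimum sum: {m1 + m2 + m3:.4f} eV")   # ~0.059 eV

# Inverted hierarchy: lightest state m3 -> 0; the heavy pair sits near sqrt(atmospheric splitting)
m3 = 0.0
m2 = sqrt(dm31_sq)              # heaviest state
m1 = sqrt(m2**2 - dm21_sq)      # slightly lighter partner
print(f"IH minimum sum: {m1 + m2 + m3:.4f} eV")   # ~0.099 eV
```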

Among other things, this puts an upper bound on the lightest neutrino mass eigenvalue of less than 11 meV. There are also strong qualitative arguments for the lightest neutrino mass eigenvalue being non-zero, because massless particles behave differently than particles with any mass whatsoever, although the oscillation data alone do not compel this result. So, we really know the absolute neutrino masses of all three types to a precision of about +/- 0.0055 eV (with a strong bias towards the low end), and almost all of this error is perfectly correlated between the three neutrino masses.

So, even though we don't have a direct measurement to confirm this fact, we actually have a pretty good idea regarding the absolute neutrino masses at this point, as well as their hierarchy.

Neutrinoless double beta decay experiments haven't yet settled the question, but we only need to improve their precision by a factor of about 30 to definitively determine whether neutrino mass is Majorana or Dirac, which is an attainable goal that could be achieved in my lifetime, and probably even within a decade or so. As I've stated repeatedly before at this blog, my prediction is that the neutrino masses will turn out to be Dirac, contrary to the majority position in the theoretical physics community, for a variety of reasons.

Wednesday, June 27, 2018

Another Big Problem For String Theory


A diagram of string theory dualities. Yellow arrows indicate S-duality. Blue arrows indicate T-duality. These dualities may be combined to obtain equivalences of any of the five theories with M-theory (Image and caption via Wikipedia).


A schematic illustration of the relationship between M-theory, the five superstring theories, and eleven-dimensional supergravity. The shaded region represents a family of different physical scenarios that are possible in M-theory. In certain limiting cases corresponding to the cusps, it is natural to describe the physics using one of the six theories labeled there (from the same source).

Another new paper has largely established that one popular class of string theories (Type IIB) has no variations ("vacua") that are consistent with our observation that the universe has a positive cosmological constant (technically speaking, a "de Sitter space"), rendering fifteen years of research attempting to apply this class of string theories to the real world a waste.

There are several other classes of string theories, so all hope for string theory is not lost yet. But, even more concerning for string theorists is the possibility that, because all of the different classes of string theories have deep connections to each other through S-dualities and T-dualities that bind them to a common M-theory, this result might later be proved to hold for all versions of string theory, effectively falsifying the entire string theory enterprise.

Another recent paper, which I blogged previously, made similar claims, and explores the impact of this result on the non-Type IIB string theories, for which the question that seems to have made it possible to falsify Type IIB string theory has received much less study.

This is on top of some other long-standing problems with string theory, such as the non-detection of any experimental evidence for supersymmetry (an element of almost all classes of string theory), the non-detection of more than four dimensions of space-time (also an element of almost all versions of string theory), and the lack of any identified string theory vacua that correspond to the Standard Model and general relativity compatible universe in which we live.

String theorists are hoping that it might be possible to solve the lack of de Sitter vacua with quintessence, i.e. by putting something into the vacuum rather than changing the nature of the vacuum itself.

This also leaves me wondering about questions like: how phenomenologically different from each other are (1) classical general relativity; (2) a spin-0 massless graviton in flat Minkowski space (with all of the properties, apart from spin-2, of a standard spin-2 massless graviton); (3) a spin-2 massless graviton in flat Minkowski space; and (4) a spin-2 massless graviton in de Sitter space? My intuition is that options (2) and (3) are much less different phenomenologically from (1) and (4) than is conventionally assumed.

There is a short anthology of the literature on this new problem at this site, which I reproduce in the pertinent part below the fold (without reformatting).

Monday, June 25, 2018

Bantus Replaced All The Men In Matrilineal Southwestern Angola

It is hard to overstate the extent to which late prehistoric waves of conquest replaced local men with conquerors in the gene pool, and Africa's Bantu expansion was no exception, even in societies that remained matrilineal.
Southwestern Angola is a region characterized by contact between indigenous foragers and incoming food-producers, involving genetic and cultural exchanges between peoples speaking Kx′a, Khoe-Kwadi and Bantu languages. Although present-day Bantu-speakers share a patrilocal residence pattern and matrilineal principle of clan and group membership, a highly stratified social setting divides dominant pastoralists from marginalized groups that subsist on alternative strategies and have previously been thought to have pre-Bantu origins. Here, we compare new high-resolution sequence data from 2.3 Mb of the non-recombining Y chromosome (NRY) from 170 individuals with previously reported mitochondrial genomes (mtDNA), to investigate the population history of seven representative southwestern Angolan groups (Himba, Kuvale, Kwisi, Kwepe, Twa, Tjimba, !Xun) and to study the causes and consequences of sex-biased processes in their genetic variation. We found no clear link between the formerly Kwadi-speaking Kwepe and pre-Bantu eastern African migrants, and no pre-Bantu NRY lineages among Bantu-speaking groups, except for small amounts of "Khoisan" introgression. We therefore propose that irrespective of their subsistence strategies, all Bantu-speaking groups of the area share a male Bantu origin. Additionally, we show that in Bantu-speaking groups, the levels of among-group and between-group variation are higher for mtDNA than for NRY. These results, together with our previous demonstration that the matriclanic systems of southwestern Angolan Bantu groups are genealogically consistent, suggest that matrilineality strongly enhances both female population sizes and interpopulation mtDNA variation.
doi: https://doi.org/10.1101/349878

Still Waiting For Harappan Ancient DNA

A much awaited paper analyzing ancient DNA from Harappan individuals from Rakhigarhi in North India remains unpublished, although it is supposed to appear on bioRxiv any day now, and many rumors about it have circulated. Apparently, only two samples produced autosomal results (one of which was partially contaminated by investigator DNA from Korea), there is Y-DNA L-M20 in the sample, the samples date to ca. 2500 BCE to 2250 BCE or so, and they lack steppe ancestry. The mtDNA (which is passed from mother to children) in the samples is also supposed to be very local in character. Leaks of the results have been seeping out for at least sixteen months.

This is as it should be under the Aryan invasion theory (AIT), which is overwhelmingly supported by linguistic evidence, archaeological evidence, modern genetic evidence and ancient DNA from adjacent regions, because the invasion is not supposed to have taken place until sometime ca. 2000 BCE to 1500 BCE. If there was no steppe ancestry in this area in two random individuals from 2500 BCE to 2250 BCE, and there is now a great deal of such ancestry (which is found in earlier time periods in ancient DNA from the steppe itself), then it had to come from somewhere, and cause and effect aren't difficult to determine.

Based on other data, the samples are widely expected to be a mix of Caucasian/Iranian farmer and South Asian hunter-gatherer DNA.

The delays in the paper's release appear to be political, because AIT is very unpopular with the powerful Hindu Nationalist political forces whose political party is currently part of the governing coalition in India. A newspaper account in India reports this latest rumor about a lack of steppe ancestry in the ancient DNA under the headline that it "junks the Aryan invasion theory" of South Asian genetic origins even though this result does exactly the opposite.

Clovis Was A Culture That Lasted Only 300 Years

The initial dating of Clovis culture objects found in a grave with remains from a Clovis individual in Montana had indicated that the remains were much younger than the artifacts buried with the individual. (Ancient DNA shows that the individual is closely related to modern Native American populations.) An effort to re-date the artifacts and the remains, however, found that they coincide in age as expected; a technical issue hadn't been addressed properly the first time around.
"The human remains and Clovis artifacts can now be confidently shown to be the same age and date between 12,725 to 12,900 years ago," Waters notes. "This is right in the middle to the end of the Clovis time period which ranges from 13,000 to 12,700 years ago.
Most striking to me is a point that is a mere background footnote in this story. The Clovis culture which left distinctive artifacts across North America lasted only 300 years. Also, it began many centuries after the first archaeologically established evidence of modern humans in the New World. And, it generally progressed from East to West.

This culture coincided with the Younger Dryas period of abrupt climate change, which (contrary to overly skeptical statements at Wikipedia) was probably caused by an extraterrestrial impact in North America ca. 12900 years before present. Whether this abrupt climate change event caused the Clovis culture, or ended it, isn't entirely clear.

The source paper for the linked story is:

Lorena Becerra-Valdivia, et al., "Reassessing the chronology of the archaeological site of Anzick." Proceedings of the National Academy of Sciences 201803624 (2018) DOI: 10.1073/pnas.1803624115

Gibbons In Imperial China

Absence of evidence is not necessarily evidence of absence as this recent find of a previously unknown genus and species of gibbon in China from ca. 250 BCE reveals. 
The noblewoman's ape 
Human activities are causing extinctions across a wide array of taxa. Yet there has been no evidence of humans directly causing extinction among our relatives, the apes. Turvey et al. describe a species of gibbon found in a 2200- to 2300-year-old tomb ascribed to a Chinese noblewoman. This previously unknown species was likely widespread, may have persisted until the 18th century, and may be the first ape species to have perished as a direct result of human activities. This discovery may also indicate the existence of unrecognized primate diversity across Asia. 
Abstract 
Although all extant apes are threatened with extinction, there is no evidence for human-caused extinctions of apes or other primates in postglacial continental ecosystems, despite intensive anthropogenic pressures associated with biodiversity loss for millennia in many regions. Here, we report a new, globally extinct genus and species of gibbon, Junzi imperialis, described from a partial cranium and mandible from a ~2200- to 2300-year-old tomb from Shaanxi, China. Junzi can be differentiated from extant hylobatid genera and the extinct Quaternary gibbon Bunopithecus by using univariate and multivariate analyses of craniodental morphometric data. Primates are poorly represented in the Chinese Quaternary fossil record, but historical accounts suggest that China may have contained an endemic ape radiation that has only recently disappeared.
Samuel T. Turvey, et al., "New genus of extinct Holocene gibbon associated with humans in Imperial China" 360 (6395) Science 1346 (June 22, 2018). DOI: 10.1126/science.aao4903

New World Solstice Observation

Gambler's House summarizes what is known about solstice observation in California. 

Almost all pre-Columbian cultures in California kept track of the winter solstice. Some cultures that kept track of the winter solstice also kept track of the summer solstice but this tended to be more common at lower latitudes. This pattern holds for the larger area of what is now the Western United States.

Pre-Modern European Men Averaged Five Foot Seven Inches Tall


As explained here and here, prior to about the year 1800 CE, the average European man was about five feet, seven inches tall. The recent increase, and the previous ups and downs, have largely been attributed to changes in diet.

The modern distribution of male height in Europe by country can be found here, as shown on the map below from the link:

[map: the modern distribution of average male height in Europe by country]

Earlier GWAS studies of height may have been thwarted by population structure. See also here.

Some Of My Personal Conjectures On Physics

This blog is not primarily a theory development blog. I report new scientific developments from credible or notable sources and I comment on those developments. 

But, I would be an empty head indeed if I didn't draw some conclusions from many years of reporting hundreds of new developments a year and reviewing far more papers that I don't comment upon because they add little to the discussion or concern topics that are beyond my usual areas of interest.

I don't devote a lot of effort to touting my personal theories because I'm just a guy who majored in math, took almost enough classes to have a physics major as an undergraduate, and who reads lots of educated layman oriented books and reads lots of original physics research papers. In short, I'm not an expert in the field and my personal opinion doesn't matter much.

But, neo asked about my views in comments to another post, and I answered, so I'm posting this exchange in a somewhat more prominent place as a sort of full disclosure, so that readers can discern my biases. Overall, I am very skeptical of many popular extensions to the Standard Model, although I do have some ideas about "within the Standard Model" explanations of its constants, cosmology, and quantum gravity that are not majority views among scientists.
neo said...
whats ur fav extension of the SM to explain the remaining mysteries? June 19, 2018 at 9:39 PM 
andrew said...
Dirac neutrinos which receive mass in a manner similar to the Higgs mechanism, and quantum gravity effects that explain dark matter and dark gravity a la A. Deur with a single massless graviton. 
The aggregate mass of the fundamental particles squared is equal to Higgs vev squared, and a Koide-like relationship between the fermion masses that arises dynamically via W boson exchanges. June 19, 2018 at 9:47 PM 
andrew said...
Also, once quantum gravity is merged with the SM that changes the running of all of the SM constants subtly, and I think it is quite likely that this subtle tweak will lead to gauge unification. 
Quantum gravity effects will explain the impossible early galaxy problem and the 21cm result that is consistent with there not being any dark matter. 
Other than the graviton, I do not think that there are going to be any non-SM particles of any kind other than possibly more fundamental particles that can only give rise to SM particles and a massless graviton. 
I do not think that any fundamental forces other than the three of the SM and graviton carried by a massless graviton will be discovered. 
I am not strongly committed to the concept that the universe is either strictly causal or strictly local. I am agnostic about whether space-time comes in quanta or is continuous. 
I think that the baryon asymmetry of the universe arises from another universe on the other side of the Big Bang in the time dimension, in which the second law of thermodynamics has the opposite direction. 
I think that cosmological inflation is at best unprovable and is quite likely wrong. June 19, 2018 at 10:00 PM  
andrew said...
I think it is plausible that there is a maximal mass-energy density and that this gives rise to asymptotic safety in gravity. 
I think that gravitational energy is conserved and can be localized. 
I think that there are no primordial black holes, and that any wormholes are not sufficiently large to transfer anything macroscopic. 
I think that the muon g-2 discrepancy is due to a combination of experimental and theoretical error and will disappear, as will evidence for non-PMNS model neutrino oscillations such as sterile neutrinos, and evidence for violations of lepton universality of the kind purportedly seen in B meson decays. 
I think Koide's rule for charged leptons will hold up to at least the ratio of neutrino mass to charged lepton mass level of precision. 
I think that some variant of Koide's rule will fit the relative masses of the charged leptons. I think that the lightest neutrino mass eigenstate is on the order of 1 meV or less and that it has a non-zero mass and that there is a normal mass hierarchy for neutrinos. 
I don't have a strong intuition regarding the ratio of neutrinos to antineutrinos in our universe as a whole. 
It isn't implausible that some version of quark-lepton complementarity could prove to be correct. 
I think that baryon number and lepton number are separately conserved except in sphaleron interactions, and I wouldn't be surprised if sphaleron interactions are found not to exist at all if we could ever devise a means to test that hypothesis. I'm not quite sure what the Noether's theorem implications of that fact are. There are no flavor changing neutral currents, no neutrinoless double beta decay, and no proton decay.
I think that we may be missing a rule or two in QCD that is critical to understanding the spectrum of scalar and axial vector hadrons. I would not be surprised if one of those rules has the effect of prohibiting glue balls. I think that all true hadrons with more than three valence quarks will prove to be wildly unstable although somewhat less unstable "hadron molecules" might be a thing. I think it isn't implausible that we could discover top quark hadrons that are extremely rare except at extremely high energies that are very short lived. 
I expect that there are deep reasons for the gravity equals QCD squared coincidences that we observe. June 19, 2018 at 10:21 PM 
neo said...
no mention of string theory? personally i'm skeptical of both susy and extra dimensions. 
regarding space time i've wondered why if QM has a wave particle duality, space and time couldn't also be both continuous and respect lorentz invariance, and discrete to explain BH entropy. it's a continuous-discrete duality June 20, 2018 at 8:31 AM 
andrew said...
I do not think that there are an integer number of extra dimensions. It isn't entirely implausible to me that the four dimensions of space-time that we observe are emergent rather than fundamental (as is common in loop quantum gravity-like quantum gravity theories), and/or that the dimensionality of space-time could be a fractal quantity that is non-integer (something also suggested by some descriptions of quantum mechanics). 
I think mainstream SUSY with sparticles and extra Higgs bosons is almost surely wrong, but the particular balance of Standard Model constants that exists may reflect some fermion-boson symmetry in nature, because the sum of the square of the masses of the fundamental bosons is very close to the sum of the square of the masses of the fundamental fermions, and these sums may actually be equal at some appropriate running of those masses with energy scale (e.g. perhaps at the Higgs vev scale). 
I have no opinion on a continuous-discrete space-time duality. June 20, 2018 at 1:08 PM 
andrew said...
String theory/M Theory may have some concepts that have some place in a final theory.

But, it has gone far afield, it is amorphous, and its commitment to pursuing versions of string theory that reflect bad hypotheses like SUSY, Majorana neutrinos, dark matter particle theories, quintessence based dark energy theories, and allowing baryon and lepton number violation in pursuit of a pure energy Big Bang, has led investigators in the field to explore corners of it that are particularly unfruitful. June 20, 2018 at 1:13 PM  
neo said...
I have no opinion on a continuous-discrete space-time duality.
interesting,

i'm not aware of any extensive literature on this duality i'm proposing, just an observation that 
1- nature seemingly respects lorentz invariance to a very high degree implying spacetime is continuous 
2- black hole entropy seemingly implies spacetime is discrete 
based on current LHC and other results I'm inclined to agree with you on string theory. obviously data can change this. 
i've learned on physics forums Urs Schreiber and Mitchell Porter aren't fans of loop quantum gravity and even regard it as unphysical and an error since gravity is universal. i'm enchanted with ideas that gravity is a byproduct of QM, or that QM can be extended to give rise to gravity like phenomena June 20, 2018 at 4:33 PM 
Some notes:

* Links are to posts at this blog and/or outside links that are representative of ideas and are not necessarily comprehensive, definitive or best links to the concept.

* "similar to the Higgs mechanism" I recognize the theoretical issues with the Higgs mechanism itself being the source of mass for Dirac neutrinos. One possibility, for example, is that the Higgs mechanism gives mass to charged fermions and that W bosons transfer that to Dirac neutrinos.

* "dark gravity" This is a typo, I meant "dark energy".

* "I think that some variant of Koide's rule will fit the relative masses of the charged leptons." I meant "charged fermions".

B Quark Decays Still Anomalous

For the most part, the Standard Model is well behaved. Indeed, it is so well behaved that physicists who were expecting some surprises at the LHC call it the "nightmare scenario." But, there are still some anomalies out there.

The magnetic moment of the muon (muon g-2) still isn't quite right, and as noted in previous posts this month, new experiments may see if that was just an experimental measurement error.

The size of a hydrogen atom with a muon instead of an electron in its shell isn't quite what we predicted it to be, and scientists are looking into that.

And then, there are b quark decays, which don't seem to be behaving quite as we'd expect them to, seemingly violating "lepton universality", which is strictly observed to high precision in other experiments.
A test case for the bottom-up methodology is the bottom meson, a composite particle made of something called a bottom quark and another known as a lighter quark. Bottom mesons appear to be decaying with the ‘wrong’ probabilities. Experiments in the LHC have measured billions of such decays, and it seems that the probability of getting a muon pair from particular interactions is about three-quarters of the probability of what the Standard Model says it should be. We can’t be totally sure yet that this effect is in strong disagreement with the Standard Model – more data is being analysed to make sure that the result is not due to statistics, or some subtle systematic error.
Some of these anomalies will turn out to be statistical flukes or subtle systematic errors. But, physicists can always hope. The resolutions of these anomalies, however, if they do come from beyond the Standard Model physics, are not obviously resolutions that come from the "usual suspects" of popular beyond the Standard Model physics hypotheses. 

It is also curious that all three of these leading anomalies in particle physics involve muons. I'm not sure what to make of that, but it is worth putting out there. It could be as simple as the fact that the properties of the muon can be predicted with extraordinary precision in the Standard Model, so that even slight effects from all sorts of sources that can usually be ignored could be at fault. The only way we can know for sure is to keep doing science.

Friday, June 22, 2018

Lensing And Rotation Curve Data Consistent At One Sigma In Distant Galaxy

A new study has compared the amount of gravitational lensing observed in a galaxy 500 million light years away, with an estimate of its mass (including dark matter in a dark matter hypothesis) based upon the velocity with which stars rotate around the galaxy, and found the two measurements of galactic gravitational mass to be consistent within a one sigma margin of error (i.e. one standard deviation).

This is not inconsistent with a general relativity plus dark matter model if the distribution of the dark matter particles is not significantly constrained. But, it is also consistent with any modified gravity model in which the modification to gravity affects photons and ordinary matter in the same way (most such models do, although "massive gravity", which was already ruled out with other data, does not even in the limit as graviton mass approaches zero). The paper states the restriction on modified gravity theories as follows:
Our result implies that significant deviations from γ = 1 can only occur on scales greater than ∼2 kiloparsecs, thereby excluding alternative gravity models that produce the observed accelerated expansion of the Universe but predict γ not equal to 1 on galactic scales.
So, it doesn't actually prove that general relativity is correct at the galactic scale relative to modified gravity theories, as the press release on the study claims.

Notably, this paper also contradicts a prior study from July of 2017 by Wang, et al., which concluded that rotation curve and lensing data for galaxies are inconsistent, and which I recap below the fold. The contradictory paper, however, relies upon the NFW dark matter halo shape model, which many prior observations have determined is a poor description of the dark matter distributions actually inferred from data (which look "isothermal" instead; see, e.g., sources cited here), even though the NFW halo shape is what a collisionless dark matter particle model naively predicts. Indeed, reaffirming Wang (2017), the paper in Science states in the body text that:
Our current data cannot distinguish between highly concentrated dark matter, a steep stellar mass-to-light gradient or an intermediate solution, but E325 is definitely not consistent with an NFW dark matter halo and constant stellar mass-to-light ratio.
This important finding is unfortunately not mentioned in the abstract to the paper.

The editorially supplied significance statement and abstract from the new article from the journal Science are as follows:
Testing General Relativity on galaxy scales 
Einstein's theory of gravity, General Relativity (GR), has been tested precisely within the Solar System. However, it has been difficult to test GR on the scale of an individual galaxy. Collett et al. exploited a nearby gravitational lens system, in which light from a distant galaxy (the source) is bent by a foreground galaxy (the lens). Mass distribution in the lens was compared with the curvature of space-time around the lens, independently determined from the distorted image of the source. The result supports GR and eliminates some alternative theories of gravity. 
Abstract 
Einstein’s theory of gravity, General Relativity, has been precisely tested on Solar System scales, but the long-range nature of gravity is still poorly constrained. The nearby strong gravitational lens ESO 325-G004 provides a laboratory to probe the weak-field regime of gravity and measure the spatial curvature generated per unit mass, γ. By reconstructing the observed light profile of the lensed arcs and the observed spatially resolved stellar kinematics with a single self-consistent model, we conclude that γ = 0.97 ± 0.09 at 68% confidence. Our result is consistent with the prediction of 1 from General Relativity and provides a strong extragalactic constraint on the weak-field metric of gravity.
Thomas E. Collett, et al., "A precise extragalactic test of General Relativity." 360 (6395) Science 1342-1346 (2018) DOI: 10.1126/science.aao2469 (pay per view). Preprint available here.
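As a quick sanity check on the "consistent within one sigma" characterization, using only the central value and uncertainty quoted in the abstract (the paper's own statistics are more involved):

```python
gamma = 0.97        # measured spatial curvature per unit mass, from the abstract
sigma = 0.09        # quoted 68% confidence uncertainty
gr_prediction = 1.0 # General Relativity predicts gamma = 1

deviation = abs(gamma - gr_prediction) / sigma
print(f"Deviation from GR: {deviation:.2f} sigma")   # ~0.33 sigma, well within one sigma
```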

Meanwhile, as the Triton Station blog points out, the Radial Acceleration Relation still holds with a single universal constant, for all galaxies, to a precision consistent with all scatter being due to errors in astronomy measurements, while a recent claim to the contrary is fundamentally flawed.

Wednesday, June 20, 2018

Measuring The Electromagnetic Force Coupling Constant

Jester at Resonaances has a new post on a new ultraprecision measurement of the electromagnetic force coupling constant based upon a two month old paper that missed headlines when it came out because of the way that it was published and tagged. He notes:
What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving the relative accuracy of 4×10^-10, that is 0.4 parts per billion (ppb). . . . the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27). One place where precise knowledge of α is essential is in calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have:

[equation from the original post: the QED perturbative expansion of the electron g-factor, ge = 2(1 + α/2π + ...)]

Experimentally, ge is one of the most precisely determined quantities in physics, with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! . . . the main theoretical uncertainty for the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb: ae = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 the previous prediction based on the α measurement with rubidium atoms. . . .  
it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model predictions. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to eye carefully the experimental and theoretical values of ae beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! 
FWIW, I calculate the discrepancy to be 2.43 sigma, and not 2.5.
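For those who want to check the arithmetic, the 2.43 sigma figure follows directly from the experimental and theoretical values of ae quoted above, with the two errors combined in quadrature:

```python
from math import hypot

# Experimental value and Standard Model prediction of ae quoted above;
# the (28) and (23) are the uncertainties in the last two digits.
a_e_exp, err_exp = 0.00115965218073, 0.00000000000028
a_e_theory, err_theory = 0.00115965218161, 0.00000000000023

discrepancy = abs(a_e_theory - a_e_exp) / hypot(err_exp, err_theory)
print(f"Electron g-2 discrepancy: {discrepancy:.2f} sigma")   # ~2.43 sigma
```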

Jester has a pretty chart that illustrates the discrepancies, but for the uninitiated it does more to obscure than to reveal what is going on. Words, which I paraphrase below for even greater clarity, are clearer in this case.

As Jester explains, the direction of the discrepancy is important. 

New physics fixes that treat electrons and muons the same, in general, don't work, because the electron g-2 calls for a negative contribution to the theoretically calculated value, while the muon g-2 needs a positive contribution to the theoretically calculated value.

So, new physics can't solve both discrepancies without violating lepton universality, which is tightly constrained by other measurements (the same measurements that seem to contradict the evidence of violations in B meson decays), so this isn't possible without some sort of elaborate theoretical structure that causes lepton universality to be violated sometimes, but not others.

On the other hand, discrepancies of different magnitudes and in opposite directions, in measurements of two quantities that are extremely analogous to each other in the Standard Model, are exactly what you would expect to see if there is theoretical or experimental error in either of the measurements. If you assume that lepton universality is not violated and pool the results for electron g-2 and muon g-2 in a statistically sound way, the discrepancies tend to cancel each other out, producing a global average that is closer to the Standard Model prediction.

More experimental data regarding these measurements is coming soon.
The muon g-2 experiment in Fermilab should soon deliver first results which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six loop QED corrections...
QED v. QCD

It is also worth pausing for just a moment to compare the state of QED (the Standard Model theory of the electromagnetic force) with QCD (the Standard Model theory of the strong force).

The strong force coupling constant discussed in my previous post at this blog is known with a precision of 7 parts per 1000, which may be overestimated and actually be closer to 4 parts per 1000. This is based on NNLO calculations (i.e. three loops).

The electromagnetic force coupling constant, which is proportional to the fine structure constant α, is known with a precision of 0.2 parts per billion, and the electron g-2 is calculated to five loops. So, we know the electromagnetic coupling constant to a precision roughly 20-35 million times greater than we know the strong force coupling constant.

For the sake of completeness, we know the weak force coupling constant (which is proportional to the Fermi coupling constant) to a precision of about 2 parts per million. This is about 10,000 times less precise than the electromagnetic coupling constant, but about 2000-4000 times more precise than the strong force coupling constant.

We know the gravitational coupling constant (i.e. Newton's constant G), which isn't strictly analogous to the three Standard Model coupling constants since it doesn't run with energy scale in General Relativity and isn't dimensionless, to a precision of about 2 parts per 10,000. This is about 20-40 times more precise than the precision with which we have measured the strong force coupling constant (even incorporating my conjecture that the uncertainty in the strong force coupling constant's global average value is significantly overestimated), about 100 times less precise than our best measurement of the weak force coupling constant, and about a million times less precise than our best measurement of the electromagnetic coupling constant.
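Laying the four precisions quoted above side by side makes the ratios easy to check (a quick sketch using the rounded relative uncertainties given in this post):

```python
# Relative precision of the coupling-constant determinations discussed above.
precision = {
    "electromagnetic (alpha)":   2e-10,   # 0.2 parts per billion
    "weak (Fermi constant)":     2e-6,    # ~2 parts per million
    "gravitational (Newton G)":  2e-4,    # ~2 parts per 10,000
    "strong (alpha_s)":          7e-3,    # ~0.7%, possibly closer to 4e-3
}

em = precision["electromagnetic (alpha)"]
for name, p in precision.items():
    print(f"{name:28s} {p:8.1e}   {p / em:12,.0f} x less precise than EM")
```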

Tuesday, June 19, 2018

Measuring The Strong Force Coupling Constant

The Latest Measurements Of The Strong Force Coupling Constant

The strong force coupling constant has a global average best fit value with a precision of a little less than 0.7%, which is slowly but steadily improving over time, according to the linked preprint released today. After considering the latest available data, the strength of the coupling constant at the Z boson mass momentum transfer scale is as follows:

αs(mZ²) = 0.1183 ± 0.0008.


This means, roughly, that there is a 95% chance that the true value of the strong force coupling constant is between 0.1167 and 0.1199.

Why Does This Matter?

This roughly 27% increase in the precision of the global average, from plus or minus 0.0011 before the latest measurements, matters because the strong force coupling constant is a key bottleneck limiting the accuracy of calculations made throughout the Standard Model for phenomena that have significant QCD (i.e. strong force physics) contributions.

The Example Of Muon g-2

For example, 98.5% of the uncertainty involved in calculating the theoretically expected value of the muon magnetic moment, i.e. muon g-2, in the Standard Model comes from uncertainties in the strong force physics part of that calculation, even though the strong force contribution to muon g-2 accounts for only about one part in 16,760 of the final value of the muon g-2 calculation.

The remaining 1.5% of the uncertainty comes from weak force and electromagnetic force physics uncertainties, with 92.6% of that uncertainty, in turn, coming from weak force physics as opposed to electromagnetic force physics uncertainties, even though the weak force component of the calculation has only about a one part per 759,000 impact on the final value of the muon g-2 calculation.

One part of the strong force contribution to the muon g-2 calculation (hadronic light by light) has a 25% margin of error. The other part of the strong force contribution to the muon g-2 calculation (hadronic vacuum polarization) has a 0.6% margin of error. The weak force component of the calculation has a 0.7% margin of error. The electromagnetic component of the calculation, in contrast, has a mere 1 part in 1.46 billion margin of error.

The overall discrepancy between a 2004 measurement of muon g-2 and the Standard Model prediction was one part in about 443,000. So, a 0.7% imprecision in a Standard Model constant necessary to make every single QCD calculation seriously impedes the ability of the Standard Model to make more precise predictions.

Implications For Beyond The Standard Model Physics Searches

Imprecision makes it hard to prove or disprove beyond the Standard Model physics theories, because with a low level of precision, both possibilities are often consistent with the Standard Model.

Even "numerology" hypothesizing ways to calculate the strong force coupling constant from first principles is pretty much useless due to this imprecision, because coming up with first principles combinations of numbers that can match a quantity with a margin of error of plus or minus 0.7% is trivially easy to do in myriad ways that are naively sensible, and hence not very meaningful.

Is The Margin Of Error In The Strong Force Coupling Constant Overstated?

Given the relative stability of the global average over the last couple of decades or so, during which many experiments would have been expected to have tweaked this result more dramatically than they actually have if the stated margin of error was accurate, my intuition is also that the margin of error that is stated is probably greater than the actual difference between the global average value and the true value of this Standard Model constant.

I suspect that the actual precision is closer to plus or minus 0.0004 and that the stated uncertainty is overstated due to conservative estimates of systematic error by experimental high energy physicists. This would put the true "two sigma" error bars, which naively mean that there is a 95% chance that the true value lies within them, at 0.1175 to 0.1191.

A review of the relevant experimental data can be found in Section 9.4 of a June 5, 2018 review for the Particle Data Group.

Why Is The Strong Force Coupling Constant So Hard To Measure?

The strong force coupling constant can't be measured directly.

It has to be inferred from physics results involving hadrons (composite particles made up of quarks) and from top quark physics measurements that infer the properties of top quarks from their decay products.

So, once you have your raw experimental data, you then have to fit that data to a set of very long and difficult-to-calculate equations that include the strong force coupling constant as one of the variables and that contain multiple quantities which have to be approximated using infinite series truncated to a manageable number of terms, in order to convert the experimental data into an estimated value of the strong force coupling constant.

The main holdup on getting a more precise measurement of the strength of the strong force coupling constant is the difficulty involved in calculating what value for the constant is implied by an experimental result, not, for the most part, the precision of the experimental data itself.

For example, the masses and properties of the various hadrons that are observed (i.e. of composite particles made of quarks) have been measured experimentally to vastly greater precision than they can be calculated from first principles, even though a first principles calculation is, in principle, possible with enough computing power and enough time, and almost nobody in the experimental or theoretical physics community thinks that the strong force part of the Standard Model of particle physics is incorrect at a fundamental level.

This math is hard mostly because the infinite series involved in QCD calculations converge much more slowly than those in other parts of quantum physics, so far more terms must be calculated to achieve comparable levels of precision.
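A crude way to see the difference is to compare how quickly the expansion parameter shrinks for QED versus QCD. This toy comparison ignores the series coefficients (which grow and matter a great deal in practice), so it understates the problem, but it conveys the flavor:

```python
from math import pi

alpha_em = 1 / 137.036   # electromagnetic fine structure constant
alpha_s = 0.1183         # strong coupling constant at the Z boson mass

print("order   (alpha/pi)^n      (alpha_s/pi)^n")
for n in range(1, 6):
    # Successive powers of the expansion parameter: QED terms shrink by ~400x
    # per order, QCD terms by only ~25x, so far more orders are needed in QCD.
    print(f"{n:>5}   {(alpha_em/pi)**n:12.3e}    {(alpha_s/pi)**n:12.3e}")
```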

The Running Of The Strong Force Coupling Constant

Another probe of beyond the Standard Model physics is to look at how the strength of the strong force coupling constant varies with momentum transfer scale. 

Like all Standard Model empirically determined constants, the strength of the strong force coupling constant varies with energy scale, which is why the global average has to be reported in a manner normalized for energy scale, something called the "running" of the strong force coupling constant.

At low energies in the Standard Model (illustrated in the chart below, from a source linked in the linked blog post), as confirmed experimentally, the strong force coupling constant approaches zero (or a value near zero) at zero energy, peaks at about 216 MeV, and then gradually decreases as the energy scale increases beyond that point. There is considerable debate over whether it goes to zero, or instead to a finite value close to zero, at zero energy; the answer is important for a variety of theoretical reasons and has not been definitively resolved.

[chart: the low energy running of the strong force coupling constant in the Standard Model]

The running of the strong force coupling constant in beyond the Standard Model theories like Supersymmetry (a.k.a. SUSY) is materially different at high energies than it is in the Standard Model (as shown in the chart below from the linked post, where the SU(3) line is the inverse of the strong force coupling constant plotted against increasing energies on a logarithmic X-axis), and it might be possible to distinguish the differences with maximal amounts of high energy data from the LHC, if progress can be made in the precision of those measurements, as I explained in the linked blog post from January 28, 2014:

[chart: the inverse running of the gauge couplings with energy scale in the Standard Model and the MSSM]

The strong force coupling constant, which is 0.1184(7) at the Z boson mass, would be about 0.0969 at 730 GeV and about 0.0872 at 1460 GeV, in the Standard Model and the highest energies at which the strong force coupling constant could be measured at the LHC is probably in this vicinity. 
In contrast, in the MSSM [minimal supersymmetric standard model], we would expect a strong force coupling constant of about 0.1024 at 730 GeV (about 5.7% stronger) and about 0.0952 at 1460 GeV (about 9% stronger). 
Current individual measurements of the strong force coupling constant at energies of about 40 GeV and up (i.e. without global fitting or averaging over multiple experimental measurements at a variety of energy scales), have error bars of plus or minus 5% to 10% of the measured values. But, even a two sigma distinction between the SM prediction and SUSY prediction would require a measurement precision of about twice the percentage difference between the predicted strength under the two models, and a five sigma discovery confidence would require the measurement to be made with 1%-2% precision (with somewhat less precision being tolerable at higher energy scales).
The same high energy running, plotted without a logarithmic scale and without taking the inverse, looks like this in the range where it has been experimentally measured:

[chart: measured values of the strong force coupling constant as a function of energy scale]

In a version of SUSY where supersymmetric particles are very heavy (in the tens of TeV mass range, for example), however, the discrepancies in the running of the strong force coupling constant between the Standard Model and SUSY crop up sufficiently to be distinguished only at significantly higher energy scales than those predicted for the MSSM version of SUSY.
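For orientation, here is a minimal one-loop sketch of the Standard Model running discussed above. It holds the number of quark flavors fixed and ignores higher-loop terms and threshold matching, so it only roughly tracks the values quoted from the 2014 post, but it shows the logarithmic behavior involved:

```python
from math import log, pi

def alpha_s_one_loop(q, alpha_mz=0.1183, m_z=91.19, n_f=5):
    """One-loop running of the strong coupling from the Z mass to the scale q (in GeV).

    A sketch only: n_f is held fixed, and higher loops and quark-mass thresholds are ignored.
    """
    b0 = 11 - 2 * n_f / 3                                # one-loop beta function coefficient
    inverse = 1 / alpha_mz + b0 / (2 * pi) * log(q / m_z)
    return 1 / inverse

for q in (91.19, 730, 1460):
    print(f"alpha_s({q:7.2f} GeV) ~ {alpha_s_one_loop(q):.4f}")
```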

The paper linked above doesn't discuss the latest measurements of the running of the strong force coupling constant, however. 

So far, the running of the strong force coupling constant is indistinguishable from the Standard Model prediction in all currently available data that I have seen; I have been monitoring new experimental results on this issue fairly closely since my last comprehensive review of it four and a half years ago. Of course, as always, I welcome comments reporting any new data that I have missed regarding this issue.

Friday, June 15, 2018

There Was A Neolithic Revolution In The Amazon


I've discussed this in a post from six and a half years ago (citing some sources from a decade ago), so this isn't exactly breaking news, but a new study (in Spanish or Portuguese) adds to the body of evidence demonstrating that there were once farmers in the pre-Columbian era in the Amazon region. 

For reasons unknown, this civilization collapsed in pre-Columbian times.

Thursday, June 14, 2018

Linguistic Exogamy Is A Thing

Who knew that there were cultures in which it was mandatory to marry someone who didn't speak the same language that you did?

I guess it provides an excuse for the marital miscommunications which are inevitable in any case, and encourages understanding of them. It could also promote language learning necessary for effective regional ties.
Human populations often exhibit contrasting patterns of genetic diversity in the mtDNA and the non-recombining portion of the Y-chromosome (NRY), which reflect sex-specific cultural behaviors and population histories. 
Here, we sequenced 2.3 Mb of the NRY from 284 individuals representing more than 30 Native-American groups from Northwestern Amazonia (NWA) and compared these data to previously generated mtDNA genomes from the same groups, to investigate the impact of cultural practices on genetic diversity and gain new insights about NWA population history. Relevant cultural practices in NWA include postmarital residential rules and linguistic-exogamy, a marital practice in which men are required to marry women speaking a different language. 
We identified 2,969 SNPs in the NRY sequences; only 925 SNPs were previously described. The NRY and mtDNA data showed that males and females experienced different demographic histories: the female effective population size has been larger than that of males through time, and both markers show an increase in lineage diversification beginning ~5,000 years ago, with a male-specific expansion occurring ~3,500 years ago. 
These dates are too recent to be associated with agriculture, therefore we propose that they reflect technological innovations and the expansion of regional trade networks documented in the archaeological evidence. Furthermore, our study provides evidence of the impact of postmarital residence rules and linguistic exogamy on genetic diversity patterns. Finally, we highlight the importance of analyzing high-resolution mtDNA and NRY sequences to reconstruct demographic history, since this can differ considerably between males and females.
Leonardo Arias, et al., "Cultural Innovations influence patterns of genetic diversity in Northwestern Amazonia" BioRxiv (June 14, 2018) doi: https://doi.org/10.1101/347336

See also Luke Fleming, "Linguistic exogamy and language shift in the northwest Amazon" 240 International Journal of the Sociology of Language (May 5, 2016) https://doi.org/10.1515/ijsl-2016-0013
The sociocultural complex of the northwest Amazon is remarkable for its system of linguistic exogamy in which individuals marry outside their language groups. This article illustrates how linguistic exogamy crucially relies upon the alignment of descent and post-marital residence. Native ideologies apprehend languages as the inalienable possessions of patrilineally reckoned descent groups. At the same time, post-marital residence is traditionally patrilocal. This alignment between descent and post-marital residence means that the language which children are normatively expected to produce – the language of their patrilineal descent group – is also the language most widely spoken in the local community, easing acquisition of the target language. 
Indigenous migration to Catholic mission centers in the twentieth century and ongoing migration to urban areas along the Rio Negro in Brazil are reconfiguring the relationship between multilingualism and marriage. With out-migration from patrilineally-based villages, descent and post-marital residence are no longer aligned. Multilingualism is being rapidly eroded, with language shift from minority Eastern Tukanoan languages to Tukano being widespread. Continued practice of descent group exogamy even under such conditions of widespread language shift reflects how the semiotic relationship between language and descent group membership is conceptualized within the system of linguistic exogamy.
And, this 1983 book:
This book is primarily a study of the Bará or Fish People, one of several Tukanoan groups living in the Colombian Northwest Amazon. These people '...form part of an unusual network of intermarrying local communities scattered along the rivers of the region. Each community belongs to one of sixteen different groups that speak sixteen different languages, and marriages must take place between people not only from different communities but with different primary languages. In a network of this sort, which defies the usual label of 'tribe', social identity assumes a distinct and unusual configuration. In this book, Jean Jackson's incisive discussions of Bará marriage, kinship, spatial organization, and other features of the social and geographic landscape show how Tukanoans (as participants in the network are collectively known) conceptualize and tie together their universe of widely scattered communities, and how an individual's identity emerges in terms of relations with others' (back cover). Also discussed in the text are the effects of the Tukanoan's increasing dependency on the national and global political economy and their decreasing sense of self-worth and cultural autonomy.

Wednesday, June 13, 2018

The Ecology Of An Empire

There are many apocryphal quotes attributed to Genghis Khan. And there’s a reason for that — in a single generation he led an obscure group of Mongolian tribes to conquer most of the known world. His armies, and those of his descendants, ravaged lands as distant as Hungary, Iran and China. After the great wars, though, came great peace — the Pax Mongolica. But the scale of death and destruction were such that in the wake of the Mongol conquests great forests grew back from previously cultivated land, changing the very ecosystem of the planet.
From here.

Monday, June 11, 2018

Bad Sportsmanship At Science (the Magazine).

Sabine Hossenfelder's new book, "Lost in Math" (although I like the German title, "The Ugly Universe" better), will arrive on my porch tomorrow afternoon. The review of that book at the magazine "Science" is unsporting, in bad taste, and does not adhere to the standards of civility we ought to expect in reputable professional science:
Science magazine has a review. For some reason they seem to have decided it was a good idea to have the book reviewed by a postdoc doing exactly the sort of work the book is most critical of. The review starts off by quoting nasty anonymous criticism of Hossenfelder from someone the reviewer knows on Facebook. Ugh.
Via Not Even Wrong (Bee makes a pointed rebuttal to it here).

Bee also took the high road in writing this book with a human subjects committee "best practices" worthy approach to her interviews. The same post has a delightful anecdote recounting the arrival of the finished product at her door:
The cover looks much better in print than it does in the digital version because it has some glossy and some matte parts and, well, at least two seven-year-old girls agree that it’s a pretty book and also mommy’s name is on the cover and a mommy photo in the back, and that’s about as far as their interest went.
I'll reserve a substantive review until I've read the book itself.

An Archaic Hominin Recap

An Aeon article has a decent recap of the state of research on human dispersals out of Africa and archaic hominins. Nothing in it is new to longtime readers of this blog, so I won't add anything here, but it is a good, reasonably up to date starting place for readers who are newer to this area of research.

Thursday, June 7, 2018

Physicists' Dirty Little Secret

It isn't widely known, but as a post by Lubos Motl reminds us, there are actually some papers coming out of the latest high energy physics experiments in which the results do not neatly match the predictions. The latest involves the production of pairs of top quarks, but this actually happens much more frequently than most people familiar with high energy physics know or admit.

Almost all of these cases involve quantum chromodynamics, the Standard Model theory of the strong force. And, the reason this doesn't make headlines is that for all practical purposes, in anything more than the most idealized highly symmetric situations, it is impossible to do calculations that make actual predictions with the actual Standard Model equations of QCD. Instead, you must use one or more of several tools for numerically approximating what the Standard Model equations predict, each of which has its flaws, and some of which aren't fully compatible with each other.

Since each of these numerical approximations has been well validated in its core domain of applicability, there is nothing deeply wrong with any of them. But, none of them are perfect, even in isolation.

But, towards the edges of their domains of applicability, or in circumstances where you have to apply more than one not-fully-compatible approximation to the same problem to get an answer (both of which are present in the experiment that Motl describes in his recent post), the results can sometimes be wildly off. Also, lots of key QCD constants are only known to precisions of 1% or less, which also doesn't help produce accurate predictions from first principles.

Yet, this isn't terribly notable, because everyone knows that the relevant sources of theoretical prediction error in these situations often far exceed the combined statistical and systematic experimental error. 

Hence, in QCD, we often know that the experimental measurements are sound but have deep doubts about the soundness of our predictions, while the situation is the other way around in the other parts of Standard Model physics. QCD is a whole different ball game in Standard Model physics.

In somewhat related news, it turns out that a method of doing calculations in QCD that was widely assumed in conventional wisdom to be the most efficient is actually much less efficient than an old school approach that takes a little more thought. The old school method approaches the theoretical maximum of calculation efficiency, which makes it possible to calculate the infinite series approximations common in quantum mechanics to far more terms, and thus to achieve much greater precision and accuracy with the same amount of calculation work. So, progress is being made in fits and starts on the theoretical front, even though it can be painful to get there.

Caste Genetics

Razib Khan makes some important points about the genetics of caste at Brown Pundits:
[I]t looks like most Indian jatis have been genetically endogamous for ~2,000 years, and, varna groups exhibit some consistent genetic differences.
This degree of endogamy at the jati level is extreme. I personally wonder if some of it is due to non-endogamous individuals being jettisoned from their caste, as it is stunningly hard to maintain that level of endogamy (on the order of 99.9%+ compliance in each generation) for two thousand years, in people who live cheek by jowl in the same cities, villages and regions and have overlapping appearance phenotypes, without such a purifying mechanism.
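A toy calculation shows why the compliance figure has to be that high. Assuming roughly 25-year generations and a constant rate of out-marriage (both assumptions mine, purely for illustration), even a small leak per generation accumulates into a large share of outside ancestry over two millennia:

```python
# Toy model: fraction of ancestry from outside the jati after ~2,000 years of
# near-endogamy, assuming ~80 generations and a constant out-marriage rate.
# Each out-marriage contributes half of that child's ancestry from outside.
generations = 2000 // 25   # ~80 generations (assumed generation length)

for out_marriage_rate in (0.001, 0.01, 0.05):
    outside_ancestry = 1 - (1 - out_marriage_rate / 2) ** generations
    print(f"out-marriage rate {out_marriage_rate:5.1%} per generation "
          f"-> ~{outside_ancestry:.1%} outside ancestry today")
```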

There is lots of structure and diversity in the overall population of Pakistan and India, more or less on a northwest-to-south cline, as well as by varna. At the varna level, this is mostly due to differing degrees of steppe ancestry, although there is a deeper Iranian Neolithic farmer v. South Asian hunter-gatherer cline that runs in parallel along very similar geographic lines in South Asia.

Bangladeshi people have essentially the same amount of South Asian ancestry as each other, which isn't very diverse, and considerable variation in Tibeto-Burman ancestry. This probably has something to do with frontier founder effects and the way that the frontier destabilized traditional social organization.

Pakistani people have a genetic mix very similar to that of people from India, despite the fact that, as Muslims, they do not give religious credence to the caste structure of the Hindu religion.