Monday, March 30, 2020

Anomalous Magnetic Moment Of The Muon Predictions Reviewed

The anomalous magnetic moment of the muon is a measurable quantity that can be predicted, in principle, in the Standard Model of Particle Physics. Most parts of the Standard Model calculation are much more precise than any possible experiment, but the portions that involve the strong force of the Standard Model (i.e. the contribution from quantum chromodynamics or QCD), while only a small part of the total value, account for most of the uncertainty in the prediction.
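For reference, the quantity in question is conventionally defined as the deviation of the muon's gyromagnetic ratio from the value of exactly 2 that a pointlike Dirac particle would have, and the leading QED term (the Schwinger term) accounts for the bulk of its value:

\[
a_\mu \equiv \frac{g_\mu - 2}{2}, \qquad a_\mu^{\text{QED, leading}} = \frac{\alpha}{2\pi} \approx 0.00116.
\]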

The current experimental value is about 3.5 to 3.7 standard deviations away from the best theoretical estimates (the exact tension depends upon which estimate is used), which makes it one of the most important anomalies between Standard Model predictions and experimental observations.

This quantity, since it can be both measured and calculated theoretically to extreme precision, and since it receives contributions from essentially all parts of the Standard Model, is a good global measure of the magnitude of any physics beyond the Standard Model that is not captured by the Standard Model, over a quite broad range of energy scales.

On one hand, the existing anomaly, relative to the estimated margins of error in the calculations and the measurement, tentatively points to some deviation from the Standard Model. On the other hand, the very small deviation on a percentage basis between the predicted value and the measured value (about 2 parts per million) suggests that any deviation from the Standard Model isn't a huge one with much observable impact.
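To put numbers on that percentage claim (using the final Brookhaven measurement and a representative recent theoretical estimate; the exact central values shift a little between compilations, so treat these figures as illustrative rather than definitive):

\[
a_\mu^{\text{exp}} \approx 116\,592\,089 \times 10^{-11}, \qquad a_\mu^{\text{SM}} \approx 116\,591\,810 \times 10^{-11},
\]
\[
\Delta a_\mu \approx 279 \times 10^{-11} \;\Rightarrow\; \frac{\Delta a_\mu}{a_\mu} \approx 2.4 \ \text{parts per million}.
\]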

Efforts to refine the Standard Model theoretical prediction have been an active area of research. The table below (from here) summarizes various theoretical predictions in recent years and compares them to the state of the art experimental measurement announced on January 8, 2004.

The experimental number will receive a more precise update within one to two years from now from the Fermilab E989 experiment. Data has been collected in that experiment since 2018, and the precision of the measurement at the experiment should rival the previous one early this year, with an eventual precision four times that of the previous measurement. A final result was anticipated in 2020 as of January of 2017, but like all major government projects, it is somewhat behind schedule. As of May of 2019, data collection was expected to continue through 2019-2020, with a result announced sometime in late 2020 or in 2021. The J-PARC (E34) experiment is also measuring muon g-2 at greater precision, using a method subject to different kinds of systematic error, but it is probably at least two years behind E989 in producing a publishable measurement.


Many Lattice QCD based estimates from the last few years (including this one from late February of 2020) are consistent with the state of the art experimental measurement, suggesting that the tension is mostly due to an inaccurate theoretical prediction. But many estimates using a different technique called the R-ratio, independent of, or in conjunction with, Lattice QCD methods (including this one from March 2020), can only be reconciled with experiment if the new state of the art measurement is significantly lower than the last one. Lots of background can be found in this PowerPoint presentation from March 2018. The R-ratio seems to be "the ratio of the bare cross section for e⁺e⁻ annihilation into hadrons to the pointlike muon-pair cross section at center-of-mass energy √s."
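In equation form, using one standard set of conventions (the kernel K(s) is a known, slowly varying QED weight function, and the details vary by author, so this is a schematic rather than the specific formula used in the papers linked above), the R-ratio feeds the leading order hadronic vacuum polarization contribution through a dispersion integral:

\[
R(s) = \frac{\sigma(e^+e^- \to \text{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)}, \qquad
a_\mu^{\text{HVP,LO}} = \frac{\alpha^2}{3\pi^2} \int_{m_\pi^2}^{\infty} \frac{ds}{s}\, K(s)\, R(s).
\]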

Conventional wisdom is that the theoretical calculation and experimental result will converge with greater precision in each.

Astronomy Constraints On Neutrino Properties Are Strong And Robust

The astronomy observation based constraints on the number of neutrino types (which also bound the possible number of light sterile neutrino species), and on the sum of the neutrino masses, are robust to a wide range of assumptions about the dark sector of cosmology used in the calculations based upon those observations.

Both support the Standard Model assumption that there are exactly three kinds of neutrinos, and strongly favor the proposition that the neutrino masses follow a "normal" neutrino hierarchy (i.e. one in which the most frequent mass eigenstate of the electron neutrino is less massive than the most frequent mass eigenstate of the muon neutrino, which in turn is less massive than the most frequent mass eigenstate of the tau neutrino).

Thus, unless there is something profoundly wrong at a theoretical level with the way that we infer the number of neutrino types from astronomy observations, the equivocal hints of "sterile neutrino" species that oscillate with the three active neutrino types, which come from more direct nuclear reactor neutrino experiments, are extremely strongly disfavored. But astronomy observations sensitive to neutrinos don't "see" neutrino types of more than about 10 electron volts in mass (which is roughly a factor of two hundred more massive than the most massive of the three Standard Model neutrino mass eigenstates). A sterile neutrino type heavier than that would escape the astronomy observation based constraints, but would also be pretty much outside the mass range suggested by nuclear reactor neutrino experiments. The reactor anomalies, where they have been observed (not consistently with each other), favor a fourth "sterile neutrino" that oscillates with the active neutrinos with a mass on the order of 1 electron volt.

"Active neutrinos" (i.e. those that interact via the weak force at full strength) are ruled out experimentally up to about 62,500,000,000,000 milli-electron volts, which is half the Higgs boson mass: W and Z boson decays rule them out up to about 45,000,000,000,000 milli-electron volts (half the Z boson mass), and an active neutrino with a mass between half the Z boson mass and half the Higgs boson mass would radically disturb the decays of the Higgs boson, to a far greater extent than is consistent with observations of Higgs boson decays to date.

We know to a fair precision the differences in mass between the least massive and second least massive, and between the second least massive and most massive, neutrino mass eigenstates from neutrino oscillation experiments. This sets a floor on the neutrino masses and also ensures that the three neutrino masses are highly correlated.

This, together with the cap on the sum of the three neutrino masses derived from astronomy observation constraints, leaves a quite narrow range of possible masses for the lightest neutrino mass eigenstate, from zero to about 17 milli-electron volts (with 95% confidence), favoring the middle to low end of that range. By comparison, the mass of an electron is approximately 511,000,000 milli-electron volts. This is a very small absolute margin of error, although the relative error is high: the mass of the first neutrino mass eigenstate is known only to within roughly a factor of 100, the second to on the order of 150%, and the third to on the order of 35%.
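A back-of-the-envelope sketch of where numbers like these come from (in Python, with representative oscillation splittings that are my round assumptions rather than values taken from any particular paper): the two measured mass-squared splittings fix the second and third masses as functions of the lightest one, so a cosmological cap on the sum becomes a ceiling on the lightest mass. A cap near 0.09 eV yields a ceiling close to the roughly 17 milli-electron volt figure quoted above, while the 0.116 eV bound from the paper below implies a looser ceiling of just under 30 milli-electron volts.

```python
import math

# Representative oscillation mass-squared splittings for the normal mass
# ordering (round, global-fit style values; the exact numbers are my
# assumptions for illustration, not values taken from any particular paper).
DM21_SQ = 7.4e-5  # eV^2, "solar" splitting
DM31_SQ = 2.5e-3  # eV^2, "atmospheric" splitting

def mass_sum(m1):
    """Sum of the three mass eigenstates (eV) given the lightest mass m1."""
    m2 = math.sqrt(m1 ** 2 + DM21_SQ)
    m3 = math.sqrt(m1 ** 2 + DM31_SQ)
    return m1 + m2 + m3

def max_lightest_mass(cap):
    """Largest m1 (eV) consistent with a cap on the sum, found by bisection."""
    lo, hi = 0.0, cap
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if mass_sum(mid) < cap:
            lo = mid  # still under the cap, so m1 can be larger
        else:
            hi = mid
    return lo

print(f"floor on the sum (m1 = 0): {mass_sum(0.0) * 1e3:.0f} meV")  # ~59 meV
for cap in (0.116, 0.090):  # the paper's bound, and a tighter illustrative one
    print(f"cap {cap:.3f} eV -> m1 < {max_lightest_mass(cap) * 1e3:.0f} meV")
```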

Upper bounds on neutrino mass (1) from direct measurements and (2) from limits on the rate of neutrinoless double beta decay are much less constraining than those derived from astronomy observations.

As it is currently understood, each of the three neutrino flavors can be found in any of three mass eigenstates, but the probability of a neutrino of a given flavor being found in a particular mass eigenstate varies by flavor.
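In the standard notation (this is textbook material, not anything specific to the paper below), each flavor state is a superposition of the three mass eigenstates weighted by elements of the PMNS mixing matrix, and the squared magnitudes of those elements are the flavor-dependent probabilities referred to above:

\[
|\nu_\alpha\rangle = \sum_{i=1}^{3} U_{\alpha i}^{*}\,|\nu_i\rangle, \qquad \alpha \in \{e,\mu,\tau\}, \qquad P(\nu_\alpha \ \text{found as}\ \nu_i) = |U_{\alpha i}|^2.
\]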

A new article and its abstract on the topic are as follows:

Dynamical Dark sectors and Neutrino masses and abundances

We investigate generalized interacting dark matter-dark energy scenarios with a time-dependent coupling parameter, allowing also for freedom in the neutrino sector. The models are tested in the phantom and quintessence regimes, characterized by an equation of state wx < -1 and wx > -1, respectively. Our analyses show that for some of the scenarios the existing tensions on the Hubble constant H0 and on the clustering parameter S8 can be significantly alleviated. The relief is either due to (a) a dark energy component which lies within the phantom region; or (b) the presence of a dynamical coupling in quintessence scenarios. 
The inclusion of massive neutrinos into the interaction schemes affects neither the constraints on the cosmological parameters nor the bounds on the total number of relativistic degrees of freedom Neff, which are found to be extremely robust and, in general, strongly consistent with the canonical prediction Neff = 3.045. The most stringent bound on the total neutrino mass Mν is Mν < 0.116 eV and it is obtained within a quintessence scenario in which the matter mass-energy density is only mildly affected by the presence of a dynamical dark sector coupling.
Comments: 16 pages, 9 tables and 8 figures; comments are welcome
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)
Cite as: arXiv:2003.12552 [astro-ph.CO]
(or arXiv:2003.12552v1 [astro-ph.CO] for this version)
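For context, the equation of state parameter wx in the abstract is the ratio of the dark energy pressure to its energy density; in these conventions a cosmological constant corresponds to wx = -1 exactly, with the phantom and quintessence regimes on either side of it:

\[
w_x = \frac{p_x}{\rho_x}, \qquad w_x < -1 \ \text{(phantom)}, \qquad w_x > -1 \ \text{(quintessence)}.
\]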

Wednesday, March 25, 2020

High Res Modern DNA Sheds Light On Archaic Admixture

New high precision whole genomes from a diverse global sample are removing the Eurocentric biases of earlier, lower precision studies. One thing that this does is reveal more relatively young and less widespread mutations that can shed light on the relationships and admixture histories of modern human populations.

This has also shed light on modern human admixture with archaic hominins, including Neanderthals and Denisovans. Neanderthal admixture either happened early with a small number of individuals, or with a homogeneous group of individuals, or both. Denisovan admixture, in contrast, shows signs of multiple admixture events with regionally distinct genetic populations of Denisovans.
[T]he new high-quality whole-genome analysis of the HGDP dataset is finally published in Science, Insights into human genetic variation and population history from 929 diverse genomes. The HGDP dates back 30 years, so this is the culmination of a long line of research. The authors in this paper looked at nearly 1,000 HGDP individuals at high coverage sequencing, meaning that they had extremely good confidence in their calls of the state of a base across all 3 billion pairs. 
This is in contrast to the ~600,000 markers in the original HGDP analyses from the 2000s, which came from results of a “SNP-array.” A SNP-array of this form focuses on the variation by looking at polymorphic sites (sites which vary in the population). How did they originally determine what was polymorphic? Unfortunately, they had to rely on European populations, so the original analyses were using a quite skewed measuring stick. . . .

The Neanderthals who mixed into early humans were quite homogeneous, or, there were not many of them. The haplotypes are not too numerous, and, they don’t exhibit the patterns you’d expect from different admixtures and source populations. The diversity is too great to be a single individual, but it could have been a small number. The main caution I would suggest here is that Neanderthals seem to often be quite homogeneous on the local scale. 
The Denisovans are a different story. They detect the difference between Oceanian and non-Oceanian Denisovan ancestry (the Oceanian source Denisovans were quite distinct from the Altai Denisovans). But they also detect a different Denisovan contribution to the genomes of the Cambodians. The indigenous people of the Philippines also harbor different Denisovan ancestry (not in this paper). The “Denisovans” seem to have been a cluster of different lineages that persisted in parallel for a long time.
From Razib Khan

The abstract of the new paper is as follows:
INTRODUCTION 
Large-scale human genome-sequencing studies to date have been limited to large, metropolitan populations or to small numbers of genomes from each group. Much remains to be understood about the extent and structure of genetic variation in our species and how it was shaped by past population separations, admixture, adaptation, size changes, and gene flow from archaic human groups. Larger numbers of genome sequences from more diverse populations are needed to answer these questions. 
RATIONALE 
We sequenced 929 genomes from 54 geographically, linguistically, and culturally diverse human populations to an average of 35× coverage and analyzed the variation among them. We also physically resolved the haplotype phase of 26 of these genomes using linked-read sequencing. 
RESULTS 
We identified 67.3 million single-nucleotide polymorphisms, 8.8 million small insertions or deletions (indels), and 40,736 copy number variants. This includes hundreds of thousands of variants that had not been discovered by previous sequencing efforts, but which are common in one or more population. We demonstrate benefits to the study of population relationships of genome sequences over ascertained array genotypes, particularly when involving African populations. 
Populations in central and southern Africa, the Americas, and Oceania each harbor tens to hundreds of thousands of private, common genetic variants. Most of these variants arose as new mutations rather than through archaic introgression, except in Oceanian populations, where many private variants derive from Denisovan admixture. Although some reach high frequencies, no variants are fixed between major geographical regions.
We estimate that the genetic separation between present-day human populations occurred mostly within the past 250,000 years. However, these early separations were gradual in nature and shaped by protracted gene flow. All populations thus still had some genetic contact more recently than this, but there is also evidence that a small fraction of present-day structure might be hundreds of thousands of years older. Most populations expanded in size over the past 10,000 years, but hunter-gatherer groups did not.
The low diversity among the Neanderthal haplotypes segregating in present-day populations indicates that, while more than one Neanderthal individual must have contributed genetic material to modern humans, there was likely only one major episode of admixture. By contrast, Denisovan haplotype diversity reflects a more complex history involving more than one episode of admixture.
We found small amounts of Neanderthal ancestry in West African genomes, most likely reflecting Eurasian admixture. Despite their very low levels or absence of archaic ancestry, African populations share many Neanderthal and Denisovan variants that are absent from Eurasia, reflecting how a larger proportion of the ancestral human variation has been maintained in Africa.

CONCLUSION 
The discovery of substantial amounts of common genetic variation that was previously undocumented and is geographically restricted highlights the continued value of anthropologically informed study designs for understanding human diversity. The genome sequences presented here are a freely available resource with relevance to population history, medical genetics, anthropology, and linguistics.

Friday, March 20, 2020

Ancient Tibetan mtDNA

A new paper based upon ancient Tibetan mtDNA provides some insight into the population history of Tibet in terms of the timing and proportions of matrilineal ancestry. It isn't really groundbreaking, but it is notable, particularly in light of the fact that Tibetans have high altitude genetic adaptations derived from Denisovan introgression, which would be expected to be very old. 
The clarification of the genetic origins of present-day Tibetans requires an understanding of their past relationships with the ancient populations of the Tibetan Plateau. Here we successfully sequenced 67 complete mitochondrial DNA genomes of 5200 to 300-year-old humans from the plateau. Apart from identifying two ancient plateau lineages (haplogroups D4j1b and M9a1a1c1b1a) that suggest some ancestors of Tibetans came from low-altitude areas 4750 to 2775 years ago and that some were involved in an expansion of people moving between high-altitude areas 2125 to 1100 years ago, we found limited evidence of recent matrilineal continuity on the plateau. Furthermore, deep learning of the ancient data incorporated into simulation models with an accuracy of 97% supports that present-day Tibetan matrilineal ancestry received partial contribution rather than complete continuity from the plateau populations of the last 5200 years.

Hat tip: Bernard's Blog.

Tuesday, March 17, 2020

Glueball Physics

Background

Hadrons in the Standard Model of Particle Physics are first order composite particles bound by the strong force, which is primarily governed by the part of the Standard Model known as QCD, for quantum chromodynamics. Most hadrons, such as protons, neutrons, pions and kaons, are made up of quarks bound by the strong force, which is mediated by gluons.

But, in theory, hadrons without quarks, in which gluons are bound to each other in first order composite structures, are also possible; these are called "glueballs". If they exist, glueballs are always bosons, because gluons are vector bosons (i.e. they have spin-1), and composites of integer-spin particles always have integer spins. The rules of QCD allow glueballs to exist, so it should be possible to create them, although they might be ephemeral. The properties of glueballs in a pure state are among the easiest things to calculate analytically with QCD, because the only physical constant involved in the calculation at first order is the strong force coupling constant (currently known to about 1% accuracy), with all other properties determinable strictly from first principles.
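One schematic way to see why (my gloss, not an argument taken from the paper below): in a theory with only gluons, dimensional transmutation implies that every glueball mass is a fixed pure number multiplied by the single dynamically generated QCD scale, so pinning down the strong coupling pins down the whole spectrum. Lattice computations of the pure-gauge spectrum put the lightest (scalar) glueball at roughly:

\[
m_{0^{++}} = c_{0^{++}}\,\Lambda_{\text{QCD}}, \qquad m_{0^{++}} \approx 1.5 \ \text{to} \ 1.7\ \text{GeV (pure gauge lattice estimates)}.
\]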

But no glueballs have ever been directly observed, despite the many billions of particle collider collisions energetic enough, in theory, to create them that have been conducted and carefully measured with state of the art equipment since the 1970s, when their existence was first predicted.

The non-observation of this hadron, which is a basic prediction of QCD, could be because some bosons, including hypothetical glueballs, are tricky to observe: bosons can (to oversimplify) blend into each other and may never appear in a free state (some pions and kaons are also "blended" boson states). In contrast, baryons (fermions with three valence quarks) and most mesons (usually with two valence quarks) that are either pseudo-scalar or vector in their quantum numbers tend to be fairly simple and don't mix with other kinds of bosons.

Scalar and axial vector mesons, as well as a few light pseudo-scalar mesons, tend to mix with other mesons with the same quantum numbers and similar masses. There are observed scalar and axial vector hadrons, but they don't have structures that can be explained with simple valence quark structures like the simpler quark-antiquark mesons or three valence quark baryons.

This analysis is complicated because there are many different hadrons with similar properties in a given mass range, and because, in addition to the "ground" state of every single hadron, there are, in principle, an infinite number of higher mass excited states of the same hadron with similar energies. The f0 meson family of scalar bosons is among them (they have the same quantum numbers, and the parenthetical number in their symbols is their approximate mass in MeV units) and might be explained with a glueball component. This is the possibility which the new paper below tries to evaluate.

The Results Of The New Paper

The study finds that the two lowest mass examples of this family of mesons, f0(500) and f0(980), have essentially no glueball contribution and are ground states with a quarkonium (a quark-antiquark pair, or a blend of such pairs of the same flavor of quark, in a bound structure) composition; that the next two mesons in the family, f0(1370) and f0(1500), are excited states with only small glueball contributions; and that the f0(1710) meson in this family is mostly a glueball, with the mass of a pure 0++ glueball phenomenologically determined to be 1637 ∼ 1698 MeV, and with the balance of its content due to mixing with quarkonium.

The data used to make these determinations comes from the decays of the J/ψ meson, a spin-1 vector meson with a charm quark and an anti-charm quark as valence quarks, an experimentally measured mass of approximately 3,096.92 MeV, and a mean lifetime of about 7.09×10⁻²¹ seconds, into one or more photons plus hadrons, which is a subset of the "radiative decays" (i.e. decays with photons) of this meson.
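Schematically (my notation, not the paper's), the measured process is the decay chain:

\[
J/\psi \to \gamma + f_0, \qquad f_0 \to \pi\pi,\ K\bar{K}, \ldots
\]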

Revisiting the topic of determining fraction of glueball component in f0 mesons via radiative decays of J/ψ

The QCD theory predicts existence of glueballs, but so far all experimental endeavor fails to identify any of such states. To remedy the obvious discrepancy between the QCD which is proved to be a successful theory for strong interaction and the failure of experimentally searching for glueballs, one is tempted to accept the most favorable interpretation that the glueballs mix with regular qq̄ states of the same quantum numbers. The lattice estimate on the masses of the pure 0++ glueballs ranges from 1 to 2 GeV which is the region of the f0 family. Thus many authors suggest that the f0 mesonic series is an ideal place to study possible mixtures of glueballs and qq̄. In this paper following the strategy proposed by Close, Farrar and Li, we try to determine the fraction of glueball components in f0 mesons using the measured branching ratios of J/ψ radiative decays into f0 mesons. Since the pioneer papers by Close et al. more than 20 years elapsed and more accurate measurements have been done by several experimental collaborations, so that based on the new data, it is time to revisit this interesting topic. Our numerical results show that f0(500) and f0(980) are almost pure quark anti-quark bound states, while for f0(1370), f0(1500) and f0(1710), to fit both the experimental data of J/ψ radiative decay and their mass spectra, glueball components are needed. Moreover, the mass of the pure 0++ glueball is phenomenologically determined.
Comments: 14 pages, 1 figure
Subjects: High Energy Physics - Phenomenology (hep-ph)
Cite as: arXiv:2003.07116 [hep-ph]
(or arXiv:2003.07116v1 [hep-ph] for this version)

The results indicate that the experimentally measured masses of f0(500), f0(980) can correspond to the qq̄ states (ground states of (uū+dd̄)/√2 and ss̄), so can be considered as pure bound states of quark-antiquark. Whereas there are no values corresponding to the masses of f0(1370), f0(1500) and f0(1710). It signifies that they cannot be pure qq̄ states and extra components should be involved. To evaluate the fractions of glueballs in those states, diagonalizing the mass matrix whose eigenvalues correspond to the masses of the physical states and the transformation unitary matrix determine the fractions of qq̄ and glueball in the mixtures. . . . After this introduction, in section II we calculate the mass spectra of qq̄ states of 0++ by solving the relativistic Schrödinger equations. The rest of the 0++ qq̄ states would have negligible probability to mix with glueballs because their masses are relatively far apart from that of the pure glueball. Now we propose that the physical states f0(1370), f0(1500) and f0(1710) are mixtures of the second excited states of |N⟩ = |(uū+dd̄)/√2⟩ and |S⟩ = |ss̄⟩ with the glueball state |G⟩. . . . The solutions show that for f0(1370) and f0(1500), the main components are qq̄ bound states, whereas the glueball component in f0(1710) is overwhelmingly dominant. It also suggests the mass of a pure glueball of 0++ to be 1637 ∼ 1698 MeV.
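To make the quoted procedure concrete, here is a minimal sketch (in Python, with made-up illustrative matrix entries; the paper's actual matrix elements and fitted values are not reproduced here) of how diagonalizing a mass matrix in the |N⟩, |S⟩, |G⟩ basis yields both the physical masses and the component fractions:

```python
import numpy as np

# Toy 3x3 mass matrix in the basis |N> = (u ubar + d dbar)/sqrt(2),
# |S> = s sbar, |G> = glueball. Diagonal entries are unmixed masses (GeV);
# off-diagonal entries are mixing strengths. All numbers are illustrative.
M = np.array([
    [1.40, 0.05, 0.10],
    [0.05, 1.50, 0.15],
    [0.10, 0.15, 1.67],
])

masses, U = np.linalg.eigh(M)           # eigenvalues = physical masses
for mass, state in zip(masses, U.T):    # each eigenvector = a physical state
    n_frac, s_frac, g_frac = state ** 2  # squared amplitudes sum to 1
    print(f"mass {mass:.3f} GeV: N={n_frac:.2f} S={s_frac:.2f} G={g_frac:.2f}")
```

The fractions printed for each physical state are the squares of the elements of the unitary transformation matrix, which is the quantity the authors use to report how much glueball resides in each f0 meson.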