
Friday, January 29, 2021

String Theory Is Still Vaporware

String Theory has many problems. 

It has six (or in some accounts seven) space-time dimensions too many. It predicts lots of new supersymmetric particles and forces (even without adding in gravity) that we haven't observed (experimental exclusions for such particles were reviewed at this post). It needs to explain why we don't observe proton decay. It predicts a scalar-tensor theory of gravity that isn't observed. It is happiest in an "anti-de Sitter" universe, which roughly corresponds to a negative cosmological constant, when the measured value of the cosmological constant of General Relativity is positive. Physics journalist Ethan Siegel recaps these issues for Forbes magazine.

Ethan Siegel has an excellent piece on the basic problem with string theory (to the extent it’s well-defined, it has too large a (super)symmetry group and too many dimensions, no explanation for how to recover 4 space-time dimensions and observed symmetry groups).
Here’s why the hope of String Theory, when you get right down to it, is nothing more than a broken box of dreams.

From Not Even Wrong (the longer excerpt that follows is from Siegel's piece itself):

[T]here are a lot of symmetries that you could imagine would be respected, but simply aren’t. You could imagine that the three forces of the Standard Model would unify into a single one at high energies in some sort of grand unification. You could imagine that for every fermion, there would be a corresponding boson, as in supersymmetry. And you can imagine that, at the highest energies of all, even gravity gets unified with the other forces in a so-called “theory of everything.”

That’s the brilliant, beautiful, and compelling idea at the core of String Theory. It also has absolutely no experimental or observational evidence in favor of it at all. . . .  
Many ideas — such as grand unification and supersymmetry — would involve adding new particles and interactions, but would also lead to experimental consequences like proton decay or the presence of additional particles or decay pathways not seen at colliders. The fact that these predictions haven’t panned out helps us place constraints on both of these ideas. 

String theory, though, goes many steps farther than either grand unification or what we know as supersymmetry does.

For grand unification, the idea is to take the three forces in the Standard Model and embed them into a larger, more symmetric structure. Instead of the particles we know with the interactions we know — with multiple disjoint frameworks corresponding to each of the forces — grand unification tries to fit the Standard Model inside a larger structure.

This might just sound like words to you, but the group theory representation of the Standard Model is SU(3) × SU(2) × U(1), where the SU(3) is the color (strong force) part, the SU(2) is the weak (left-handed) part, and the U(1) is the electromagnetic part. If you want to unify these forces into a larger framework, you’ll need a bigger group.

You can take the route of Georgi-Glashow [SU(5)] unification, which predicts new, super-heavy bosons that couple to both quarks and leptons simultaneously. You can take the route of Pati-Salam [SU(4) × SU(2) × SU(2)] unification, which adds in the right-handed particles, making the Universe left-right symmetric instead of preferring a left-handed neutrino. Or you can go even larger: to SU(6), SO(10), or still larger groups, so long as they contain the Standard Model within them.
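To make the "bigger group" requirement concrete, the textbook chain of embeddings (a standard group theory result, not something specific to the quoted article) is:

\[
SU(3)_C \times SU(2)_L \times U(1)_Y \;\subset\; SU(5) \;\subset\; SO(10) \;\subset\; E_6 \;\subset\; E_7 \;\subset\; E_8
\]

with Pati-Salam's SU(4) × SU(2) × SU(2) also fitting inside SO(10); every step up the chain adds gauge bosons that have never been observed.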

The problem, of course, is that the larger you go, the more stuff there is to get rid of, and the more explaining there is to do if we want to understand why these extra components to reality don’t show themselves, either directly or indirectly, in our experiments, measurements, and observations of the Universe. The proton doesn’t decay, so either the simplest model of grand unification is wrong, or you have to pick a more complicated model and find a way to evade the constraints that rule out the simpler models.

If you want to talk about unification and group theory in the context of String Theory, however, your group suddenly has to become enormous! You can fit it into one of the SO groups, but only if you go all the way up to SO(32). You can fit it into two of the exceptional groups crossed together — E(8) × E(8) — but that’s enormous, as each E(8) contains and is larger than SU(8), mathematically. This isn’t to say it’s impossible that String Theory is correct, but that these large groups are enormous, like a block of uncut marble, and we want to get just a tiny, perfect statuette (our Standard Model, and nothing else) out of it.
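To put a number on "enormous": counting generators (one per gauge boson), the Standard Model group has 8 + 3 + 1 = 12, while the two gauge groups allowed by string anomaly cancellation each have 496:

\[
\dim SO(32) = \frac{32 \times 31}{2} = 496, \qquad \dim(E_8 \times E_8) = 248 + 248 = 496.
\]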

Similarly, there’s an analogous problem that arises with supersymmetry. Typically, the supersymmetry you hear about involves superpartner particles for every particle in existence in the Standard Model, which is an example of a supersymmetric Yang-Mills field theory where N=1. The biggest problem is that there should be additional particles that show up at the energy scales that reveal the heaviest Standard Model particles. There should be a second Higgs, at least, below 1,000 GeV. There should be a light, stable particle, but we haven’t observed it yet. Even without String Theory, there are many strikes against N=1 supersymmetry.

The Standard Model, without supersymmetry, is simply the N=0 case. But if we want String Theory to be correct, we need to make nature even more symmetric than standard supersymmetry predicts: String Theory contains a gauge theory known as N=4 supersymmetric Yang-Mills theory. There’s even more stuff to hand-wave away if we want String Theory to be correct, and it all has to disappear to not conflict with the observations we’ve already made of the Universe we have.

But one of the biggest challenges for String Theory is something that’s often touted as its big success: the incorporation of gravity. It’s true that String Theory does, in a sense, allow gravity to be merged with the other three forces into the same framework. But in the framework of String Theory, when you ask, “what is my theory of gravity,” you don’t get the answer that Einstein tells us is correct: a four-dimensional tensor theory of gravity. . . . 

So what does String Theory give you? Unfortunately, it doesn’t give you a four-dimensional tensor theory of gravity, but rather a 10-dimensional scalar-tensor theory of gravity. Somehow, you have to get rid of the scalar part, and also get rid of six extra (spatial) dimensions.

We had, as proposed 60 years ago, an alternative to Einstein’s General Relativity that did incorporate a scalar as well: Brans-Dicke gravity. According to Einstein’s original theory, General Relativity was needed to explain the orbit of Mercury, and why its perihelion (where it came closest to the Sun) precessed at the rate that it did. We observed a total precession of ~5600 arc-seconds per century, where ~5025 were due to the precession of the equinoxes and ~532 were due to the other planets. Einstein’s General Relativity predicted the remaining ~43, and that was the slam-dunk prediction he finally made in 1915 that catapulted him and the subsequent eclipse expedition to fame. The 1919 revelation that the Sun’s gravity bent starlight was the ultimate confirmation of our new theory of gravity.
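The bookkeeping in that paragraph is simple subtraction; the residual precession left for General Relativity to explain is

\[
5600 - 5025 - 532 \approx 43 \ \text{arc-seconds per century}.
\]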

But by the late 1950s, some observations of the Sun had indicated that it wasn’t spherical, but rather was compressed along its poles into an oblate spheroid. If that were the case, Brans and Dicke argued, then the observed amount of departure from a perfect sphere would create an additional 5 arc-seconds of precession per century that differed from Einstein’s predictions. How to fix it? Add in a scalar component to the theory, and a new parameter: ω, the Brans-Dicke coupling constant. If ω were about 5, everything would still turn out right.

Of course, the Sun actually is a perfect sphere to a much better degree than even the Earth, and those observations were incorrect. Given the modern constraints that we have, we now know that ω must be greater than about 1000, where the limit as ω → ∞ gives you back standard General Relativity. For String Theory to be correct, we have to “break” this 10-dimensional Brans-Dicke theory down to a four-dimensional Einsteinian theory, which means getting rid of six dimensions and this pesky scalar term and its coupling, ω, all of which must go away.
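For reference, the standard parametrized post-Newtonian result for Brans-Dicke gravity (a textbook formula, not something from the quoted article) multiplies the General Relativity prediction for perihelion precession by

\[
\frac{3\omega + 4}{3\omega + 6} \longrightarrow 1 \quad \text{as } \omega \to \infty.
\]

For ω ≈ 5 the factor is 19/21 ≈ 0.90, trimming roughly 4 of the 43 arc-seconds, which is about what Brans and Dicke needed; for ω > 1000 the departure from General Relativity is smaller than a tenth of a percent.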

What all of this means is that if String Theory is correct, we have to start with a Universe that’s highly symmetric and very unlike the Universe we have today. This Universe, at some early time at very high energies, had 10 dimensions to it, had a scalar gravity component in addition to the tensor component, was unified into some very large group like SO(32) or E(8) × E(8), and was described by a maximally supersymmetric (N = 4) Yang-Mills theory.

If String Theory is correct, then somehow — and nobody knows how — this ultra-symmetric state broke, and it broke incredibly badly. Six of the dimensions disappeared, and the scalar gravity component stopped mattering. The large, unified group broke very badly, leaving only our relatively tiny Standard Model, SU(3) × SU(2) × U(1), behind. And that supersymmetric Yang-Mills Theory broke so badly that we don’t see any evidence for a single supersymmetric particle today: just the regular Standard Model. . . . 

It may be interesting and promising, but until we can solve String Theory in a meaningful way to get the Universe we observe out of it, we have to admit to ourselves what String Theory truly is: a large, unbroken box that must somehow crumble in this particular, intricate fashion, to recover the Universe we observe. Until we understand how this occurs, String Theory will only remain a speculative dream.

There are additional problems that aren't discussed in the article. 

One is the fact that we live in a universe with de Sitter rather than anti-de Sitter geometry. Another is the fact that there are myriad possible versions of the theory, called vacua, the vast majority of which have an anti-de Sitter geometry; these are called the "swampland" because most of them are starkly incompatible with observed reality.

Another is the fact that experimental evidence has established that supersymmetry, a necessary sub-component of string theory, doesn't solve the "hierarchy problem" it was conceived to solve, at least not in the way that it was intended to: even if supersymmetric particles and extra Higgs bosons exist, the Large Hadron Collider has established that they are too heavy to solve the "problem" that they were devised to solve.

There may be glimmers of useful mathematical or physical insight that one can gain from studying it, but they are along the lines of the insights into English grammar and vocabulary that you get from studying French for a year or two: they have no meaningful connection to the real world and don't allow you to do anything worthwhile.

But in the final analysis, there is a substantial and growing faction of the fundamental physics community, including both professional physicists and educated laypeople like myself, that has concluded that String Theory and Supersymmetry are both dead ends that have wasted immense amounts of time and resources of a lot of very smart people.

The History of Chess

The map below shows the spread of chess out of India. Ironically, Russia, home of many of the greatest chess legends, was one of the last places chess arrived.



Monday, January 25, 2021

The BBC On "Cracks In Cosmology"

A January 13, 2021 article in issue 358 of BBC Science Focus Magazine entitled "The Cracks in Cosmology: Why Our Universe Doesn't Add Up?" by Marcus Chown nicely sums up the LambdaCDM Standard Model of Cosmology and some key lines of observational evidence indicating that it is flawed.

The model, in the simplified terms in which he explains it, consists of the Big Bang, plus inflation, plus dark matter, plus dark energy. Inflation smooths out the universe, dark energy speeds up its expansion, and dark matter shapes the patterns in the cosmic background radiation, aids in galaxy formation, and leads to phenomena like galactic rotation curves that aren't Keplerian and galactic clusters that are much heavier than their visible matter.
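For scale, the Planck 2018 best-fit values for this model's present-day energy budget (rounded values from the published Planck results, not from Chown's article) are roughly

\[
\Omega_b \approx 0.05, \qquad \Omega_{DM} \approx 0.27, \qquad \Omega_\Lambda \approx 0.68,
\]

i.e. ordinary matter is about 5% of the total, with the two dark components making up the other 95%.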

He identifies three notable recent cracks in this model, based on observations newly made in 2020.

First, he points to the gravitational lensing of subhalos in galactic clusters recently observed to be much more compact and less "puffy" than LambdaCDM would predict.

Secondly, he points to a KiDS (Kilo-Degree Survey) telescope observation of very large scale structure, which shows it to be 8.3% smoother (i.e. less clumpy) than LambdaCDM predicts.

Third, he points to the Hubble tension (see, e.g., here), which shows that Hubble's constant, a measure of the expansion rate of the universe, is about 10% smaller when measured via the cosmic microwave background radiation (with a small margin of error) than when measured by a wide variety of methods at times much more removed from the Big Bang than the time at which the cosmic microwave background came into being.
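Using representative published values (Planck's CMB-based H0 ≈ 67.4 ± 0.5 km/s/Mpc versus the SH0ES local distance ladder value of about 74.0 ± 1.4 km/s/Mpc), the discrepancy works out to

\[
\frac{74.0 - 67.4}{67.4} \approx 9.8\%.
\]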

He then provides a laundry list of ways that the model or the data collection could be flawed.

Other Problems With LambdaCDM

Honestly, these aren't even necessarily the most serious of the problems, and many of the big problems have been known for a long time. Among the problems with the Cold Dark Matter model, especially at the galaxy and galactic cluster scale, are the following:

* The predicted halo shapes are usually wrong (the NFW distribution predicted by the theory is too cuspy compared to the halo shapes actually inferred from observation). 

* The correspondence between the distribution of ordinary matter and inferred dark matter in galaxies is too tight: truly collisionless dark matter should have a looser fit between its distribution and the ordinary matter distribution than is observed. This is also the case in galaxy clusters.

* It doesn't explain systemic variation in the amount of apparent dark matter in elliptical galaxies, or why spiral galaxies have smaller proportions of ordinary matter than elliptical galaxies in same-sized inferred dark matter halos, or why thick spiral galaxies have more inferred dark matter than thin ones.

* It doesn't explain why satellite galaxies are consistently located in a two dimensional plane relative to the core galaxy. 

* Not as many satellite galaxies are observed as predicted, and it doesn't explain why the number of satellite galaxies is related to bulge mass in spiral galaxies.

* The aggregate statistical distribution of galaxy types and shapes, called the "halo mass function," is wrong.

* Galaxies are observed sooner after the Big Bang than expected (see also here).

* The temperature of the universe measured by 21cm background radio signals is consistent with no dark matter and inconsistent with sufficient dark matter for LambdaCDM to work. 

* It doesn't explain strong statistical evidence of an external field effect that violates the strong equivalence principle. 

* Observations are inconsistent with the "Cosmological principle" that LambdaCDM assumes, which is "the notion that the spatial distribution of matter in the universe is homogeneous and isotropic when viewed on a large enough scale."

* It doesn't do a good job of explaining the rare dwarf galaxies (which are usually dark matter dominated) that seem to have no dark matter.

* It gets globular cluster formation wrong (see also here).

* It doesn't explain evidence of stronger than expected gravitational effects in wide binary stars.

* It doesn't explain the "cosmic coincidence" problem (that the amount of ordinary matter, dark matter and dark energy are of the same order of magnitude at this moment in the history of the Universe since the Big Bang).

* There are potential unresolved systematic problems in current dark energy measurements.

* Every effort to detect it directly has come up empty (including not just dedicated direct detection experiments, but also particle collider searches, searches for cosmic ray signals of dark matter annihilation, and indirect searches combined with direct searches, and also here). But it requires particles and forces of types not present in the Standard Model or general relativity to fit what is observed.

* It has made very few ex ante predictions and those it has made have often been wrong, while MOND has a much better track record despite being far simpler (which should matter).

* There are alternative modified gravity theories to toy model MOND that explain pretty much everything that dark matter particle theories do (including, e.g., the cosmic coincidence problem, clusters, the Bullet Cluster, galaxy formation, the cosmic background radiation pattern observed), with fewer problems and anomalies.

Friday, January 22, 2021

Spinach

Linguistically, at least, the words for the leafy green vegetable known as "spinach" in Indo-European languages like English and French, in Afro-Asiatic languages, and in Chinese all derive from a Persian word. This is consistent with what we know about the origins of this domesticated plant, which was domesticated in Persia about two thousand years ago, and then dispersed globally over the last fifteen hundred years or so:

Spinach (Spinacia oleracea) is a leafy green flowering plant native to central and western Asia. It is of the order Caryophyllales, family Amaranthaceae, subfamily Chenopodioideae. . . . 
Spinach is thought to have originated about 2000 years ago in ancient Persia from which it was introduced to India and ancient China via Nepal in 647 AD as the "Persian vegetable". In AD 827, the Saracens introduced spinach to Sicily. The first written evidence of spinach in the Mediterranean was recorded in three 10th-century works: a medical work by al-Rāzī (known as Rhazes in the West) and in two agricultural treatises, one by Ibn Waḥshīyah and the other by Qusṭus al-Rūmī. Spinach became a popular vegetable in the Arab Mediterranean and arrived in Spain by the latter part of the 12th century, where Ibn al-ʻAwwām called it raʼīs al-buqūl, 'the chieftain of leafy greens'. Spinach was also the subject of a special treatise in the 11th century by Ibn Ḥajjāj.

Spinach first appeared in England and France in the 14th century, probably via Spain, and gained common use because it appeared in early spring when fresh local vegetables were not available. Spinach is mentioned in the first known English cookbook, the Forme of Cury (1390), where it is referred to as 'spinnedge' and/or 'spynoches'. During World War I, wine fortified with spinach juice was given to injured French soldiers with the intent to curtail their bleeding.

The Persian (i.e. Iranian) word for spinach, however, has origins much deeper than the plant's domestication around the year 0 CE (when "Middle Iranian" was one of the main languages spoken in Persia, with the cutoff between Middle Iranian and Old Iranian often put at around 400 BCE):

Kulturwort of Iranian origin. According to Asatrian, there were probably two forms in late Middle Iranian, *ispanāg (or *ispināg) and (the dialectal) *ispanāx (or *ispināx), yielding Arabized forms إِسْفَنَاج / إِسْفِنَاج‎ (ʾisfanāj / ʾisfināj) and إِسْفَنَاخ / إِسْفِنَاخ‎ (ʾisfanāḵ / ʾisfināḵ), which were popularized in Persian and Arabic, respectively (alternative forms with پ‎ (p) are directly from Middle Iranian). 
The Old Iranian form would be *spināka-, *spinaka- (compare Northern Kurdish sping), from the root *spin- (Northwestern Iranian), *sin- (Southwestern Iranian), ultimately from the Proto-Iranian *spai- (*spi-), from Proto-Indo-European *spey- (“thorn-like”) (*spi-), which are also reflected in Latin spīna, Persian سنجد‎ (senjed), Ossetian сындз (synʒ), синдзӕ (sinʒæ, “thorn”), Baluchi (šinž), Central Iranian šeng, Kermani šank (“thorn”). Also akin to Semnani esbenāγa.

According to Cabolov, related to Northern Kurdish siping (“meadow salsify, possibly also spinach”) and Persian سپند‎ (sipand, “wild rue”).

The derivation of the root word for Spinach from words meaning thorny or spiky reflects the appearance of its seeds:

Spinach seeds are generally referred to as round – which is relatively smooth – or prickly, with seeds that are sharp and pointed borne in a capsule with several spines. If you have ever tried removing the seeds of prickly spinach from the stalk by hand, you quickly learned why it is called prickly. It hurts.

According to the UN Food & Agriculture Organization, in 2018, world production of spinach was 26.3 million tonnes, with China alone accounting for 90% of the total.

Monday, January 18, 2021

New Muon g-2 Measurement Expected February 2021

FNAL-E989 first announcement in February 2021
Per Asian Twitter via Physics Forums.

This is arguably the most important fundamental physics measurement since the discovery of the Higgs boson. It has the potential to either (1) unambiguously establish that the Standard Model of Particle Physics is missing undiscovered new physics of some well quantified type (still only enough to generate parts per million irregularities in the measured value), if the consensus theoretical estimate and the experimental measurement of muon g-2 differ by five sigma or more, or instead, (2) profoundly limit any form of low energy new physics, if the consensus theoretical estimate and the experimental measurement of muon g-2 differ by two sigma or less.

In the second case, any impact of new fundamental particles or new forces in the Standard Model on muon g-2 must be almost exactly offsetting, and/or any undiscovered new physics must be extremely slight.

This is because the theoretical value of muon g-2 is a function of the electromagnetic coupling constant, the weak force coupling constant, the strong force coupling constant, and the properties of essentially all of the fundamental particles of the Standard Model, through intermediate loops included in the calculation of this derived muon property. Essentially anything that can interact with a muon in the Standard Model, and anything that interacts with something that interacts with a muon in the Standard Model, ever so slightly tweaks the value of muon g-2.
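For concreteness, the numbers framing the upcoming announcement (the published BNL E821 measurement and the 2020 Theory Initiative consensus Standard Model prediction, quoted here from the literature rather than from this post's sources) are:

\[
a_\mu^{exp} = 116\,592\,089(63) \times 10^{-11}, \qquad a_\mu^{SM} = 116\,591\,810(43) \times 10^{-11},
\]
\[
\Delta a_\mu = 279(76) \times 10^{-11} \approx 3.7\sigma \approx 2.4 \ \text{parts per million of } a_\mu.
\]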

For example, as one June 2020 paper explains:
The longstanding muon g-2 anomaly may indicate the existence of new particles that couple to muons, which could either be light (< GeV) and weakly coupled, or heavy (>> 100 GeV) with large couplings. If light new states are responsible, upcoming intensity frontier experiments will discover further evidence of new physics. However, if heavy particles are responsible, many candidates are beyond the reach of existing colliders. We show that, if the g-2 anomaly is confirmed and no explanation is found at low-energy experiments, a high-energy muon collider program is guaranteed to make fundamental discoveries about our universe. New physics scenarios that account for the anomaly can be classified as either "Singlet" or "Electroweak" (EW) models, involving only EW singlets or new EW-charged states respectively. We argue that a TeV-scale future muon collider will discover all possible singlet model solutions to the anomaly. If this does not yield a discovery, the next step would be a O(10 TeV) muon collider. Such a machine would either discover new particles associated with high-scale EW model solutions to the anomaly, or empirically prove that nature is fine-tuned, both of which would have profound consequences for fundamental physics.
Rodolfo Capdevilla, David Curtin, Yonatan Kahn, Gordan Krnjaic, "A Guaranteed Discovery at Future Muon Colliders" arXiv (June 29, 2020).

In contrast, if the theoretical prediction for muon g-2 is confirmed, even very weakly coupled sub-GeV particles, and also high mass new particles (above 100 GeV up to tens of TeV) with anything more than extremely weak couplings are largely ruled out. Thus, any new fundamental particles would have to be confined to a scale that would not be detectable at a next generation collider.

In the intermediate range, where a tension between the new experimentally measured value of muon g-2 and the theoretically predicted value continues to be more than two sigma but less than five sigma, the case for new physics, relative to more mundane explanations like underestimated error bars in the experimental measurement, the theoretical prediction, or both, is still diminished. 

This is because that can only happen if the best fit value of the experimental measurement becomes significantly closer to the theoretical prediction than it was fifteen years ago, even though a tension remains.

If that happened, the source of any new physics leading to a tension would also be expected to be smaller than the previous tension would have predicted, implying either a much weaker coupling to Standard Model physics, or a much higher mass scale for new physics, than previously surmised. 

An ongoing tension would still motivate searches for new physics by providing some observational motivation for them. But a lot of new physics models proposed to explain the tension observed fifteen years ago would end up on the cutting room floor. 

Anything other than a definitive confirmation of the necessity of new physics to explain the muon g-2 anomaly would, in particular, be a huge blow to the prospects of supersymmetry (SUSY) models (see, e.g., here) or models with additional types of Higgs bosons (see, e.g. here) at scales potentially discoverable by a next generation particle collider (i.e. with new particles having masses of tens of TeVs or less).

UPDATE January 31, 2021:

A periodical reports a March 2021 date, which would be another modest postponement.
Locked cabinets, sealed envelopes, and secret codes surround a big question in particle physics: Could the magnetism of a particle called the muon point to new vistas in physics?
Behind-the-scenes drama here.

Wednesday, January 13, 2021

The Meaning Of The Name Of The Founder Of Zoroastrianism

The founder of the Zoroastrian religion is known as Zoroaster and also by the variant form Zarathustra. What did his name mean?
In Avestan, Zaraϑuštra is generally accepted to derive from an Old Iranian *Zaratuštra-. The second element of the name (-uštra-) is thought to be the Indo-Iranian root for "camel", with the entire name meaning "he who can manage camels".

Not terribly profound, but suggestive of Central Asian affinities, just like the Indo-Iranian people of whom he was a part. 

Tuesday, January 12, 2021

Belle Experiment At KEK Finds No Evidence Of Lepton Flavor Universality Violations

Executive Summary: No New Physics

A compilation of all of the Belle collaboration data to date shows no statistically significant evidence of lepton flavor universality violations in B meson decays (violations which are prohibited by the Standard Model). Evidence of anomalies from earlier data with a smaller sample size has grown less significant as more data has been collected.

This doesn't quite, by itself, put the nail in the coffin of this hint of beyond the Standard Model physics, but it comes close. Upcoming work by the successor Belle II collaboration, measuring the previously anomalous quantities to greater precision, will be more definitive.

One Small Anomaly In Something Else Was Seen

There was one modest statistical tension in the latest Belle data with a Standard Model expectation that is distinct from lepton flavor universality: the expectation that there should be no semi-leptonic decays of B mesons to kaons accompanied by a mixed muon-electron pair, rather than by a lepton and anti-lepton of the same flavor. A signal appeared in one of the four channels tested, with no signal in the other three channels of semi-leptonic B meson decays studied. 

But given look elsewhere effects, the marginal strength of the signal near the boundary of what the experiment can detect (the best fit to its frequency was one per 20 million events in one decay channel), and the small absolute number of events involved, this result is not very notable and is probably just a statistical fluke. 

Unlike the lepton universality violations looked for in the new study, there was no statistically significant evidence of this anomaly from prior experiments at Belle or in other collaborations before the study was done, further heightening the impact of the look elsewhere effect on its significance. This, and a lack of replication, cast doubt on the reality of this marginally statistically significant tension with the Standard Model. 

Background

In the Standard Model of Particle Physics, electrons, muons and tau leptons should have exactly the same properties apart from their masses, something called "lepton flavor universality," subject to distinctions too slight to measure caused by interactions in intermediate loops of decays and by interactions with oscillating neutrinos, whose different types do not behave identically (with their differences described by the PMNS matrix).
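Stated compactly, lepton flavor universality is the statement that the weak force couplings of the three charged lepton generations are identical:

\[
g_e = g_\mu = g_\tau,
\]

so that ratios of otherwise identical decay rates to electrons, muons, or tau leptons should differ from one only by calculable mass (phase space) effects.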

One of the hottest areas of fundamental physics in recent years has been the detection of anomalies in the decays of B mesons (a B meson is a particle with two valence quarks, at least one of which is a b quark) into leptons that seem to deviate from lepton universality. Individually, none of these anomalies is significant enough to amount to a discovery of new physics, although it is possible (though challenging) to imagine new physics that could explain the anomalies seen in multiple channels, which start to look very significant if they are all evidence of the same new physics phenomenon. 

A Headache Avoided

This new result from Belle is reassuring, because the previously observed anomalies were hard to explain. This is because the decays in which the anomalies were observed are generally believed to arise from a process (W boson decays) that is shared with many other kinds of decays in which larger data sets have shown no evidence of the same anomaly, and because the Standard Model process assumed to be at work in B meson decays otherwise comes very close to accurately explaining the B meson decays that were observed (e.g. it predicted the overall number of decays accurately).

Lepton universality violations are not seen in W boson decays at the LHC, are not found in tau lepton decays or pion decays (also here), and are not found in anti-B meson and D* meson decays or in Z boson decays, even though all of those examples involve the same kind of weak force decay believed to be involved in B meson decays.

If the old lepton universality anomaly was real, something very big had to have been wrong with our understanding of hadron decays in the Standard Model, even though the Standard Model works basically perfectly in all other hadron decay contexts.

But you would expect statistical flukes (possibly amplified by understated systematic errors) to be seen in early B meson decay data, if they are seen anywhere. This is because the high energies necessary to create B mesons mean that the data sets for these decays are the smallest of the various decays in which lepton universality could be studied. So the fact that there was an anomaly in early B meson decay data that has faded as more data has been collected makes sense.

If the lepton flavor universality violation anomaly seen in prior studies was a real "new physics" effect, it should have gotten stronger, not weaker, as the size of the data set increased. 

The New Results In Detail

A new pre-print is entitled "Test of lepton family universality and search for lepton and baryon number violation at Belle" (and incidentally follows the desirable practice of listing only the corresponding author and the collaboration as authors, rather than comprehensively listing every scientist in the collaboration individually).

The Lepton Flavor Universality Violation Measurement 

The headline result is its test of lepton flavor universality. This updated result analyzes: "the results obtained from a multidimensional fit performed on the full Υ(4S) data sample of Belle. . . . The following four channels are studied: B⁺ → K⁺e⁺e⁻, B⁺ → K⁺μ⁺μ⁻, B⁰ → K⁰_S e⁺e⁻, and B⁰ → K⁰_S μ⁺μ⁻, based on 711 fb⁻¹ of Υ(4S) data corresponding to 772 × 10⁶ BB̄ events."

In other words, it looked at the semi-leptonic decays of two kinds of B mesons: (1) the decay of charged B mesons into a charged kaon and either an electron-positron pair or a muon-antimuon pair, and (2) the decay of neutral B mesons into a neutral kaon and either an electron-positron pair or a muon-antimuon pair. In all, it analyzed 772 million B meson pair events, the complete set of decays of excited Υ(4S) resonances into B meson pairs from the experiment to date.

If the Standard Model is correct, the ratio of semi-leptonic B meson decays to muon-antimuon pairs to semi-leptonic B meson decays to electron-positron pairs (called R(K)) should be equal to exactly one, subject only to random statistical errors (since the laws of quantum physics govern probabilities rather than being deterministic), to asymmetries induced by neutrino interaction loops, which should be much smaller than the statistical uncertainties, and to systematic experimental measurement errors that are well quantified in a heavily used experimental setup.
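Written out, the quantity being tested is the ratio of branching fractions

\[
R_K = \frac{\mathcal{B}(B \to K \mu^+ \mu^-)}{\mathcal{B}(B \to K e^+ e^-)},
\]

which the Standard Model pegs at one, up to the small effects just described.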

The results were also segregated into "q squared" bins that reflect the square of the invariant mass of the lepton pair produced in each decay. The number of events in each bin has an uncertainty because most of the 772 million events don't involve the semi-leptonic decays of B mesons to kaons being studied, and the scientists had to separate out background events from the decays that they were actually studying, which can't be distinguished with perfect accuracy. In the end, there were only about 275 charged B meson decays and 49 neutral B meson decays in the sample analyzed.

From the fit we obtain 137 ± 14 and 138 ± 15 events in B⁺ → K⁺μ⁺μ⁻ and B⁺ → K⁺e⁺e⁻ decays, respectively. Similarly, the yields for the neutral channels B⁰ → K⁰_S μ⁺μ⁻ and B⁰ → K⁰_S e⁺e⁻ are 27.3 +6.6/−5.8 and 21.8 +7.0/−6.1 events. 
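As a naive back-of-the-envelope check (simple ratios of the quoted yields with errors propagated in quadrature, ignoring the efficiency corrections applied in the collaboration's actual fit):

\[
\frac{137 \pm 14}{138 \pm 15} \approx 0.99 \pm 0.15, \qquad \frac{27.3^{+6.6}_{-5.8}}{21.8^{+7.0}_{-6.1}} \approx 1.25 \pm 0.47,
\]

both consistent with the one-to-one Standard Model expectation to within about one standard deviation.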

These results were consistent with the Standard Model expectation of a ratio of one to one at the two sigma level used to distinguish between normal statistical variation in experiments like this and tensions that are considered anomalies that deviate from the Standard Model expectation. 

The previous data had shown a 2.4 sigma tension with the Standard Model in the neutral kaon decay channel and a 2.5 sigma tension in the charged kaon decay channels studied by Belle. Moreover, in the current study the deviation of the best fit ratio from one to one was stronger in the neutral B meson decay (although with more uncertainty) than in the charged B meson decay, and it ran in the opposite direction from the previous anomalies. The previous data showed too few muon events in neutral B meson decays relative to the electron events, while this study showed too many. 

Another 3.4 sigma tension seen in the prior data was restricted to a somewhat oddball measurement in a narrow energy scale bin that normally wouldn't attract attention, and can be viewed as a physicist's version of p-hacking: finding a statistical fluke in a way that doesn't properly account for look elsewhere effects. 

This particular study from Belle didn't examine the fourth channel in which a 3.7 sigma tension with the Standard Model was observed in previous data, involving a different kind of semi-leptonic B meson decay than the decays to kaons reviewed in this pre-print.

Semi-Leptonic Mixed Lepton Flavor Decays

The search also looked for semi-leptonic decays to kaons with a mix of electrons and muons which should not happen in the Standard Model. 

The maximum frequency of such decays consistent with observation was constrained to not more than parts per 100 million, an additional order of magnitude of constraint relative to prior studies, in three of the four channels.

But, in the only notable result of the study, there was 3.2 sigma evidence (a result in tension with the Standard Model) of some anomalous decays in the channel of a charged B meson to a charged kaon together with a muon and a positron, at an apparent rate of one such decay per 20 million charged B meson decays, but with considerable uncertainty in the magnitude of the anomaly, which at the low end could be very close to zero. Once look elsewhere effects are considered, this result is somewhat less notable.

Baryon Number Violating Tau Decays

Baryon number violating decays (in which the net number of quarks minus antiquarks changes between before and after the decay), which are not allowed by the Standard Model, were not observed in about 841 million tau lepton decays. Their frequency, if they happen at all, was constrained to be fewer than something on the order of single digit numbers of events per 10 million tau lepton decays.

Monday, January 4, 2021

Physics In 2020

I made 90 posts about physics at this blog in 2020. A few topics dominated the discussion, and I collect some of those posts.

There were many increasingly accurate measurements of Standard Model physical constants. I tracked these results against some speculative theoretical expectations, but the uncertainty in the top quark mass measurement (which won't get dramatically better for the foreseeable future) and, to a lesser extent, in some of the other measurements limited the extent to which these expectations could be meaningfully confirmed.


The neutrino data, in particular, strongly disfavors the sterile neutrino hypothesis, although it hasn't quite put a nail in the coffin of that conjecture, and continues to favor a "normal mass hierarchy" for neutrinos, with no evidence of Majorana neutrinos such as would be revealed by neutrinoless double beta decay. Efforts to determine the CP violating phase in neutrino oscillation have confirmed that there is some CP violation in this process and favor near maximal CP violation in neutrino oscillation, but have large margins of error. These measurements are likely to improve meaningfully in the coming year.


Modified gravity approaches to explaining dark matter and dark energy continued to be successful, while the paradigmatic LambdaCDM theory of cosmology continued to fall short. Demonstration of violations of the strong equivalence principle towards the end of the year topped the list. Advances for modified gravity, and dings for LambdaCDM, in the area of early cosmology were also significant.

  
A variety of experimental anomalies and measurement tensions were explored. 

The biggest one is that there will be two new muon g-2 measurements; the last measurement was made fifteen years ago, and the next will be announced early this year. The new measurement (and the new theoretical prediction) will be much more accurate than the last, which differed by about three sigma (about 2 parts per million) from the theoretically expected value. The calculation of muon g-2 is sensitive in a global way to almost all aspects of Standard Model physics and can be calculated and measured with extreme precision. The closer the experimentally measured value of muon g-2 is to the theoretical prediction, the less room there is for new physics beyond the Standard Model. On the other hand, a big difference would be strong evidence that scientists are missing something in the Standard Model.


Another area where there have been tensions is in suggestions that the premise of charged lepton universality (i.e., that the electron, muon and tau lepton are identical apart from mass) is violated. The tensions have appeared in B meson decays, but not in other kinds of decays that should implicate the same properties.


The Xenon1T experiment reported some anomalous outcomes that were almost immediately determined to be meaningless because the analysis failed to consider an important source of background contamination in its results, although that hasn't prevented theorists from coming up with esoteric and unlikely theoretical explanations for it. 

A Hungarian group has argued that there is a beyond the Standard Model X17 particle, but that hasn't been panning out either (see also a lengthy discussion with citations to journal articles in the comments to this post).