Thursday, April 18, 2019

The Neutron Lifetime Discrepancy

Neutrons are electrically neutral particles that usually combine with protons to make up atomic nuclei. Some neutrons are not bound up in atoms; these free-floating neutrons decay radioactively into other particles in a matter of minutes. 
But physicists can’t agree on precisely how long it takes a neutron to die. Using one laboratory approach, they measure the average neutron lifetime as 14 minutes 39 seconds. Using a different approach, they get 8 seconds longer. The discrepancy has bedevilled researchers for nearly 15 years.
From here.

This Is A Notable Discrepancy Even Though It Is Too Small To Be Really Alarming

According to Nature, the beam experiment result from 2013 is 887.7 +/- 3.1 seconds and the bottle experiment result is 878.5 +/- 1 second (actually a 9.2 second difference, not the 8 seconds reported in the mass media story), so the tension between the measurements is about 2.8 sigma.

But, two experimental results in physics are commonly called consistent with each other only if they are within 2 sigma of each other. When two experimental measurements of what should be the same thing are more than 2 sigma apart but less than 5 sigma apart, we call that a "tension" between the measurements. When the two measurements are 5 or more sigma apart, physicists usually consider this to meet the standard for having discovered "new physics".
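The arithmetic behind the roughly 2.8 sigma figure is simple enough to sketch: take the difference between the two central values and divide by the two quoted uncertainties combined in quadrature (treating them as independent).

```python
import math

# Quoted central values and uncertainties (seconds)
beam, beam_err = 887.7, 3.1
bottle, bottle_err = 878.5, 1.0

# Difference between the two central values
diff = beam - bottle  # 9.2 seconds

# Combine the independent uncertainties in quadrature
combined_err = math.sqrt(beam_err**2 + bottle_err**2)

# Tension expressed in units of the combined uncertainty
tension = diff / combined_err
print(f"{diff:.1f} s difference, {tension:.2f} sigma tension")
# prints "9.2 s difference, 2.82 sigma tension"
```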

Still, even though this is just a "tension" between two measurements, a discrepancy (and uncertainty) this big (more than 1%) in something so fundamental (and seemingly so easy to measure in a laboratory experiment of reasonable size and cost) is quite remarkable.

By comparison, the muon g-2 discrepancy between theory and experiment, which has attracted so much attention, is just 2.3 parts per million; the mass of the proton is known to eleven significant digits; and the mass of the neutron is known to eight significant digits.

QCD Calculations Still Aren't Quite Precise Enough To Be Helpful

The decay time of a free neutron is something that can, in theory, be calculated from first principles using QCD, a part of the Standard Model where we think we know the exact equations involved and have meaningful measurements of all of the fundamental physical constants needed to do those calculations.

But, it turns out to be very hard to do that math, and the relevant physical constants aren't known with much precision. So, even for an experimental measurement this imprecise, the experimental measurements are generally significantly more precise than theoretical predictions made that way.

Still, 1% precision starts to approach what is possible with QCD calculations from first principles.

New Physics Is An Unlikely Explanation

There is no reason to think that the fundamental properties of a neutron or its components (valence up and down quarks, gluons, and a variety of virtual particles) should be any different in one situation than in any other situation. A core principle of particle physics is that every fundamental particle is exactly identical to every other fundamental particle with the same quantum numbers, and that identical combinations of the same fundamental particles produce identical and interchangeable composite particles.

Everything we know from experimental tests of the Standard Model strongly suggests that there should be no "new physics" at the very low energy scale of essentially free-floating neutrons, the second lightest baryon after the proton, which are ubiquitous in Nature. The properties of neutrons have been tested exhaustively in collider experiments and found to conform to the Standard Model in that context.

Could There Be A Conceptual Difference That Is Not Accounted For?

Presumably, the problem is a conceptual one, with scientists modeling one or both experiments incorrectly in some subtle manner that fails to take into account some material fact that impacts the neutron lifetime. But, a great deal of discussion and analysis by a lot of very smart people hasn't determined what could be amiss.

For example, maybe there is a small probability that the neutrons in the beam briefly form an unstable nucleus-like structure, which prevents them from decaying for the brief period that they are bound to each other (just as a neutron in a real atomic nucleus does not decay), and that is something that can't happen in the bottle experiment.

If that possibility was not accounted for, then there would be an unexplained discrepancy.

Of course, I'm not a nuclear physicist, and I presume that all of the obvious possibilities for sources of the discrepancy have already been considered and ruled out. But, at any rate, that is what a conceptual issue leading to the discrepancy would look like if there was one.

Could A Source Of Systematic Error Have Been Overlooked Or Underestimated?

Another plausible explanation is that the margin of systematic error in one of the experiments is significantly underestimated, perhaps because the source of that systematic error was not identified by the scientists involved.

For example, if the true combined margin of error in the beam experiments was 4.4 seconds instead of 3.1 seconds, and the true combined margin of error in the bottle experiments was 1.3 seconds instead of 1 second (about 30% larger in each case), then the two experimental results would be consistent with each other at roughly the 2 sigma level, and no one would be worried about anything.
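The 4.4 and 1.3 second figures above are hypothetical inflated error bars chosen for illustration, not published values; the arithmetic can be checked in a few lines:

```python
import math

def tension(diff, err1, err2):
    """Tension in sigma between two measurements whose
    independent uncertainties combine in quadrature."""
    return diff / math.sqrt(err1**2 + err2**2)

diff = 887.7 - 878.5  # 9.2 second beam-bottle difference

# With the quoted error bars: about 2.8 sigma apart
print(tension(diff, 3.1, 1.0))  # ~2.82

# With the hypothetically inflated error bars (about 30% larger),
# the tension drops to roughly 2 sigma
print(tension(diff, 4.4, 1.3))  # ~2.01
```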

Since part of the margin of error (the statistical component) is known almost exactly, however, and the components combine in quadrature, the systematic error alone would have to be understated by quite a bit more than the increase in the combined error needed to explain the discrepancy. A single large omitted source of error in the beam experiment would be the easiest way for one oversight to give rise to the tension between the two kinds of experimental measurements, if that is its source.
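This quadrature point can be made concrete with a hypothetical split of the beam experiment's 3.1 second combined error into statistical and systematic parts (the 1.2 second statistical figure below is an illustrative assumption, not the published error budget):

```python
import math

# Hypothetical split of the beam experiment's 3.1 s combined error
# (illustrative numbers, not the published error budget)
stat = 1.2                                 # statistical part, known almost exactly
syst = math.sqrt(3.1**2 - stat**2)         # implied systematic part, ~2.86 s

# To raise the combined error to 4.4 s with the statistical
# part held fixed, the systematic part alone must grow to:
syst_needed = math.sqrt(4.4**2 - stat**2)  # ~4.23 s

print(f"systematic: {syst:.2f} s -> {syst_needed:.2f} s")
# The systematic part grows by ~48%, more than the ~42%
# growth in the combined error, because the fixed statistical
# part contributes nothing to the increase.
```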

For example, suppose that there was an undetected impurity in the source of neutrons in the bottle experiments, so that fewer neutrons than expected actually entered the bottle. That would make it look like the neutrons were decaying faster than they really were, a small but notable potential source of error that might not have been considered by the experimenters.

The virtue of this possibility, that the margins of error are understated, is that the pool of people who might deduce a potential source of systematic error not included in the error budget of a particular experimental setup is much smaller than the pool of people capable of evaluating conceptual flaws in the experimental designs. Also, many of those people work closely with each other, leading to groupthink, and the fewer independent minds you have thinking about a problem, the more likely it is to go unsolved.

Again, I have no idea if there is in fact an error in the error estimation done in these experiments, a crucial but tedious exercise relegated to the fine print of most physics papers that ordinary readers, and even reviewers, often assume is correct without considering it closely. But, if that were the problem, these examples illustrate what it might look like.

2 comments:

Mitchell said...

Lubos blogged about this a year ago, and I wrote "My guess? The proton trap in the beam experiment doesn't capture all of them."

andrew said...

Not a bad theory.