Prior to this measurement, the Particle Data Group value for the mass of the tau lepton was 1776.82 +/- 0.16 MeV/c^2. Thus, the new result is about 0.09 MeV higher than the old value and is consistent at a 0.5 sigma level with the old world average.
[Update on May 8, 2014: actually, the new BESIII Collaboration data point could even bring down the PDG world average, because it would probably replace the existing BES data point from 1996 in that calculation, which is a higher 1776.96 MeV/c^2, although with a significantly larger margin of error of about +/- 0.29 MeV/c^2 (that older BES data point is also the closest of any of the existing data points to the original Koide's rule prediction discussed below). On the other hand, the margin of error in the BESIII data point is the lowest ever for any single measurement (roughly tied with two previous measurements in statistical error, and far and away the best ever in terms of systematic error and combined margin of error), so the new BESIII data point will get more weight in the world average than the old BES data point did, and it is almost as high as the old BES data point relative to the other data points in the world average. The highest number in the current PDG fit is a 1978 data point from DELCO of 1783 MeV, which is just barely within two sigma of the current world average and is fourteen years older than the next oldest data point in the world average, but it gets little weight in the current calculation of the world average because its margin of error of +3/-4 MeV/c^2 is so large. The only other result higher than the BES data point among the eight data points that contribute to the current world average is a CLEO experiment result from 1997 of 1778.2 +/- 0.8 +/- 1.2 MeV, which also doesn't get much weight due to its large margin of error and is still within one sigma of the current world average.]
The Particle Data Group's world averages weight data points based upon their margins of error, while using only the most precise measurement from each experiment. Thus, the new PDG world average after the BESIII experiment will be roughly 1776.86 MeV/c^2, give or take about 0.1 MeV/c^2, and may have a slightly lower margin of error, since a less precise result will be displaced by a more precise one.
The Original Koide's Rule Prediction
The original Koide's rule is a hypothesis about the relationship between the masses of the charged leptons. It predicts that the sum of the three charged lepton masses, divided by the square of the sum of the square roots of the charged lepton masses, is equal to exactly 2/3rds.
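Stated symbolically (writing m_e, m_mu, and m_tau for the electron, muon, and tau lepton masses), the rule is:

```latex
\frac{m_e + m_\mu + m_\tau}{\left(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\right)^2} = \frac{2}{3}
```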
Since the electron and muon masses are known much more precisely than the tau lepton mass, it is possible to use the original Koide's rule (proposed by Yoshio Koide in 1981, just six years after the tau lepton was first discovered experimentally) with the electron mass and muon mass as inputs to predict the tau lepton mass with a precision much greater than current experimental measurements permit us to achieve directly.
The Particle Data Group world average value for the mass of the electron is 0.510998928 +/- 0.000000011 MeV/c^2. The Particle Data Group world average value for the mass of the muon is 105.6583715 +/- 0.0000035 MeV/c^2. These values are both precise to roughly one part per 100 million.
If the original Koide's Rule is true, then the predicted mass of the tau lepton to ten significant digits is 1776.968959 MeV/c^2 (a value which probably slightly overstates the precision of the prediction by one or two significant digits, but is vastly more precise than the current experimental measurements, which are accurate to less than six significant digits; I'm too lazy at the moment to work out the precise margin of error for the predicted value). [Updated May 8, 2014: arivero at Physics Forums states that the original Koide's rule prediction is 1776.96894(7) MeV/c^2, which is consistent with the result that I calculated from first principles using PDG data, but with the margin of error calculated rather than guessed. Given the limited precision of current experimental measurements of the tau lepton mass, the two statements of the prediction are effectively identical and exact for all practical purposes.]
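For readers who want to verify the arithmetic, here is a minimal sketch in Python (an illustration of my calculation, not code from any paper) that solves the original Koide's rule for the tau lepton mass, treating the rule as exact and using the PDG electron and muon masses quoted above. Substituting x = sqrt(m_tau) turns the rule into a quadratic equation in x with one physically sensible root.

```python
from math import sqrt

# PDG inputs quoted above, in MeV/c^2
m_e = 0.510998928
m_mu = 105.6583715

# Koide's rule: (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2 = 2/3.
# With x = sqrt(m_tau), a = sqrt(m_e) + sqrt(m_mu), and s = m_e + m_mu, this
# rearranges to x^2 - 4*a*x + (3*s - 2*a^2) = 0, whose physical root is:
a = sqrt(m_e) + sqrt(m_mu)
s = m_e + m_mu
x = 2 * a + sqrt(6 * a**2 - 3 * s)

print(f"Predicted tau lepton mass: {x**2:.6f} MeV/c^2")  # ~1776.968959
```

(The quadratic's other root corresponds to a mass of a few MeV/c^2 and is discarded.)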
The pre-BESIII paper PDG value for the tau lepton mass is 0.93 sigma less than the original Koide's rule value. The new BESIII value for the tau lepton mass is 0.33 sigma less than the original Koide's rule value.
An Abbreviated History of the Original Koide's Rule
In 1983, using the then best available measurements of the electron mass and muon mass, the original Koide's rule predicted a tau lepton mass of 1786.66 MeV/c^2. But increased precision in the measurement of the electron and muon masses soon tweaked that prediction to something close to the current predicted value of 1776.968959 MeV/c^2, which is, at the same level of precision as current experimental measurements, 1776.97 MeV/c^2. By 1994 (and probably somewhat sooner than that), the prediction of the original Koide's rule had shifted to 1776.97 MeV/c^2. Thus, the prediction of the original Koide's rule has been essentially unchanged for more than twenty years.
Both the new BESIII measurement and the PDG world average are consistent experimentally with the original Koide's rule prediction (as updated sometime between 1981 and 1994). But the new most precise measurement ever of the tau lepton mass is almost three times closer to the original Koide's rule prediction than the old world average, and will eventually bring the PDG world average up significantly, in the direction of the original Koide's rule prediction.
Koide's 33-year-old formula is still an excellent tool for accurately predicting the direction in which new, more precise experimental measurements of the tau lepton mass will shift from old world average values. This prediction has shifted only very slightly as the electron and muon masses have been more accurately measured.
We don't really know why the original Koide's rule works, or why extensions of it also seem to provide reasonably accurate first order estimates of the other fermion masses. But the fact that it still works with exquisite precision 33 years later suggests that there is some underlying phenomenon that gives rise to this relationship and that this relationship is not simply a coincidence.
[Update Below Made On May 8, 2014:]
Prospects For An Improved Experimental Tau Lepton Mass Value
Recall that the state of the art BESIII measurement is 1776.91 +/- 0.12 +0.10/-0.13 MeV/c^2, with a combined margin of error of about +/- 0.17 MeV/c^2. As usual, the first component of the margin of error is statistical and the second is systematic.
Statistical error and systematic error make roughly equal contributions to the total uncertainty in the measurement.
Our theoretical expectation, if the original Koide's rule is an accurate conjecture, is that the actual error in the BESIII measurement is about -0.06 MeV, which is about 0.33 sigma from the 20+ year old original Koide's rule expectation.
Given the small size of this discrepancy, it is entirely possible, for example, that there is in fact zero systematic error, because the conservative estimates of systematic error are overstated or the individual systematic errors cancel each other out more completely than we would expect from random chance, and that all of the error arises from statistical variation. Alternately, it is also entirely possible that we have gotten lucky, that the sample is in fact closer than usual to the true mean, and that all or most of the discrepancy between the true value and the measured one is due to systematic error.
Statistical Error
If the error were truly Gaussian (i.e. distributed according to the "normal" Bell curve distribution), we would expect a measurement to differ from the true result by an average amount on the order of 1 sigma. And the expected amount of statistical error flows purely from the sample size (which is not reasonably subject to question in this case) and from mathematics.
The only possible reasonable objection to the mathematics is that the statistical error may not actually be distributed in a Gaussian manner. Indeed, the true statistical error in these measurements in any particular case probably isn't really distributed in a precisely Gaussian manner. Nature probably isn't that convenient.
But this is not a very strong concern because, according to the central limit theorem, the combined statistical error from the sum of the statistical error distributions for individual events meeting certain minimum criteria (criteria that are present in this kind of context) will asymptotically approximate a Gaussian distribution as the sample size increases to infinity. And the same body of mathematics allows us to quantify the extent of the discrepancy between a Gaussian distribution of statistical error and the actual distribution of statistical error given the size of the sample.
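As a toy illustration of that point (a minimal simulation, not the BESIII analysis), averages of draws from a decidedly non-Gaussian distribution become approximately Gaussian as the per-average sample size grows, with a spread that shrinks as 1/sqrt(n):

```python
import random
from math import sqrt

random.seed(0)

# Each trial averages n draws from a uniform distribution on [0, 1], which is
# not at all Bell-curve shaped. The spread of those averages nonetheless tracks
# the Gaussian prediction sigma = sqrt(1/12) / sqrt(n) as n grows.
for n in (1, 10, 100):
    means = [sum(random.random() for _ in range(n)) / n for _ in range(10_000)]
    mu = sum(means) / len(means)
    sigma = sqrt(sum((m - mu) ** 2 for m in means) / len(means))
    print(f"n = {n:3d}: observed spread {sigma:.4f}, Gaussian prediction {sqrt(1 / 12 / n):.4f}")
```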
Thus, the estimated statistical error in the BESIII experiment, or any future experiment, is probably very nearly perfectly accurate, neither too optimistic nor too conservative, relative to the actual amount of statistical error that can be expected from all available evidence (other than the "true value" of the measured quantity) in this experiment.
For practical purposes, the statistical error is purely a function of sample size, with a few twists thrown in arising from the fact that several sub-samples with different characteristics need to be combined. BESIII applied a variety of sophisticated and theoretically driven data cuts to a raw sample of about 56,000,000 collision events to produce a final sample of 1171 events from which information about the tau lepton mass could be extracted.
Statistical error is conceptually easy to reduce. The longer you run your experiment and the more events you can collect, the more events you have in your sample, and the smaller your statistical error will be in the end. As a good rule of thumb, for samples and event frequencies of the orders of magnitude seen in this kind of experiment, an increase in sample size by a factor of about one hundred reduces the statistical margin of error by a factor of about ten.
Thus, in order to reduce the statistical margin of error for the tau lepton mass from +/- 0.12 MeV to +/- 0.012 MeV, experimenters will need about 117,100 post-cut events derived from 5,600,000,000 pre-cut events, assuming that the experimental setup is otherwise similar to BESIII. With the same size experimental apparatus as BESIII, this would take several centuries, and it would take a long time to gather that much more data even in an experiment significantly larger than BESIII.
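That scaling calculation as a back-of-the-envelope sketch (the event counts are the ones quoted above; the 1/sqrt(N) rule of thumb is the only assumption):

```python
# BESIII figures from the text above
stat_error = 0.12            # MeV, statistical error
post_cut_events = 1_171      # events surviving the data cuts
pre_cut_events = 56_000_000  # raw collision events

# Statistical error scales roughly as 1/sqrt(N), so cutting the error by a
# factor of 10 requires roughly 100 times the events, before and after cuts.
target_error = 0.012  # MeV
factor = (stat_error / target_error) ** 2

print(f"sample size factor needed: {factor:.0f}x")
print(f"post-cut events needed:    {factor * post_cut_events:,.0f}")  # ~117,100
print(f"pre-cut events needed:     {factor * pre_cut_events:,.0f}")   # ~5,600,000,000
```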
Even with meta-analysis of all experiments conducted to date, plus all experimental data that it will be possible to obtain over the next decade or so from experiments that are still actively collecting data, or that are currently in the planning stages and will have collected meaningful amounts of data by then, it is not realistic to expect that it will be possible to increase the aggregate post-cut event sample by a factor of one hundred.
The total sample size from every experiment ever conducted to date, combined, is probably less than five or ten times the size of the BESIII sample standing alone (based on the references to previous, less precise measurements of the tau lepton mass in the review of the literature in the BESIII paper). We'll be lucky to get a total combined sample size of more than three or four times the size of the existing sample in another decade, if that. This translates into an optimistic estimate of the potential improvement in statistical error over the next decade on the order of 45% to 50%. Thus, the statistical error might fall from +/- 0.12 MeV to between +/- 0.06 MeV and +/- 0.07 MeV, at best, by ca. the year 2024.
This amount of improvement in statistical error, without any improvement in systematic error, could reduce the combined total margin of error in the tau lepton mass measurement to about +/- 0.13 MeV by 2024, from the existing +/- 0.17 MeV, an improvement of about 20%, but not enough to reach a five sigma discovery threshold for, or to rule out, any particular plausible phenomenological prediction regarding the tau lepton mass in a way that isn't already possible with existing data. So, any breakthroughs arising from the tau lepton mass measurement will require not just a larger sample size, but also major advances in reducing systematic error.
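For concreteness, the combined figures in this paragraph follow from adding the statistical and systematic errors in quadrature, which assumes the two are independent. A minimal sketch (the +/- 0.115 MeV figure is my rough symmetrization of the asymmetric +0.10/-0.13 MeV systematic error, not a number from the paper):

```python
from math import sqrt

syst = 0.115  # MeV; rough symmetrization of the +0.10/-0.13 systematic error

# Today: +/- 0.12 MeV statistical error combined with the systematic error
print(f"current total:  +/- {sqrt(0.12 ** 2 + syst ** 2):.2f} MeV")   # ~0.17 MeV

# ca. 2024: statistical error roughly halved, systematic error unchanged
print(f"ca. 2024 total: +/- {sqrt(0.065 ** 2 + syst ** 2):.2f} MeV")  # ~0.13 MeV
```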
Systematic Error
The BESIII experimenters considered nine different potential sources of systematic error in arriving at their final total estimated systematic error. None of them is dominant in the overall total. The biggest single source of estimated systematic error, which is a technically obscure issue, contributed a systematic error of +/- 0.05 MeV to the final result.
All of the multiple independent experimental measurements of the tau lepton mass have been designed by people in the same high energy experimental physics community who talk to each other and design experiments in similar ways.
Despite the fact that these experiments are independent of each other, they seem to have consistently understated the theoretically expected mass of the tau lepton under the original Koide's rule. And, over time, new, more precise experimental measurements have gradually converged towards the value predicted by the original Koide's rule.
This doesn't prove that systematic error is indeed pulling down the experimentally measured tau lepton mass, or that systematic error is present in the estimated amounts at all. But the circumstantial evidence does point to that possibility.
It is hard to know, without really being immersed in the technical details, how stubborn the systematic errors in this measurement will be over the next decade or so. They could improve only slightly, as this is a fairly mature type of experimental apparatus. Or, there could be major advances that slash this part of the total error.
Still, it would probably be unreasonably optimistic to expect a total systematic error with so many independent causes to be reduced by anything more than 50% in the next decade.
To recap, the Koide's rule prediction is 1776.96894(7) MeV.
The current (2016 edition) Particle Data Group value of the tau lepton mass is 1776.86 +/- 0.12 MeV.
The PDG value differs from the Koide's rule prediction by 0.10894 MeV, which is consistent at the 0.9 sigma level.