A measurement of total and fiducial inclusive W and Z boson production cross sections in pp collisions at sqrt(s)=8 TeV is presented. Electron and muon final states are analyzed in a data sample collected with the CMS detector corresponding to an integrated luminosity of 18.2 +/- 0.5 inverse picobarns. The measured total inclusive cross sections times branching fractions are sigma(pp to W X) B(W to l nu ) = 12.21 +/- 0.03 (stat.) +/- 0.24 (syst.) +/- 0.32 (lum.) nb and sigma(pp to Z X) B(Z to l+ l-) = 1.15 +/- 0.01 (stat.) +/- 0.02 (syst.) +/- 0.03 (lum.) nb for the dilepton mass in the range of 60 to 120 GeV. The measured values agree with next-to-next-to-leading-order QCD cross section calculations. Ratios of cross sections are reported with a precision of 2%. This is the first measurement of inclusive W and Z boson production in proton-proton collisions at sqrt(s)=8 TeV.
CMS Collaboration, Measurement of inclusive W and Z boson production cross sections in pp collisions at sqrt(s) = 8 TeV (2014).
The paper itself, which makes precision measurements of W and Z boson production and decay to charged leptons at the highest energies yet explored at the LHC, is unexciting, and its conclusions confirming the Standard Model are unsurprising. I discuss it here to make one important point that the paper never quite states in as many words.
Both the theoretical error in the predicted values of the quantities that were measured, and the systematic errors included in the study's estimates of the experimental measurement's uncertainties, are reported as significant.
That is a lie, albeit one that comes from honestly following the procedural protocols for making these estimates.
In reality, the true systematic, luminosity, and theoretical errors reported in the paper are all vastly overstated. Every one of the half dozen measurements made in this study falls within the theoretically irreducible, purely statistical uncertainty of the corresponding theoretically calculated prediction (irreducible because QED predicts only the probabilities of events occurring, rather than being deterministic).
Thus, it is entirely reasonable to infer from that result (which is not atypical of the LHC's precision electroweak measurements) that the true systematic error in the measurement, and the true theoretical error in the theoretical predictions, are in fact negligible relative to the statistical error involved. That implies they are actually dozens of times smaller than the officially stated values for those uncertainties in the result.
In other words, everything the LHC scientists did here, both the calculation of the theoretical Standard Model predictions and the conduct of the measurements themselves, is actually perfect to the full extent that we can observe in the face of fundamental quantum mechanical random statistical noise.
Now that, perfect science, is really impressive.
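The comparison being described above can be sketched as a "pull" calculation: the deviation between measurement and prediction, expressed in units of the quoted error. The measured values and error components below are from the paper's abstract, but the predicted values are hypothetical placeholders (the abstract does not quote the NNLO numbers), so this is illustrative only.

```python
import math

# Each entry: (measured, predicted, stat, syst, lumi) in nb.
# Measured values and errors are from the abstract; the "predicted"
# values are HYPOTHETICAL placeholders for the NNLO predictions.
results = [
    (12.21, 12.19, 0.03, 0.24, 0.32),  # sigma x B, W -> l nu
    (1.15, 1.15, 0.01, 0.02, 0.03),    # sigma x B, Z -> l+ l-
]

pulls = []
for meas, pred, stat, syst, lumi in results:
    total = math.sqrt(stat**2 + syst**2 + lumi**2)  # combined in quadrature
    # Deviation in units of statistical-only error vs. total quoted error:
    pulls.append(((meas - pred) / stat, (meas - pred) / total))

for stat_pull, total_pull in pulls:
    print(f"pull vs stat-only error: {stat_pull:+.2f}, "
          f"pull vs total error: {total_pull:+.2f}")
```

If the quoted systematic and luminosity errors were real, pulls measured against the total error should typically be of order one; pulls that are a small fraction of one, across many measurements, are the signature of overstated uncertainties.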
The fact that systematic errors and luminosity errors are, in fact, negligible in this kind of experiment at the LHC is a testament to the practical virtuosity with which the team of thousands of scientists at CERN have done their jobs, and to good calibration practices in light of their earlier efforts.
The fact that the theoretical errors are, in fact, negligible in this kind of experiment, in which the theoretical prediction is a truncation of an infinite series that would exactly equal the true result if the Standard Model is correct, implies two things:
First, the omitted higher-order terms in that truncated series (each of which gets smaller and smaller in any case) cancel each other out to a much greater extent than the theoretical physicists who do these calculations feel safe in assuming, since they cannot actually calculate those terms.
Second, the Standard Model is correct to any measurable degree of precision desired in this area.
The second point is a big one.
Suppose that we are (and I think we should be) comfortable using the observed distribution of discrepancies between theoretical predictions and experimental results for a class of experiments, rather than bottom-up, first-principles predictions of the size of those discrepancies, to set error bars for that class of experiments. If we do, then the error bars in some kinds of experiments (like precision tests of QED and basic high energy hadronic perturbative QCD calculations like these) should be reduced dramatically, while the error bars in other kinds of experiments (e.g. lattice QCD computations of low energy QCD observables) should be increased significantly.
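The recalibration proposed above can be sketched in a few lines: compute the RMS of the observed pulls (deviation divided by quoted total error) across a class of experiments, and rescale the quoted errors by that factor. The pull values and quoted error below are hypothetical, chosen only to show the mechanics.

```python
import math

# HYPOTHETICAL observed pulls (deviation / quoted total error) for a
# class of experiments where predictions consistently hit the mark.
observed_pulls = [0.05, -0.12, 0.08, -0.03, 0.10, -0.06]

# If the quoted errors were correctly estimated, the pull RMS should be ~1.
# An RMS far below 1 suggests the quoted errors are overstated by ~1/RMS.
rms = math.sqrt(sum(p * p for p in observed_pulls) / len(observed_pulls))

quoted_error = 0.40                 # hypothetical quoted total error, nb
recalibrated = quoted_error * rms   # empirically recalibrated error bar
print(f"pull RMS = {rms:.3f}; recalibrated error = {recalibrated:.3f} nb")
```

The same RMS computed for a class of experiments whose pulls routinely exceed one (the lattice QCD case) would instead inflate the quoted errors.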
This kind of analysis has the potential to dramatically tighten experimental constraints on some kinds of beyond the Standard Model physics, foreclosing large numbers of theories that predict slight deviations from Standard Model predictions in areas that we observe to be very accurately predicted by current theory, while loosening apparent exclusions of new physics theories in situations where the match between experimental results and theoretical predictions fails to perform as well as we would expect it to.
For example, I suspect that tighter error bars on W and Z boson decays at the highest energies ever measured probably disfavor theories that hypothesize the existence of additional neutral electroweak bosons (often called Z'). The LHC is predicted to have the capacity to exclude the existence of such bosons up to masses of about 5 TeV at the completion of its run at full power. But this exclusion could be substantially enhanced if overstated systematic and theoretical uncertainties were reduced. Exclusion ranges are typically set at the high end of a 95% confidence interval, and that interval would exclude a larger swath of new physics parameter space if the systematic and theoretical uncertainties assumed in the experiments were smaller.
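The arithmetic behind that claim is straightforward. In a simple Gaussian picture, the 95% CL bound on an unseen new-physics contribution to a measured quantity scales with the total uncertainty, so removing an overstated systematic error tightens the bound in proportion. The error values here are hypothetical.

```python
import math

def upper_limit_95(*errors):
    """Gaussian-approximation 95% CL bound on an unseen signal:
    ~1.96 times the quadrature sum of the uncertainties."""
    return 1.96 * math.sqrt(sum(e * e for e in errors))

# HYPOTHETICAL statistical and (overstated) systematic errors, in nb.
stat, syst = 0.03, 0.24

with_syst = upper_limit_95(stat, syst)  # bound using the quoted errors
stat_only = upper_limit_95(stat)        # bound if the syst error is negligible
print(f"95% CL bound with quoted syst: {with_syst:.3f} nb")
print(f"95% CL bound, stat-only:       {stat_only:.3f} nb")
```

With these numbers the statistics-only bound is roughly eight times tighter, which is the kind of gain that translates into a higher Z' mass exclusion.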
Honestly, there are a lot more cases of the former (i.e. excessively large systematic errors) than the latter (i.e. understated systematic errors) due to the way that the scientific process works in a context like the LHC.
Nobody is systematically flagging experiments that confirm Standard Model predictions at consistently better than 1 SD on average (1 SD being defined so that it is the typical expected deviation between prediction and experiment), even though, outside of statistical errors, there is no definitively optimal principle for quantifying these errors. So, when systematic or theoretical error is consistently overstated in a class of experiments, nobody notices it, and the empirically demonstrated levels of systematic or theoretical error are never incorporated into future estimates.
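The flagging that nobody does could be automated as a "too good to be true" test: under correctly estimated errors, pulls should be roughly standard normal, so the chance that every one of N independent pulls lands within a small fraction of a sigma is tiny. A minimal sketch, with the 0.2-sigma figure as a hypothetical stat-to-total error ratio:

```python
import math

def prob_all_within(z, n):
    """Probability that n independent standard-normal pulls
    all fall within +/- z sigma of zero."""
    p_one = math.erf(z / math.sqrt(2.0))  # P(|pull| < z) for one measurement
    return p_one ** n

# Six pulls all within 1 sigma of prediction: unremarkable (~10% chance).
# Six pulls all within 0.2 sigma: strong evidence the errors are overstated.
print(prob_all_within(1.0, 6))
print(prob_all_within(0.2, 6))
```

A result in the second category is exactly the kind of anomaly that should trigger a recalibration review, in the same way a 3-sigma excess triggers one today.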
In contrast, at the LHC, where a team of thousands of physicists is available to analyze its experimental results, almost every case in which there is more than a two or three sigma deviation from the theoretical prediction is flagged for further analysis to determine if the observed result is a function of new physics, or merely of underestimated systematic error. When understated systematic error is found to be the problem, these underestimated systematic errors tend to be identified and corrected in future experiments.
On the other hand, if cases where there is greater precision than currently realized have their error bars revised accordingly, new physics effects will manifest themselves much more rapidly in future experiments, because the statistical power of experiments of the type whose systematic and theoretical errors are revised downward will be increased.
A careful statistical review of the empirically observed magnitude of deviations from theoretically expected values in excess of statistical error at the LHC could dramatically reduce estimates of systematic and theoretical error. This would more definitively rule out many beyond the Standard Model theories experimentally, while simultaneously increasing the statistical power of LHC experiments to discern new physics where it is present.