The disparity between a no-Higgs-boson hypothesis and the data observed in the WW decay channels alone is significant at the 4.3 sigma level, nearly enough to constitute a Higgs boson discovery even in the absence of evidence of a Higgs boson's existence found in other decay channels. Both ATLAS and CMS had discovered a Higgs boson at more than the five sigma threshold of discovery in data through 2012 (which is the data set considered in this paper - the LHC is currently offline and being renovated).
The observed decay product counts were somewhat lower than the Standard Model expectation in four out of five WW decay channels, although those four channels were each individually within 1 sigma of the expected value and the fifth was within 1.2 sigma of the expected value. These subsets, however, had very large uncertainties due to their smaller sample sizes.
This data also further confirms prior conclusions about the Higgs boson's spin and parity (its neutral electric charge has never been in doubt, and its mass has been known to considerable precision for about a year now). The data show that this particle is a spin-0 scalar particle, rather than a spin-0 pseudo-scalar particle or a spin-2 particle, in accord with the Standard Model Higgs boson expectation. The data's exclusion of a spin-2 hypothesis is fairly solid (two to three sigma depending on your assumptions), but while the data favor a scalar over a pseudoscalar hypothesis by about a 2-1 margin, both remain consistent with this CMS data at the 1 sigma level.
The CMS data on both points is consistent with the data from the ATLAS experiment at the LHC.
Basically, the methodology consists of calculating how many events of five different types would be predicted by the Standard Model with a Standard Model Higgs boson of the appropriate mass, including background events from other processes that have the same decay products, and then counting the number of those events that were actually observed in Higgs boson decays seen at the LHC. Of course, finding and counting these exceedingly rare events in the remnants of billions of collisions requires amazing devices to create these energetic collisions en masse, sublimely accurate detectors to measure what happens in them, and incredible computing power running scientifically and statistically informed software that uses big data techniques to cull the events you are looking for completely and accurately from the raw detector data - software which must, in the first instance, reconstruct every single collision event's decay products into a complete decay story on an automated basis. The "boring" pages of the paper explain the myriad procedures, assumptions and techniques that went into getting this result.
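The counting logic above can be sketched in a few lines. This is a toy illustration with made-up event counts, not the CMS analysis (which uses full likelihood fits over many bins with systematic uncertainties); the simple excess-over-root-background formula is only a rough counting-experiment approximation.

```python
from math import sqrt

def naive_significance(n_observed, n_background):
    """Rough significance (in sigma) of an excess of observed events
    over the background-only expectation, valid for large counts."""
    return (n_observed - n_background) / sqrt(n_background)

# Hypothetical numbers, purely for illustration: 1,260 events observed
# in a channel where background-only processes predict 1,100.
print(round(naive_significance(1260, 1100), 2))  # about 4.8 sigma
```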
* A combination of new lattice QCD calculations and the latest data on semileptonic kaon decays has made possible a new record level of precision in measuring the up-strange element of the CKM matrix. The new value is 0.22290(90), which is now at approximately the same level of precision as the up-down element of the CKM matrix (the up-down element's current measured value is 0.97425 +/- 0.00022, although I have seen the global average value reported as 0.97427(15)). This study should tweak down the mean value of, and reduce the margin of error in, the old global average value of 0.22534(65).
The up-strange element divided by the up-down element of the CKM matrix is the tangent of the Cabibbo angle, which is slightly more than 13.0 degrees. In the Wolfenstein parameterization of the CKM matrix, the up-strange element is defined to be equal to the lambda parameter, which is used to calculate all but one of the other CKM matrix entries.
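The Cabibbo angle calculation is a one-liner. This sketch uses the old global-average values quoted in this post (not the new measurement) as inputs:

```python
from math import atan, degrees

V_us = 0.22534  # up-strange element (old global average quoted above)
V_ud = 0.97427  # up-down element (global average quoted above)

# The Cabibbo angle: its tangent is the ratio of the two elements.
theta_c = degrees(atan(V_us / V_ud))
print(round(theta_c, 2))  # slightly more than 13.0 degrees
```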
Since the CKM matrix is unitary (i.e. the sum of the squares of the up-down, up-strange and up-bottom elements equals exactly 1), this measurement also improves the accuracy with which the tiny up-bottom element of the CKM matrix is known. The up-bottom element of the CKM matrix is approximately 0.003395 (the old global average value for this element is 0.00351+0.00015-0.00014). [Updated June 14, 2014 to correct typo.]
In English, this means that when an up quark emits a W+ boson, it has roughly a 1 in 100,000 chance of becoming a bottom quark, about a 497 in 10,000 chance of becoming a strange quark, and about a 9,492 in 10,000 chance of becoming a down quark, subject to a small margin of error. This study reduced the experimental margin of error in the number of strange quarks produced from about 2-3 per 10,000 up quark transitions to about 1 per 10,000, and tweaked the expected number of transitions by one or two.
Until now, the error in the up-strange element was about one part in 200. Now, the error in both the up-down and up-strange elements is closer to one part in 300 to one part in 400. Using direct measurements of each of the three elements separately, the deviation from unitarity is about two standard deviations, or more precisely: -0.00115(40)(43) with the first error from the up-strange element and the second from the up-down element.
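The unitarity deviation quoted above is easy to reproduce from the three first-row elements. A minimal check, using the values stated earlier in this post as inputs:

```python
# First-row unitarity test: if the CKM matrix is unitary, the squares of
# the up-down, up-strange and up-bottom elements must sum to exactly 1.
V_ud = 0.97425    # up-down element
V_us = 0.22290    # up-strange element (the new measurement)
V_ub = 0.003395   # up-bottom element (tiny; barely affects the sum)

deviation = V_ud**2 + V_us**2 + V_ub**2 - 1.0
print(round(deviation, 5))  # close to the -0.00115 quoted above
```

The squares themselves are also the per-transition probabilities discussed earlier: multiplying each by 10,000 gives the approximate odds per 10,000 W boson emissions.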
The nine elements of the CKM matrix are described by four experimentally measured Standard Model parameters (there is actually more than one scheme by which the matrix can be reduced to four parameters and three parameterizations are in wide usage). The up-strange element is the product of two out of four parameters in two of the main schemes for parameterization and of one parameter in another.
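A sketch of the Wolfenstein scheme mentioned above, showing how four parameters generate the magnitudes of several CKM elements to leading order. The numerical inputs here are approximate illustrative values of my own choosing, not this paper's fit results:

```python
from math import sqrt

# Wolfenstein parameters (approximate illustrative values, not fit results).
lam, A, rho, eta = 0.2253, 0.81, 0.13, 0.35

V_us = lam                                  # defined to equal lambda
V_cb = A * lam**2                           # charm-bottom element
V_ub = A * lam**3 * sqrt(rho**2 + eta**2)   # up-bottom element, leading order

print(round(V_cb, 4), round(V_ub, 4))
```

Note how the small parameter lambda suppresses the off-diagonal elements: each step away from the diagonal costs roughly another power of lambda, which is why the up-bottom element is so tiny.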
* New and improved measurements of the mean lifetime of the antiparticle of a strange B meson have also been made at the LHC. An anti-B0s meson is a composite particle composed of an anti-strange quark and a bottom quark. This is an utterly routine paper that confirms Standard Model expectations, but I'll take a little time today to explain this result and how it fits into a larger context, since it is a good example of the routine, everyday work of experimental high energy physicists, so that the purpose and importance of this kind of work can be better understood.
The paper is:
Measurement of the B¯0s→D−sD+s and B¯0s→D−D+s effective lifetimes
LHCb collaboration: R. Aaij, et al. (674 additional authors not shown)
(Submitted on 4 Dec 2013)

What is being measured?
Background on hadron decay
Atomic nuclei are made up of protons and neutrons. Protons and neutrons are by far the most stable examples of the more than a hundred kinds of composite particles made up of quarks, called hadrons. Hadrons are almost always made up of either a quark and an antiquark (mesons) or three quarks (baryons, such as the proton and neutron).
Like every hadron (except the proton and the neutron when confined in an atomic nucleus that is small enough), the antiparticle of a strange B meson, the hadron whose decays are examined in this paper, is unstable.
Hadrons can always decay in more than one way, with some decay channels often being much more common than others. The possible decay products are limited by a variety of conservation laws in quantum physics - conservation of electric charge, conservation of baryon number, conservation of mass-energy, and so on. But every decay path that respects those conservation laws will happen, with a probability that can be calculated from first principles using the equations of the Standard Model, including a number of its key parameters.
Some Standard Model parameters like the Z boson mass, the electromagnetic coupling constant, the neutrino masses, the PMNS matrix elements, the charged lepton masses, the Higgs boson mass, and the top quark mass are irrelevant to the process or have such a tiny effect that they can be disregarded when making calculations. But, the weak force coupling constant and CKM matrix elements for the quarks in the source hadron and decay product hadrons are critical to making the calculation.
Each possible decay route occurs with a probability that can be calculated. This probability corresponds to the decay width of that particular decay path. The partial widths of all of the decay modes combine to give the total decay width of the particle. In the Standard Model, the decay width and the mean lifetime are simple functions of each other, so the effective lifetime of a particular fundamental or composite particle for each decay channel can be determined by knowing what proportion of particles of a particular type decay into decay products of a particular type.
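The width-lifetime relation above is just tau = hbar / Gamma. A minimal sketch, with an illustrative width chosen by me to land in B meson territory (not a measured value from this paper):

```python
# Reduced Planck constant in GeV*seconds.
HBAR_GEV_S = 6.582119569e-25

def lifetime_from_width(total_width_gev):
    """Mean lifetime in seconds from a total decay width in GeV."""
    return HBAR_GEV_S / total_width_gev

# Illustrative only: a particle with a total width of 4.3e-13 GeV lives
# about 1.5 picoseconds, the right ballpark for a B meson.
print(lifetime_from_width(4.3e-13))  # ~1.53e-12 s
```

The branching fraction of any one channel is then its partial width divided by the total width, which is why measuring what fraction of decays go down a given path pins down that channel's effective lifetime.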
Interestingly, in the Standard Model, you calculate the probabilities related to each decay channel separately, rather than as part of the whole. The fact that these probabilities in fact both match experimental data and also always add up to 100% (apart from uncertainties like rounding errors and uncertainty in the measured value of the Standard Model parameters) is itself a property of the Standard Model equations that lends support to the correctness or near correctness of the model and profoundly limits room for modifications of it that can be consistent with the data.
The Standard Model predicts the existence of more than a hundred different kinds of mesons (made of two quarks) and baryons (made of three quarks) (collectively, hadrons). All but a dozen or two of them have been observed experimentally, and those that have not been observed experimentally are precisely the very heavy ones that are expected to be created only infrequently. The observed hadrons, collectively, have several hundred decay channels that are frequent enough to have been measured accurately.
Consistent with the Standard Model, every single hadronic decay product that has been observed is made up of just five kinds of quarks (up, down, strange, charm and bottom; the sixth kind of quark, the top quark, does not generally form hadrons), and none of the observed decay products ever violates the Standard Model's conservation laws.
Even more remarkably, out of the many hundreds of decay channels that have been observed from the scores of hadrons that have been observed, the relative frequency of the quark content of the decay products in every single one of them is consistent to within the boundaries of measurement error with a simple nine element CKM matrix (which can be fully described with just four parameters) that provides the probability of a quark of one type being transformed into a quark of another type when it emits a W boson.
This consistency is powerful proof of the Standard Model which is cross-checked again and again every single time that a paper like this one measures the properties of a particular hadronic decay channel from a particular kind of hadron.
This paper's findings
This paper looks at decays of the antiparticle of the strange B meson, aka the anti-B0s meson, an electromagnetically neutral particle made up of two quarks confined into a composite particle by the strong force of QCD, in which the constituent quarks are a bottom quark and the antiparticle of a strange quark (aka an anti-strange quark).
The paper looks at data regarding two common decay channels for the anti-B0s meson.
One common decay channel for the anti-B0s meson is a decay to a pair of strange D mesons, one positively charged and one negatively charged (i.e. one composite particle made up of a charm quark and an anti-strange quark, and one composite particle made up of an anti-charm quark and a strange quark).
Another common decay channel is to a negatively charged D meson and a positively charged strange D meson (i.e. one composite particle made up of a down quark and an anti-charm quark, and another composite particle made up of a charm quark and an anti-strange quark).
The paper reports that the mean lifetimes for each of these decay paths have been measured with great precision - about 1.379 picoseconds for the pair-of-strange-D-mesons decay path (with about a 3% margin of error) and 1.52 picoseconds for the charged D meson plus strange D meson decay path (with about a 10% margin of error).
Recap of background as related to the larger context of this experiment
In the Standard Model, the probability that a meson will decay via a particular decay path is largely a function of the three CKM matrix entries for each of the two quarks in the meson, along with the weak and strong force coupling constants and the equations of the weak force and QCD.
Since quarks (other than top quarks) are always confined, the CKM matrix elements are determined by observing the decay of mesons and baryons - composite particles - into other mesons and baryons (in addition to any resulting leptons and bosons emitted via the W boson involved in the decay), and by using data from a variety of different scenarios where this happens to back out the properties of the individual constituent quarks from the experimentally measured properties of the hadrons that contain them. Meson decays are particularly attractive for these kinds of studies because two quark systems are usually simpler to analyze than three quark systems.
Research these days focuses on heavy mesons like B mesons (which contain bottom quarks) and D mesons (which contain charm quarks), particularly the strange B mesons and strange D mesons (which have no "plain vanilla" up and down quarks), because these are produced far less often than lighter mesons made up only of up, down and strange quarks, so experimental data on their decays is sparser and hence less precise. Also, it is harder to do QCD calculations for these systems because one cannot use the much easier three-quark-flavor approximations - one must do more difficult four- or five-flavor calculations (including the top quark in the QCD calculations usually changes the result by less than the uncertainty in the underlying physical parameter inputs for calculations involving hadrons).
Thus, while individually this is just another ho-hum measurement, collectively the measurements of the mean lifetimes of all mesons and baryons for which measurements can be made allow us to calculate a variety of Standard Model parameters such as the CKM matrix elements. Moreover, if the Standard Model is correct, the CKM matrix elements for any given quark have to produce results consistent with experiment in all of the scores of different hadrons that include quarks of that type.
To oversimplify, there are half a dozen collider experiments in the world. They analyze the products of vast numbers of collisions and use a variety of data points about each one to determine from the decay products what the hadron at the start of the decay chain was and by what path it broke down into the decay products that were observed. This data is put into files for each kind of original hadron with subfiles for each decay type from that kind of hadron that is observed with the precisely measured properties and frequency of that kind of decay. The data in each subfile is analyzed and reduced to a paper like the one linked above.
Then, every year or two, somebody at the Particle Data Group compiles the conclusions of all of these papers into a consistent format and calculates a weighted average of all data ever collected for that decay channel.
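The core of such an averaging step is an inverse-variance weighted mean. This sketch reuses the two up-strange values quoted earlier in this post purely as an illustration; a real Particle Data Group average also handles correlations between measurements and applies scale factors when results disagree:

```python
def weighted_average(measurements):
    """Inverse-variance weighted mean of (value, sigma) pairs.
    Returns (combined value, combined sigma)."""
    weights = [1.0 / s**2 for _, s in measurements]
    total = sum(weights)
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return mean, total**-0.5

# Illustration: combining the old global average with the new measurement
# of the up-strange CKM element quoted earlier in this post.
mean, sigma = weighted_average([(0.22534, 0.00065), (0.22290, 0.00090)])
print(round(mean, 5), round(sigma, 5))
```

Note that the combined uncertainty is smaller than either input's, which is exactly why adding even a less precise new measurement still sharpens the world average.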
On about the same time frame, somebody does a meta-analysis of all of the different papers with data relevant to the Standard Model parameter that they are studying from the published literature with help from the compilation at the Particle Data Group that indexes these reports as footnotes to its entries.
In this analysis they back out, for example, best estimates of the bottom-to-charm quark element of the CKM matrix from the hadronic measurements and the relevant formulas, review the data for tensions between the values implied by different kinds of decays (reconciling them with further analysis when possible), and then use weighted averages supplemented by Standard Model specific theoretical analysis to revise the estimates for these parameters. The paper on semileptonic kaon decays earlier in this post nicely illustrates this process.
The kaon decay paper also illustrates that, without state of the art lattice QCD calculations, the process of backing CKM elements out of the experimental data can't be done with any great precision from first principles, although mere knowledge of the QCD equations together with data from similar hadrons can make it possible to predict the properties of new hadrons by extrapolating from those whose properties have already been measured in sensible ways. This is the state of QCD generally. Many hadron observables have predicted values calculated with QCD from first principles that are far less precise than the experimental observations of those hadron observables. The most precise QCD calculations, at best, rival the accuracy of the experimental observations to date (in part, however, because the accuracy of the QCD input parameters limits the accuracy of its predictions).
This is an ongoing labor involving something on the order of tens of thousands of highly skilled physicists and technicians every year that has gone on steadily (with varying levels of people committed to the effort from year to year, gradually going from the hundreds to the thousands to the tens of thousands over time with some bumps in the road as experiments are shut down before new ones are opened) for the last fifty years or so.
The experimentally measured Standard Model parameters, whose values to date can be summed up on a single page of paper, have cost something on the order of hundreds of billions of 2013 dollars over more than five decades, and a meaningful share of the brightest scientific minds in the world, to determine. Many of those values are still known only imprecisely and will take hundreds of billions of additional dollars to refine.
In a fine illustration that the Marxist labor theory of value is not true, however, that page of data that cost hundreds of billions of dollars to create can be obtained for free from reliable sources on the Internet. The same information, had it been available to scientists in the late 1930s, could easily have made it possible for whoever had been in possession of it to win World War II.
* A paper claiming to have used lattice QCD methods to establish the charm quark mass with a precision of less than 1% is worth noting. The claimed value is 1.273(6) GeV. Continuum QCD methods have apparently reached similar precision.
Notably, this means that the Koide triple value for the charm quark mass from the t-b-c triple, which is 1.356 GeV, is now off by about 6.5%, which is more than twelve sigma from the new precision value.
The author of this paper is also a co-author of the Kaon decay paper described above. Earlier this year this author and others concluded using similar methods that the b quark mass was 4.166(43) GeV. See also an early paper on these calculations here. These results still leave the masses of the three light quarks (u, d and s) quite uncertain. But, the three heavy quark masses are all now quite precisely known. Another paper with this author as a co-author estimates the charm-down quark element of the CKM matrix.
Other investigators have made great progress in determining the strange quark mass, concluding in May of this year that it was 94 +/- 9 MeV, a reduction in the previous uncertainty of about two-thirds (the Koide ladder prediction based on the top and bottom masses had been 92 MeV).