Tuesday, December 17, 2013

Korea's Paekchong

Korea's Paekchong minority is an "outcast" group, somewhat comparable to the untouchables of India, although in many ways they are more akin to Europe's gypsies (aka Roma): they derive from a population that migrated to Korea en masse, in the historic era, from abroad, but retained an ethnic identity at society's fringes rather than integrating into the mainstream.  A similar underclass in Japan is called the Burakumin.

According to Korean historical sources, they originated as several waves of nomadic tribes who relied primarily upon hunting, selling crafts and entertaining, the first wave of which arrived in Korea around 1217 CE.  King Sejong (1419-1450) attempted to integrate them into Korean society, but this effort failed, and integration was not fairly fully effected until after the Korean War, some five hundred years later.  Instead, they eventually settled into a lifestyle on the fringe of Korean society, in ghettos where they carried out slaughtering trades in addition to their traditional means of self-support.

A lengthy blog post provides a sense of who these people are and where they came from historically, and muses on why this cultural legacy endured so long (with genetic explanations among those considered).

To the extent that Korea's Paekchong represent one of the last hunter-gatherer populations of Northern Asia to be integrated into Korean society, a study of their genetics, undertaken just as the ethnicity is ceasing to be a cultural reality, could provide a time capsule of the genetics of such groups 800 years ago.

Monday, December 16, 2013

Rare Top Quark Decays And Why There Are No Top Quark Hadrons

Fermilab produced about 150,000 top quarks over its entire run (the first was observed in 1995), while the LHC had produced about 2,000,000 top quarks as of June 2012.

Top quarks almost always decay into a W+ boson and a bottom quark.  But, sometimes, they decay into a W+ boson and a strange quark, or into a W+ boson and a down quark.

Based on the current Particle Data Group CKM matrix values (based on a global fit of the four CKM matrix parameters), top quarks decayed into strange quarks about 240 times at Fermilab and about 3,200 times at the LHC, while top quarks decayed into down quarks about 10-11 times at Fermilab and about 140 times at the LHC.  Of course, since the numbers are small, there is considerable statistical sampling error in these values in addition to experimental error of other types (systematic error).
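For concreteness, here is a minimal back-of-the-envelope sketch of where counts like these come from.  The CKM magnitudes plugged in below are approximate round values, and the proportionality of branching fractions to the squared CKM elements is the usual tree-level simplification; treat it as an illustration rather than a precise calculation.

    # Rough estimate of rare top quark decay counts from CKM matrix elements.
    # The CKM magnitudes below are approximate, round values (assumptions).
    V_tb, V_ts, V_td = 0.999, 0.040, 0.0087

    # Branching fractions are taken to be proportional to |V_tq|^2.
    total = V_tb**2 + V_ts**2 + V_td**2
    br_ts = V_ts**2 / total
    br_td = V_td**2 / total

    for name, n_tops in [("Fermilab", 150_000), ("LHC (through mid-2012)", 2_000_000)]:
        print(f"{name}: t -> W+ s about {br_ts * n_tops:.0f}, "
              f"t -> W+ d about {br_td * n_tops:.0f}")
    # Prints roughly 240 and 11 for Fermilab, and roughly 3,200 and 150 for the LHC.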

Top quarks have much shorter lifetimes than bottom or charm quarks (which have mean lifetimes of about a trillionth of a second), by a factor of about 10^11 or 10^12 (lighter quarks have even longer mean lifetimes).  Top quarks are still much longer lived than a hypothetical minimum unit of time called the Planck time, by a factor of roughly 10^19.

While the actual lifetime of any given bottom quark or top quark varies, there is virtually no overlap between the two in mean lifetimes.  Since the mean time necessary to form a meson (a quark-antiquark composite particle) or baryon (a three-quark composite particle) is much longer than the mean lifetime of a top quark, and much shorter than the mean lifetime of all other quark types, top quarks generally do not form composite particles prior to decaying, while other quarks almost always form composite particles before decaying (i.e. they are confined).

What does that mean in terms that are understandable?

It means that if a bottom quark's mean lifetime were set equal to 70 years, like a typical human's, then a top quark's mean lifetime would be on the order of a hundredth of a second.  If the top quark's mean lifetime were actually a hundredth of a second, the longest lived top quark out of the roughly 2,150,000 top quarks ever made on Earth would probably have lasted only about 0.15 seconds.  Imagine that the mean time necessary to form a meson were about two and three-quarters hours in the example above.  The mean lifetime of a bottom or charm quark would be about a million times longer than that, while the mean lifetime of a top quark would be about a million times shorter.
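A minimal sketch of that order-of-magnitude estimate, assuming simple exponential decay (the expected maximum of N exponential draws is roughly the mean lifetime times the natural log of N):

    import math

    # Scaled analogy: top quark mean lifetime rescaled to 0.01 seconds (assumption).
    tau = 0.01          # rescaled mean lifetime in seconds
    n_tops = 2_150_000  # rough number of top quarks ever produced on Earth

    # For exponential decay, the expected longest-lived of N samples is about
    # tau * (ln N + Euler-Mascheroni constant).
    expected_max = tau * (math.log(n_tops) + 0.5772)
    print(f"Expected longest-lived top quark: about {expected_max:.2f} s")
    # => roughly 0.15 seconds on this rescaled clock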

The number of non-top quarks that fail to hadronize before decaying in that situation would be on the order of one per 2^9239 for bottom and charm quarks, and would be much smaller still for the three lighter flavors of quarks.  This is roughly one per 10^2781 charm and bottom quarks (and a much smaller share of up and down quarks).  Yet, there are only about 10^80 quarks in the entire universe, and the vast, vast majority of them are up and down quarks.

Thus, while strictly speaking, a composite particle containing a top quark is not forbidden by the Standard Model, it is so vanishingly unlikely that there is no good reason to believe that such a composite particle will ever be observed, or that an unconfined quark of some other type will ever be observed.

The age of the universe in seconds is about 4*10^17, only a tiny proportion of the 10^80 quarks in the universe at any given time are top quarks (a number that should be roughly constant over time after the universe's first few instants, due to extreme baryon asymmetry and conservation of baryon number), and the probability of a given top quark surviving long enough to hadronize is vanishingly low.  If that probability is less than 1 in 10^99, for example, it is an event that has almost surely never taken place even once in the entire universe, ever (during hypothetical baryogenesis, which would have lasted a few moments to a few hundred thousand years at most, the total number of quarks in the universe would have been even smaller for the vast majority of that time, although the proportion of top quarks was probably quite high in that high energy environment). The real probability of a top quark living long enough to hadronize is probably closer to 1 in 10^2000 than it is to 1 in 10^99.

UPDATE 12/17/13:  The statements I made above about the mean lifetimes of the relevant quarks, the age of the universe, the number of quarks in the universe, the Planck time, the number of top quarks ever produced, and the estimated number of rare top quark decays are all correct.  But, one number I used, and the inferences that flow from it, was misleading.

Go back to the scenario in which a bottom quark's mean lifetime is 70 years and a top quark's mean lifetime is a hundredth of a second.  So far so good.  But, the average time to form a hadron is something like one second to one minute on that scale, not two and three-quarters hours.

This makes the confinement of non-top quarks even more absolute than I suggested.  One in 10^2781 charm or bottom quarks decaying before hadronizing was a gross overestimate of the likelihood of that happening.  In fact, it might be closer to 1 in 10^4000 or so.  These quarks are simply never going to decay before hadronizing.

But, I grossly overstated the rarity of a top quark living long enough to hadronize, in theory anyway.  This is more like one in 2^4 to 2^7.  Thus, while top quark hadronization might be suppressed somewhere on the order of 90%-99.99% (the low end estimate assuming that two top quarks need to persist that long, since no other quarks can get into their proximity that quickly) relative to all other quarks, which hadronize 100% of the time, it shouldn't be suppressed on that basis alone so much that one would expect top quark mesons never to have formed in the 2,150,000 cases where top quarks have been created.  One would expect something on the order of 200 top quark mesons to have been created.  (See a previous post discussing the possibility less rigorously here.)

It may simply be that a t-t' meson decay is indistinguishable from the decay products of isolated decays of a t and a t' pair that never hadronize with each other, since the odds are overwhelming that the t will decay to a b and that the t' will decay to a b' in both cases and that this would happen very rapidly after hadronization (possibly leading to a brief intermediate period in which there is a top-bottom meson with a mass on the order of 178-180 GeV).  But, I would think that there would be at least some difference in experimental signature between the two cases.  Any t-t' decay is quite spectacular, with or without hadronization, as it is a 346 GeV+ event.  So, almost all such decays have probably been noted and studied in detail.  By comparison, the heaviest conceivable hadron with no top quarks (a spin-3/2 baryon, with electric charge -1, made of three b quarks), something that has not yet been definitively observed, would have a mass of more than 12.6 GeV, but probably less than 15 GeV.  A top-antitop meson would have a mass twenty-three times or more as great.

The characteristic time period required to hadronize is estimated, roughly speaking, on the assumption that two quarks moving at the speed of light relative to each other will hadronize if they come within 10^-15 meters of each other.  If that estimate is too brief for some reason, then the actual time required for hadronization to take place could be longer than the roughly 10^-23 seconds (give or take a substantial margin of error) that is often cited as an estimate, and that could explain the deficit of top quark mesons observed.

If this estimate were too short by a factor of ten, then the expected number of top quark mesons produced to date might be closer to 2 than 200 and this small number of outlier events might be too statistically insignificant to be noticed.
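To illustrate how sensitive the expected number of top quark mesons is to the assumed hadronization time, here is a minimal sketch assuming exponential decay, a top quark mean lifetime of 5*10^-25 seconds, and that both quarks in a produced pair must survive long enough to hadronize.  The hadronization times scanned below are assumptions spanning a plausible range, not measured quantities.

    import math

    tau_top = 5e-25        # assumed top quark mean lifetime in seconds
    n_top_quarks = 2_150_000

    # Scan a few assumed hadronization times (seconds); the result swings wildly.
    for t_had in (1.5e-24, 2.3e-24, 3.0e-24):
        p_survive = math.exp(-t_had / tau_top)   # one top lives long enough
        p_pair = p_survive ** 2                  # both tops in a pair survive
        expected_mesons = (n_top_quarks / 2) * p_pair
        print(f"t_had = {t_had:.1e} s: P(one survives) = {p_survive:.2e}, "
              f"expected top mesons = {expected_mesons:.1f}")
    # The expectation runs from thousands, to a couple hundred, to a handful
    # as the assumed hadronization time grows over this modest range.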

The Extended Koide's Formula Two Years Later

Almost exactly two years ago, I blogged about a paper by A. Rivero that used a very small number of inputs which have been measured with precision and a couple of key generalizations of Koide's formula for charged leptons to predict the quark masses.

Koide's formula provides that the sum of the three charged lepton masses, divided by the square of the sum of the square roots of those masses, is equal to exactly two-thirds, a relationship that has held true for decades despite ever more precise measurements of the charged lepton masses, which are now known to about seven significant digits of accuracy.
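As a quick check, here is the Koide ratio computed directly from the charged lepton masses (approximate PDG-style values; the tau mass, the least precise of the three, is rounded):

    import math

    # Charged lepton masses in MeV (approximate PDG values).
    m_e, m_mu, m_tau = 0.510998928, 105.6583715, 1776.82

    def koide_ratio(m1, m2, m3):
        """Koide's Q = (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))**2."""
        return (m1 + m2 + m3) / (math.sqrt(m1) + math.sqrt(m2) + math.sqrt(m3)) ** 2

    print(koide_ratio(m_e, m_mu, m_tau))  # ~0.66666, i.e. 2/3 to within experimental error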

A generalization of the formula for quarks proposed that:

(1) the sum of the masses of the strange quark, charm quark and bottom quark is precisely three times the sum of the masses of the three charged leptons (a natural multiple in light of the fact that each quark comes in three different colors while each charged lepton comes in only one, a fact reflected in W and Z boson decays); and

(2) each set of three quarks that are sequential in mass (u-s-c, s-c-b, c-b-t) forms a Koide triple that obeys the same rule for quarks.

These assumptions produced the following results, to which I add the current Particle Data Group values for the state of the art experimentally measured quark masses (in the same units as the original blog post), and a conversion of the difference between the extended Koide formula calculation and the experimentally measured value into standard deviations from the measured value, to two significant digits (ignoring the negligible margin of error due to uncertainty in the electron and muon masses, which for the purposes of Koide's formula calculations are for all practical purposes exact).

Inputs
me = 0.510998910 MeV ± 0.000000013 (i.e. one part per 39,307,608)
mμ = 105.6583668 MeV ± 0.0000038 (i.e. one part per 2,780,483)

Outputs
mτ = 1776.96894(7) MeV (tau) - PDG 1776.82 +/- 0.16 (i.e. one part per 11,105) (0.93 SD)
mt = 173.263947(6) GeV (top) - PDG 173.070 +/- 0.888 (i.e. one part per 194) (0.22 SD)
mb = 4197.57589(15) MeV (bottom) - PDG 4180 +/- 30 (i.e. one part per 139) (0.58 SD)
mc = 1359.56428(5) MeV (charm) - PDG 1275 +/- 25 (i.e. one part per 51) (3.38 SD)
ms = 92.274758(3) MeV (strange) - PDG 95 +/- 5 (i.e. one part per 19) (0.55 SD)
md = 5.32 MeV (down) - PDG 4.8 +/- 0.4 (i.e. one part per 12) (1.3 SD)
mu = 0.0356 MeV (up) - PDG 2.3 +/- 0.6 (i.e. one part per 4) (2.26 SD)
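For readers who want to check the ladder themselves, here is a minimal numerical sketch.  The function names are mine, and the closed-form root is simply an algebraic rearrangement of the Koide condition; starting from the charm and bottom values quoted above reproduces the top quark entry.

    import math

    def koide_q(m1, m2, m3):
        """Koide ratio Q = (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))**2."""
        return (m1 + m2 + m3) / (math.sqrt(m1) + math.sqrt(m2) + math.sqrt(m3)) ** 2

    def third_koide_mass(m1, m2):
        """Larger mass m3 making (m1, m2, m3) an exact Koide triple (Q = 2/3).

        With a = sqrt(m1), b = sqrt(m2), x = sqrt(m3), the condition Q = 2/3
        reduces to x**2 - 4*(a + b)*x + (a**2 + b**2 - 4*a*b) = 0, and the
        larger root is taken here (the smaller root gives the lighter member).
        """
        a, b = math.sqrt(m1), math.sqrt(m2)
        x = 2 * (a + b) + math.sqrt(3 * (a * a + b * b + 4 * a * b))
        return x * x

    # Climbing the ladder from the charm and bottom values quoted above (MeV):
    m_c, m_b = 1359.56428, 4197.57589
    m_t = third_koide_mass(m_c, m_b)
    print(f"predicted top mass: {m_t / 1000:.3f} GeV")          # ~173.264 GeV
    print(f"check: Q(c, b, t) = {koide_q(m_c, m_b, m_t):.6f}")  # ~0.666667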

The Koide ratios of the PDG mean values for selected triples are as follows:
t-b-c   0.6695
b-c-s  0.4578
c-s-u  0.622
s-c-d  0.60563
s-u-d  0.564

Best fits of any given quark triple to Koide's ratio, given the range of experimental error for each input, are considerably better.

Successes

The original Koide's formula for charged leptons has remained true, without modification, to within the margin of experimental error for decades, despite the fact that the least accurately known of the masses, the tau mass, is now known to one part per 11,105 precision and was known far less accurately when Koide's formula was proposed.

The extended formula, with just two experimentally measured inputs, post-dicted the mass of the tau to within one standard deviation, and the three down-type quark masses to within 1.3 standard deviations, with an average difference from the current mean experimental values of 0.81 standard deviations.  The formula also predicted the top quark mass very accurately.  The predicted ratio of the strange quark mass to the down quark mass is also within one standard deviation of the experimental value.  All of these post-dictions would be treated as experimental confirmations of the theory if it were part of the Standard Model.

The accuracy of the Koide's formula predictions for the tau, top, strange and down quark masses, moreover, has grown greater rather than less as the precision of the experimental measurements of these quantities has improved.

The triple with the best fit of the mean value PDG data to the extended Koide's formula is the t-b-c triple, which has the virtue of being the most precisely measured on a percentage basis and of being a decay path that is extremely dominant, because almost every top becomes a bottom and a very high percentage of bottoms in turn become charms rather than ups or tops.

The runner up, the charm-strange-up decay chain, is also quite consistent.  Charm to strange to up is far more common than charm to strange to top, or charm to strange to charm.

Tensions Between Koide's Formula and The Measured Results

There is more tension between the post-dictions and the measured masses of the charm quark and the up quark.

The Charm Quark Mass

The post-dicted absolute charm quark mass is off by the greatest amount in terms of standard deviations of error (and by even more if the precision of new charm quark mass measurements is accepted).  The deviation from the experimental value in absolute terms is about 6.6%.

Some of this deviation may flow from the way that the 3-1 mass ratio of the s-c-b triple to the charged leptons is implemented, which may be an imperfect and merely coincidental relationship.  A fit almost perfectly consistent with both the extended Koide's formula and the PDG data can be made by using a 1 SD low top mass, a 1 SD high bottom quark mass, and a 1 SD high charm quark mass.

Similarly, the u-s-c triple can fit values within 1 SD (high) of the charm quark mass and about 0.5 SD (low) of the strange quark mass, although it again produces a negligible up quark mass.

The Up Quark Mass

The post-dicted absolute up quark mass is off by fewer standard deviations (it is almost within the two standard deviation theory confirmation range) and by just about 2.3 MeV - a smaller error, in terms of the absolute number of eVs, than the precision to which any of the second or third generation quark masses is known.

But, the measured value is some sixty-five times the Koide's formula predicted value.  Likewise, the ratio of the up quark mass to the down quark mass (0.38-0.58 according to the PDG, with margins of error of 5%-20% in the various measurements that contribute to the global average) is off by 747 standard deviations (the similarity to the model number of a famous commercial aircraft is just a coincidence).

The predicted average of the up and down quark masses is about 4 standard deviations below the PDG summary of the experimental data, due to the low up quark mass prediction.  If the up quark mass were the experimentally measured value, the average of the up and down quark masses would be 0.44 SD from the mean.

The up quark mass is the least accurately measured of the experimentally determined masses of the Standard Model other than the neutrino masses, and while the absolute neutrino masses and the neutrino mass hierarchy aren't known with certainty, the differences in mass between the three neutrino mass eigenstates are known much more precisely than the differences in mass between quark flavors.

It is tempting to think that the negligible but non-zero up quark mass is correct.  The techniques used to estimate the up quark mass are rather crude relative to the quantity measured.  And, this would provide the added benefit of solving the strong CP problem, because CP violation in strong force QCD interactions is naturally suppressed by a negligible up quark mass, without resorting to fine tuning of the chiral quark mass phase in the QCD Lagrangian or requiring the introduction of new particles such as axions.

Where do we stand?

The very simple extended Koide's formula closely approximates all of the charged fermion masses from just two charged lepton masses which have been measured with great precision.  Even the two masses it gets wrong are tolerably close to the experimentally measured values for many purposes.

For example, the experimentally measured absolute values of the up quark and charm quark masses have only been known precisely enough to contradict the extended Koide's formula at a more than two standard deviation level for the last two or three years.  No other theory predicts the charged fermion masses so accurately with so little fine tuning.

But, Koide's formula is also wrong in these two cases.

The fact that the simple extended Koide's formula reasonably approximates the texture of the Standard Model quark mass matrix, without any quark mass inputs at all, suggests that it is at least more or less on the right track, as a first order approximation.

Is there a way to make the extended Koide's formula more accurate where it errs, without throwing off its currently correct predictions or sacrificing the elegance of the concept?

Assume that the extended Koide's formula does such a good job of predicting the charged fermion masses because it is doing something right as a first order approximation, but that the omission of next to leading order corrections is throwing off the result for the charm and up quarks materially, and may slightly influence the formula's predictions for the other four masses.

Could terms that are well motivated theoretically be added to the formula to refine it in a way that would not throw off the other terms?

I think that the answer is yes.  But, I can't say that I'm confident that I've found it.

Heuristically, I think that what Koide's formula reflects in the quark sector is a process whereby charged fermion masses (or charged fermion Yukawas, if you would prefer that level of analysis) are the emergent result of a dynamic balancing of the masses of particles that produce a quark of a particular type in W boson interactions, and the masses of particles that a quark of a particular type produces in W boson interactions.

For example, top quarks almost always decay into bottom quarks, which overwhelmingly tend to decay into charm quarks.  Hence, the bottom quark mass represents a balanced average (in the general sense of an intermediate value that can be computed by any of a number of means) between the top quark mass and the charm quark mass.  Likewise, bottom quarks tend to decay into charm quarks, which in turn tend to decay into strange quarks.

In the case of the charged leptons, the decay chain is particularly uncluttered.  Taus decay into muons or electrons (with almost equal probability), muons decay into electrons, and the reverse (an electron that becomes a muon or a tau, or a muon that becomes a tau) almost never happens.

Quark flavors mix much more readily than charged lepton flavors.  For example, while about 95% of the time, charm quarks decay into strange quarks, about 4.9% of the time they decay into down quarks and about 0.1% of the time a charm quark emits a W+ boson and becomes a bottom quark (conservation of energy permitting).

The mass of a down quark relative to that of a strange quark is negligible, and a somewhat less than 4.9% downward adjustment in the charm quark mass, due to second order terms in an extended Koide's formula, would bring the predicted value much closer to the experimentally measured value.

Similarly, using a u-s-c triple to determine the up quark mass omits the roughly 1.1 in 1,000 chance that an up quark will become a bottom quark (PDG mass 4,180 MeV), and the dominant possibility that an up quark will become a down quark.  Using a u-d-s triple likewise omits the bottom quark's impact.  Crudely, this probability times the bottom quark mass would suggest an upward adjustment on the order of 4.6 MeV, which is much closer in order of magnitude to the PDG value.

Both of these examples suggest that the next to leading order term adjustment ought to have a value roughly on the order of the most important particle mass that the triple omits times the probability of a transition to that kind of particle in the CKM matrix.

Now, the precise formulas to use to implement these changes are hard to work out.  Conceptually, for example, in the b-c-s triple, the notion would be to replace the bottom quark mass in the extended Koide triple formula with a probability weighted average of the particles that can be transformed by W boson emission into charm quarks, and to replace the strange quark mass in the formula with a probability weighted average of the particles into which a charm quark can be transformed by W+ emission.

There is a fairly straightforward way to do this using standard CKM matrix elements for the charm decays.  But, the way to do this for decays of particles that become a charm quark is less obvious (since the probability matrix into a charm quark isn't necessarily exactly unitary, unlike the elements coming out of it in CKM matrix form), and it is less easy to get the proper inputs, since the standard CKM matrix covers only up-type to down-type quark transitions and one has to properly determine the inverse down-type to up-type quark transition probability matrix to get it right.

I'm also not entirely comfortable that it is correct to simply disregard conservation of energy considerations in doing the analysis, but I'm not sure how to integrate conservation of energy considerations if I don't disregard them.

The other vexing aspect of the next to leading order terms is that, since the extended Koide's formula sets forth a non-linear relationship between the three terms in the Koide triple, the naive weighted average approach that I have suggested seems to unduly dilute the impact of the most important missing mass term.  Some trial and error on my part suggests that the weighted average approach, while it seems to make sense, does not integrate the information about the CKM matrix probabilities and omitted masses in the way that it should.

A better result than the weighted average is obtained by using the extended Koide's formula in its original form to determine all of the quark masses, and then adding, in each case, an adjustment equal to something like the mass of the most important quark not included in the extended Koide triple used to determine the mass of the quark being solved for (i.e. the omitted quark), multiplied by the square of the CKM matrix element representing the probability of a transition from the quark being solved for to the omitted quark.

Something like the average of all of the possible adjustments can be used as the NLO term (two when possible, one when there is only one adjustment to make), rather than putting weighted averages into the extended Koide's formula itself, and this seems to work even better.

This would give:

mt=172.743 GeV PDG 173.070 +/- 0.888 per t-b-c avg adj down with ts (0.16%) and td (7.52*10^-5)
mb=4193 MeV PDG 4180 +/- 30 per b-c-s adjusted down with ub (0.11%)
mc=1293 MeV PDG 1275 +/- 25 per b-c-s adjusted down with cd (4.9%)
ms=92.55 MeV PDG 95 +/- 5 per b-c-s avg adj up with ts (0.16%) and down with us (4.97%)
md= 5.12 MeV PDG 4.8 +/- 0.4 per s-c-d avg adj of up with td (7.52*10^-5) and down with ud (94.9%)
mu=4.60 MeV PDG 2.3 +/- 0.6 per s-u-d avg adj up with ub (0.11%) and up with us (4.97%)

Now, I'll be the first to admit that this approach involves too much art and too little science, and that it produces an up quark value that is a bit too high.   It ought to be possible to iterate the process, so that adjusted values are then used to readjust the predictions numerically (or analytically), and to make the adjustments more elegantly.

But, the adjusted values do bring all of the formula values for the quark masses (and the tau lepton), except the up quark, to within 0.8 standard deviations of the experimental values, and to the right order of magnitude in the case of the up quark - now off by a factor of 2 rather than a factor of 64.6, much closer to the mark on a percentage basis - without any experimental inputs other than the electron mass, the muon mass and several of the CKM matrix element values (which derive from four parameters)!  Thus, the formula comes very close to reproducing the Standard Model values despite dispensing with 7 of the experimentally measured parameters of the Standard Model (seven more of which, assuming the Dirac neutrino scenario, pertain only to neutrinos).

Standard Deviations Between Theory and Experiment, Before v. After Adjustments
top  0.22 v. 0.368 SD
bottom 0.58 v. 0.433 SD
charm 3.38 v.  0.72 SD
strange 0.55 v. 0.49 SD
down 1.3 v. 0.8 SD
up 2.26 v. 3.83 SD

Since this adjustment approach does seem to be bringing the predictions closer to the experimental values overall in a way that has some sort of heuristic theoretical motivation, it may be on the right track.

It also supports the underlying theoretical notion that Standard Model fermion masses represent a balancing of source masses and decay product masses of a particle in a manner that reflects the relative likelihood of various possibilities as reflected in the CKM matrix.  In other words, fermion masses seem to fit a pattern that makes sense if they arise dynamically via W boson interactions.

UPDATE:  There is a new Koide paper out of New Zealand noted in this thread.  It has a preon hypothesis.

Friday, December 13, 2013

Is Our Particle Set Complete?

Is our set of discovered particles complete?

All of the particles predicted by the Standard Model have been found, and no particles that the Standard Model does not predict have been discovered at this point.  So, the Standard Model's set of discovered particles is complete.

But, is the Standard Model set of particles all that there is?  A variety of different approaches suggest that the answer is yes, or at least, almost yes, with some possible exceptions driven most strongly by data from cosmology.

With some not terribly ambitious assumptions that are well motivated experimentally, it is possible to conclude that the undiscovered fundamental particle spectrum is limited to light, sterile fermions of less than 8 GeV (combined for all undiscovered particles), and to light bosons that are either massless, or stable and under 8 GeV, and that do not interact via any of the three fundamental Standard Model forces.

Counting Standard Model Particles

One way to assess that would be to consider whether there are an equal number of fermion and boson types.

But, it depends upon how you count them.

There are twelve Standard Model particles of spin-1/2 with distinct masses.  There are twelve Standard Model bosons of spin-1 (photons, W+, W-, Z and eight kinds of gluons), plus the Higgs boson (spin-0), which brings the total to thirteen Standard Model bosons.

If this is the correct way to count, and there are supposed to be an equal number of fermions and bosons, then this "numerology" approach would suggest that, since there are twelve fermions and thirteen bosons, there needs to be one more fermion (perhaps a sterile neutrino singlet to serve as a warm dark matter candidate) to even the count.  However, if gravitons exist, we might need two more fermions to even the count (perhaps a sterile neutrino singlet to address the reactor anomaly data and a spin-3/2 gravitino of 2 keV to be a warm dark matter candidate and counterpart to the graviton in the gravitational sector).

But, does this approach to counting make any sense?  While there are thirteen Standard Model bosons, if you consistently treat particles that differ only in color charge, or that are antiparticles of each other, as one particle, as you do when you say that there are twelve Standard Model fermions, then there are really only five Standard Model bosons.

Then again, if you treat up-type quarks, down-type quarks, charged leptons, and neutrinos - each of which is a set of three particles identical in all respects except mass - as one particle each, then you have only four kinds of fermions instead of twelve, and the number of kinds of spin-1/2 fermions and the number of kinds of spin-1 bosons is identical (four each), with the spin-0 Higgs boson in an intermediate role (perhaps offset by a spin-3/2 fermion, like a gravitino dark matter particle, that also doesn't fit the pattern).

Of course, it isn't at all obvious that it is fair, for example, to count the twelve possible kinds of up quarks (left v. right parity, times red-green-blue color, times particle v. antiparticle) just once, while counting gluons, which differ only by having eight different combinations of color charge, as eight different kinds of particles.  Similarly, why should the electron and positron count as one particle, when the W+ boson and the W- boson, which are also a particle-antiparticle pair, count as two?

A fuller count would conclude that there are 72 different kinds of quarks, 12 kinds of charged leptons, and 6 kinds of neutrinos, for a total of 90 fermions, compared to 13 bosons, for a total of 103 different kinds of Standard Model particles (treating particles as distinct only when they differ in some discrete quantum property).
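For the record, that tally can be reproduced mechanically; the sketch below simply encodes the discrete quantum numbers described in the paragraph above (the grouping is mine).

    # Tally of distinct Standard Model particle states, counting color, parity
    # (left/right), and particle vs. antiparticle as distinct, per the text.
    quarks = 6 * 3 * 2 * 2        # 6 flavors x 3 colors x 2 parities x particle/anti = 72
    charged_leptons = 3 * 2 * 2   # 3 flavors x 2 parities x particle/anti = 12
    neutrinos = 3 * 2             # 3 flavors x (left-handed nu, right-handed anti-nu) = 6
    fermions = quarks + charged_leptons + neutrinos   # 90

    bosons = 1 + 2 + 1 + 8 + 1    # photon, W+/W-, Z, 8 gluons, Higgs = 13

    print(fermions, bosons, fermions + bosons)        # 90 13 103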

If there were a graviton and a dark matter particle (a bare minimum in a fully particle based TOE with non-baryonic dark matter), you would need a minimum of 105 different particles to have a complete set.

Many people are very tempted to add right handed neutrinos to the mix, which would bring the total to 110 particles (assuming that one of the right handed neutrinos is the dark matter particle, that they come in three generations and have distinct antiparticles, and that they interact only via Higgs boson interactions), and theorists often add at least one gauge boson to govern interactions in the dark sector, bringing the total to 111 particles.

Particles Ruled Out By Electroweak Data.

We have a complete set of particles lighter than the W and Z that can be produced directly in their decays.  Any missing particle would either have to be sterile (i.e. right handed or otherwise not weak force interacting), or if it was a "fertile" particle, would have to be more than 45 GeV in mass.

Higgs boson decay analysis work is likely to bring this cutoff to 63 GeV in a few years (unless there is a new particle with a mass between 45 GeV and 63 GeV that interacts with the Higgs boson, which seems unlikely given how well the preliminary decay data fit a Standard Model Higgs boson).

Possible Patterns Of Interaction

The Standard Model particles have a hierarchical pattern of interactions with four different kinds of bosons, and bosons are the only way that particles interact with each other.

Fermions:
* Left handed quarks couple to gluons, photons, W/Z bosons and Higgs bosons (4)
* Left handed charged leptons couple to photons, W/Z bosons and Higgs bosons (3)
* Neutrinos couple to W/Z bosons and Higgs bosons (2)
* Right handed quarks couple to gluons, photons and Higgs bosons (3)
* Right handed charged leptons couple to photons and Higgs bosons (2)

Bosons:
* Photons do not couple to gluons, photons, W/Z bosons or Higgs bosons (0).
* Gluons couple to gluons, but not to photons, W/Z bosons or Higgs bosons (1).
* W bosons couple to photons and W/Z bosons and Higgs bosons, but not to gluons (3).
* Z bosons couple to W/Z bosons and Higgs bosons, but not to photons and gluons (2).
* Higgs bosons couple to W/Z bosons and Higgs bosons, but not to photons and gluons (2).

Thus, there are 16 possible combinations of interaction patterns with the four kinds of Standard Model bosons, of which 7 are realized in the Standard Model:

* All four (1 possibility) - Left handed quarks.
* Three out of four (4 possibilities) - Left handed charged leptons and W bosons (both interact with photons, W/Z bosons and Higgs boson), and Right handed quarks (gluons, photons and Higgs boson)
* Two out of four (6 possibilities) - Neutrinos, Higgs bosons and Z bosons (each interact with W/Z and Higgs boson), and right handed charged leptons (photons and Higgs boson).
* One out of four (4 possibilities) - Gluons (gluons)
* None of the four (1 possibility) - Photons.
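The bookkeeping in the list above can be made explicit.  The sketch below enumerates all 16 subsets of the four boson couplings and marks the seven patterns realized in the Standard Model; the particle assignments in the table simply restate the text.

    from itertools import combinations

    forces = ("gluon", "photon", "W/Z", "Higgs")

    # The seven coupling patterns realized in the Standard Model, per the text.
    realized = {
        frozenset(forces): "left-handed quarks",
        frozenset({"photon", "W/Z", "Higgs"}): "left-handed charged leptons, W bosons",
        frozenset({"gluon", "photon", "Higgs"}): "right-handed quarks",
        frozenset({"W/Z", "Higgs"}): "neutrinos, Z bosons, Higgs bosons",
        frozenset({"photon", "Higgs"}): "right-handed charged leptons",
        frozenset({"gluon"}): "gluons",
        frozenset(): "photons",
    }

    # Enumerate all 16 subsets and report which are realized and which are gaps.
    for r in range(len(forces), -1, -1):
        for combo in combinations(forces, r):
            label = realized.get(frozenset(combo), "-- gap --")
            print(f"{sorted(combo)!s:40s} {label}")
    # 7 realized patterns and 9 gaps, matching the counts in the text.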

(Note that this analysis assumes that the Standard Model Higgs boson, unlike the weak force gauge boson, interacts with both right parity and left parity particles, which is true so far as I know, even though I don't know enough to be sure that this is the case.  I would welcome comments on this nuance that can confirm or deny the fact from readers who do know.)

Interaction Pattern Gaps:
The 9 combinations of interactions which are not realized in the Standard Model, with a description of an example of each kind of hypothetical particle, are:

* Three out of four - Left handed electrically neutral quarks (gluons, W/Z bosons and Higgs bosons), and massless left handed quarks (gluons, photons, W/Z bosons)
* Two out of four - Charged gluons (gluons and photons), right handed electrically neutral quarks (gluons and Higgs bosons), massless electrically neutral left handed quarks (gluons and W/Z bosons), and massless W bosons (photons and W/Z bosons).
* One out of four - Right handed neutrinos (Higgs boson only), charged photons (photons only), and massless Z bosons (W/Z only).

Analysis of the Gap Particles:
Of the 9 missing hypothetical particles interaction profiles:
* the apparent rule that particles that interact via W/Z bosons must have mass rules out 4,
* the apparent rule that charged particles must have mass rules out 2, and
* the absence of electrically neutral quarks rules out 2 (left and right handed electrically neutral quarks).

The first two rules illustrate the intimate connection between electroweak interactions and the Higgs boson.  It would be interesting to see what kind of experimental data rule out charged gluons and charged photons, however.  Since neither interacts via the weak force, it is harder to rule them out with W and Z boson decays, for example, although charged particles are hard to miss (but they might be long lived, making collider detectors less suited to seeing them).

Electrically neutral quarks are ruled out by precision electroweak data unless all three generations of them have masses of more than 45 GeV.  Composite neutrons, of course, are collectively electrically neutral, but an electrically neutral quark not ruled out by precision electroweak decays would have to be more than 45 times as heavy as a neutron.  Electrically neutral quarks would still need to be confined (unless they were almost as heavy as the top quark), so they would form mesons of 91 GeV or more, and baryons of 136 GeV or more (for pure first generation ground states), with composite particles containing any second or third generation electrically neutral quarks being surely unstable, and a ground state of three electrically neutral quarks that might or might not be stable.  Also, when bound to Standard Model quarks they would make possible fractionally charged mesons and baryons, which are not observed.  We can be pretty comfortable that these particles do not exist.

UPDATE 12-16-13: 

The absence of charged gluons and charged photons flows somewhat naturally from their zero rest mass and CPT conservation.  Charge and parity are clearly associated with a direction of time, and CP violation is equivalent to T violation.  But, particles with zero rest mass always move at the speed of light, at which time is effectively frozen and doesn't pass at all from the perspective of the particle itself.  When a particle lacks an internal sense of time direction, it can't have time direction dependent properties like charge and parity.

Conversely, fermions which have both charge and parity, or at least parity, can't be massless, because a particle must have mass for parity to be well defined in its own reference frame, given CPT conservation.  Since W/Z interactions are parity specific (only left handed particles interact via the weak force), only massive fermions can have either weak force interactions or electric charge.

This analysis would likewise explain the lack of CP violation in electromagnetism and the strong force, both of which are carried by zero rest mass bosons.  However, the QCD suggestion that gluons may dynamically acquire momentum dependent mass in the IR limit within confined systems suggests that there might be strong force CP violation there.

Since only short range massive bosons can carry CP violating forces, "all CP violation is local."

If gravity is carried by a massless graviton, then that element of gravity must also have no CP violation.  But, if the cosmological constant is really dark energy mediated by a massive scalar boson (maybe even the Higgs boson) then there could be an arrow of time in dark energy.

END UPDATE

The ninth possibility, a particle that couples to the Higgs boson, but not photons, gluons and W/Z bosons, is not excluded experimentally (since the lack of interactions makes them so hard to detect), and indeed, is a natural dark matter candidate:
* If it is a spin-1/2 fermion, a "right handed neutrino", or more generically a "sterile neutrino" (as it need not have a correspondence to any of the Standard Model neutrinos).
* If it is a spin-3/2 fermion, a SUSY or non-SUSY gravitino.
* If it is a spin-0 boson, a "sterile Higgs boson", and
* If it is a spin-1 boson, "vector bosonic dark matter".

While there may be "sterile neutrinos" of spin-1/2, I doubt that there are true right handed counterparts to the left handed neutrinos, because if there were they should have the same masses as their left handed counterparts, just like all other fermions that differ only in parity.  This is too light to fit any experimental signature driving the need for sterile neutrino-like particles.

Instead, I think that right handed neutrino counterparts to the Standard Model neutrinos do not exist because parity and anti-neutrino/neutrino state would otherwise be degenerate.  I also doubt that neutrinos are really Majorana particles because the whole logic of neutrinos in the Standard Model requires their particle/antiparticle status to be distinct to balance out lepton number conservation.

If there is a "sterile neutrino", I would suspect that it would be a singlet counterpart to the Higgs boson or graviton or both.

Higgs Boson Yukawa Hints

If my analysis above is correct, then there are lots of particles (42 right handed fermions out of 90 fermions in all, to be exact) that interact with the Higgs boson, but not the W and Z bosons, whose existence can't be ruled out to any extent by precision electroweak decays.

It is very reasonable to think that all fundamental particles with mass have Higgs boson couplings and that their masses are functions of these couplings.

But, the fundamental particle masses, which we have now confirmed appear to be a function of their couplings to the Higgs boson, show strong hints of a very interesting pattern that profoundly limits the size of a complete set of particles.

The sum of the squares of the twelve fermion rest masses and of the three massive boson rest masses (W, Z and Higgs) equals the square of the Higgs field vacuum expectation value.

This implies that the sum of the suitably equated Yukawa couplings (adjusting the Higgs self-coupling and the gauge boson couplings accordingly) of all particles that are known to couple to the Higgs boson is unitary, to a precision of about a few parts per thousand of experimental error.
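That claim is easy to check with round-number masses.  The figures below are approximate 2013-era central values in GeV, plugged in for illustration; the small shortfall at the central values is well within the experimental uncertainty, which is dominated by the top quark and Higgs boson masses.

    # Compare the sum of squared fundamental particle masses to the square of
    # the Higgs vacuum expectation value.  Masses in GeV, approximate central
    # values circa 2013 (assumptions for illustration).
    masses = {
        "top": 173.07, "Higgs": 125.9, "Z": 91.19, "W": 80.39,
        "bottom": 4.18, "tau": 1.777, "charm": 1.275,
        "strange": 0.095, "muon": 0.1057, "down": 0.0048,
        "up": 0.0023, "electron": 0.000511,
    }
    v = 246.22  # Higgs field vacuum expectation value, GeV

    total_sq = sum(m ** 2 for m in masses.values())
    print(f"sum of m^2 = {total_sq:.0f} GeV^2, v^2 = {v ** 2:.0f} GeV^2")
    print(f"ratio = {total_sq / v ** 2:.4f}")  # ~0.9997 at these central values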

Almost all of the uncertainty in this value comes from the uncertainty in the masses of the top quark and Higgs boson, and there is every reason to believe that the LHC will be able to reduce the Higgs boson mass uncertainty significantly, and the top quark mass somewhat, in the next few years, making this match (if indeed it is a law of nature) even tighter.

If indeed this set of interactions really is unitary (and I sincerely believe that this will be shown to be a law of nature sooner or later), then any and all undiscovered particles that couple to the Higgs boson (as all massive fundamental particles apparently must), can't have more than about 8 GeV of mass combined for all such Higgs boson interacting particles, given the precision of current measurements.

Therefore, it is very reasonable to think that there are no undiscovered massive fundamental particles with masses in excess of about 8 GeV (combined).  Moreover, given that 8 GeV cap, any undiscovered particle with Higgs boson interactions would sit well below the 45 GeV cutoff at which it could not have been missed in W and Z boson decays, if it also had any weak force interactions.

We can also say with considerable comfort that existing experimental evidence rules out unknown particles with strong force interactions or electric charge with masses of 8 GeV or less.

Four Standard Model particles weigh far more than 8 GeV (the top quark at about 173 GeV, the Higgs boson at about 126 GeV, the Z boson at about 91 GeV, and the W boson at about 80 GeV).  Three more Standard Model particles weigh between 1 GeV and 8 GeV (the bottom quark at about 4.2 GeV, the tau lepton at about 1.776 GeV, and the charm quark at about 1.3 GeV).  A particle with electric charge or color charge in a similar mass range would have left an unmistakable signature.

Anything resembling a fourth generation Standard Model particle has been ruled out experimentally to the hundreds of GeVs and pretty much all possible SUSY particles (either super-partners or extra Higgs bosons) have been ruled out for masses of less than 95 GeV.  For example, even on December 5, 2011 (two years ago), ATLAS had published the following exclusions:
A limit at 95% confidence level is set excluding a cross-section times branching ratio of 1.1 pb for a top-partner mass of 420 GeV and a neutral particle mass less than 10 GeV. In a model of exotic fourth generation quarks, top-partner masses are excluded up to 420 GeV and neutral particle masses up to 140 GeV.
Less stringently (and less likely to be a law of nature), one could assume instead that the sum of the squared masses of the bosons equals the sum of the squared masses of the fermions.  The boson total currently exceeds the fermion total by about 2% of the overall sum, which would leave room for roughly 29 GeV of new fermion mass to balance the squared masses of the bosons and the fermions.

So, if these assumptions - that the Higgs Yukawas are unitary and that all massive fundamental particles have Higgs Yukawas - are correct, then any undiscovered massive particles:

1.  Are less than 8 GeV in mass (combined).
2.  Are "sterile" in the sense that they do not interact via the weak force.
3.  Do not have electric charge.
4.  Do not interact via the strong nuclear force.

In other words, any undiscovered massive particles must be in the nature of moderately light sterile neutrinos (or perhaps gauge bosons of some newly discovered short range force), such as a warm dark matter candidate.  The Higgs Yukawas, of course, place no boundaries on the universe of possible massless particles.

Supersymmetry is possible under these assumptions only if the Standard Model superpartners couple exclusively to non-Standard Model extra Higgs bosons and have almost no couplings to the Higgs boson that has been observed at the LHC.  But, as I understand the matter, SUSY's reason for existence includes the desire to make the Higgs boson mass natural and to solve the hierarchy problem, something that a Higgs boson with no relationship whatsoever to any SUSY particles would not seem to serve well.

This kind of analysis tends to strongly disfavor notions like a heavy sterile see-saw partner to the neutrinos that contribute to their mass.

Cosmology Hints

Cosmology assumptions also strongly disfavor the existence of two new flavors of sterile neutrinos with masses comparable to the three known neutrino flavors, but only mildly disfavor a single flavor of very light sterile neutrinos (with masses on the order of 1 eV or less) of the type suggested by nuclear reactor neutrino stream data discussed below.

In addition, cosmology suggests that we need a nearly collisionless particle significantly heavier than 1 eV to provide dark matter.  Cosmology has also left us with no clear way to understand the source of the asymmetry in the universe between matter and antimatter which might therefore require new physics.

Extensive astronomy research by multiple means has converged on a possible 2 keV mass sterile neutrino as a particularly promising warm dark matter candidate.

Cosmology, particularly in relation to dark matter and dark energy, provides the strongest evidence that the Standard Model is probably not complete.  We need at least one dark matter candidate or force modification (possibly with a new light or massless boson), at a minimum, unless we are truly clever and find some solution to the dark matter and dark energy questions that is not currently being widely discussed.

Also, should we find that there are indeed at least two sterile neutrino types - one of about 1 eV that may oscillate with conventional neutrinos, and a stable one of about 2 keV - the precedents of the Standard Model would encourage researchers to see if a final third generation light sterile neutrino, possibly itself unstable and possibly playing a role in leptogenesis, also exists, adding a new column of leptons to the Standard Model.  A Standard Model extension that does just that and proposes three right handed neutrinos is receiving deserved serious consideration.

On the other hand, if dark matter phenomena turn out to be mostly not a matter of new fundamental dark matter particles, but of modifications to force laws, we may need some new bosons.  For example, it isn't unreasonable to imagine that a tensor-vector-scalar modification of gravity laws would require a tensor graviton (spin-2), a vector graviton (spin-1) and a scalar graviton (spin-0 and possibly identical to the Higgs boson or an inflaton).

Evidence of "inflation" phenomena in the early universe, probably best fit by some sort of slowly shifting scalar field, also points to the possible need for an "inflaton" boson, which might be coincident with the Higgs boson, but which might also be another spin-0 boson.

One could imagine, perhaps, a family of spin-0 bosons sufficient to make up a two Higgs doublet version of the Standard Model - the Standard Model Higgs boson, a dark energy Higgs boson, an inflaton Higgs boson, and a pair of charged Higgs bosons that demonstrate strong CP violation that help account for baryon asymmetry or baryogenesis or leptogenesis.  Only the first two might be stable enough to be present in the current universe.  Needless to say, any such family of spin-0 bosons would undermine the usefulness of what we know about the Standard Model Higgs boson that seems to sharply limit the undiscovered particle spectrum.  Extra Higgs bosons in the right kinds of theories, could facilitate an evasion of these bounds.  But, it isn't at all obvious that the proposed SUSY extra Higgs bosons have the right properties to achieve these ends.

Of course, it would be even more fantastic if someone could figure out a way in which the humble Standard Model Higgs boson by itself is actually the source of the empirically observed level of dark energy and was also the inflaton, without creating new particles.

The hints from other experimental data can easily accommodate the kinds of particles that the cosmology data (and the reactor data) seem to be hinting at right now.

W and Z Boson Mean Lifetime Hints

No particle has a wider resonance width than the W and Z bosons, which translates into a mean lifetime of about 3*10^-25 seconds.

Generally speaking, resonance width (and its inverse, mean particle lifetime) are strongly correlated with rest mass.  Heavier particles have greater resonance width and shorter mean particle lifetimes.

The heaviest fermion, the top quark, is also by far the shortest lived, with a mean lifetime of just 5*10^-25 seconds, which isn't even long enough, on average, for it to be confined by the strong force into a hadron, unlike all other quarks.  Despite being a strong force interacting particle with QCD color charge, the top quark is such an unstable form of up-type quark that it simply decays immediately via what is, for it, the faster acting weak force.

It isn't obvious that it is even conceptually workable to have a fermion whose mean lifetime is less than the mean lifetime of the W boson by which it decays.

If there is some sort of fundamental reason for the link between fermion mass and resonance width, it may be that it is simply impossible for a fermion to have a mass much greater than a top quark as a result.

Thus, weakly interacting particles with masses many times greater than 173 GeV may simply be inconsistent with the laws of physics in some fundamental sense that has never been firmly established or proven so far.  Since SUSY particles must be weakly interacting to play their intended role in electroweak symmetry breaking, it would follow that SUSY particles in the several hundreds of GeV to low TeV mass exclusion ranges that already exist for many parts of the minimal SUSY superpartner spectrum may simply be disallowed.

SUSY has many moving parts, but one non-negotiable element of SUSY is that there must be at least one superpartner for every Standard Model particle.  Every single one of these superpartners must exist at some mass.  And, if some superpartners have been excluded at light masses, then some superpartners must be heavy.  No serious SUSY or string theory advocate of whom I am aware currently claims, for example, that there are no SUSY superpartners with masses of at least 1 TeV, about six times as heavy as a top quark.  The emerging consensus in many SUSY discussions is that if SUSY is correct, that some SUSY superpartners have masses at least on the order of 10 TeV or so, about sixty times the top quark mass.

Given the already tiny difference between the top quark mean lifetime and the W and Z boson mean lifetimes, it is hard to imagine that a 10 TeV sparticle capable of interacting via the weak force that weighs sixty times as much would not have a mean lifetime of less than the W and Z bosons and hence might be ruled out as possible, even if the link between rest mass and mean lifetime is not terribly strict.

So, if no weakly interacting particle can have a mean lifetime of less than the W and Z bosons, and mean lifetime is roughly linked to a particle's rest mass, then it follows that existing experimental evidence effectively rules out SUSY and any other BSM theory with a substantial heavy particle spectrum.

Reactor Anomaly Hints

Some nuclear reactor neutrino data seems to hint at the possibility of a sterile neutrino of approximately 1 eV in mass that oscillates with fertile neutrinos to some extent.  The data aren't strong enough, however, to be conclusive.

LUX Hints

The LUX experiment's recent results, which basically rule out the existence of weakly interacting dark matter at well modeled densities in the vicinity of the solar system for particle masses in the range from approximately 5 GeV to 1 TeV, again support the notion that the heavy weakly interacting particle spectrum is complete.

If stable WIMP dark matter were out there, LUX's extraordinary sensitivity should have seen it.

Other Hints

Increasingly stringent experimental lower bounds on the half-life of neutrinoless double beta decay (never convincingly observed), increasingly long minimum mean lifetimes of the proton (never observed to decay), strict upper bounds on the electric dipole moment of the electron, and the modest size of any anomaly in the magnetic moment of the muon (whose measured value is close to the Standard Model's expected value despite being hard to match exactly), are all very sensitive to features of beyond the Standard Model physics, even at high energies.

Some of these approaches tend to disfavor BSM models including SUSY models with heavy BSM particles in particular, bounding these theories from above, rather than below.

While the limitations on BSM physics from this kind of evidence are not yet definitive, the fact that not even a strong hint of experimental support for major deviations from the Standard Model expectations has been found in any of these contexts tends to favor the conclusion that our set of heavy particles is complete.

Concluding Thoughts

The case that we have found almost all of the particles in the universe, with the possible exception of some light sterile fermions and some massless bosons, is an increasingly strong one.  All but a handful of the leading BSM physics proposals are disfavored by the analysis and data collected above.

This context supports a minimal approach to BSM theory building, and a need to focus on non-SUSY, non-string theoretic approaches, as SUSY and string theory appear to me to be clearly doomed to the extent that they are trying to describe reality, as opposed to merely serving as toy models for developing ideas that can later be generalized to more realistic Standard Model extensions.

Is the Higgs Field A Cause Or An Effect?

The concepts of vacuum energy (aka zero point energy), dark energy, the Higgs vacuum expectation value, and the scalar inflaton field of cosmology are in principle distinct, but because all of them are conceptualized as properties of what is otherwise a vacuum, and the concepts are naively inconsistent with each other, the relationship between them is cryptic.

One of the unsolved problems of physics, which amounts to reconciling a couple of these concepts, is why the "vacuum expectation value" of the Higgs field is so many orders of magnitude larger than the average density of dark energy implied by the cosmological constant.

Dark energy is, however, of the same order of magnitude as dark matter and as ordinary matter, based on the lambda CDM model of cosmology.  This is vastly less than the amount that would be expected if there were a Higgs field vev of 246 GeV throughout the entire universe.

The answer may be one of cause and effect.  It is common sense to think about quantum fields as being created by bosons emitted from fermions.

Perhaps we should think about the Higgs field as something that is created by the Standard Model fermions, just as we think about the electromagnetic field as created by quarks and charged leptons, strong force QCD fields as created by quarks, and gravity as created by matter-energy in otherwise empty space (apart from the cosmological constant).

Dark energy seems like it is everywhere.  But, a model in which dark energy is present merely in the vicinity of Standard Model fermions (massive W and Z bosons must always be near the Standard Model fermions that emit them and absorb them because they are so short lived), would be virtually indistinguishable.

Why?

Because at the relevant scales at which we observe dark energy effects, the universe is essentially flat and homogeneous.  Dark energy that clumped around fermionic matter wouldn't look different from dark energy distributed uniformly, because at the scale of the expansion of the universe the fermionic matter distribution itself is quite smooth.

If Standard Model fermions generate the Higgs field, then it also makes perfect sense that the aggregate energy of the Higgs field should be on the same order of magnitude as the aggregate mass of the fermions in the universe.  Higgs fields would only exist in the vicinity of these particles.  The notion is similar in concept to, although quite distinct from, Mach's conception of gravity.

Phenomenologically, this would also suggest that empty space really and truly is empty, that vacuum energy is a flawed oversimplification that doesn't apply to truly empty space, and that the universe is particles all of the way down.  Put another way, while the Higgs vev in the vicinity of Standard Model fermions is 246 GeV, because the Standard Model fermions generate this Higgs vev, in deep space far from any fermions the Higgs vev would fall asymptotically toward 0 GeV as the distance from any fermionic matter grows great enough.

Also, if there were a high energy Higgs field in the vacuum, why wouldn't it give rise to Higgs bosons that would decay into fermions, making that vicinity no longer a vacuum?

Footnote: The relationship between resonance width and mean lifetime is t = ħ/Γ (i.e. t = 1/Γ in natural units).  The W boson and Z boson have resonance widths of about 2.1 GeV and 2.5 GeV respectively, corresponding to mean lifetimes on the order of 3 * 10^-25 seconds, which gives the weak force an effective range far smaller than an atomic nucleus (the top quark has a mean lifetime of about 5 * 10^-25 seconds and a slightly smaller resonance width). The Standard Model Higgs boson resonance width is on the order of a MeV.  Thus, the mean lifetime of a Higgs boson is about 1000-2000 times that of a W or Z boson (about 10^-22 seconds, give or take), and naively the range of its interactions, were it emitted on a similar basis, would be on the order of 1500 times longer, which is still far smaller than an atom.
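For readers who want to reproduce the footnote's arithmetic, here is a minimal sketch (Python) of the width-to-lifetime conversion, t = ħ/Γ.  The roughly 4 MeV Higgs width used below is the Standard Model prediction for a ~125 GeV Higgs boson, taken here as an assumed input:

```python
# Convert a particle's total resonance width (in GeV) to a mean lifetime
# (in seconds) using t = hbar / width.

HBAR_GEV_S = 6.582e-25  # reduced Planck constant in GeV*seconds

def mean_lifetime(width_gev):
    """Mean lifetime in seconds for a given total decay width in GeV."""
    return HBAR_GEV_S / width_gev

for name, width in [("W boson", 2.085), ("Z boson", 2.495),
                    ("top quark", 1.4), ("Higgs boson (SM prediction)", 0.004)]:
    print(f"{name}: width {width} GeV -> lifetime {mean_lifetime(width):.2e} s")
```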


Wednesday, December 11, 2013

Natural SUSY Exclusions At LHC Summarized

Matt Strassler summarizes the extent to which the current LHC data exclude a broad class of "natural supersymmetry" theories (from a recent paper of which he is a co-author).  These are operationally defined as theories with a Higgsino of 400 GeV or less, and gluino decays of one of three theoretically preferred types.  (The post does not really explain why this operational definition is appropriate, but I respect his judgment on this score.) But, it is a much less constrained search than a test limited only to, for example, the Minimal Supersymmetric Model or something similar.



The extent to which such theories are disfavored as a function of gluino mass is displayed in a chart reproduced in his post.

This certainly doesn't exclude all SUSY theories, but since a more "natural" theory has been a long stated purpose for considering SUSY theories in lieu of the Standard Model, an "unnatural" SUSY theory is decidedly less interesting.

His analysis is largely limited to exclusions based on direct searches at the LHC and does not really integrate exclusions arising from electric dipole and magnetic moments of charged leptons, neutrinoless double beta decay, direct dark matter detection experiments, etc.

Similarly, he doesn't consider more speculative limitations, like the tendency of heavier particles to decay more quickly, which would give rise to decays that happen much more quickly than the known lifetime of the W boson that facilitates those decays, without a well established mechanism for doing so.

He also doesn't discuss qualitative shifts in our knowledge that disfavor SUSY where it once seemed to be necessary, such as the fact that the measured Higgs boson mass now makes the Standard Model equations unitary up to the Planck mass.  Until the Higgs boson mass was known, the possibility that the Standard Model equations would cease to produce physically meaningful results at high energies was considered a major flaw in the Standard Model which supersymmetry could solve.

His previous posts have made a key point, not emphasized in the current post, which is that one of the reasons that supersymmetry is attractive is that it is one of the few classes of modifications of the Standard Model that can simultaneously fit the data and make a major modification to the Standard Model at all.  Ruling out SUSY clears most of the decks of theory space of any meaningful Standard Model alternatives.  In particular, SUSY is a component of essentially all versions of string theory.  Ruling out natural SUSY also means ruling out natural string theory.

Tuesday, December 10, 2013

The Amplituhedron

We can calculate the probability of something happening in the Standard Model by doing the calculations for each of the Feynman diagrams for each of the possible ways that a particle could have gotten from point A to point B, according to the rules of quantum mechanics.  But, it turns out that the number of possible ways that this could happen is infinite, and one has to do the calculations for a great many Feynman diagrams simply to get a very close, but not exact, result.

Nima Arkani-Hamed, a leading theoretical physicist, and Jaroslav Trnka have discovered a way of doing the same calculations much more efficiently in a quantum mechanical toy model theory similar to the Standard Model, a maximally supersymmetric gauge theory in the planar limit called planar N=4 SYM, with the promise that many of their results will generalize to other quantum mechanical theories that lack some of the symmetries that make the calculations particularly easy, although not necessarily a generalization that could reach completely to the Standard Model.

In some situations, with the correctly chosen parameters, the toy model calculation results are very similar to those that would be produced in the Standard Model itself.

They do this by constructing a polyhedron in a theoretical amplitude-space with certain properties that organize the many calculations that go into the conventional Feynman diagram approach, such that the volume of the constructed polyhedron corresponds to the probability of a particle going from point A to point B according to the rules of quantum mechanics.  While this polyhedron is divorced from reality in its details, it turns out that this is done in a way that preserves locality and makes all the probabilities of every possible event add up to 100%, even though it is not at all obvious from the method itself that this would work out to be the case.

At a minimum, it is a potential breakthrough in practical amplitude calculations.  It also arguably sheds light on underlying structure in amplitude calculations that had not previously been fully appreciated, which may in turn shed light on fundamental physics.

This is done from a supersymmetric/string theory perspective, but it may be that the assumptions can be fit to the more idiosyncratic case of the true Standard Model, or to a SUSY theory that is effectively identical to the Standard Model at low energies.  A number of companion papers to flesh out the much hyped breakthrough have been promised.

Some Physics Conjectures Related To Gravitons And Mass

Observations

* If gravity is indeed conveyed via a graviton particle, we know that it does not couple merely to mass, because gravity bends light.  It must, instead, couple to mass-energy (with an E=mc^2 interaction).

It follows from the existence of Black Holes, however, that gravitons must not couple to other gravitons, just as photons don't couple to photons (this also seems to theoretically grossly disfavor massive graviton theories).  But, this is odd.

Photons couple to electric charge and lack electric charge themselves.  Gravitons couple to mass-energy.  Yet, surely if gravity is transmitted via bosons, they must have energy, which is seemingly one of the things that they couple to.  The definition of energy is the capacity to apply force to move things, which gravitons, if they exist, can surely do.

FWIW, I'm not terribly clear on whether gravitational energy self-gravitates in GR itself.  If it doesn't, it isn't clear to me how matter-energy conservation is not violated.  Discussions of this issue can be found here and here, seemingly with a contradictory conclusion here.  More discussion here (gravity self-gravitates but does not generate gravity the way that other matter-energy does in the equations).

* Both quantum gravity transmitted by gravitons and ordinary classical GR gravity propagate at the speed of light, not instantaneously.  So, the gravitational interactions of two objects can be decomposed into two parts: the generation of a gravitational pull from a source object that has an impact at its destination (both proportionate to mass-energy apparently), and the destination object's pull in the other direction.  This is sometimes called a distinction between active and passive gravity.  For objects in motion, the source object's and the destination object's gravitational impacts on each other are at a time-gap to each other.

* The coupling of gravitons to mass-energy, if they exist, is also odd because it does not always couple with the same strength to a particle of a particular type.  The coupling of a photon to an electron or W boson is always identical.  The coupling of a photon to an up type quark is always identical.  The coupling of a photon to a down type quark is always identical.  The strong force coupling of a gluon of one of the eight types of gluons to any of the six flavors of quark with a particular color charge is always identical.  The coupling of a W boson or Z boson to a fermion is always a function of that kind of fermion's weak force charge.

* The Higgs boson coupling constants are such that the "sum of the square of each of the fundamental boson masses, plus the sum of the square of each of the fundamental fermion masses, equals the square of the Higgs vacuum expectation value to a precision of 0.012%."

In other words, the sum of these squared couplings, given the proper definition of the Higgs boson self-coupling, equals one (empirically true, but not theoretically required by the Standard Model), i.e. 2λ + g^2/4 + (g^2 + g'^2)/4 + Σ_f (y_f^2/2) = 1, where λ, g, g' and y_f are, respectively, the effective and renormalized scalar (i.e. Higgs self-coupling), gauge (i.e. W and Z boson) and Yukawa couplings, the latter being the couplings of the twelve Standard Model fermions to the Higgs boson.

But, the Higgs boson coupling to a fundamental particle, i.e. its "Higgs charge", also called its Yukawa coupling, does not come in integer or simple integer ratio units the way that electric charge, weak force charge, and strong force color charge do.
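The mass-squared sum rule quoted above is easy to check numerically.  The sketch below (Python) uses round, approximately current values for the fundamental particle masses, which are assumed inputs; the light quarks and neutrinos contribute negligibly at this precision:

```python
# Check that the sum of the squares of the fundamental particle masses is
# close to the square of the Higgs vacuum expectation value (~246 GeV).
# Masses in GeV; approximate values, neutrinos omitted as negligible.

masses = {
    "Higgs": 125.6, "W": 80.4, "Z": 91.2,
    "top": 173.1, "bottom": 4.18, "charm": 1.28,
    "tau": 1.777, "muon": 0.1057, "electron": 0.000511,
    "strange": 0.095, "down": 0.0048, "up": 0.0023,
}

vev = 246.0
sum_of_squares = sum(m ** 2 for m in masses.values())
print(f"sum of mass^2 = {sum_of_squares:.0f} GeV^2")
print(f"vev^2         = {vev ** 2:.0f} GeV^2")
print(f"ratio         = {sum_of_squares / vev ** 2:.4f}")  # comes out very close to 1
```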

* The electromagnetic coupling constant, the weak force coupling constant, the strong force coupling constant and the Higgs field properties, as well as the masses of the fundamental particles, all "run" with the energy level of the interaction.  The mass of a quark in a low energy interaction is not the same as the mass of that same quark in a high energy interaction.  But, while masses and the coupling constants do run, the quantized charges themselves (electric charge, weak force charge, color charge) do not (see e.g. here).
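As a concrete illustration of this running, here is a minimal sketch (Python) of the leading-order (one-loop) running of the strong coupling constant with energy scale.  The starting value at the Z mass and the choice of five active quark flavors are assumptions made for illustration, not a substitute for the multi-loop calculations actually used in practice:

```python
# Leading-order (one-loop) running of the strong coupling constant alpha_s
# with energy scale Q, starting from an assumed value at the Z boson mass.

import math

ALPHA_S_MZ = 0.118   # approximate alpha_s at the Z mass (assumed input)
M_Z = 91.19          # Z boson mass in GeV
N_FLAVORS = 5        # active quark flavors between ~5 GeV and the top mass

def alpha_s(q_gev, nf=N_FLAVORS):
    """One-loop running strong coupling at scale q_gev (GeV)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return ALPHA_S_MZ / (1 + ALPHA_S_MZ * b0 * math.log(q_gev ** 2 / M_Z ** 2))

for q in [5, 91.19, 500, 1000]:
    print(f"alpha_s({q} GeV) ~ {alpha_s(q):.4f}")  # larger at low energy, smaller at high
```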

Still, any given kind of particle at any given energy scale, always has a particular mass.  Indeed, particle mass is indifferent to (1) whether a particle is ordinary matter or antimatter (something that flips the electric charge of a particle to the opposite charge), (2) it is indifferent to its parity (which impacts its weak force charge), and (3) it is indifferent to the color of a quark (all charm quarks, for example, have the same mass at a given energy level, without regard to whether it has red, blue or green strong force color charge).

This is not true in the case of gravitons.  For example, in general relativity, an electron travelling at 0.1 times the speed of light and an electron travelling at 0.5 times the speed of light give rise to different gravitational effects because they have differing amounts of kinetic energy, and the direction of the gravitational pull is not a function just of the location of the electron, as it would be in the case of Newtonian gravity (whose gravitons would be spin-0 bosons transmitting a scalar gravitational field), but also of the direction in which the electron is traveling.  The fact that a particle's vector momentum, vector angular momentum, and energy flux, as well as its scalar rest-mass and other elements, all contribute to gravitational pulls in general relativity is why a general relativity graviton would have to be a spin-2 boson giving rise to a tensor field, rather than the spin-0 boson of Newtonian gravity or the spin-1 vector bosons of electromagnetism, the weak force and the strong force.

Note that the cosmological constant of GR can be conceptualized as a scalar field, however, possibly with its own spin-0 boson.

* This is particularly problematic because the absolute amount of energy of a matter-energy field is not a well defined universal quantity, even for a particular matter-energy field.  Kinetic energy is a function of velocity, which is a function of the reference frame of the person describing it.  So is potential energy in a variety of fields.  Discussion of the arguable non-gravitation of potential energy is found here.  General relativity copes with this problem by being formulated mathematically in a background independent way that essentially depends on the differences in energy between two points in the GR field, neatly cancelling out differences in intermediate quantities like absolute energy level that don't produce physical observables.

But, it isn't obvious to me how gravitons acting in isolation can do the same, although I suppose a graviton could base its properties on its source on one hand, and on its destination on the other, using itself as an intermediate reference frame.

Maybe the beta function running of particle masses and coupling constants with energy level solves some of these issues in the Standard Model, but it is my understanding that beta functions derive from the renormalization procedure, and not from general relativity.

Still, a graviton, at a minimum, is engaged in a far more sophisticated interaction than any other force carrying boson.  The other force carrying bosons need only respond to one universal property of a particle.  The graviton must measure what we would ordinarily consider to be multiple properties of a particle at once as it interacts with it, some of which must be measured in a way that is relative to the graviton's source.  No other particle in the Standard Model has properties that depend upon the source of the particle in this way - the particle itself has a tiny number of individual properties that fully characterize it regardless of its source (apart from quantum entanglement).

What a GR graviton delivers is not merely a pull in a particular direction.  It can deliver elaborate twisting and turning.

* One way that the Higgs field is often conceptualized is as something that gives rise to the inertial mass of fundamental particles.  But, in principle at least, it seems as if the inertial mass of a fundamental particle due to its Higgs field interaction may differ from the gravitational mass of that same fundamental particle, which is derived from both the mass and the energy of the particle, seemingly violating the principle of equivalence (although perhaps equivalence merely means that the inertial mass component of a particle's mass-energy is identical to the gravitational mass component of a particle's mass-energy, disregarding the energy component of a particle's mass-energy).

Presumably, the running of fundamental particle masses with energy scale also impacts the gravitational mass of those particles, although it isn't obvious to me how mass-energy conservation is maintained in this context.

* Particles that don't have weak or electric charge (i.e. photons and gluons) don't have rest mass, empirically, in the Standard Model.

* The mass of a composite particle, like a proton, is not simply the sum of the masses of the fundamental particles that make it up.  Those only make up about 1% of the total mass.  The other 99% of the mass comes from the energy of the strong force gluon fields between the quarks in the composite particle, although the composite particle's mass isn't entirely independent of the masses of the fundamental particles that go into it, and depends upon them in a non-linear way.  I've heard authoritative sources (maybe at the Of Particular Significance blog) state that even in the absence of fundamental particle mass, a proton would have mass, although it would be much less than it is in reality, which means that not all of the mass of composite particles is derivative in some way of the Higgs field interactions of the constituent fundamental particles.

* If particles in the Standard Model are truly point-like, they would be singularities in GR.  But, they only need to be smeared over a sub-Planck length distance by some means to escape this fate.  This is one basic issue that we would expect any quantum gravity theory to solve.

* The black hole firewall debate illustrates the deep problems involved in trying to mix classical GR and the quantum SM.

* Would hypothetical gravitons differ in energy, like photons do, via particle frequency (equivalent to particle wavelength), or do they all have identical energy?  For arguments that they do differ see, e.g., here and here (with a caveat, since energy is not localizable in GR) and here.

Analysis

The point of the observations is to reach some personal speculative conclusions and conjectures about quantum gravity.

1.  If gravity is transmitted via a graviton and reduces to general relativity in the classical limit, then the properties of a graviton and its interactions seem much more complex and less straightforward in their fit to a particle model than the other Standard Model interactions.  This tends to disfavor particle oriented theories of quantum gravity (a la SUGRA and supersymmetry) in favor of quantum gravity formulations in which gravity resides in an emergent space-time fabric (e.g. Loop Quantum Gravity) rather than a particle.

2.  The process by which fundamental particles are endowed with inertial mass via the Higgs field is not equivalent to or identical with gravitational mass.  There are gravitational mass-energies which do not have their source in the Higgs field (e.g. photons, gluon fields, kinetic energy) and the Higgs field does not generate everything that contributes to even a fundamental particle's gravitational impact.  This also suggests a need for stronger experimental tests of the equivalence of inertial and gravitational mass in systems in which Higgs field generated inertial mass is not overwhelmingly predominant.

3.  Is it sensible to imagine a fundamental particle (e.g. a sterile neutrino) that acquires inertial mass via the Higgs field, but lacks other Standard Model interactions?  There is certainly room, even given the precision of the fundamental particle Yukawa measurements and the conjecture that the sum of all Yukawas equals one, for this to be the case for a keV mass particle, for example.  But, this doesn't fit well with the notion that the Higgs boson may in some sense be a combination of the four electroweak bosons, as its mass and other properties seem to suggest (suggesting that each of the electroweak boson interactions contributes to the Higgs field, and that particles that interact with none of those fields shouldn't interact with the Higgs field either, something that is otherwise true).

Note on Wiggle Room in GR Confirmation

Some of the nuances on what non-mass energy quantities are properly included in General Relativity calculations aren't very well confirmed experimentally relative to other experimentally well confirmed predictions of GR.  In many circumstances in astronomy observations, energy contributions are so modest relative to mass contributions that they can be effectively disregarded, and gravitational energy contributions, for example, would be too tiny to observe for the most part.



Friday, December 6, 2013

Confirmation of SM Character Of Higgs Continues

Peter Woit summarizes the latest Higgs boson results from the Large Hadron Collider:
Both ATLAS and CMS have announced new data on tau-tau decays of the Higgs, providing stronger evidence for this signal than was available earlier. ATLAS sees a signal with significance 4.1 sigma, CMS at 3.4 sigma. These results are consistent with the SM, and rule out some SUSY alternatives in which the Higgs would behave differently. The Register headlines this Exotic physics takes an arrow to the knee.
Relative to the Standard Model expectation, at ATLAS the signal seen is 140% of the Standard Model value, with margins of error that fit values between 100% and 190% of it.  At CMS, the signal seen is 87% of the Standard Model value, with margins of error that fit values between 58% and 116% of it.

The opposite directions of deviation from the Standard Model expectation value at two experiments simultaneously measuring the same thing discourage speculation that the true value differs materially in one direction or the other from the Standard Model Higgs boson tau-tau decay rate.  Most importantly, this result strongly disfavors speculative theories like a "leptophobic" or "leptophilic" Higgs boson that couples with different strengths relative to mass to different kinds of particles.
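A crude way to see the point is to combine the two measurements.  The sketch below (Python) performs a naive inverse-variance weighted average of the ATLAS and CMS tau-tau signal strengths with roughly symmetrized error bars; this is purely illustrative and is not the collaborations' own combination procedure:

```python
# Naive inverse-variance weighted combination of the ATLAS and CMS
# tau-tau signal strengths (relative to the Standard Model expectation).
# Error bars are crudely symmetrized; this is not an official combination.

measurements = [
    ("ATLAS", 1.40, 0.45),  # ~140% of SM, roughly +50%/-40% symmetrized to 45%
    ("CMS",   0.87, 0.29),  # ~87% of SM, roughly +/-29%
]

weights = [1 / err ** 2 for _, _, err in measurements]
combined = sum(w * val for (_, val, _), w in zip(measurements, weights)) / sum(weights)
combined_err = (1 / sum(weights)) ** 0.5

print(f"combined signal strength ~ {combined:.2f} +/- {combined_err:.2f}")
```

The naive combination lands almost exactly on the Standard Model value of 1, well within one sigma.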

There are five kinds of decays that should be observable from the Higgs bosons (there are other kinds as well but they are expected to be vanishingly rare).  The discovery of the Higgs boson was based upon three of these decay channels, but the existence of Higgs boson decays in the tau-tau decay channel (a particle-antiparticle pair of third generation heavy electrons) and bottom-bottom (third generation down type quarks) channel remained less clear.  There was a three sigma signal of bottom-bottom decays from Higgs bosons at Tevatron before it was shut down (this is actually the main decay channel of Higgs bosons, but is harder to identify due to large backgrounds from other processes), but there was less data on the tau-tau channel.

In absolute terms, particular channels of Higgs boson decays have mostly been within about 50% of the expected values, which is within two sigma due to large margins of error.  Now that we have some reasonably significant observations in every expected decay channel and so far have not seen any unexpected decay products, the room for a non-Standard Model-like Higgs boson result narrows greatly.

Now, there is five plus sigma evidence of a Standard Model Higgs in three channels, better than four sigma evidence in the tau-tau channel, and three sigma evidence in the b-b channel.  But, the relative uncertainty in the b-b channel makes it hard to pin down the total decay spectrum of the Higgs boson experimentally.  If the b-b decay channel, for example, made up 55% rather than the expected 60% or so of expected Higgs decays, that would provide "room" for all sorts of other unexpected decays, so long as they don't involve W bosons or Z bosons or photons or charged Standard Model leptons.

The Standard Model predicts the probabilities of each of the Standard Model Higgs boson decay channels to a precision of something on the order of 1% or less for a Higgs boson with a mass known to the precision that it has been measured to date, so the prediction that the Standard Model Higgs will decay in a particular way is eminently testable.
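For reference, the approximate Standard Model branching fractions for a roughly 125 GeV Higgs boson can be tabulated and sanity checked.  The round values below are of the sort published in the standard Higgs cross section tables and are assumed inputs used only for illustration:

```python
# Approximate Standard Model branching fractions for a ~125 GeV Higgs boson.
# Round illustrative values; a few rare channels (e.g. Z-gamma, mu-mu) are
# omitted and account for the small remainder.

branching_fractions = {
    "b bbar":      0.57,
    "W W*":        0.22,
    "gluon gluon": 0.086,
    "tau tau":     0.063,
    "c cbar":      0.029,
    "Z Z*":        0.026,
    "gamma gamma": 0.0023,
}

total = sum(branching_fractions.values())
print(f"sum of listed channels: {total:.3f}")  # close to 1; the rest is rare modes
for channel, fraction in sorted(branching_fractions.items(), key=lambda kv: -kv[1]):
    print(f"  {channel:12s} {fraction:.4f}")
```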

Impact on Supersymmetry

Fortunately, many of the leading alternatives to the Standard Model, like supersymmetry (SUSY), also make quite specific predictions about how a spin-0 Higgs boson with zero electromagnetic charge and the observed mass (one of three or more neutral Higgs bosons present in SUSY theories), will behave in particular kinds of decay channels like tau-tau.

These models have moving parts (adjustable parameters) that can be used to tweak the predictions to the observed result, but the class of SUSY theories in which there is a Higgs boson that looks exactly like the Standard Model one and there are no other light Higgs bosons that can be detected with the searches that LHC has done so far is quite narrow (see also here).

The CMSSM version of SUSY, for example, isn't quite ruled out yet (and has a Higgs boson almost indistinguishable from the Standard Model Higgs boson), but is left to a steadily shrinking parameter space as SUSY particle exclusions from LHC grow larger and the anomalous magnetic moment of the muon limits how heavy its particles can get to evade LHC lower mass sparticle exclusions.

In a nutshell, it is becoming harder and harder for SUSY proponents to explain why there is still no meaningful experimental evidence of any of the myriad new particles that the theory implies.  While it is easy to devise a SUSY theory in which most of the particles are too heavy to ever be detected, it is much harder to devise one where none of them are so light that they are observable or will soon be observable.

UPDATE December 10, 2013:

Another paper rules out a cascade of SUSY Higgs bosons into the observed phenomena with experimental evidence and also places limits on the cross-sections of any SUSY Higgs boson phenomena at the LHC.




Thursday, December 5, 2013

Ancient mtDNA from Homo heidelbergensis rocks paradigms

A Siberian cave at Denisova yielded autosomal and mitochondrial DNA samples from several individuals who lived sometime in the Upper Paleolithic. This ancient DNA from an archaic hominin species was more similar to Neanderthals than to modern humans, but clearly represented its own distinct archaic hominin species. Significant archaic admixture of Denisovan DNA was found in Papuans and aboriginal Australians (modern human populations east of the Wallace line), but nowhere else. However, since the finger bones from which the DNA samples were extracted were not accompanied by sufficient skeletal remains, no identification of this rare ancient DNA could be made with any known archaic hominin species.

Now, the first sample of ancient mtDNA from Homo heidelbergensis bones in Northern Spain reveals that this archaic hominin species was more closely related to the Denisovans than to the Neanderthals and modern humans, who share a common mtDNA ancestor more recent than the H. heidelbergensis departure from the clade that it shares with Denisovans.

This is notable for two reasons.

Implications For Hominin Evolution

First, the widely shared conventional wisdom, which I shared, was that Homo heidelbergensis was ancestral to the Neanderthals in Europe, a hypothesis that the ancient mtDNA sample disclosed yesterday disfavors. This is particularly surprising given that the Homo heidelbergensis skeletal remains appear to have some Neanderthal derived features, although not all of them.

It also strongly supports those who had argued that Homo heidelbergensis was a separate species that should not just be lumped into the range of variation within Neanderthal, for example. But, it is a seemingly clear defeat for those who had suspected that Homo heidelbergensis might have been a common ancestor of both modern humans and Neanderthals (although Maju rightly retains some skepticism). He comments at his blog that:
So we could well ask, if H. heidelbergensis is not ancestral to Neanderthals, then where do Neanderthals come from?

It must be answered that we do not know yet if H. heidelbergensis is or not ancestral to Neanderthals or in what degree it is. The mitochodrial (maternal) lineage may well be misleading in this sense. Denisovans themselves were much more related to Neanderthals via autosomal (nuclear) DNA than the mtDNA, so it may also be the case with European Heidelbergensis.

In fact it is still possible that these individuals represent some sort of admixture between older and newer layers of human expansion. But there is no clear answer yet. What is clear is that no Neanderthals have these mitochondrial sequences but others closer to those of H. sapiens - and this is the most puzzling part in fact.
Who indeed?

In response to this new data point, John Hawks notes that the case for Neanderthals evolving in Western Europe, as the H. heidelbergensis fossil evidence had strongly supported, is now undermined. Considering this fact together with the presence of more genetic diversity in ancient DNA from Central European Neanderthals than in Western European ones, he notes that:

From this perspective, the evolution of Neandertals looks less and less like a European phenomenon. Instead, Europe may have been invaded repeatedly by Neandertal populations that were much more numerous elsewhere, such as western or central Asia.
More generally, this evidence seems to support the notion that the hominin evolutionary tree was much bushier than previously suspected, with many more unattested and not closely related species co-existing than we had previously believed.

Implications For Denisovan Species Identification

Second, this ancient mtDNA data positions the mysterious Denisovans much more clearly on the archaic hominin tree. They were closer to Homo heidelbergensis than to Neanderthals or modern humans. The Denisovan as Homo heidelbergensis hypothesis was one possibility discussed at this blog in late May and June of this year before anyone involved had this new ancient DNA data point.

On the other hand, the mtDNA split between Denisovans and Homo heidelbergensis is actually older than the Neanderthal-modern human split by almost 50%.  Thus, while they share a common mtDNA clade relative to other hominins for whom we have ancient DNA, their common ancestry is actually very ancient.

This can be added to the observations of John Hawks from earlier this year that the mtDNA lineage of the Denisovans is probably too close to modern humans (as estimated by mtDNA mutation rates) for the Denisovans to be a direct descendant of Homo Erectus, the first hominin species to leave Africa. Now, better informed, John Hawks, in his post on the new ancient DNA on the subject of Denisovan ancestry, emphasizes how little we really do know about the key issues:

[W]e know essentially nothing about the morphology of West or Central Asian hominins of 300,000 years ago. South Asia and Southeast Asia were likewise inhabited throughout this period but we have only the barest hints about the morphology of their inhabitants. These peoples existed just inside the range of archaeological visibility but we lack any but the most rudimentary fossil evidence of them.

To be sure, many people have been assuming that the Denisovans were some kind of East Asian population, for example in China or Southeast Asia. In the process, they have projected the characteristics of the Asian fossil record upon them. That idea has been supported by the existence of Neandertals to the west, and also the sharing of some Denisovan similarity in the genomes of living Australians and Melanesians.

But that's a big assumption. Let's explore an alternative: that the Denisovans we know are in part descendants of an earlier stratum of the western Eurasian population. Although they are on the same mtDNA clade, the difference between Sima and Denisova sequences is about as large as the difference between Neandertal and living human sequences. It would not be fair to say that Denisova and Sima represent a single population, any more than that Neandertals and living people do. But they could share a heritage within the Middle Pleistocene of western Eurasia, deriving their mtDNA from this earlier population.

Thus, we now know that this "sub-genus" identification for the Denisovans implies that the source of Denisovan mtDNA in modern humans must have been intrusive to Indonesia, where it probably introgressed into modern human DNA, rather than being a population that sprang out of Asian Erectus populations to a Siberian refugium. They replaced or coexisted with Homo Erectus.

This also leaves open the question of where in the genetic phylogeny Homo floresiensis belongs. Their proximity to ground zero of the modern human introgression of Denisovan DNA still makes H. floresiensis a prime candidate for the source of that DNA in modern humans, until ancient DNA can rule them out. But, if Homo floresiensis (aka the hobbits) were the source of this archaic DNA introgression, then it follows that they must have been not pygmy Homo Erectus, as many people have previously supposed, but pygmy members of the Homo heidelbergensis clade.

More HEP Experimental Data And Musings On The HEP Process

* The latest measurement of Higgs boson decays from the CMS experiment at the Large Hadron Collider (LHC) reveals decays in the WW channel to be 72% of the amount expected in the Standard Model for a 125.6 GeV Higgs boson, which is 1.3 sigma from the Standard Model expectation and hence is considered a confirmation that what has been seen at the LHC has precisely the properties of a Standard Model Higgs boson.

The disparity between a no Higgs boson hypothesis and the data observed in WW decay channels alone is significant at a 4.3 sigma level, almost enough data to constitute a Higgs boson discovery even in the absence of evidence of a Higgs boson's existence found in other decay channels.  Both ATLAS and CMS had discovered a Higgs boson at more than the five sigma threshold of discovery in data through 2012 (which is what was considered in this paper; the LHC is currently off line and being renovated).

The observed decay product counts were somewhat less than the Standard Model expectation in four out of five WW decay channels, although four of the individual channels were within 1 sigma of the expected value and the fifth was within 1.2 sigma of the expected value.  These subsets, however, had very large degrees of uncertainty due to smaller sample sizes.

This data also further confirms prior conclusions about the Higgs boson's spin and parity (its neutral electric charge has never been in doubt and its mass has been known to considerable precision for about a year now).  The data show that this particle is a spin-0 scalar particle, rather than a spin-0 pseudo-scalar particle or a spin-2 particle, in accord with the Standard Model Higgs boson expectation.  The data's exclusion of a spin-2 hypothesis is quite definitive (two to three sigma depending on your assumptions), but while the data favor a scalar over a pseudoscalar hypothesis by about a 2-1 margin, both are consistent with this CMS data at a 1 sigma level.

The CMS data on both points is consistent with the data from the ATLAS experiment at the LHC.

Basically, the methodology consists of calculating how many events of five different types would be predicted by the Standard Model with a Standard Model Higgs boson of the appropriate mass, including background events from other processes that have the same decay products, and then counting the number of those events that were actually observed in Higgs boson decays seen at the LHC.  Of course, finding and counting these exceedingly rare events in the remnants of billions of collisions requires amazing devices to create these energetic collisions en masse, sublimely accurate detectors to measure what happens in them, and incredible computer power and scientifically and statistically informed software to use big data techniques to cull the events you are looking for completely and accurately from the raw detector data, which must, in the first instance, reconstruct every single collision event's decay products into a complete decay story on an automated basis.  The "boring" pages of the paper explain the myriad procedures, assumptions and techniques that went into getting this result.
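Stripped of all of that machinery, the statistical core of such an analysis is a comparison of observed event counts with expected counts.  A toy sketch follows (Python, with invented numbers, and nothing like the full likelihood fits the collaborations actually perform):

```python
# Toy illustration of comparing an observed event count with the expected
# background-only and background-plus-signal predictions.
# All numbers are invented for illustration; real analyses use full likelihood fits.

import math

expected_background = 120.0   # events expected from non-Higgs processes
expected_signal = 30.0        # additional events expected from SM Higgs decays
observed = 148                # events actually counted

# Naive significance of the excess over background, in standard deviations
naive_significance = (observed - expected_background) / math.sqrt(expected_background)
print(f"naive excess significance ~ {naive_significance:.1f} sigma")

print(f"background only expects {expected_background:.0f}, "
      f"background + signal expects {expected_background + expected_signal:.0f}, "
      f"observed {observed}")
```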

* A combination of new lattice QCD calculations and the latest data on semileptonic kaon decays has made possible a new record level of precision in measuring the up-strange element of the CKM matrix. The new value is 0.22290(90) which now is approximately the same level of precision as the up-down element of the CKM matrix (the up-down element's current measured value is 0.97425+/- 0.00022, although I have seen the global average value reported as 0.97427(15)).  This study should tweak down the mean value of and reduce the margin of error in the old global average value of 0.22534(65).

The up-strange element divided by the up-down element of the CKM matrix is the tangent of the Cabibbo angle, which is about 13 degrees.  In the Wolfenstein parameterization of the CKM matrix, the up-strange element of the CKM matrix is defined to be equal to the lambda parameter, which is used to calculate all but one of the other CKM matrix entries.

Since the CKM matrix is unitary (i.e. the sum of the squares of the up-down, up-strange and up-bottom elements equals exactly 1), this measurement also improves the accuracy with which the tiny up-bottom element of the CKM matrix is known. The up-bottom element of the CKM matrix is approximately 0.003395 (the old global average value for this element is 0.00351+0.00015-0.00014). [Updated June 14, 2014 to correct typo.]

In English, this means that when an up quark emits a W+ boson, it has roughly a 1 in 100,000 chance of becoming a bottom quark, about a 497 in 10,000 chance of becoming a strange quark, and about a 9,492 in 10,000 chance of becoming a down quark, subject to small margins of error (the roughly 11 parts in 10,000 by which these probabilities fall short of summing to one reflects the roughly two standard deviation tension with unitarity discussed below).  This study reduced the experimental margin of error in the number of strange quarks produced from about 2-3 per 10,000 to about 1 per 10,000, and tweaked the expected number of such transitions by one or two.

Until now, the error in the up-strange element was about one part in 200. Now, the error in both the up-down and up-strange elements is closer to one part in 300 to one part in 400. Using direct measurements of each of the three elements separately, the deviation from unitarity is about two standard deviations, or more precisely: -0.00115(40)(43) with the first error from the up-strange element and the second from the up-down element.
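The arithmetic behind these statements is simple enough to reproduce.  The sketch below (Python) takes the quoted element values as inputs, treats them as exact for illustration, and computes the Cabibbo angle, the transition probabilities, and the first-row unitarity deficit:

```python
# First-row CKM arithmetic using the values quoted above (treated as exact).

import math

V_ud = 0.97425   # up-down element
V_us = 0.22290   # up-strange element (new lattice-QCD-assisted value)
V_ub = 0.003395  # up-bottom element

# Cabibbo angle from the ratio of the first two elements
cabibbo_deg = math.degrees(math.atan(V_us / V_ud))
print(f"Cabibbo angle ~ {cabibbo_deg:.2f} degrees")

# Probabilities of each transition (per 10,000) are the squared elements
for name, v in [("down", V_ud), ("strange", V_us), ("bottom", V_ub)]:
    print(f"u -> {name}: {v ** 2 * 10_000:.1f} per 10,000")

# First-row unitarity check: the sum would be exactly 1 for a unitary matrix
row_sum = V_ud ** 2 + V_us ** 2 + V_ub ** 2
print(f"first-row sum of squares = {row_sum:.5f} (deficit {1 - row_sum:.5f})")
```

The deficit that comes out of this sketch is about 0.0011, matching the roughly two standard deviation tension quoted above.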

The nine elements of the CKM matrix are described by four experimentally measured Standard Model parameters (there is actually more than one scheme by which the matrix can be reduced to four parameters and three parameterizations are in wide usage).  The up-strange element is the product of two out of four parameters in two of the main schemes for parameterization and of one parameter in another.

* New and improved measurements of the mean lifetime of the antiparticle of a strange B meson have also been made at the LHC.  An anti-B0s meson is a composite particle composed of an anti-strange quark and a bottom quark.  This is an utterly routine paper that confirms Standard Model expectations, but I'll take a little time today to explain this result and how it fits into a larger context, since it is a good example of the routine, every day work of experimental high energy physicists, so that the purpose and importance of this kind of work can be better understood.

The paper is:
Measurement of the B̄0s → Ds− Ds+ and B̄0s → D− Ds+ effective lifetimes
LHCb collaboration: R. Aaij, et al. (674 additional authors not shown)
(Submitted on 4 Dec 2013)
What is being measured?

Background on hadron decay

Atomic nuclei are made up of protons and neutrons.  Protons and neutrons are by far the most stable examples of more than a hundred kinds of composite particles made up of quarks, called hadrons.  Almost always, hadrons are made up of either two quarks, in which case they are called mesons, or of three quarks, in which case they are called baryons, such as the proton and neutron.

Like every hadron (except the proton and the neutron when confined in an atomic nucleus that is small enough), the antiparticle of a strange B meson, the hadron whose decays are examined in this paper, is unstable.

Hadrons can always decay in more than one way, with some decay channels often being much more common than others.  The possible decay products are limited by a variety of conservation laws in quantum physics - conservation of electric charge, conservation of baryon number, conservation of mass-energy, and so on.  But, every decay path that respects those conservation laws will happen with a probability that can be calculated from first principles using the equations of the Standard Model, including a number of its key parameters.

Some Standard Model parameters like the Z boson mass, the electromagnetic coupling constant, the neutrino masses, the PMNS matrix elements, the charged lepton masses, the Higgs boson mass, and the top quark mass are irrelevant to the process or have such a tiny effect that they can be disregarded when making calculations.  But, the weak force coupling constant and CKM matrix elements for the quarks in the source hadron and decay product hadrons are critical to making the calculation.

Each possible decay route occurs with a probability that can be calculated.  This probability corresponds to the partial decay width of that particular decay path.  The partial widths of all of the decay modes combine to give the total decay width of the particle.  In the Standard Model, decay width and mean lifetime are simple functions of each other, so the effective lifetime of a particular fundamental or composite particle for each decay channel can be determined by knowing what proportion of particles of a particular type decay into decay products of a particular type.

Interestingly, in the Standard Model, you calculate the probabilities related to each decay channel separately, rather than as part of the whole.  The fact that these probabilities both match experimental data and always add up to 100% (apart from uncertainties like rounding errors and uncertainty in the measured values of the Standard Model parameters) is itself a property of the Standard Model equations that lends support to the correctness or near correctness of the model, and profoundly limits the room for modifications of it that can be consistent with the data.
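A minimal sketch (Python) of the bookkeeping that relates partial widths, branching fractions, and lifetimes follows, using invented partial widths for a fictitious hadron rather than any measured values:

```python
# Illustration of the bookkeeping that relates partial decay widths,
# branching fractions, and mean lifetimes.  The widths below are invented
# numbers for a fictitious hadron, not measured values.

HBAR_GEV_S = 6.582e-25  # reduced Planck constant in GeV*seconds

partial_widths_gev = {   # one entry per decay channel
    "channel A": 3.0e-13,
    "channel B": 1.2e-13,
    "channel C": 0.3e-13,
}

total_width = sum(partial_widths_gev.values())
mean_lifetime_s = HBAR_GEV_S / total_width
print(f"total width {total_width:.2e} GeV -> mean lifetime {mean_lifetime_s:.2e} s")

for channel, width in partial_widths_gev.items():
    branching_fraction = width / total_width
    print(f"{channel}: branching fraction {branching_fraction:.2%}")

# The branching fractions sum to 100% by construction; the non-trivial
# Standard Model statement is that independently calculated partial widths
# reproduce both the measured fractions and the total width.
```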

The Standard Model predicts the existence of more than a hundred different kinds of mesons (made of two quarks) and baryons (made of three quarks), collectively hadrons. All but a dozen or two of them have been observed experimentally, and those that have not been observed are precisely the very heavy ones that are expected to be created only infrequently.  The observed hadrons, collectively, have several hundred decay channels that are frequent enough to have been measured accurately.

Consistent with the Standard Model, every single hadronic decay product that has been observed is made up of just five kinds of quarks (up, down, strange, charm and bottom; the sixth kind of quark, the top quark, does not generally form hadrons), and none of the observed decay products ever violate the Standard Model's conservation laws.

Even more remarkably, out of the many hundreds of decay channels that have been observed from the scores of hadrons that have been observed, the relative frequency of the quark content of the decay products in every single one of them is consistent to within the boundaries of measurement error with a simple nine element CKM matrix (which can be fully described with just four parameters) that provides the probability of a quark of one type being transformed into a quark of another type when it emits a W boson.

This consistency is powerful proof of the Standard Model which is cross-checked again and again every single time that a paper like this one measures the properties of a particular hadronic decay channel from a particular kind of hadron.

This paper's findings.

This paper looks at decays of the antiparticle of the strange B meson, aka the anti-B0s meson, an electromagnetically neutral particle made up of two quarks confined into a composite particle by the strong force of QCD, in which the constituent quarks are a bottom quark and an anti-strange quark (the antiparticle of a strange quark).

The paper looks at data regarding two common decay channels for the anti-B0s meson.

One common decay channel for the anti-B0s meson is a decay to a pair of strange D mesons, one positively charged and one negatively charged (i.e. one composite particle made up of a charm quark and an anti-strange quark, and one composite particle made up of an anti-charm quark and a strange quark).

Another common decay channel is to a negatively charged D meson and a positively charged strange D meson (i.e. one composite particle made up of a down quark and an anti-charm quark, and another composite particle made up of a charm quark and an anti-strange quark).

The paper reports that the mean lifetimes for each of these decay paths have been measured with great precision - about 1.379 picoseconds for the pair of strange mesons decay path (with about a 3% margin of error) and 1.52 picoseconds for the charged D meson and strange D meson decay path (with about a 10% margin of error).

Recap of background as related to the larger context of this experiment

In the Standard Model, the probability that a meson will decay via a particular decay path is largely a function of the three CKM matrix entries for each of the two quarks in the meson, along with the weak and strong force coupling constants and the equations of the weak force and QCD.

Since quarks (other than top quarks) are always confined, the CKM matrix elements are determined by observing the decay of mesons and baryons which are composite particles into other mesons and baryons (in addition to any resulting leptons and bosons emitted by the W boson involved in the decay) and using data from a variety of different scenarios where this happens to back out the properties of individual constituent quarks in these composite particles from the experimentally measured properties of hadrons that contain them.  Meson decays are particularly attractive for these kinds of studies because two quark systems are usually simpler to analyze than three quark systems.

Research these days focuses on heavy mesons like B mesons (which contain bottom quarks) and D mesons (which contain charm quarks), particularly the strange B mesons and strange D mesons (which have no "plain vanilla" up and down quarks), because these are produced far less often than lighter mesons made up only of up, down and strange quarks, so experimental data on their decays is more sparse and hence less precise.  Also, it is harder to do QCD calculations for these systems because one cannot use the much easier to calculate with three quark type approximations - one must do more difficult four or five quark type calculations (including the impact of the existence of top quarks in the QCD calculations usually adds less accuracy than the uncertainty in the underlying physical parameter inputs for calculations involving hadrons).

Thus, while individually this is just another ho-hum measurement, collectively the measurements of the mean lifetimes of all mesons and baryons for which measurements can be made allow us to calculate a variety of Standard Model parameters such as CKM matrix elements.  Moreover, if the Standard Model is correct, the CKM matrix elements for any given quark have to produce results consistent with experiment in all of the scores of different hadrons that include quarks of that type.

To oversimplify, there are half a dozen collider experiments in the world.  They analyze the products of vast numbers of collisions and use a variety of data points about each one to determine from the decay products what the hadron at the start of the decay chain was and by what path it broke down into the decay products that were observed.  This data is put into files for each kind of original hadron with subfiles for each decay type from that kind of hadron that is observed with the precisely measured properties and frequency of that kind of decay.  The data in each subfile is analyzed and reduced to a paper like the one linked above.

Then, every year or two, somebody at the Particle Data Group compiles the conclusions of all of these papers into a consistent format and calculates a weighted average of all the data ever collected for that decay channel.

On about the same time frame, somebody does a meta-analysis of all of the different papers in the published literature with data relevant to the Standard Model parameter that they are studying, with help from the compilation at the Particle Data Group, which indexes these reports as footnotes to its entries.

In this analysis, they back out, for example, best estimates of the bottom-to-charm element of the CKM matrix from the hadronic measurements and the relevant formulas, review the data for tensions between the values implied by different kinds of decays (reconciling them with further analysis when possible), and then use weighted averages supplemented by Standard Model specific theoretical analysis to revise the estimates for these parameters.  The paper on semileptonic kaon decays earlier in this post nicely illustrates this process.

The kaon decay paper also illustrates that, without state of the art lattice QCD calculations, the process of backing CKM elements out of the experimental data can't be done with any great precision from first principles, although mere knowledge of the QCD equations together with data from similar hadrons can make it possible to predict the  properties of new hadrons by extrapolating from those whose properties have already been measured in sensible ways.  This is the state of QCD generally.  Many hadron observables have predicted values calculated with QCD from first principles that are far less precise than the experimental observations of those hadron observables.  The most precise QCD calculations, at best, rival the accuracy of the experimental observations to date (in part, however, because the accuracy of the QCD input parameters limits the accuracy of its predictions).

This is an ongoing labor involving something on the order of tens of thousands of highly skilled physicists and technicians every year that has gone on steadily (with varying levels of people committed to the effort from year to year, gradually going from the hundreds to the thousands to the tens of thousands over time with some bumps in the road as experiments are shut down before new ones are opened) for the last fifty years or so.

The experimentally measured Standard Model parameters, whose measurements to date can be summed up on a single page of paper, have cost something on the order of hundreds of billions of 2013 dollars over more than five decades, and a meaningful share of the brightest scientific minds in the world, to determine.  Many of those values are still known only imprecisely and will take hundreds of billions of additional dollars to refine.

In a fine illustration that the Marxist labor theory of value is not true, however, that page of data that cost hundreds of billions of dollars to create can be obtained for free from reliable sources on the Internet.  The same information, had it been available to scientists in the late 1930s, could easily have made it possible for whoever had been in possession of it to win World War II.

UPDATE:

* A paper claiming to have used lattice QCD methods to establish the charm quark mass with a precision of less than 1% is worth noting.  The claimed value is 1.273(6) GeV.  Continuum QCD methods have apparently reached similar precision.

Notably, this means that the Koide triple value for the charm quark mass from the t-b-c triple, which is 1.356 GeV, is now off by about 6.5%, which is more than twelve sigma from the new precision value.

The author of this paper is also a co-author of the Kaon decay paper described above. Earlier this year this author and others concluded using similar methods that the b quark mass was 4.166(43) GeV.  See also an early paper on these calculations here.  These results still leave the masses of the three light quarks (u, d and s) quite uncertain.  But, the three heavy quark masses are all now quite precisely known.  Another paper with this author as a co-author estimates the charm-down quark element of the CKM matrix.

Other investigators have made great progress in determining the strange quark mass, concluding in May of this year that it was 94 +/- 9 MeV, a reduction in the previous uncertainty of about two-thirds (the Koide ladder prediction based on the top and bottom masses had been 92 MeV).
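For readers unfamiliar with the Koide-style triples mentioned above, the sketch below (Python) shows the arithmetic: given two quark masses, it solves the Koide relation (m1 + m2 + m3)/(sqrt(m1) + sqrt(m2) + sqrt(m3))^2 = 2/3 for the third mass.  The top and bottom mass inputs are round assumed values, so the output only roughly reproduces the 1.356 GeV figure quoted above:

```python
# Solve the Koide relation
#   (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))**2 = 2/3
# for the third mass, given two heavier masses.  Inputs are approximate values in GeV.

import math

def koide_third_mass(m1, m2):
    """Smaller root for m3 satisfying the Koide relation with m1 and m2."""
    a, b = math.sqrt(m1), math.sqrt(m2)
    s = a + b
    # Substituting x = sqrt(m3) gives x**2 - 4*s*x + (3*(a*a + b*b) - 2*s*s) = 0
    disc = 6 * s ** 2 - 3 * (a ** 2 + b ** 2)
    x = 2 * s - math.sqrt(disc)   # take the smaller of the two roots
    return x ** 2

m_top, m_bottom = 173.1, 4.18     # approximate quark masses in GeV (assumed inputs)
print(f"Koide t-b-c prediction for the charm mass: {koide_third_mass(m_top, m_bottom):.3f} GeV")
```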