Wednesday, January 29, 2014

Distinguishing the SM and SUSY With Running Coupling Constants At The LHC



The chart above, via Lubos Motl's blog, illustrates the running of the inverses of the Standard Model (SM) and Minimal Supersymmetric Standard Model (MSSM) coupling constants with energy scale for the electromagnetic force, i.e. U(1), the weak force, i.e. SU(2), and the strong force, i.e. SU(3).

Distinguishing the Standard Model From SUSY Via The Running Of Gauge Coupling Constants

One of the clear and generic differences between supersymmetry theories and the Standard Model of particle physics is that the beta functions of the three fundamental forces, electromagnetism, the weak force and the strong force, are very different.

These differences cause the MSSM to experience gauge coupling unification (i.e. a point at which all three forces have equal strength) at the grand unification theory (GUT) scale of about 10^15 GeV (in contrast, interactions at the LHC are in the general vicinity of 10^3 GeV aka 1 TeV).  Gauge coupling unification is one of the features of SUSY theories generically that makes them attractive, and the GUT scale doesn't vary all that much between variations on the SUSY theme.

Supersymmetric theories generally presuppose that above the GUT scale there is only a single unified force, whose symmetry is spontaneously broken below that scale into the three separate gauge couplings, and that high energy physics above the GUT scale is essentially different in kind from the low energy effective SUSY theory - conveniently affording cosmology theorists freedom to effectively change the laws of physics during the first 10^-32 seconds or less after the Big Bang (by way of comparison, it takes roughly a million times as long as that for hadronization of quarks to occur, or for a top quark, W boson or Z boson to decay once emitted).  By 10^-6 seconds after the Big Bang, the laws of physics would start to take approximately their current form.

In contrast, the three Standard Model beta functions for its gauge couplings, unmodified by new physics, do not converge at a single point.  The electroweak gauge coupling constants converge to identical values in the Standard Model at about 10^12 GeV, and at all energy scales before that point, the strong force coupling constant is larger than the weak force coupling constant which is in turn larger than the electromagnetic coupling constant, even though their relative strengths start to converge.

But, after that point, the relative strengths of the electromagnetic and weak force coupling constants are inverted; then the electromagnetic force coupling constant becomes equal to the strong force coupling constant, after which their strengths are also inverted; and finally, the weak force and strong force coupling constants converge around 10^18 GeV, at which point the electromagnetic force is stronger than either of the other two forces.

In both the Standard Model and supersymmetric models, at higher energies the electromagnetic force gets stronger and the strong force gets weaker.  But, in supersymmetric models, the rate of change in the electromagnetic force per unit change in the logarithm of energy is about 50% greater, and the rate of change in the strong force per unit change in the logarithm of energy is about 33% smaller.

The running of the weak force coupling constant differs even more dramatically between the two models. Starting around 1 TeV, i.e. at energies accessible before the LHC completes its run, the direction in which the weak force coupling constant runs is different between the SM and supersymmetric models.  In the Standard Model, the weak force is weaker at higher energies, while in supersymmetric models, generically, the weak force is stronger at higher energies.  Also, the weak force coupling constant in the Standard Model runs about five times as fast as it does in the MSSM.
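
For readers who want to reproduce the qualitative picture in the chart above, the leading order behavior is easy to compute.  Below is a minimal sketch using the standard one-loop beta coefficients for the SM and the MSSM, with the simplifying (and unrealistic) assumption that the full MSSM particle content is active all the way down to the Z boson mass; the approximate inverse couplings at the Z boson mass are rough literature values, and the precise crossing scales shift with two-loop corrections and threshold choices.

```python
# A minimal sketch of one-loop gauge coupling running in the SM and MSSM.
# Simplifying assumption: MSSM particle content active from M_Z upward,
# rather than from a realistic multi-hundred-GeV SUSY threshold.
import math

M_Z = 91.19                        # GeV
ALPHA_INV_MZ = [59.0, 29.6, 8.45]  # 1/alpha_i at M_Z: U(1) (GUT norm.), SU(2), SU(3)
B_SM = [41 / 10, -19 / 6, -7]      # one-loop SM beta coefficients
B_MSSM = [33 / 5, 1, -3]           # one-loop MSSM beta coefficients

def inverse_couplings(mu, b):
    """1/alpha_i at scale mu (GeV): straight lines in log(mu) at one loop."""
    t = math.log(mu / M_Z)
    return [a - bi * t / (2 * math.pi) for a, bi in zip(ALPHA_INV_MZ, b)]

for mu in (1e3, 1e13, 2e16):
    sm = ", ".join(f"{x:5.1f}" for x in inverse_couplings(mu, B_SM))
    mssm = ", ".join(f"{x:5.1f}" for x in inverse_couplings(mu, B_MSSM))
    print(f"mu = {mu:.0e} GeV   SM: ({sm})   MSSM: ({mssm})")
```

Run as-is, the SM lines cross pairwise at three different scales, while the MSSM lines very nearly meet at a single point around 10^16 GeV with these naive inputs.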

Experimental Prospects For Discriminating Between SM and SUSY Gauge Coupling Constant Running At The LHC

What follows is some analysis of our prospects for distinguishing experimentally between the running of particular gauge coupling constants as predicted by the Standard Model and as predicted by SUSY, accompanied by some back-of-napkin estimates of the numbers involved based on the literature.

In a nutshell, distinguishing experimentally between SUSY and the SM via the running of the strong force coupling constant looks challenging, but the prospects for making that distinction via the observed running of the fine structure constant and the weak force coupling constant look potentially quite fruitful.

Of course, SUSY models are not the only new physics models that would change the running of the gauge coupling constants at TeV scale energies relative to the Standard Model prediction, so experiments measuring these parameters at the LHC, in general, serve as a good model independent way to search for new physics there.  These measurements could quite possibly discern deviations from the Standard Model prediction at energies far lower than those at which any other observable consequences of the new physics would appear.

The Strong Force Coupling Constant - Prospects Weak At LHC

The running of the strong force coupling constant is very difficult to measure with precision, and the difference between its strength at easily attained energies like the Z boson mass and its strength at the highest energies attainable by the LHC, a bit more than one order of magnitude higher, is easily calculated theoretically.

But, these differences are quite small relative to the precision with which the strong force coupling constant can be measured at all in a particular experiment.  And, the difference between the running of the strong force coupling constant in the SM and MSSM is smaller than the differences between the running of the other two coupling constants.  So, it is unlikely that the LHC will be able to use the running of the strong force coupling constant to distinguish between the Standard Model and supersymmetric models.

The strong force coupling constant, which is 0.1184(7) at the Z boson mass, would be about 0.0969 at 730 GeV and about 0.0872 at 1460 GeV in the Standard Model, and the highest energies at which the strong force coupling constant could be measured at the LHC are probably in this vicinity.

In contrast, in the MSSM, we would expect a strong force coupling constant of about 0.1024 at 730 GeV (about 5.7% stronger) and about 0.0952 at 1460 GeV (about 9% stronger).

Current individual measurements of the strong force coupling constant at energies of about 40 GeV and up (i.e. without global fitting or averaging over multiple experimental measurements at a variety of energy scales), have error bars of plus or minus 5% to 10% of the measured values.  But, even a two sigma distinction between the SM prediction and SUSY prediction would require a measurement precision of about twice the percentage difference between the predicted strength under the two models, and a five sigma discovery confidence would require the measurement to be made with 1%-2% precision (with somewhat less precision being tolerable at higher energy scales).
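
As a back-of-napkin illustration (not the higher-order calculation behind the numbers quoted above), the sketch below runs alpha_s at one loop in the SM (b3 = -7 with six active flavors) and with the MSSM coefficient (b3 = -3) naively applied from the Z boson mass upward, and translates the SM/MSSM gap into the measurement precision needed for a 2 sigma or 5 sigma separation.  The naive MSSM treatment overstates the gap relative to a realistic calculation with SUSY thresholds near a TeV.

```python
# One-loop alpha_s running, SM vs. a naive MSSM (assumption: MSSM beta
# coefficient applied from M_Z rather than from a realistic SUSY threshold).
import math

M_Z, ALPHA_S_MZ = 91.19, 0.1184

def alpha_s(mu, b3):
    """One-loop alpha_s at scale mu (GeV) for beta coefficient b3."""
    return 1 / (1 / ALPHA_S_MZ - b3 * math.log(mu / M_Z) / (2 * math.pi))

for mu in (730.0, 1460.0):
    sm, mssm = alpha_s(mu, -7), alpha_s(mu, -3)
    gap = (mssm - sm) / sm
    print(f"{mu:6.0f} GeV: SM {sm:.4f}  naive MSSM {mssm:.4f}  gap {gap:+.1%}; "
          f"2 sigma needs ~{abs(gap) / 2:.1%}, 5 sigma ~{abs(gap) / 5:.1%} precision")
```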

The Fine Structure Constant - Prospects Good At LHC

The differences between the running of the electromagnetic force coupling constant (aka the fine structure constant) in the Standard Model relative to supersymmetric models are also fairly modest over one order of magnitude, but they are still moderately more distinct than the differences between the two models in the running of the strong force coupling constant.

But, because the strength of electromagnetic interactions can be measured with a precision approximately 100,000 times as great as that of strong force interactions, the prospects of being able to distinguish between the Standard Model and supersymmetric models based upon the running of this coupling constant at the LHC are much greater.

Even back in 2000, experimenters were able to measure differences in the magnitude of the fine structure constant in experiments spanning momentum transfers from about 2 GeV^2 to 3434 GeV^2 with a precision equal to about a third of the observed differences in force strength at the different energy levels (which was consistent with the Standard Model prediction).  The energies at the LHC are five to eight times as great as those probed in 2000, and the precision of the measurements at the LHC on a percentage basis is probably at least somewhat improved from those made a decade and a half earlier.  The amount by which the fine structure constant should run under SUSY models at peak LHC energies should be on the same order as the amount by which it should run in the Standard Model at energies two and a half to four times as great as those probed in 2000.

By 2011, measurements of the running of the fine structure constant at low GeV scale energies at the BES experiment were far more precise, measuring the differences in coupling strength due to its running with a precision of 1.2% or so.  The 2011 study also illustrates the capacity of relatively low energy scale precision electroweak measurements to shed light on phenomena that actually appear at one or two orders of magnitude greater energies.  The measurements in the 2011 paper increased the high end of the two sided one sigma confidence interval electroweak prediction of the Higgs boson mass from 115 GeV to 128 GeV, finally extending this range to masses that encompassed the ultimately discovered Higgs boson mass.  Yet, the study itself only measured events taking place at energies ranging from 2.6 GeV to 3.65 GeV.
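
The leptonic part of this running is calculable from first principles at one loop, which is a useful way to see why such precision is achievable; the hadronic part (roughly 0.0277 at the Z boson mass) is what experiments like BES actually pin down from e+e- data.  A sketch, with the hadronic piece put in by hand as an assumed constant:

```python
# Leading-log leptonic running of the fine structure constant.
# Assumption: the hadronic contribution is inserted by hand (~0.0277 at M_Z)
# rather than computed, since it must be extracted from e+e- data.
import math

ALPHA_0 = 1 / 137.036
LEPTON_MASSES = {"e": 0.000511, "mu": 0.105658, "tau": 1.77682}  # GeV
DELTA_ALPHA_HAD = 0.0277  # assumed hadronic piece at M_Z, from the literature

def delta_alpha_leptons(q):
    """One-loop leptonic vacuum polarization contribution at scale q (GeV)."""
    return sum(ALPHA_0 / (3 * math.pi) * (math.log((q / m) ** 2) - 5 / 3)
               for m in LEPTON_MASSES.values() if q > m)

q = 91.19  # Z boson mass
alpha_q = ALPHA_0 / (1 - delta_alpha_leptons(q) - DELTA_ALPHA_HAD)
print(f"1/alpha({q} GeV) ~ {1 / alpha_q:.1f}  (vs. 137.0 at zero momentum)")
```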

Naively then, it ought to be possible to discriminate experimentally between the running of the fine structure constant in SUSY models and in the Standard Model at something on the order of 3-4 sigma by the time that the LHC's run is complete.

But, there may be material model dependence in an experiment based upon the running of the fine structure constant.

While the slope of the running of the fine structure constant in the Standard Model relative to the logarithm of the energy scale involved is fairly flat, in SUSY models the slope of the running of the fine structure constant is typically kinked, becoming more pronounced at masses of ca. 1-2 TeV as the impact of supersymmetric particles somewhat below those masses kicks in.  For whatever reason, visually at least, in charts of the running of SUSY gauge coupling constants, this kink appears more pronounced for the electroweak forces than it does for the strong force.

Thus, distinguishing between the SM and SUSY based upon the running of the fine structure constant may be more difficult than it seems if the lightest supersymmetric particle (LSP) has a mass that is close to or beyond the ability of the LHC to discern.  And, the fact that no supersymmetric particles have been observed so far at the LHC strongly favors SUSY models with a relatively heavy LSP, if indeed SUSY exists at all.

So, while the LHC may meaningfully constrain SUSY parameter space via its measurements of the running of the fine structure constant, this constraint may be less powerful than one would hope.

Perhaps the most important way in which constraints from the measured running of the fine structure constant at the LHC may constrain SUSY parameter space will be to rule out SUSY models in which there is a significantly sub-TeV superpartner that is not easily observed at the LHC.  For example, this could exclude SUSY models with an unexpectedly long lived LSP which passes beyond the range of existing detectors before it decays.  Searches for missing transverse energy already serve this purpose to some extent.  But, confirmation from the measured running of the fine structure constant at the LHC would make the conclusion drawn from missing transverse energy much more robust because the measurements in these two experimental tests are almost completely independent of each other.

The Weak Force Coupling Constants - Prospects Decent At LHC

The precision with which the weak force coupling constant's running can be measured at the LHC is intermediate between the precision with which the running of the electromagnetic fine structure constant and that of the QCD strong force coupling constant can be measured.  And, as noted above, even distinguishing between Standard Model and SUSY predictions regarding the running of this constant may be challenging and model dependent.

But, the difference between the beta function of the weak force coupling constant in the Standard Model and in supersymmetric models is so great that the signal discriminating between the two theories should be intense enough to reveal itself even if the measurements of the running of the weak force coupling constant aren't terribly precise and the theoretical differences between the Standard Model and SUSY values for it manifest only a few hundred GeV from the peak energy scales at which the LHC can measure these effects.

For example, suppose that the differences between the SM and a SUSY model's fine structure constant and weak force coupling constant both start to arise at 900 GeV and that the SUSY impact on the running of the fine structure constant is sufficiently slight that it is only definitively capable of being observed with current experimental apparatus at energy scales of 1900 GeV, at which the LHC may not be powerful enough to measure coupling constant strength.

The corresponding weak force coupling constant strength ought to change by an equal amount, and in a highly noticeable opposite direction from its previous running, with the energy scale of the experiments at 1000 GeV.  Even if this running can't be measured as precisely as the running of the fine structure constant, by a presumed peak measurable energy scale at the LHC of 1400 GeV, the signal should be as strong as the fine structure constant running signal would be at 5900 GeV or more (since the difference in the direction of the running of the coupling constant would make it easier to see even when the precision is modest).

So, even if the measurements of the running of the weak force coupling constant at the LHC are a few orders of magnitude less precise than the measurements of the running of the fine structure constant at the LHC, there is still a very good chance that the LHC could measure this running with sufficient precision to discriminate between the predictions of the two models.

Slight Tweaks To The Standard Model Could Permit Gauge Coupling Unification

Of course, most people recognize that it is not at all reasonable to be confident that the low energy effective theory called the Standard Model really holds without modification all of the way up to energies in excess of 10^12 GeV which are far beyond those that can ever be created in man made experiments.

Even if we don't discover any entirely new forces or particles between the electroweak scale and the GUT scale, that doesn't necessarily imply that the Standard Model will perform perfectly over the additional nine orders of magnitude without even slight modifications.

As it happens, a very subtle tweak to one or more of the Standard Model beta functions could make gauge coupling unification in the Standard Model possible.

For example, in the Standard Model the strong force coupling constant gets about 75% weaker between 1 TeV and 10^12 GeV.  But, if instead it declined by 78% over those nine orders of magnitude, the three Standard Model coupling constants would converge at 10^12 GeV.  Given the immense complexity and numerous assumptions that go into the QCD and renormalization group calculations that ultimately help determine the strength of the strong force coupling constant at 10^12 GeV in the Standard Model, it would hardly be shocking to learn that some factor that could change its value by 4%-5% at such high energies was omitted or miscalculated using existing methodologies.

Similarly, the weak force coupling constant gets about 29% weaker between 1 TeV and 10^14 GeV.  But, if it got only 25% weaker over those eleven orders of magnitude, the three Standard Model coupling constants would converge at 10^14 GeV.  This would be a bigger percentage adjustment relative to the canonical Standard Model expectation, but would still be quite modest.
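
The size of the near-miss is easy to see numerically.  A one-loop sketch follows (it reproduces the roughly 29% decline in the weak coupling quoted above, though it somewhat understates the decline in the strong coupling relative to higher-order calculations):

```python
# One-loop SM couplings at 1 TeV and at candidate unification scales,
# to show how small a tweak to the running would close the gaps.
import math

M_Z = 91.19
ALPHA_INV_MZ = [59.0, 29.6, 8.45]  # U(1) (GUT norm.), SU(2), SU(3)
B_SM = [41 / 10, -19 / 6, -7]

def couplings(mu):
    t = math.log(mu / M_Z)
    return [1 / (a - b * t / (2 * math.pi)) for a, b in zip(ALPHA_INV_MZ, B_SM)]

base = couplings(1e3)  # 1 TeV reference point
for mu in (1e12, 1e14):
    now = couplings(mu)
    change = ", ".join(f"{(lo - hi) / lo:+.0%}" for lo, hi in zip(base, now))
    vals = ", ".join(f"{x:.4f}" for x in now)
    print(f"mu = {mu:.0e} GeV: alpha = ({vals}); decline since 1 TeV: ({change})")
```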

There are admittedly solid theoretical reasons for the beta functions of the Standard Model gauge couplings to have the forms that they do, and those equations have not been contradicted by experiments to date over energy scales that span three orders of magnitude.

But, one can easily imagine new physics, such as quantum gravity considerations, that gives rise to such subtle effects in the running of at least one or two of the gauge couplings at extremely high energy scales, even if there is otherwise a new physics desert between the electroweak scale studied at the LHC and the GUT scale.

My Personal Conjectures

For what it is worth, my own personal conjecture as an educated layman is that by the end of its run, the LHC will not see any statistically significant deviation from the Standard Model expectation in the observed running of the three gauge coupling constants.  If this happens, this fact, as much as the mere non-discovery of superpartners at the LHC, may be the nail in the coffin of supersymmetry, and as a consequence of that, of string theory.  Pretty much every SUSY theory that serves any of the purposes that originally motivated it (e.g. the hierarchy problem and gauge unification), even if it has an LSP with a mass on the order of 10 TeV or so, for example, should have some observable consequences at the energy scales of 1 TeV or so probed at the LHC.

Failure to discover any deviations at all from the Standard Model at the LHC, even those like the running of coupling constants that may derive from key elements of the theory at much higher energies, may not rule out every last bit of the SUSY parameter space.  But, this may be sufficient to relegate SUSY theories and M-theory to a role in the theoretical physics world comparable to other notable theories that are disfavored but have not been completely ruled out in every possible permutation, such as Technicolor, preon theories, non-SUSY GUT theories, and the like.

However, I do personally believe that someday, not necessarily soon or within the time frame of the current LHC run, it is likely that some new physics that leads to gauge coupling unification of the three Standard Model forces at an energy scale within an order of magnitude or three of the SUSY GUT scale will be discovered.  And, I further suspect that the value of the unified gauge coupling, should one be discovered, will be closer to the 0.0250 value toward which the Standard Model gauge couplings, naively extrapolated, appear to converge, than to the 0.0400 value at which the MSSM gauge couplings converge at around 10^15 GeV.  Simply put, the Standard Model comes so close to such a beautiful result that it is hard to believe that we aren't actually just missing a little something that prevents it from doing that.

Strong Force Coupling Constant Numerology



The data points confirming that the running of the strong force coupling constant (which is a dimensionless quantity) matches its Standard Model beta function for energies from 2 GeV to 200 GeV are illustrated above.  It is customary to quote the strength of this force at the 91 GeV energy scale corresponding to the mass of the Z boson.

The strong force coupling constant is equal to 1/9 plus the fine structure constant, to within the current limits of experimental accuracy, at the Z boson mass.  The current measurement of the strong force coupling constant is not sufficiently precise to distinguish between this sum using the fine structure constant at the Z boson mass of about 1/128.886 (which gives 0.11887) and this sum using the fine structure constant at zero momentum transfer of about 1/137.036 (which gives 0.11841).  The current measured value of the strong force coupling constant at the Z boson mass is 0.1184(7), i.e. 0.1177-0.1191.
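
The arithmetic behind this observation is trivial to check; the sketch below compares 1/9 plus the fine structure constant, at zero momentum and at the Z boson mass, against the measured 0.1184(7) band.

```python
# Numerology check: is alpha_s(M_Z) = 1/9 + alpha within current error bars?
ALPHA_ZERO = 1 / 137.036   # fine structure constant at zero momentum
ALPHA_MZ = 1 / 128.886     # fine structure constant at the Z boson mass
ALPHA_S, ERR = 0.1184, 0.0007

for label, a in (("alpha(0)  ", ALPHA_ZERO), ("alpha(M_Z)", ALPHA_MZ)):
    candidate = 1 / 9 + a
    pull = (candidate - ALPHA_S) / ERR
    print(f"1/9 + {label} = {candidate:.5f}  ({pull:+.2f} sigma from 0.1184(7))")
```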

This relationship breaks down, however, at lower energies where the running of the strong force coupling constant makes it much stronger.  For example, at 2 GeV which is the energy scale typically used to quote the masses of the light quarks, the strong force coupling constant is about 0.33.  See also here.

There are many ways that 1/9 could be relevant to QCD.  There are three kinds of color charge, the strong force is governed by an SU(3) group, there are three generations of quarks, etc.

The fine structure constant (i.e. the coupling constant of the electromagnetic force), meanwhile, might be a factor generally applicable to a force carried by a zero mass vector boson, something that photons and gluons have in common.

Another way to derive the strong force coupling constant from first principles would be to observe that its value is approximately equal to 1 at its peak value.  If the value of 1 is exact at this boundary condition, the experimentally measured value of the strong force coupling constant at all other energies is entirely a product of its beta function from the integer value at the boundary.  Unlike the electromagnetic force, however, the strong force does not necessarily peak at zero momentum and distance.  It relaxes, giving rise to asymptotic freedom, at very short ranges, for quarks within hadrons.

Tuesday, January 28, 2014

A New Precision Estimate Of The Bottom Quark Mass

A new study estimates the mass of the bottom quark to be 4,169 +/- 8 MeV using QCD sum rules (0.2% precision).  Another recent study estimated the bottom quark mass at 4,166 +/- 43 MeV (consistent with the new result). The new bottom quark mass estimate compares to and is consistent with a Particle Data Group world average value of 4,180 +/- 30 MeV in 2013, but is about four times as precise.

Most of the remaining uncertainty in the latest bottom quark mass estimate is attributable to uncertainty in hadron mass measurements and a very similar but slightly larger uncertainty arising from uncertainty in the strong force coupling constant, which is currently about 0.1184 +/- 0.0007 (0.6% precision) at the Z boson mass.

Other Recent Quark Mass Estimates

The best global fit estimate of the top quark mass (as opposed to the best direct measurement of this quantity) is 173,200 MeV (about 0.1% precision), and I suspect based upon Higgs vev considerations that it is probably closer to 173,180 MeV (if correct, about 0.01% precision).  Based upon direct measurements the world average of the top quark mass is 173,070 +/- 888 MeV (about 0.5% precision).

This is one of many recent claims of high precision quark mass measurements, including a charm quark mass of 1,273 +/- 6 MeV (0.5% precision).

The claims include a QCD sum rule based strange quark mass determination of 94 +/- 9 MeV (10% precision).  The Particle Data Group world average of the strange quark mass is 95 +/- 5 MeV (5% precision).  Another recent lattice QCD calculation estimates the strange quark mass at 99.2 +/- 3.9 MeV.

The best estimates of the up and down quark masses are about 2.3 +0.7 -0.5 MeV (about 25% precision) and 4.8 +0.5 -0.3 MeV (about 8% precision), respectively.  The average of the up and down quark masses is 3.5 +0.7 -0.2 MeV (about 11% precision).  The up quark mass is estimated to be 38% to 58% of the down quark mass (about 20% precision).

The Charged Lepton Masses

The charged lepton masses are known with far greater precision for the most part (per the Particle Data Group):

The electron mass is 0.510998928 +/- 0.000000011 MeV (0.000002% precision)
The muon mass is 105.6583715 +/- 0.0000035 MeV (0.000003% precision)
The tau mass is 1776.82 +/- 0.16 MeV (0.01% precision)

The Standard Model Massive Bosons

The masses of the massive bosons of the Standard Model are also known fairly precisely (per the Particle Data Group except for global fits and the Higgs boson mass):

The W boson has a best fit mass that is 80,385 +/- 15 MeV with a global fit value of about 80,376 MeV.
The Z boson has a mass of 91,187.6 MeV +/- 2.1 MeV.

The Higgs boson has a mass of about 125,600 MeV +/- 450 MeV (0.4% precision).  My strong personal conjecture is that it is in fact exactly equal to the W boson mass plus one half of the Z boson mass (i.e. about 125,979 MeV for the best fit isolated W boson mass estimate and 125,970 MeV for the global fit W boson mass estimate with precision on the order of 0.03%).
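
The arithmetic behind this conjecture is simple enough to lay out explicitly; the sketch below uses the mass values quoted above.

```python
# Conjecture check: M_H = M_W + M_Z / 2, against the measured Higgs mass.
M_W_BEST, M_W_GLOBAL = 80_385.0, 80_376.0  # MeV
M_Z = 91_187.6                             # MeV
M_H, M_H_ERR = 125_600.0, 450.0            # MeV (measured)

for label, m_w in (("best fit W  ", M_W_BEST), ("global fit W", M_W_GLOBAL)):
    conjecture = m_w + M_Z / 2
    pull = (conjecture - M_H) / M_H_ERR
    print(f"{label}: M_W + M_Z/2 = {conjecture:,.0f} MeV ({pull:+.1f} sigma)")
```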

The Neutrino Masses

We know the following about the neutrino masses, but have not yet definitively determined their absolute masses or mass hierarchy.  But, the very plausible assumption of a normal hierarchy somewhat similar to that of the quarks and charged leptons allows us to make some pretty precise estimates of these masses.

The difference between the squares of the first and second neutrino mass states is 7.50 +/- 0.20 * 10^-5 eV^2.  The square root of this difference is 8.7 +0.1 -0.2 meV (about 2% precision).

The difference between the squares of the second and third neutrino mass states is 0.00232 -0.00008 +0.00012 eV^2.  The square root of this difference is 48.2 -0.5 +1.2 meV (about 2% precision).  This tends to imply a third neutrino mass state of about 49 meV in a normal hierarchy with a nearly massless first state (the two splittings add in quadrature, not linearly).
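
The sketch below does this arithmetic explicitly, under the assumption (adopted in the next paragraph) that the lightest mass state is nearly zero.

```python
# Neutrino mass states in a normal hierarchy, assuming m1 ~ 0.
import math

DM21_SQ = 7.50e-5   # eV^2, solar mass-squared splitting
DM32_SQ = 2.32e-3   # eV^2, atmospheric mass-squared splitting

m2 = math.sqrt(DM21_SQ) * 1000             # meV
m3 = math.sqrt(DM21_SQ + DM32_SQ) * 1000   # meV
print(f"m2 ~ {m2:.1f} meV, m3 ~ {m3:.1f} meV, sum of all three ~ {m2 + m3:.0f} meV")
```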

In a normal hierarchy, if the pattern of the other fundamental fermion masses are any guide, we expect the mass of the first neutrino mass state to be << 8 meV and that the sum of the three mass states of the neutrinos is less than 60 meV which is consistent with cosmology data to date.

Other Fundamental Constants

The electromagnetic force coupling constant alpha is 0.0072973525698(24).

The Fermi coupling constant GF/(ħc)^3 is 1.1663787(6) * 10^-5 GeV^-2 (a precision of 0.00006%).

The Higgs vacuum expectation value, weak mixing angle, and weak force coupling constant can be derived from the other constants already provided.  The relationship between the Fermi coupling constant GF, the weak force coupling constant g, and the W boson mass MW is GF/sqrt(2) = g^2/(8*MW^2).  The Higgs field vacuum expectation value is the reciprocal of the square root of the quantity equal to the Fermi coupling constant times the square root of two, i.e. v = (sqrt(2)*GF)^(-1/2), or about 246.22 GeV.  The cosine of the weak mixing angle is equal to the mass of the W boson divided by the mass of the Z boson.
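
A sketch of those derivations, using the constants quoted in this post (the precise numerical inputs are illustrative):

```python
# Deriving the Higgs vev, weak coupling g, and weak mixing angle from
# G_F, M_W and M_Z, per the relationships described above.
import math

G_F = 1.1663787e-5          # GeV^-2
M_W, M_Z = 80.385, 91.1876  # GeV

v = (math.sqrt(2) * G_F) ** -0.5  # Higgs vev: v = (sqrt(2) * G_F)^(-1/2)
g = 2 * M_W / v                   # from G_F / sqrt(2) = g^2 / (8 * M_W^2)
cos_tw = M_W / M_Z                # on-shell weak mixing angle
print(f"v ~ {v:.2f} GeV, g ~ {g:.4f}, sin^2(theta_W) ~ {1 - cos_tw ** 2:.4f}")
```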

Planck's constant is 6.62606957(29)*10^-34 J*s.

The speed of light (by definition) is 299,792,458 meters per second.

The gravitational constant is 6.70837(80)* 10^-39 ħc/(GeV/c^2)^2 (a precision of about 0.01%).  Some aspects of general relativity have been tested to 0.1% precision, and others have been tested to 0.001% precision, with the greatest precision demonstrated with the equivalence principle, tested to one part per 10^13 precision.

The cosmological constant of general relativity is 10^-47 GeV^4 and is known only to about one significant digit.

We have almost all of the pieces of the puzzle

We have observed all of the particles predicted by the Standard Model and none that are not predicted by it.  We now have either measured values or very well motivated good estimates of all of the mass constants in the Standard Model, to a degree of precision that was simply not available even a couple of years ago.  Each of these constants, except the strange quark mass for which we have 5% and 5 MeV precision, is known either to better than 1% precision or to better than 1 MeV precision.  The realistic uncertainties in each of the neutrino masses are less than 3 meV.

We also have reasonably precise values for all three of the Standard Model coupling constants (to at least 0.1% precision), for all four of the Standard Model CKM parameters (to at least 15% precision), and tolerably accurate estimates for three of the four Standard Model PMNS matrix parameters (to at least 11% precision).

While not specific to the Standard Model, we also have precise measurements of the speed of light in a vacuum and Planck's constant.  For what it is worth we also have serviceable measurements of the two experimentally measured constants of general relativity: the gravitational coupling constant (to 0.01% precision), and the cosmological constant (to an order of magnitude).

Even the permitted parameter space for dark matter models is quite constrained compared to just a few years ago.  A great many dark matter models have been ruled out by observational evidence, and the parameter spaces of many of the competing approaches are quite confined.  We know how much of it there is, what kind of velocity distribution it must have, and more or less how it is distributed within galaxies.  We know it is pressureless or very nearly so.  We know that it is close to collisionless, but probably not perfectly so.  If it is a gravity modification, rather than a particle, we know its approximate form and have made an estimate of the key experimentally measured constant in such a theory to about 10% precision.

Testing Within The Standard Model Theories

The era in which it was possible to make true predictions of any of the Standard Model's experimentally measured constants, untainted by preliminary measurements of them, is gone - with the arguable exception of the CP violating phase of the PMNS matrix, for which only extremely inaccurate and preliminary measurements have been made to date.

But, the time has now come when it is possible to rigorously test possible phenomenological relationships between these constants, such as variations on Koide's formula, with a goal of piercing through to some deeper theory that underlies the Standard Model and makes these experimentally measured constants anything other than arbitrary.

I have yet to meet anyone who really believes that all of these Standard Model parameters are simply arbitrary.  There are many "coincidences" that flow from the particular values of these parameters.

If suspected relationships between the electroweak boson masses and the Higgs boson mass, and between the Higgs vev and the aggregate squared masses of the Standard Model, are correct, this removes at least three free parameters from the Standard Model - the Higgs boson mass, the electromagnetic coupling constant and the weak force coupling constant.  On these assumptions, the Higgs boson mass can be determined from the W and Z boson masses, the weak mixing angle can be determined from the W and Z boson masses, the weak force coupling constant can be determined from the total set of fundamental particle masses in the Standard Model, and the electromagnetic coupling constant can be determined from the weak force coupling constant and the weak mixing angle.

Koide's rule for charged leptons reduces the number of free fermion mass parameters by one.

The connection between the Higgs boson mass and the value of that mass that maximizes photon production in its decays, the value of that mass that makes the vacuum metastable, and the value of that mass that runs to zero at a GUT scale all seem to coincide, and possibly set a mass scale for all of the Standard Model masses.

This still leaves thirteen mass parameters, eight mixing matrix parameters, and the strong force coupling constant, for a total of twenty-two free experimentally measured parameters.  But, eliminating four experimentally measured parameters is progress.

The sense that the mass parameters are related to each other and to the mixing matrix parameters in some formulaic way, while elusive, seems very likely to be right.  The extended Koide's formula, for example, comes quite close to predicting the quark masses from the two most precisely measured charged lepton masses, and a variant on it comes quite close to predicting the relative masses of the neutrinos.  It wouldn't be at all surprising if one genius tweak to that formula could make all nine of the charged fermion masses determinable from just two fermion masses.  It also doesn't seem at all implausible that the CKM matrix elements could be derived in some way from the quark masses.  There is structure there, even if no one has puzzled out its exact nature yet, perhaps in part because no answer reached could ever be conclusive without the level of precision measurements that we have now achieved.

Constraints On The Hypothetical Axion Mass And Decay Constant (UPDATED)

What is an axion?

The axion is a hypothetical massive pseudoscalar boson with zero electric charge whose existence would (1) suppress CP violation in strong force interactions, despite the fact that there is naturally a term in the strong force equations that could introduce and quantify CP violation in these interactions, and (2) provide a non-thermal dark matter particle.

The axion was originally proposed by Peccei and Quinn in a 1977 paper.  The only experiment claiming to have observed an axion did so in 1986, but was subsequently discredited, and the conclusion was retracted by the group that did the original experiment in 2007.  Other experimental efforts to observe an axion over the last three and a half decades have produced null results.

There are several other ways that the strong CP problem could be resolved: e.g., (i) the up quark could actually have a near zero mass, (ii) the physical constant related to CP violation by the strong force could be zero simply because that is the arbitrary value of that experimentally determined physical constant (an "unnatural" choice, but not one that violates any fundamental principle of the Standard Model), or (iii) my own conjecture that this is related to the fact that the gluon has a zero rest mass.

There are also many other possible dark matter candidates.

Thus, neither of these considerations demands the axion as a solution, although the axion could be a solution to one or both of these problems if indeed it does exist.

The authors of the new paper cited below also raise the possibility that the axion addresses tension between experiment and the Standard Model prediction related to the neutron electric dipole moment.  As I discuss in the footnote below, I initially understood them to be asserting that such a tension exists, but after corresponding with the authors it has become clear that they did not intend to make that assertion, and that the way a couple of key sentences in the introductory section of the paper were phrased was the source of the misunderstanding.

The New Constraints From Big Bang Nucleosynthesis On Axion Properties

A new pre-print by Blum, et al., examines observational limits on the axion mass and axion decay constant due to Big Bang Nucleosynthesis, because the role that the axion plays in strong force interactions would impact the proportions of light atoms of different types created in the early universe.

The study concludes that (1) the product of the axion mass and axion decay constant must be approximately 1.8*10^-9 GeV^2, and (2) that in order to solve the strong CP problem and be consistent with astronomy observations, the axion mass must be between 10^-16 eV and 1 eV (with a 10^-12 eV lower bound likely, due to the hypothesis that the decay constant is less than the Planck mass).  The future CASPEr2 experiment could place a lower bound on the axion mass of 10^-12 eV to 10^-10 eV and would leave the 1 eV upper bound unchanged.

Other studies argue that the axion decay constant must be less than 10^9 GeV (due to constraints from observations of supernovae SN1987A) and propose an axion mass on the order of 6 meV (quite close to the muon neutrino mass if one assumes a normal hierarchy and a small electron neutrino mass relative to the muon neutrino-electron neutrino mass difference) or less.  Estimates of the axion mass in the case of non-thermal production of axions, which are favored if it is a dark matter particle, are on the order of 10^-4 to 10^-5 eV.  There are also order of magnitude estimates of the slight predicted coupling of axions to photons.

Other studies placing observational limitations on massive bosons as dark matter candidates apply only to bosons much heavier than the axion.

Cosmology Implications Of Axion Dark Matter

The observational constraints on the axion mass put it in the same vicinity as that of heavy neutrinos, which would be considered "hot dark matter."  But, the mass-velocity dispersion relation used to distinguish "cold", "warm" and "hot" dark matter, which refers to the velocity dispersion of a dark matter candidate, does not apply to dark matter candidates, like axions, that are not thermal relics.  Thus, axion dark matter could have a velocity dispersion consistent with "cold" or "warm" dark matter despite having a mass that would make it "hot dark matter" (which is ruled out by observational evidence) if it were a thermal relic.

However, since axion dark matter is not a thermal relic, it cannot be assumed to produce a cosmology consistent with the empirically validated six parameter lambda CDM cosmology model whose parameters were recently refined by the Planck satellite's cosmic background radiation observations.  This model assumes thermal relic dark matter, although it is not very specific regarding its properties.  The new paper does not address the cosmology implications of a non-thermal relic dark matter candidate, beyond its impact on Big Bang Nucleosynthesis if the axion mass and decay constant are outside a specified range.

Footnote on the Neutron EDM

The new paper claims that there is tension between the experimental measurement of the neutron electric dipole moment and the Standard Model expectation for its value, stating at page 1 that:
The QCD theta term . . . induces a neutron electric dipole moment (EDM) approximately equal to 2.4*10^-16 theta*e*cm [5 - Pospelov and Ritz, 83 Phys. Rev. Lett. 2526 (1999)] that is in tension with experiment for theta greater than 10^-10 [6 - Baker et al., 97 Phys.Rev.Lett. 131801 (2005)][7 - Harris, et al., 82 Phys.Rev.Lett. 904 (1999)].
In contrast, Wikipedia cites a current limit of the neutron EDM of less than 10^-24 e*cm, and a Standard Model expectation of 10^-32 e*cm. Wikipedia also states that the neutron EDM constrains the strong CP violation theta angle to be less than 10^-10 radians, which is not a tension between experiment and the Standard Model expectation.

Harris (1999) states that the neutron EDM is less than 6.3*10^-26 e*cm at the 90% confidence level and does not set forth a minimum value for the neutron EDM as the Blum, et al. (2014) preprint claims.

Pospelov and Ritz (1999) does contain a 2.4*10^-16 theta*e*cm result to a 40% precision.  Their paper also states that this translates into a limit on the strong CP violation theta angle to be less than 3*10^-10, but does not support an experimental minimum for theta of 10^-10 as the Blum, et al. (2014) preprint claims.

Pospelov, Ritz and Huber, in a 2007 preprint, cite, at endnote 28, the Harris and Baker papers above, just as the new paper by Blum, et al. does, for the proposition that the neutron EDM is less than 3*10^-26 e*cm, which again fails to support the claim that there is experimental tension with the Standard Model prediction.  They also note that experiments are underway which could bound the neutron EDM experimentally to less than 10^-28 e*cm, closing two of the six orders of magnitude between the Standard Model prediction and the experimental constraint (and placing a considerably more strict limitation on the strong CP violating angle theta); of course, the new experiments could instead actually measure the neutron EDM and discover new physics.

Baker et al., 97 Phys.Rev.Lett. 131801 (2006) which is cited by both Wikipedia (with the correct year) and by the preprint (with the wrong year but the same journal page number reference) states that the 90% confidence interval upper limit on the neutron EDM is 2.9*10^-26 e*cm and does not undertake to convert that figure into a constraint on the strong CP angle theta or suggest any floor on either the neutron EDM or theta.  Baker, et al. responded to a comment on their paper and stood by their results in a Response of 2007.  Therefore, it appears that the new paper's assertion is simply incorrect.

Thus, Blum, et al. (2014) either did not understand their sources or the state of the research when they claimed that there is experimental support for the existence of an axion based upon tension between the Standard Model prediction of the neutron EDM and experimental data showing that theta is not less than 10^-10, based upon Pospelov and Ritz (1999), Harris (1999) and Baker (sic. 2005), or, alternately, did not write what they meant to say in the sentences quoted above.

I would very much hope that this will be corrected in the preprint prior to publication, and I have corresponded with the authors on this point asking them to correct the apparent error.  I will update this post if there are new developments on this point.

UPDATE January 29, 2014:  After exchanging e-mail with the authors, it has become clear that the issue is one of clarity of phrasing, rather than intended meaning.  They state, in a prompt reply to my e-mail endorsed by the authors collectively, that:

There seems to be a slight misunderstanding here: what we say in our preprint is that the QCD theta term would violate existing experimental upper bounds on the neutron EDM, if theta was larger than ~10^-10. We do not suggest that theta is larger than 10^-10. Thus, we seem to be in agreement with your point of view. We're sorry if this point was not sufficiently clear in the paper.

Monday, January 27, 2014

What's Left From The Junk Heap of Failed Dark Matter Models?

Simple Dark Matter Models Have Failed

Lots of phenomena in astronomy are assumed to be the product of dark matter, which is not made out of the same kinds of particles that are found in ordinary matter and which interacts with ordinary matter predominantly via gravity alone.  For example, dark matter is invoked to explain why galaxies rotate at speeds inconsistent with their visible mass.

Very simple collisionless warm dark matter (keV scale) and cold dark matter (many GeV scale) models are sufficient to explain the dark matter phenomena observed at the cosmology level.  Thermal relic "hot dark matter" with mass in the neutrino range has long been ruled out because it would create a universe that has far less large scale structure than is observed.  But, warm dark matter and cold dark matter models have historically been less accurate at describing phenomena at the scale of an individual galaxy, and cold dark matter models predict that there would be far more satellite galaxies of galaxies like the Milky Way than are observed.

A pure collisionless cold dark matter model leads to a cuspy NFW dark matter halo shape (named with the initials of the three scientists who formulated it) which is inconsistent with observational evidence.

Much lighter dark matter particles, called warm dark matter, solve the satellite galaxy problem, fit a variety of other observational constraints in a narrow mass window around 2-3 keV, and also produce a less severely cuspy dark matter halo shape.  But, improved observations of the movements of visible matter in galaxies, used to more precisely infer the shape of the dark matter halos that would be necessary to produce those movements, still rule out even simple collisionless warm dark matter models.  These models still produce halos that are too cuspy.

But, there is basically no thermal relic form of dark matter that is consistent with observational evidence and can fit the bill of cold dark matter for the highly successful lambda CDM cosmology model, unless it is also accompanied by a new force that acts between particles of dark matter and is mediated by an MeV scale mass force carrying boson that creates a Yukawa potential between dark matter particles.

Thus, without a major overhaul of the foundations of cosmology, both a new fundamental particle and a new force with its own particle are necessary, at a minimum.  A complete formulation of physics in a particle model, if it is possible at all, would also require a hypothetical graviton tensor boson, and possibly a dark energy scalar particle and an inflaton scalar particle as well - although it is possible that the Higgs field, dark energy phenomena, and cosmological inflation (or at least a couple of them) could have a common scalar particle source.

Finally, A Dark Matter Halo Model That Works

On Friday, I reviewed a new paper by Toky Randriamampandry and Claude Carignan, entitled "Galaxy Mass Models: MOND versus Dark Matter Halos", which recapped two important developments in dark matter research and put them to the test, consistently fitting galactic rotation curves better than MOND models, which modify the laws of gravity in weak gravitational fields to make them weaken more slowly than they do in Newtonian gravity and General Relativity.

This wasn't an entirely fair comparison. The dark matter halo model used two parameters, while the MOND model used just one. The test assumed an empirically fixed mass to light ratio, which is not part of the MOND paradigm, for which the M/L equivalent is an output rather than an input of the model (indeed, predictions regarding mass to light ratio equivalents were key validations of the early MOND models). And, the galaxies examined in which there were significant discrepancies from the MOND prediction were dwarf galaxies with a very spatially extended region in which there was a transition from Newtonian to non-Newtonian behavior with respect to visible mass. This raises the possibility that the MOND interpolation function, an ad hoc formula for transitioning between the Newtonian and non-Newtonian MOND regimes, rather than the core MOND concept, may have been at fault for the poor fits. In most galaxies, the transition from the MOND to non-MOND regime is sufficiently abrupt that rotation curve fits are indifferent to the exact form of the interpolation function. But, in these kinds of dwarf galaxies, where the transition is much more gradual, the interpolation function, which there has been little empirical work to refine because rotation curve fits aren't influenced much by it in other kinds of galaxies, is much more important. So, a poor MOND fit could be due to problems with a part of the theory that even its supporters acknowledge needs more work.

But, what are the two important developments?

First, there is a dark matter halo model that can produce consistently accurate rotation curves. In a nutshell, this is a pseudo-isothermal dark matter halo model in which the core radius and the central halo density are not independent of each other: the surface density of the halo, which is calculated as the product of these two parameters, has a constant value of approximately 120 solar masses per square parsec (two previous papers estimated the value of this constant at 100 and 150, respectively).

We actually know, from precision observations of the rotation curve of the Milky Way galaxy, that this spherically symmetric model isn't correct. Instead, the major axis of the dark matter halo, which coincides with the galaxy's axis of rotation, is a bit longer than the radial axis of the dark matter halo that is parallel to the spiral arms of a spiral galaxy. But, the deformation of the halo shape from purely spherical to a modestly deformed ellipsoid is pretty modest (e.g. the ratio of the major axis to the minor axis of the halo is probably less than 2.5 to 1, although it is more than 1 to 1).

Of course, this isn't a complete dark matter model. It doesn't tell you what kind of dark matter particles and dark matter particle interactions are necessary to produce that kind of halo. But, it provides a mathematically well defined and empirically validated intermediate target for people trying to determine what kind of dark matter particle models could produce such halos. And, of course, any particle model that can produce the right halo in N-body simulations also needs to be able to fit other constraints from scales larger than individual galaxies, satellite galaxy formation rate limitations, and particle physics. For example, the mass density of dark matter in the universe needs to be a bit more than five times that of ordinary baryonic matter, it must have neutral electric charge, and it must not appear in W or Z boson decays at energies possible to generate at the LHC so far.

Second, it is possible to reduce this dark matter halo model to a single parameter without undue loss of accuracy. One of the basic consequences of the fact that MOND can make a reasonable fit to the rotation curve of almost any galaxy with a single parameter is that any dark matter halo model must also be able to do so.
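
A sketch of what that one-parameter model looks like in practice, assuming the roughly 120 solar mass per square parsec universal surface density quoted above (the numbers are illustrative, not fits to any particular galaxy):

```python
# A sketch of the one-parameter halo described above: a pseudo-isothermal
# profile rho(r) = rho0 / (1 + (r/r_c)^2) with the product rho0 * r_c pinned
# to a universal surface density, leaving the core radius r_c as the single
# free parameter.
import math

G = 4.30091e-3   # pc * (km/s)^2 / M_sun, gravitational constant
SIGMA0 = 120.0   # M_sun / pc^2, the (assumed) constant rho0 * r_c

def v_halo(r_pc, r_c_pc):
    """Halo contribution to the circular velocity (km/s) at radius r.

    From the enclosed mass of the pseudo-isothermal profile:
    v^2 = 4 * pi * G * SIGMA0 * r_c * (1 - arctan(x) / x), with x = r / r_c.
    """
    x = r_pc / r_c_pc
    v_sq = 4 * math.pi * G * SIGMA0 * r_c_pc * (1 - math.atan(x) / x)
    return math.sqrt(v_sq)

# Example: a dwarf-galaxy-scale halo with a 1 kpc core radius.
for r in (500, 1000, 2000, 5000, 10000):  # pc
    print(f"r = {r:6d} pc: v_halo ~ {v_halo(r, 1000):.1f} km/s")
```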

Trying To Fit The Right Halo To Specific Kinds Of Dark Matter Particles

Finding a particle with the right mass and interactions to produce this kind of halo is another problem.

No Known Forces Give Rise To Cold Dark Matter-Ordinary Matter Interactions

We know that no particles that could produce the right kind of dark matter halo are produced in the decays of W and Z bosons, ruling out, for example, any neutrino-like particle with a mass of 45 GeV or less.  In other words, no light dark matter candidate can be "weakly interacting".

We also know that direct detection of dark matter experiments such as XENON rule out essentially all of the cold dark matter mass parameter space (below 10 GeV to several hundreds of GeV, with the exclusion most definitive at 50 GeV) down to cross-sections of interaction on the order of 10^-43 to 10^-45 cm^2, which is a far weaker cross-section of interaction than the neutrino has via the weak force.  If dark matter does interact via the weak force, it differs from all other weakly interacting particles, which have much higher weak force cross-sections of interaction and integer or simple fractions of integer weak force charges.

XENON also places strong limits on interactions between ordinary photons and "dark photons".

We know that dark matter has zero net electric charge (if dark matter is composite, its components could have electric charge) and is not produced or inferred in any strong force interactions observed to date in collider experiments.

Taken together, these facts rule out any kind of interaction between cold dark matter and ordinary matter via any recognizable version of the three Standard Model forces (electromagnetic, weak and strong).

Of course, by hypothesis, dark matter and ordinary matter interact via gravity just like any other massive particles.

Thus, interactions between dark matter and ordinary matter other than via gravity are strongly disfavored.

Any non-gravitational interactions between ordinary matter and dark matter must be very, very weak and involve some kind of new force, or must involve radically new manifestations of an existing force.  For example, they might be transmitted via some kind of composite particle, by analogy to the nuclear binding force in atomic nuclei, carried mostly by pions, that is derivative of, but much weaker in this manifestation than, the strong force that binds quarks within hadrons.

I am not aware of any current models of such forces that produce the observed dark matter halos.  The trick is to figure out how to make the interaction with ordinary matter weak enough to make it "nearly collisionless," as it must be to fit cosmology models, while strong enough to in some manner track the distribution of ordinary matter to a greater extent than gravity requires it to match that distribution.  Theoretical interest has focused on interactions between the sectors via the Higgs boson, interactions between "dark photons" and ordinary Standard Model photons, and very slight weak force interactions that are somehow suppressed when they involve dark matter.

Completely Collisionless Dark Matter Is Inconsistent With Observational Evidence 

We know that purely collisionless dark matter (i.e. dark matter that interacts with other dark matter only via the gravitational force), that has a particular mass anywhere from the keV range to the TeV+ range produces cuspy halos inconsistent with observational evidence.

We know that multiple kinds of collisionless dark matter simultaneously present in the universe at the same time produce worse fits to the data than single variety of collisionless dark matter models.

Collisionless bosonic dark matter, as well as fermionic collisionless dark matter, is likewise excluded over a wide range of parameters.

Thus, the purest form of sterile neutrino, with a particular mass and no non-gravitational interactions at all, is ruled out by observational evidence from the shape of dark matter halos.

Simple Self-Interacting Dark Matter Models Still Fail

We know that self-interactions between dark matter particles with cross-sections of interaction on the order of 10^-23 to 10^-24 greatly improve the fit to the halo models observed (self-interactions on the order of 10^-22 or more, or of 10^-25 or less, clearly do not produce the observed halos). Notably, this cross-section of self-interaction is fairly similar to the cross-section of interaction of ordinary matter (e.g. helium atoms) with each other.  So, if dark matter halos are explained by self-interaction, the strength of that self-interaction ought to be on the same order of magnitude as electromagnetic interactions.

But, our observations and simulations are now sufficiently precise that we can determine that ultimately, a simple constant coupling constant between dark matter particles or velocity dependent coupling constant between dark matter particles fails to fit the observed dark matter halos.

Generically, these models generate shallow spherically symmetric halos which are inconsistent with the comparatively dense and ellipsoidal halos that are observed.

Sophisticated Self-Interacting Dark Matter Models Might Work

Next generation self-interacting dark matter models look at a more general Yukawa potential generated by dark matter to dark matter forces with massive force carriers (often called "dark photons") that have masses which empirically need to be on the order of 1 MeV to 100 MeV (i.e. between the mass of an electron and a muon, and less than the lightest hadron, the pion, which has a mass on the order of 135-140 MeV) to produce halos that are a better fit to the dark matter halos that are observed.  Sean Carroll was a co-author on one of the early dark photon papers in 2008.

There have been some efforts to constrain these models directly in the case where photons and dark photons are allowed to interact to some extent (more recently here).  But, since these experimental searches depend upon dark photons decaying into particles outside the dark sector, and generally, decays of force carrying particles only involve particles that the force carrying particle couples to, and since photons are only known to couple to electric charge and nothing else, the prospects of this experimental search producing anything other than a null result are dim.  Indirect searches, based upon precision measurements of the trajectories of visible matter and understandings of how invisible ordinary matter is distributed around that visible matter, that are used to infer dark matter distributions in a wide variety of systems observed by astronomers, are likely to be more fruitful in pinning down the properties of dark photons than direct detection experiments.

This force could be scalar (like Newtonian gravity or electromagnetism without polarization), pseudoscalar (like pions) or vector (like photons, gluons, W bosons and Z bosons).  This force could have a single repulsive charge (like imaginary number valued mass in Newtonian gravity, or like a universe made entirely of electrons without any protons), or could have positive and negative charges akin to the electric charge.  Often it is modeled as a second U(1) group interaction, like the U(1) electromagnetic force, with a massive boson much like the weak force but lighter than the W and Z bosons.  In contrast, the strong force is SU(3) and the weak force is SU(2) (strictly speaking the unified electroweak force is SU(2) * U(1), but not necessarily neatly separated into weak and electromagnetic components).

Still, the bottom line is that to explain dark matter, one needs at least a new dark matter fermion and a new massive boson carrying a new force.

The apparent relationship between the known particle masses and the Higgs vacuum expectation value in which the sum of the square of the mass of each particle that obtains its mass via the Higgs boson equals the square of the Higgs vev disfavors new heavy particles that gain their mass via the Higgs mechanism.  But, these measurements are not so precise that they could disfavor new light particles that gain their mass via the Higgs mechanism.  Current experimental uncertainties in this equivalence could accommodate both a new massive boson of 100 MeV and a new massive fundamental fermion of up to about 3 GeV, so both particles could couple to the Higgs boson and obtain their mass entirely from interactions with it, even though they don't couple to the other Standard Model forces, consistent with current knowledge from colliders (which cannot yet quantify if there are any "missing" Higgs boson decays from new long lived electrically neutral particles).
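
A rough version of the sum-of-squares bookkeeping described in this paragraph, using the central values quoted earlier in this post (lighter fermions contribute negligibly at this precision):

```python
# A rough check of the claim above: the sum of the squared masses of the
# fundamental particles that get their mass from the Higgs mechanism,
# compared with the square of the Higgs vev (~246 GeV).
masses = {              # GeV, central values quoted in this post
    "top": 173.2, "Higgs": 125.6, "Z": 91.1876, "W": 80.385,
    "bottom": 4.18, "tau": 1.77682, "charm": 1.275,
}
vev = 246.22  # GeV

total = sum(m ** 2 for m in masses.values())
print(f"sum of m^2 = {total:,.0f} GeV^2, vev^2 = {vev ** 2:,.0f} GeV^2, "
      f"ratio = {total / vev ** 2:.4f}")
# With these central values the ratio is ~0.999, leaving room (in
# quadrature) for only a few GeV of additional Higgs-coupled mass.
```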

Creating and Annihilating Dark Matter: Thermal Relics v. Other Sources

One of the things we know about dark matter is its overall abundance, and its expected mass density in the vicinity of Earth.  Any model that calls for too much or too little dark matter to be created is disfavored.

The generally very successful six parameter lambda CDM cosmology model, which accurately describes a wide variety of properties of the observed universe, such as its cosmic background radiation and the relative frequency of stars at different redshifts, assumes that dark matter is nearly collisionless and has an origin as a thermal relic consistent with a mass of ca. 2 keV or more (i.e. warm dark matter or cold dark matter).

The assumption that dark matter is largely a thermal relic (i.e. that the number of dark matter particles "freezes out" at a certain point, when the temperature of the universe becomes cool enough) tightly relates the mass of a dark matter particle to its average velocity and kinetic energy, and places a floor on dark matter mass.

In models where dark matter has some self-interaction via a massive boson creating a Yukawa potential, thermal relic dark matter abundance and velocity is still a function of dark matter particle properties.  But, in that case, it is a product of the coupling constant of the self-interaction force, the mass of the force carrying boson and the mass of the dark matter particle itself, rather than merely the mass of the dark matter particle.  This creates a three dimensional parameter space window for combinations of these parameters in the model.
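
The scaling at work can be sketched with the standard thermal relic rule of thumb; the benchmark cross section below is the usual canonical figure, and the function is a cartoon of the scaling rather than a solution of the underlying Boltzmann equations.

```python
CANONICAL_SIGMA_V = 3e-26  # cm^3/s; the benchmark value that roughly
                           # reproduces the observed dark matter abundance

def relative_relic_abundance(sigma_v_cm3_s):
    """Relic abundance relative to observation; Omega scales ~ 1/<sigma*v>."""
    return CANONICAL_SIGMA_V / sigma_v_cm3_s

# In a self-interacting model, <sigma*v> is assembled from the dark
# coupling constant, the mediator mass and the dark matter mass, so a
# whole surface in that three dimensional parameter space reproduces
# the same observed abundance.
print(relative_relic_abundance(3e-26))  # 1.0 -> right amount of dark matter
print(relative_relic_abundance(3e-25))  # 0.1 -> annihilates away too much
print(relative_relic_abundance(3e-27))  # 10  -> overcloses the universe
```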

One can create a dark matter model where dark matter abundance is not a function of the moment at which it freezes out as a thermal relic, and doing so greatly loosens this constraint on the parameter space for dark matter particles.

For example, axion dark matter, which has a mass so low that in most cases it would be "hot dark matter" rather than "warm dark matter" or "cold dark matter" if it was a thermal relic, might be possible to reconcile with observation if it was created non-thermally, for example, via strong force interactions that create and destroy it at a rate consistent with the current observed abundance of dark matter in the universe.

But, any dark matter model with a dark matter candidate that is not a thermal relic makes it necessary to radically rethink and overhaul the lambda CDM cosmology model from first principles and to reconcile the revised cosmology model with what is observed.  Given how many pieces of the puzzle of the universe's cosmology come together nicely if it is possible merely to find a suitable thermal relic dark matter candidate that can reproduce the galaxy scale properties of dark matter phenomena, as well as the large scale structure dark matter phenomena that are observed, thermal relic dark matter candidates are particularly attractive.

Analysis

The strong possibility that dark matter is entirely sterile with respect to Standard Model particles and interactions (apart, perhaps, from the Higgs boson) dramatically limits the usefulness of direct dark matter detection experiments.  The gravitational impact of individual dark matter particles is simply far too slight to be detected experimentally at any time in the foreseeable future.  Direct dark matter detection experiments are likely to do nothing more than rule out all forms of dark matter that have meaningful non-gravitational interactions with ordinary matter.

But, the prospects for inferring the properties of one or more dark matter particles from their aggregate distribution, as inferred from their collective gravitational effects, are looking good.  The simplest models of the dark sector don't work, but a straightforward two particle model, with one fermion and one massive boson whose properties are quite tightly bound by observational evidence, may suffice to do the job of explaining all of our observations.  If this were accomplished, almost all of the "what" questions in physics would be answered, even if some of the "why" questions remained open, and there might be a few loose ends with observable consequences only capable of being seen in extremely esoteric circumstances (e.g. are there unstable, second and third generation dark matter fermions akin to muons and tau leptons?).

Footnote

I am writing most of this from memory and will try to track down references to specific points later, probably as unacknowledged added links to this post.

Friday, January 24, 2014

Three Decades Later Dark Matter Models Catch Up To MOND

In 1983, Milgrom came up with a simple empirical formula (MOND, for Modified Newtonian Dynamics) that reasonably successfully fit essentially all galactic rotation curves, both of all types already observed and of all types that would later be discovered, with a single empirical parameter, a0, with a value of approximately 1.21 * 10^-10 m*s^-2.  This constant has the dimension of acceleration, and approximately marks the gravitational field strength above which gravity appears to display an ordinary 1/r^2 relationship to the ordinary luminous matter in the galaxy, and below which it appears to display a modified 1/r relationship to that matter.

This constant has subsequently been observed to be on the same order of magnitude as the speed of light times the Hubble constant, and as the square of the speed of light times the square root of the cosmological constant divided by three.
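
These coincidences are easy to check numerically; the Hubble constant and cosmological constant values below are round figures, not precision inputs.

```python
import math

C = 2.998e8        # speed of light, m/s
MPC = 3.086e22     # meters per megaparsec
H0 = 70e3 / MPC    # Hubble constant (~70 km/s/Mpc) in 1/s
LAMBDA = 1.1e-52   # cosmological constant, 1/m^2
A0 = 1.21e-10      # MOND constant, m/s^2

print(f"c * H0            = {C * H0:.2e} m/s^2")                        # ~6.8e-10
print(f"c^2 * sqrt(L/3)   = {C**2 * math.sqrt(LAMBDA / 3):.2e} m/s^2")  # ~5.4e-10
print(f"a0                = {A0:.2e} m/s^2")
# Both combinations land within an order of magnitude of a0; the
# often-quoted version of the coincidence is a0 ~ c*H0 / (2*pi):
print(f"c * H0 / (2 * pi) = {C * H0 / (2 * math.pi):.2e} m/s^2")        # ~1.1e-10
```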

Subsequent research has shown that while the relationship accurately predicts the behavior of galactic scale systems quite well over a wide range of scales, the simple MOND relationship underestimates the inferred missing mass in galactic cluster systems by a factor of several. Milgrom's colleague Bekenstein has successfully generalized the core observation of MOND (which merely generalized Newtonian gravity) into a modification of general relativity called TeVeS (for tensor-vector-scalar gravity). The formula for interpolating between the Newtonian and MOND regimes has not been definitively established empirically, and there is significant uncertainty in the measured value of the MOND constant a0 that approaches almost 50% in measurements derived from the most extremely dark matter dominated HI type dwarf galaxies.

Alternately, some of the apparent uncertainty in fitting the MOND constant to these extreme rotation curves could derive from the use of an insufficiently sophisticated interpolation function between the two gravitational force law regimes.  Because the transition takes place more slowly in these systems than in most other galaxies, they are a rare context that is actually sensitive to the details of this part of naive MOND theory, which has never claimed to have the interpolation function right.
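
For readers curious what an interpolation function actually does, here is a sketch using the so-called "simple" function mu(x) = x/(1+x); the choice of function is an assumption, and other standard choices differ mainly in how sharply the transition happens.

```python
import math

A0 = 1.21e-10  # MOND constant, m/s^2

def mond_acceleration(g_newton, a0=A0):
    """Solve mu(g/a0) * g = g_N for g with the "simple" mu(x) = x/(1+x).

    That choice gives the closed form g = g_N/2 + sqrt((g_N/2)^2 + g_N*a0).
    """
    return g_newton / 2 + math.sqrt((g_newton / 2) ** 2 + g_newton * a0)

# Well above a0 the result is essentially Newtonian; well below it the
# effective acceleration approaches sqrt(g_N * a0), the deep MOND limit.
for g_newton in (1e-8, 1e-10, 1e-12):
    print(f"g_N = {g_newton:.0e} m/s^2 -> g = {mond_acceleration(g_newton):.2e} m/s^2")
```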

MOND and variants on this theme are an alternative to the dominant paradigm that the effects it observes are the product of "dark matter" which is made of particles with properties different from matter made of ordinary atoms. In the dark matter paradigm, these particles (or several types of particles) interact via gravity, but lack electric charge, do not interact via the strong force, and interact via the weak force no more strongly than neutrinos or not at all, although most dark matter particle theories assume that there is some force other than gravity that couples to dark matter. As a result, dark matter is assumed to be nearly collisionless. The standard six parameter lambda CDM cosmology model assumes that dark matter is a thermal relic comprising approximately 27% of the mass-energy of the universe and that it must be made of particles significantly more massive than neutrinos, while just 5% is attributable to ordinary matter (the balance is "dark energy", i.e. due to the cosmological constant). Newly conducted precision observations of a relatively nearby galactic cluster observationally confirm the widely adopted hypothesis that dark matter phenomena are consistent with a pressureless fluid from a general relativity perspective.

Early dark matter models did a dreadfully poor job of accurately predicting the amount of inferred dark matter that would be present in newly measured types of galaxies, and involved many adjustable parameters to describe observed dark matter halo distributions, in addition to requiring the invention of a new kind of beyond the Standard Model dark matter particle of a type never observed (even indirectly, for example, in the form of missing mass-energy) in high energy physics experiments. Of course, without a theory to explain why dark matter was distributed as it was, any dark matter theory is incomplete.

Simple N-body simulations built from first principles on cold dark matter models, such as the Navarro-Frenk-White (NFW) dark matter halo model conceived in 1997, produce "cuspy" dark matter halos that contradict the observational evidence regarding the inferred shape of dark matter distributions. Generically, cold dark matter models produce cuspy halos, and an excessive number of dwarf galaxies, both contrary to the empirical evidence, in computer simulations applying them. Warm dark matter models, with dark matter particles that have masses on the order of 2 keV rather than 10 GeV or more, appear to be more successful in these respects.  But, recent N-body simulations of simple warm dark matter models still display cuspy halos inconsistent with observational data and similar to less exaggerated NFW halos.  Simple warm dark matter models do not solve this problem, and viable models may require some form of feedback from ordinary baryonic matter in the galaxy or self-interactions within the dark sector, for example, to replicate observed results.

Since Milgrom published his MOND theory, dark matter theories have come a long way, as exemplified by a January 22, 2014 pre-print by South African physicists Toky Randriamampandry and Claude Carignan, entitled "Galaxy Mass Models: MOND versus Dark Matter Halos."  Their paper compares the fit of pseudo-isothermal dark matter halo density distribution models with two free parameters to galactic rotation curves against similar calculations using one parameter MOND models, for "15 nearby dwarf and spiral galaxies" that can be measured with unprecedented precision.

This study, and a similar previous one with an overlapping data set, finds that MOND produces good fits (chi squared less than two) for only 60% to 75% of the sample (differences that are probably not statistically significant given the small sample sizes and slightly different methodologies used). It also finds that the two parameter dark matter model, using fixed mass to light ratios established by stellar population models, produces slightly better fits for this data set, which is biased towards the extreme of the range in which MOND fits are possible, if the MOND constant can be individually adjusted for each galaxy and/or best fits within margins of error for other relevant measurements of the galaxies to be fit, such as distance from Earth, are permitted. And, this dark matter model produces significantly better fits than those made simply using the most widely accepted estimate of the MOND constant. Prior studies have produced mixed results on the question of whether "galaxies with higher central surface brightnesses tend to favor higher values of the constant a0," a relationship that the current paper's sample tends to weakly confirm.

Equally interesting, this paper confirms that the two free parameters of a pseudo-isothermal dark matter halo model, core radius and central halo density, are not independent of each other, and that the surface density of the halo, calculated as the product of these two parameters, is approximately constant with a value of about 120 M_sun/pc^2 (two previous observational estimates put the value of this constant at 100 and 150, respectively). Thus, three decades later, dark matter theorists have finally produced an empirical single free parameter dark matter halo model, with a constant whose value is known to a similar precision to that of MOND's single parameter, and which produces comparable or even moderately better fits to galactic rotation curves. Of course, like MOND, the pseudo-isothermal dark matter halo model is still not derived from first principles based upon a particle with particular properties, its constant is not very precisely determined empirically, and it still requires one element of new physics beyond the Standard Model and general relativity.
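
A sketch of this effectively one parameter halo model, with the surface density pinned at the paper's roughly constant 120 M_sun/pc^2 and an illustrative core radius, looks like this; the specific numbers are placeholders.

```python
import math

G_PC = 4.301e-3  # Newton's constant in pc * (km/s)^2 / M_sun

def v_circ_kms(r_pc, rho0_msun_pc3, rc_pc):
    """Circular velocity from the mass enclosed by a pseudo-isothermal halo.

    For rho(r) = rho0 / (1 + (r/rc)^2), the enclosed mass is
    M(<r) = 4*pi*rho0*rc^2 * (r - rc*arctan(r/rc)).
    """
    m_enclosed = 4 * math.pi * rho0_msun_pc3 * rc_pc**2 * (
        r_pc - rc_pc * math.atan(r_pc / rc_pc))
    return math.sqrt(G_PC * m_enclosed / r_pc)

# With the surface density fixed, choosing the core radius fixes the
# central density, leaving a single free parameter per galaxy.
SURFACE_DENSITY = 120.0  # M_sun/pc^2, approximately constant per the paper
rc = 2000.0              # illustrative core radius in pc
rho0 = SURFACE_DENSITY / rc
for r in (1000, 5000, 20000):
    print(f"r = {r:6d} pc -> v = {v_circ_kms(r, rho0, rc):6.1f} km/s")
```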

While these researchers, in my opinion, fail to show as convincingly as they claim that there are now dark matter models that are clearly superior to MOND in the arena where it has historically been strongest empirically, they do demonstrate that there are finally, three decades later, dark matter models comparable to MOND in parsimony and predictive accuracy.

Despite these strong results, however, there is no consensus regarding the right way to model dark matter halos and there is considerable uncertainty regarding how to fit the observational data to any particular model.

On the other hand, some of this lack of consensus has more to do with the participants than the state of the empirical evidence. For example, an atrocious Snowmass preprint on Planning the Future of U.S. Particle Physics in Chapter 4 (The Cosmic Frontier) treats all manner of dark matter candidates which are overwhelmingly disfavored by existing combined observational evidence from astronomy, direct dark matter detection experiments (such as the Xenon experiment) and HEP experiments, such as black holes, WIMPzillas, Q-balls, WIMP Cold Dark Matter, and Strangelet Cold Dark Matter, as credible dark matter candidates, while glossing over more viable possibilities like sterile neutrino warm dark matter. They also exaggerate the necessity of hypothetical axions, proposed by Peccei and Quinn in 1977 to solve the strong CP problem (which is ultimately a "why" problem within the Standard Model, and not a problem of the Standard Model failing to accurately model reality), while failing to even acknowledge decades of fruitless axion searches that should have revealed axions whose properties fit more naive expectations regarding this hypothetical particle. Much of this appears to flow from an ideological commitment to supersymmetry theories, which do not provide any realistic dark matter candidates consistent with observational evidence, since HEP experiments already largely rule out any SUSY dark matter candidates that are light enough to be consistent with the astronomy data.

The reality that Snowmass participants are loath to admit is that many costly existing or imminent dark matter experiments designed to measure dark matter phenomena have exceedingly weak prospects for success, because data obtained by other means already strongly disfavor the existence of a dark matter candidate of the type that the experiment in question could detect if it existed.  In short, these big projects have been rendered obsolete before ever reporting results.


Monday, January 20, 2014

The Collapse of Harappan Civilization Marked By Leprosy, TB and Violence

Indus River Valley civilization cities like Harappa grew from about 2200 BCE to 1900 BCE, only to suffer a collapse that brought down the civilization.  New archaeological studies point to infectious diseases like leprosy and tuberculosis, a shift to a more arid climate, and interpersonal violence often marked by crushed skulls as characteristic of its final days.
Robbins Schug's research shows that leprosy appeared at Harappa during the urban phase of the Indus Civilization, and its prevalence significantly increased through time. New diseases, such as tuberculosis, also appear in the Late Harappan or post-urban phase burials. Violent injury such as cranial trauma also increases through time, a finding that is remarkable, she said, given that evidence for violence is very rare in prehistoric South Asian sites generally.

"As the environment changed, the exchange network became increasingly incoherent. When you combine that with social changes and this particular cultural context, it all worked together to create a situation that became untenable," she said. 
The results of the study are striking, according to Robbins Schug, because violence and disease increased through time, with the highest rates found as the human population was abandoning the cities. However, an even more interesting result is that individuals who were excluded from the city's formal cemeteries had the highest rates of violence and disease. In a small ossuary southeast of the city, men, women, and children were interred in a small pit. The rate of violence in this sample was 50 percent for the 10 crania preserved, and more than 20 percent of these individuals demonstrated evidence of infection with leprosy.
Evidence of class divides and violent deaths is in strong contrast to the relatively egalitarian and peaceful earlier periods of the Indus River Valley civilization, suggesting that its high culture's desirable features were unable to sustain themselves in the face of the pressures created by a prolonged and severe drought.

The study also notes that the expanding scope of Harappan trade brought more infectious agents in touch with its urban areas.

The underlying article which is the source for the brief review above has a very rich and generally well supported context for the finds such as the following (citations omitted):
The Harappan Civilization developed in the context of a semi-arid climate that was pervasive in South Asia for the latter half of the Holocene. Since it was first proposed as a factor in the demise of the Indus Civilization, debates about the role of climate and environmental changes have raged on, but it has become increasingly clear that by 2800 B.C., aridity levels in the Indus Valley were broadly similar to contemporary levels until a period of destabilized environment—fluctuating rainfall, increased seasonality, and accelerated channel migration—began in the Indus Valley after 2000 B.C. From 2200-1700 B.C., a significant rapid climate change event in South Asia saw disruptions in monsoon rainfall and significant changes in fluvial dynamics along the Indus Rivers, including the Beas River. 
Increasing aridity initially occurred in the context of a flourishing interaction sphere that spanned West and South Asia in the third millennium B.C. Historical records from Mesopotamia describe regular trade with ‘Meluhha’ (the Indus Valley) from 2400-2000 B.C. Harappans manufactured etched, biconical carnelian beads, shell, faience and steatite ornaments, ivory, copper, and ceramic items, cotton, silk, jute, cloth, barley, oil and other perishables. Exports focused on raw materials for these products, items which have been recovered from sites around the Persian Gulf region; Harappan seals have also been recovered; and cylindrical seals resembling those from Mesopotamia have been recovered at Indus urban centers as well.
Participation in the interaction sphere facilitated a period of rapid urbanization at the city of Harappa, creating a dense and heterogeneous population in the ancient city. Cities are political, economic, and ceremonial centers that can offer opportunities unavailable in the hinterlands. Technology, production, and consumption transformed Indus society, particularly in period IIIC, when population growth was at its fastest rate: high levels of immigration disrupted the formerly organized settlement pattern; houses in the core areas of the city spilled over onto the streets and ‘suburban’ areas sprang up on low mounds to the west and northwest of the city center. 
After 1900 B.C., in the Late Harappan phase, population density was diminished and settlement focused largely in the core areas of the city. Declining sanitation conditions and an increasingly disorganized settlement plan indicate disruptions to authority were systemic. Disruptions in the exchange network also occurred after 2000 B.C. at a time when West Asian trading partners were responding to their own rapid climate change event. At this point, Magan and Dilmun are mentioned more frequently in Mesopotamian writings while references to Meluhha largely disappear and material evidence of trade interactions declines. Large-scale depopulation of Indus cities in the Late Harappan phase weakened Indus society. Late Harappan settlements flourished in Gujarat and Rajasthan while only a handful of settlements remained occupied in the Beas River Valley.

These finds include those from Cemetery H, which corroborates and dates a transition in funerary practices closely associated with the appearance of Indo-Europeans in the region that is mentioned in early Rig Vedic materials, which are some of the earliest written materials from the region.

Ultimate source: Gwen Robbins Schug, K. Elaine Blevins, Brett Cox, Kelsey Gray, V. Mushrif-Tripathy. "Infection, Disease, and Biosocial Processes at the End of the Indus Civilization." PLoS ONE 8(12): e84814 (2013). DOI: 10.1371/journal.pone.0084814

Friday, January 17, 2014

How Long Do Exotic Particles Last?

Exotic hadrons and higher generation leptons are very short lived, and most are not found in Nature, or anywhere outside extraordinarily expensive high energy physics laboratories.  Their lifetimes are summarized below.  

How Does The Standard Model Determine Mean Lifetimes For Hadrons?

In principle, all of these mean hadron lifetimes ought to be possible to calculate from first principles using Standard Model physics and its 26 or so experimentally measured constants (i.e. the four CKM matrix parameters, the four PMNS matrix parameters, the three fundamental force coupling constants, and the fifteen experimentally measured fundamental particle masses, plus constants like the speed of light, Planck's constant, and pi, which are not generally considered to be "Standard Model" parameters, in particular).

Generally, the essence of the way that this is done is to determine the potential strong and weak force decays available to each hadron given the relevant conservation laws, assign a time estimate to each possible decay path that leads to decay products with lighter combined rest mass, express that number in the form of a decay width, and then add up all of the available path specific decay widths in the correct way so that the total decay width can be determined and converted back to a mean lifetime.

In general, the more mass-energy conservation permitted decays are available, the faster a particle will decay.  The decay width of each available strong force decay is generally about six or seven orders of magnitude larger than that of each available weak force decay, so the mix of available strong force and weak force decays dramatically influences the rate at which decays take place.
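
A toy version of this bookkeeping, using invented partial widths rather than measured ones, illustrates both the summation and the strong-weak hierarchy:

```python
HBAR_GEV_S = 6.582e-25  # reduced Planck constant in GeV*s

def mean_lifetime_s(partial_widths_gev):
    """Mean lifetime in seconds: tau = hbar / sum of partial widths."""
    return HBAR_GEV_S / sum(partial_widths_gev)

# Invented partial widths for a hypothetical hadron: two open strong
# force channels and one weak force channel, six to seven orders of
# magnitude apart, as described above.
strong_channels = [1e-2, 5e-3]  # GeV
weak_channels = [1e-9]          # GeV

print(mean_lifetime_s(strong_channels + weak_channels))  # ~4e-23 s
print(mean_lifetime_s(weak_channels))  # ~7e-16 s if only weak decays open
```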

To do this from first principles for any given particle would require you to first calculate the masses of each of the roughly 700 possible ground state hadrons and many more possible excited state hadrons to determine which decay paths are permitted by mass-energy conservation; but with the crutch of experimentally measured masses for all or almost all of the hadrons which could plausibly be in the decay paths for heavier hadrons, this task becomes much more manageable.

In the case of an undiscovered particle, the physicist must first predict the mass of the particle itself, and then examine all decay channels fitting known potential decay products that are permitted by conservation laws, but must finally consider whether there are any other undiscovered particles which could plausibly be a decay channel for the undiscovered particle being evaluated and, if so, must estimate the masses of these particles as well.

Mass estimates for undiscovered particles can start with simple extrapolation of patterns from the masses of particles that have some similarities with the target particle, as Gell-Mann did back in 1964 with the Omega baryon, and then can be refined by first principles calculations using tools like lattice QCD, which are much harder to do, but provide a more rigorously supported justification for the predicted value if done with sufficient precision.

In practice, all of the values below are experimentally measured values, because it is currently easier to obtain experimental observables, like particle masses, than it is to calculate them with precision using QCD.

The Seven Stable and Metastable Particles (Lifetimes Greater Than 10^-5 s)

Only seven kinds of composite or fundamental subatomic particles have a mean lifetime of more than 10^-5 seconds (i.e. a hundred-thousandth of a second).

The Two Stable Or Metastable Spin 1/2 Baryons

*The proton (stable). This is a spin 1/2 baryon made up of two up quarks and one down quark.  This is the lightest possible baryon because less binding energy is required to bind a spin 1/2 baryon than a spin 3/2 baryon, and because it has the lightest possible spin 1/2 quark content, since the Pauli exclusion principle makes it impossible to have a three up quark baryon with spin 1/2.  Furthermore, conservation of baryon number prohibits decays into lighter mesons with equivalent quark contents (something that is far less problematic for meson decays since mesons have zero baryon number).

*The neutron (which is stable when bound in a stable atomic nucleus, but has a mean lifetime of 880 seconds as a free particle).  This is a spin 1/2 baryon made up of one up quark and two down quarks.  Neutrons can decay to protons via beta decay, which involves the weak force, but not via the strong force: a neutron is only slightly heavier than a proton, because the binding energy of the two is very similar and the down quark is only slightly heavier than the up quark (even though this difference is muddied by the interplay of the constituent quark masses and the binding energy), leaving no available strong force decays that conserve baryon number.

The Five Stable Fundamental Particles

*The electron (stable). This is a fundamental particle in the Standard Model.  As the lightest charged lepton, it cannot decay into anything lighter while still conserving charge and lepton number.

*The three kinds of neutrinos (whose stability is limited as the three neutrino flavors, electron, muon and tau, oscillate between each other in a process that is not yet fully understood). These are fundamental particles in the Standard Model.  Neutrino oscillations conserve charge, since all neutrinos have zero electric charge, and conserve lepton number.  Oscillations from lighter neutrino mass states to heavier ones require a conversion of energy into mass, but because all three neutrino types are so light, it doesn't take much kinetic energy to make an oscillation possible, and neutrinos often have high kinetic energy relative to their rest mass.

*The photon (stable until it hits a charged particle). This is a fundamental particle in the Standard Model.  Since it has zero mass to start with, it can't decay into anything else with less rest mass, although it can have significant electromagnetic energy that can be converted into charged particle-charged antiparticle pairs in the right circumstances.

The Four Most Stable Exotic Particles (Lifetimes Less Than 10^-5 s And Greater Than 10^-9 s)

The muon and three kinds of spin zero mesons (and their antiparticles) have mean lifetimes of more than 10^-9 seconds (i.e. a billionth of a second).

The Muon, A Fundamental Particle

The mean lifetime of a muon, the second generation electron, which is a fundamental particle in the Standard Model, is on the order of 10^-6 seconds (i.e. a millionth of a second). This is about 100 times as long as the three longest lived types of mesons discussed below.  It can decay only via the weak force.

The Three Most Stable Mesons

The charged pion, made of an up quark and an antidown quark, the charged kaon, made of an up quark and an antistrange quark, and the long form of the neutral kaon (a linear combination of the down quark-antistrange quark state and its antiparticle; the orthogonal combination, the short neutral kaon, has a much shorter mean lifetime) all have mean lifetimes on the order of 10^-8 seconds.

The Many More Ephemeral Exotic Particles (Lifetimes Less Than 10^-9 s And More Than 10^-25 s)

About a hundred other kinds of hadrons, tau leptons (i.e. third generation electrons), top quarks, W bosons and Z bosons all have mean lifetimes of less than a billionth of a second.  Gluons are also effectively very short lived.

The Six Most Ephemeral Fundamental Particles

A tau lepton (i.e. a third generation electron) has a mean lifetime on the order of 10^-13 seconds, which is similar to the longer lived B mesons, D mesons, and spin-3/2 baryons, and is about 100,000 times shorter than that of the longest lived mesons.  It decays much faster than the muon because its higher rest mass makes many more decay channels available to it than are available for the muon.  For example, a tau is heavy enough to decay to a muon, an electron, or up to four pions (charged and neutral combined), while a muon, which is lighter than any pion, can decay only to an electron (plus neutrinos).  Decays involving muons and pions make up much of the tau's branching fractions, and it is also heavy enough to have a significant minority of decays that involve kaons, which are also heavier than muons.  Tau leptons are also much lighter than W bosons, so the mass of the virtual W boson in tau decay does not place an upper boundary on its decay products.

The Higgs boson's mean lifetime has not been measured experimentally, but is predicted in the Standard Model to be about 10^-22 seconds - similar to that of many hadrons with aligned spins, and about 1,000 times as long as that of the top quark, W boson and Z boson.

Gluons are in principle as long lived as photons, but in practice are only exchanged between color charged objects at very short range while moving at the speed of light, so they are in existence for only a time period on the order of 10^-24 seconds, and certainly far less than 10^-9 seconds.

The mean lifetime of a top quark (i.e. the third generation up type quark) is about 5*10^-25 seconds, which is about ten times shorter than the shortest lived hadron. And, since theory dictates that this time period is too short for hadronization to occur, all hadrons should have longer mean lifetimes than the top quark.

The W boson and Z boson have mean lifetimes of about 3*10^-25 seconds, i.e. about 40% shorter than that of the top quark (which makes sense, because W bosons are what makes top quark decays possible).

Decays of heavy fundamental particles are sensitive to the existence of undiscovered fundamental particles which could provide decay paths for the heavy particles, but only if the undiscovered fundamental particles have some quantum number that is present in the original particle or can produce pairs of particles in the end state that cancel each other out with respect to this quantum number.  For example, a quark with lepton number zero can produce leptons as decay products, so long as they come in lepton-antilepton pairs.

Of course, examining particle decay widths as a means of detecting new particles only works if the decaying particle's decay widths can be measured fairly accurately.  Often, relative decay width measurements are not adequate for this task, because the existence of a new particle providing a new decay path doesn't necessarily materially change the relative frequencies of the other available decay paths of the particle.  Also, this technique is less sensitive to low branching ratio decay paths, which often have the most massive sets of decay products and are therefore often the hardest to detect.

General Trends For Hadrons (i.e. Mesons and Baryons)

In general, hadrons in which all of the component quarks have aligned spins (spin 1 mesons and spin 3/2 baryons) are much less stable than hadrons whose component quarks have maximally unaligned spins (spin 0 mesons and spin 1/2 baryons).

In spin 0 mesons and spin 1/2 baryons, in general, the presence of charm and bottom quarks is associated with shorter mean lifetimes.  Likewise, higher generation charged leptons are shorter lived than lower generation charged leptons.  But, this simple trend does not even extend to the case of spin 1 and spin 3/2 hadrons.

Similarly, while some of the longest lived hadrons are also among the lightest, and some of the heavier hadrons are fairly short lived, and there is probably some modest correlation between mass and mean lifetime for hadrons, there is not really a consistent relationship between mass and mean lifetime for hadrons.  Hadrons with similar masses and similar quark contents can have dramatically different mean lifetimes.  

Hadrons can have mean lifetimes much greater than lighter hadrons.  For example, pions, one of the longest lived kinds of mesons, are seven times lighter than protons and neutrons, which are much more stable.

There is a roughly 16 order of magnitude range of mean lifetimes for exotic hadrons (i.e. excluding the proton and neutron), while there is only about a 2 order of magnitude range of exotic hadron masses (which includes the proton and neutron mass).

Exotic Baryons

The most stable exotic spin 1/2 baryons (i.e. spin 1/2 baryons other than the proton or neutron) have mean lifetimes on the order of 10^-10 seconds or less, which is about 100 times shorter than the mean lifetime of the longest lived mesons.

The most stable spin 3/2 baryon, the Omega baryon, which consists of three strange quarks, has a mean lifetime on the order of 10^-11 seconds. The Omega baryon's mean lifetime is 1,000 times shorter than that of the longest lived mesons and 10 times shorter than that of the longest lived unstable spin 1/2 baryons, but 100,000,000,000 (i.e. 100 billion) times as long as that of any other spin 3/2 baryon.  Its long life is explained essentially by the fact that the only available decays require one of the strange quarks to decay via the weak force into an up quark (typically yielding a uss baryon and a pi minus meson made of an antiup quark and a down quark, the latter produced in the decay of a W- boson emitted by one of the strange quarks as it became an up quark).  Since all combinations of hadrons with exactly three strange quarks have a higher combined mass than the Omega baryon, it cannot decay via the strong force.

The prediction of the Omega baryon's existence and properties with the quark model by Gell-Mann, confirmed in 1964, was a major confirmation of that model.  (The model looked at a pattern in spin 3/2 baryons, observing that there were four kinds of Delta baryons comprised only of up and down quarks that each had about the same mass; three kinds of Sigma baryons comprised of one strange quark and two up or down quarks that each had a mass about 152 MeV higher than the Delta baryons; two kinds with two strange quarks and one up or down quark with about the same mass, about 150 MeV higher than the Sigma baryons; and no candidates with three strange quarks.  Gell-Mann followed the logical progression and guessed that the new sss baryon would have a mass about 151 MeV higher than the previous layer of spin 3/2 baryons; in practice it turned out to be 152 MeV higher, with the discrepancy being far less than the margin of error in the measurement.  At each step, it turns out, this involves about 90-100 MeV of strange quark mass and 50-60 MeV of additional gluon binding energy, although the strange quark mass wasn't known at the time.)
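
The equal spacing extrapolation is simple enough to reproduce; the masses below are rounded modern values, so the arithmetic differs by a few MeV from the historical numbers quoted above.

```python
# Approximate spin 3/2 baryon masses (MeV), keyed by strange quark count;
# rounded modern values, for illustration only.
masses_mev = {0: 1232, 1: 1385, 2: 1533}

spacings = [masses_mev[n + 1] - masses_mev[n] for n in range(2)]
avg_spacing = sum(spacings) / len(spacings)
prediction = masses_mev[2] + avg_spacing

print(f"spacings: {spacings} MeV, average: {avg_spacing:.0f} MeV")
print(f"predicted sss (Omega) mass: {prediction:.0f} MeV")
# The 1964 discovery measurement came in at roughly 1686 +/- 12 MeV,
# consistent with this extrapolation; the modern value is about 1672 MeV.
```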

As Hyperphysics explains, the decay of sigma baryons (see here), like another step in the Omega baryon decay chain, also occurs only via the weak force.  In contrast, a Delta++ baryon (made of three up quarks) can easily decay via the strong force, because its mass is greater than the sum of the masses of the proton (uud) and the positively charged pion (u anti-d), which together carry the same quark flavor numbers; but, made of three of the lightest quark flavor, it has no comparably easy weak force decay.  This comparison makes it possible to discern the strength of the weak force coupling constant relative to the strong force coupling constant:


The fact that both the strong force and the weak force initiate decays of particles gives a way to compare their strength. The lifetime of a particle is proportional to the inverse square of the coupling constant of the force which causes the decay. From the example of the decays of the delta and sigma baryons, the weak coupling constant can be related to the strong force coupling constant. This application gives

alpha_w / alpha_s ≈ sqrt(tau_delta / tau_sigma) ≈ sqrt(10^-23 s / 10^-10 s) ≈ 3 * 10^-7

Since the strong coupling constant has a value of about 1 in the energy range around 1 GeV, this suggests a value for the weak coupling constant in the range

alpha_w ≈ 10^-7 to 10^-6

Similarly, the slowness of charged pion, charged kaon, and long neutral kaon decay is attributable to the fact that all of these processes involve only weak force decays.  This is not unrelated to mass.  The charged pion is the lightest hadron, so its decay can only take place when the down quark or anti-down quark in the meson decays to an up quark of opposite matter-antimatter character to the other quark in the meson, leaving no quark flavor numbers to conserve, because there are no lighter hadrons that preserve its quark content.  The slow kaon decays, similarly, require a change in strangeness, because there are no hadrons with strange quarks lighter than this already light meson.  But, the mass is relevant only in relation to other combinations of hadrons that conserve the same quark flavor numbers, not in absolute terms.

All of the other spin 3/2 baryons (even those with only up and down quark content) have mean lifetimes on the order of 10^-22 seconds or less, which is 100,000,000,000,000 (i.e. 100 trillion) times shorter than the longest lived mesons.  This is surprising, since the strange quark is the median mass hadronizing quark, rather than being at either the light or the heavy extreme.  Presumably, spin 3/2 baryons without triplets of the same quark type decay quickly via the strong force because they have spin 1/2 counterparts with the same quark content which take less binding energy, since the spins are balanced rather than aligned (there are no experimentally measured values for the properties of the bbb and ccc spin 3/2 baryons, or for many of the other heavier quark content spin 3/2 baryons).

Delta baryons (which have spin 3/2 and contain only up and down quarks) are the shortest lived hadrons of all, with mean lifetimes of about 5.58*10^-24 seconds, while the other spin 3/2 baryons, which contain strange, charm and bottom quarks, have longer mean lifetimes for the reasons explained above.

Mesons

The longest lived spin zero B and D mesons have mean lifetimes on the order of 10^-12 seconds or less, which is about 10,000 times shorter than the mean lifetime of the longest lived mesons.  The other spin zero mesons (aka scalar and pseudoscalar mesons) that are not mentioned above (such as short neutral kaons, and various eta mesons) have mean lifetimes on the order of 10^-11 seconds or less, but are generally longer lived than spin 1 mesons.

The longest lived vector mesons (those with spin one) have mean lifetimes on the order of 10^-20 seconds or less, which is about 1,000,000,000,000 (i.e. a trillion) times shorter than the mean lifetime of the longest lived mesons.  Their greater mass, relative to quark content, makes strong force decay routes to lower spin mesons (for example) available in most cases.  For example, the charged rho and the charged pion both have quark contents of an up quark and an antidown quark.  But, the charged rho has a mass of about 775 MeV while the charged pion has a mass of about 139.6 MeV.  Given that the quark mass in both cases is about 10 MeV, the charged rho has gluonic binding energy of about 765 MeV while the charged pion has gluonic binding energy of about 130 MeV, so the rho takes nearly six times as much energy to hold together as the pion.

The shortest lived vector mesons have mean lifetimes on the order of 10^-24 seconds for the charged and neutral rho mesons (made up of only up and down quarks), and of 10^-23 seconds for the omega meson (which is the long form of the neutral rho meson); vector mesons with heavier constituent quarks are longer lived.

General Considerations and Observations

Five Flavors Of Quarks Have No Mean Lifetimes In Isolation

None of the quarks, other than the top quark, has a mean lifetime that is a meaningful number.  Their mean lifetimes are very dependent upon the kind of hadron in which they are confined.  The more binding energy a hadron has, the more likely it is that it will have many options to decay into products that conserve quantum numbers while involving less mass.  The presence of heavy quarks in a hadron can also increase the amount of mass that can be present in an end state and thereby make more decay routes available, but only at the cost of making scarce those decay products that could preserve quantum numbers without resorting to flavor changing weak force interactions, which are slower than strong force decays.

For example, an up quark in a proton is perfectly stable, while an up quark in a three up quark Delta baryon, a rho meson, or an omega meson is highly unstable.  A down quark in a proton is perfectly stable, but a down quark in a free neutron has a mean lifetime of 880 seconds.  An Omega baryon, with three strange quarks, is much longer lived than some baryons with fewer strange quarks.

On the other hand, no hadron containing a second or third generation quark is stable, because mass reducing weak force decays are always possible in that situation, even though they are slower than strong force decays. No hadron containing even a single strange quark has a mean lifetime of more than 10^-8 seconds.  No hadron containing even a single charm or bottom quark has a mean lifetime of more than 10^-12 seconds.  Since the slowest hadron decays generally involve cases where only the weak force is available to bring about the decays, these times are reasonable proxies for the characteristic weak force decay time period for these heavy quarks.

Presumably, the mean lifetimes of quarks other than the top quark when unconfined (e.g. at the instant that they come into being during Z boson decay) are longer than the time required for hadronization, but this number is basically counterfactual, as all quarks other than top quarks are only ever observed in a confined state.

Impact of Hadronic Molecular Binding

As noted in the previous post, there are indications that "hadronic molecules" akin to atomic nuclei can form from pairs of B mesons and from pairs of D mesons.  Neither of these "hadronic molecules" is stable, and it isn't clear if this form of bonding has an impact on mean meson lifetime, although the comparable phenomenon in atomic nuclei clearly does have the capacity to extend the mean lifetime of a neutron in some circumstances.

Certainly, there is no theoretical expectation at this time that some otherwise unstable exotic meson or baryon could be even metastable with a mean lifetime as long as a muon by virtue of being part of such a "hadronic molecule", although this is in part because there is very little theoretical work that has been done on the subject.

Mean Lifetime Is An Inverse Function Of Total Decay Width

For those so inclined, mean lifetimes can be converted to total decay widths as follows: the mean lifetime tau equals the reduced Planck constant h-bar divided by the total decay width gamma (tau = h-bar/gamma).
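
As a sanity check of this conversion, one can plug in the well measured Z boson width and the theoretically predicted top quark width (both values approximate):

```python
HBAR_GEV_S = 6.582e-25  # reduced Planck constant in GeV*s

def lifetime_s(total_width_gev):
    """tau = hbar / Gamma, with Gamma in GeV."""
    return HBAR_GEV_S / total_width_gev

print(f"Z boson (width ~2.495 GeV):  {lifetime_s(2.495):.2e} s")  # ~2.6e-25
print(f"top quark (width ~1.35 GeV): {lifetime_s(1.35):.2e} s")   # ~4.9e-25
# Both agree with the ~3*10^-25 s and ~5*10^-25 s lifetimes quoted above.
```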

Particles With Identical Lifetimes Are Treated As One Particle For These Purposes

At least at the level of precision involved in this post, in the Standard Model, particles and antiparticles have the same mean lifetimes.  Therefore, particles and antiparticles are not counted as different particles for the purposes of this post and are not separately discussed.  Similarly, other particles which are indistinguishable for purposes of mean lifetime (such as different parity versions of fermions, and the eight different kinds of gluons) are treated as one kind of particle for these purposes.

An Aside On Proton Decay

As an aside, a very large share of all beyond the Standard Model theories, and grand unification theories (GUTs) in particular, call for the proton to be unstable via decays (for example, into a neutral pion and a positron, which together have the same charge and spin but lower mass) that would violate baryon number conservation and are therefore absent from the Standard Model.

Proton decay has never been observed. The experimental bound on the minimum mean lifetime of a proton is about 10^34 years. By comparison, the age of the universe is approximately 10^10 years, so the fraction of protons that have decayed since they formed is not more than about one in 10^24. By comparison, there are approximately 3.5*10^79 protons in the universe (of which fewer than one in 10^10 are antiprotons). The mean proton lifetime is at least 10^36 times as long as the mean lifetime of a free neutron, and at least 10^62 times as long as the shortest observed mean hadron lifetime, which is very close to the theoretical minimum hadron lifetime implied by the minimum time necessary for hadronization, as revealed by the top quark's inability to hadronize.
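
The arithmetic behind these comparisons is straightforward; the inputs below are the same round figures used above.

```python
TAU_YEARS = 1e34    # experimental lower bound on the proton mean lifetime
AGE_YEARS = 1e10    # rough age of the universe
N_PROTONS = 3.5e79  # rough number of protons in the universe

# For t << tau, the decayed fraction 1 - exp(-t/tau) is approximately t/tau.
fraction_decayed = AGE_YEARS / TAU_YEARS
print(f"fraction decayed so far: <= ~{fraction_decayed:.0e}")           # 1e-24
print(f"decays so far, at most:  ~{N_PROTONS * fraction_decayed:.1e}")  # 3.5e55
```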

Personally, I have a low opinion of beyond the Standard Model theories that claim, as some do, that proton decay in a manner that does not conserve baryon number occurs at a very low, but non-zero, rate just beyond current detection limits, such as a mean lifetime of 10^36 years.