Friday, March 7, 2014

The Trouble With QCD

A recent pre-print by Stephen Lars Olsen of Seoul National University sums up a major unsolved problem in quantum chromodynamics (QCD), the Standard Model theory that explains how quarks interact with each other via the strong force as mediated by gluons.

Most of QCD is just fine.  The spectrum of observed three quark baryon states, and of two quark meson states involving quarks of different flavors, largely matches the simple naive QCD expectation.  The proton mass has been calculated from first principles to better than 1% accuracy.  The strong force coupling constant at the Z boson mass is known to four significant digits, and estimates of the masses of the top, bottom, and charm quarks are improving greatly in precision compared to just a few years ago.

There is also no trouble on the horizon with our understanding of how QCD gives rise to the nuclear binding force within atoms.  There are, however, theoretical discussions of a couple of alternative understandings of it that implicate the meson spectrum issues discussed below.  Traditionally, pions have been tapped as the force carriers between protons and neutrons, but other light scalar mesons, such as the scalar meson f0(500), have now been suggested as alternative carriers of the residual nuclear force between nucleons in an atom.

Missing Exotic States Predicted By QCD

Current experiments have allowed us to observe hadrons up to 10 GeV.  But, many "exotic" states that QCD naively seems to allow in the mass range where observations should be possible (including less exotic predicted quarkonium states discussed in the next section), have not yet been detected.

There have still been no definitive sightings of glueballs, tetraquarks, pentaquarks, or H-dibaryons.  The implication of our failure to see them, despite the fact that QCD predicts their existence and properties with considerable precision, is that we may be missing a solid understanding of why QCD discourages or highly suppresses these states.  Such QCD rules might be emergent from the existing QCD rules of the Standard Model in a way that we have not yet understood, or the failure could reflect something missing in those equations or in the other rules of the Standard Model that are used to apply them.

Similarly, no well established resonances have JPC quantum number combinations (total angular momentum, parity and, in the case of electrically neutral mesons, charge parity) that have no obvious source in any kind of quark model with purely qq mesons.  For mesons with J=0, 1 or 2, the forbidden JPC combinations are: 0--, 0+-, 1-+ and 2+-.  As one professor explains: "These latter quantum numbers are known as explicitly exotic quantum numbers. If a state with these quantum numbers is found, we know that it must be something other than a normal, qq¯ meson."  At higher integer values of J, the combination +- is prohibited for even J and -+ is prohibited for odd J.  These combinations could, however, be produced by bound states of a quark, an antiquark and a gluon, each of which contributes to the J, P and C of the overall composite particle; such states are called "hybrid mesons" and are not observed.  Lattice QCD has calculated masses, widths and decay channels for these hybrids, just as it has for glueballs (aka gluonium).
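As a sanity check on this selection rule, here is a quick sketch assuming the standard quark model relations for a quark-antiquark pair, P = (-1)^(L+1) and C = (-1)^(L+S), with J running from |L-S| to L+S:

```python
# Enumerate the J^PC values available to an ordinary quark-antiquark meson
# and confirm that the "explicitly exotic" combinations never appear.
allowed = set()
for L in range(5):                     # orbital angular momentum; 0..4 covers J <= 2
    for S in (0, 1):                   # total quark spin
        P = "+" if (L + 1) % 2 == 0 else "-"
        C = "+" if (L + S) % 2 == 0 else "-"
        for J in range(abs(L - S), L + S + 1):
            allowed.add(f"{J}{P}{C}")

exotics = ["0--", "0+-", "1-+", "2+-"]
print(sorted(jpc for jpc in allowed if jpc[0] in "012"))
print("exotic and absent:", [jpc for jpc in exotics if jpc not in allowed])
```

Running this lists the familiar combinations (0-+, 1--, 1+-, 0++, 1++, 2++, and so on) and confirms that all four explicitly exotic combinations are indeed missing from the plain qq spectrum.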

But, these well defined and predicted resonances are simply not observed at the predicted masses in experiments, suggesting that, for some unknown reason, there are emergent or unstated rules of QCD that prohibit or highly suppress resonances that QCD naively permits: gluonium (aka glueballs), hybrid mesons, true tetraquarks, true pentaquarks, and H-dibaryon states (at least in isolation, as opposed to blended with other states in linear combinations that produce qq model consistent aggregate states).

A few resonances have been observed, however, that are probably "meson molecules," in which two mesons are bound by the residual strong force much like protons and neutrons in an atomic nucleus.  This is the least exotic and least surprising of the QCD structures other than plain vanilla mesons, baryons and atomic nuclei observed to date, since it follows naturally from the same principles that explain the nuclear binding force, which in turn derives from the strong force mediated by gluons between quarks based on their "color charge."

Not very surprisingly, because top quarks have a mean lifetime an order of magnitude shorter than the mean strong force interaction time, mesons or baryons that include top quarks have not been observed.  Still, the top quark's mean lifetime is not so short that one wouldn't expect at least a few top quarks, in the rare cases where they live much longer than the mean, to briefly hadronize.  So, while the suppression of top hadrons is unsurprising, the magnitude of that suppression is a bit of a surprise.
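A back-of-the-envelope illustration of why a few top hadrons might be expected: with an assumed top mean lifetime of roughly 5e-25 seconds and an assumed hadronization timescale of order 1/Lambda_QCD, roughly 3e-24 seconds (both order-of-magnitude figures, not precise measured values), an exponential decay law gives the surviving fraction:

```python
import math

# Fraction of top quarks surviving past the hadronization timescale,
# assuming exponential decay.  Both timescales below are assumed
# order-of-magnitude figures, not precise measured values.
tau_top = 5e-25        # assumed top quark mean lifetime, seconds
t_hadronize = 3e-24    # assumed hadronization timescale ~ 1/Lambda_QCD, seconds

surviving_fraction = math.exp(-t_hadronize / tau_top)
print(f"{surviving_fraction:.4f}")  # roughly 0.0025, i.e. a few per thousand
```

On these assumptions, a few top quarks per thousand would live long enough to hadronize, which is why the near-total absence of top hadrons, rather than their mere suppression, is the mildly surprising part.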

Surprising Meson Spectrums

Meanwhile, many mesons have been observed whose quantum numbers, decay patterns, and masses taken together are not a good fit for simple models in which mesons are made up of a particular quark and a particular anti-quark which have either aligned spins (and hence have total angular momentum J=1, called vector mesons) or oppositely aligned spins (and hence have total angular momentum J=0, called pseudoscalar mesons).

The spectrum of mesons with a quark and anti-quark of the same flavor, called quarkonium, is particularly problematic.

These states were already the subject of an exception to the usual QCD rules governing hadron decay.  We know that quarkonium states are usually suppressed in hadronic decays due to the Zweig rule, also known as OZI suppression, which can also be stated in the form that "diagrams that destroy the initial quark and antiquark are strongly suppressed with respect to those that do not."

Quarkonium mesons easily blend into linear combinations with each other because (1) bosons can be in the same place at the same time, and (2) they have similar quantum numbers because all quarkonium mesons have zero electric charge, baryon number (quarks minus antiquarks), isospin (net number of up and down quarks and up and down antiquarks), strangeness (net number of strange and antistrange quarks), charm number (charm quarks minus charm antiquarks) and bottom number (bottom quarks minus bottom antiquarks).

There are no mesons that appear to have a purely uu, dd or ss composition.  The neutral pion, the neutral rho meson, and the neutral omega meson are believed to be linear combinations of uu and dd mesons (the omitted one of four simple combinations of uu and dd may be a scalar meson).  The eta meson and the eta prime meson are believed to be linear combinations of the uu, dd and ss mesons.  Many of the lighter scalar and axial-vector meson states without charm or bottom quarks are also presumed to include linear combinations of uu, dd, and ss quarkonium mesons.  There have been proposed nonets of scalar mesons that are chiral partners of the pseudoscalar mesons, for example, although the issue of the quark composition of these true scalar mesons is not well resolved.

The only meson with a quark and anti-quark of different flavors that is prominently described as a linear combination of states is the neutral kaon, made of a strange quark and an anti-down quark or vice versa, with the particle and anti-particle states added in the long form of this meson and subtracted from each other in the short form.  This is a case where the hadron and its antiparticle have the same values of J, P and C, a neutral electric charge, and bosonic statistics.  But, unlike quarkonium states, the two differ in the sign of their isospin and strangeness.  There may be other cases of linearly combined states with no single dominant qq pair when the quarks have different flavors, like the kaon, but I haven't heard of them.

The large masses of charm and bottom quarks, relative to the non-quark "glue" mass of mesons, make it harder for the quark content of charmonium-like and bottomonium-like states to remain ambiguous.  These mesons are called XYZ mesons.  But, they continue to show signs that they may be mixings of quarkonium states, rather than always being composed of a simple quark-antiquark pair of the same flavor.  About seven charmonium-like states have been discovered that were not predicted by QCD, and a like number of such states are predicted to exist but have not been observed.  Bottomonium states present similar issues.  The XYZ mesons have JPC quantum numbers of 0-+ (pseudo-scalar), 0++ (scalar), 1-- (vector), 1+- (pseudo-vector) and 2++ (tensor) in their J=0, 1 and 2 states.  Mesons with the combinations 1++ (axial vector), 2-+ and 2-- are also theoretically permitted.

There are even some indications that there is a resonance made up of a proton-antiproton pair that is acting like a quarkonium meson.

There are competing theories to describe and predict these quarkonium dominated meson spectrums.

Mr. Olsen concludes by stating that:
The QCD exotic states that are much preferred by theorists, such as pentaquarks, the H-dibaryon, and meson hybrids with exotic JPC values continue to elude confirmation even in experiments with increasingly high levels of sensitivity.
On the other hand, a candidate pp bound state and a rich spectroscopy of quarkoniumlike states that do not fit into the remaining unassigned levels for cc charmonium and bb bottomonium states has emerged.
No compelling theoretical picture has yet been found that provides a compelling description of what is seen, but, since at least some of these states are near D(*)D* or B(*)B* thresholds and couple to S-wave combinations of these states, molecule-like configurations have to be important components of their wavefunctions. This has inspired a new field of "flavor chemistry" that is attracting considerable attention both by the experimental and theoretical hadron physics communities.
Time For A Breakthrough?

Implicit in Olsen's discussion is the recognition that we have a sufficiently large body of non-conforming experimental evidence that we may be close to the critical moment at which some major theoretical breakthrough, some sort of paradigm shift, could in one fell swoop explain almost all of the data that is not a clean fit for existing QCD models.

Other Issues with QCD

There are other outstanding issues in QCD beyond those identified by Olsen's paper.  A few of these follow.

As I've noted previously, sometimes perturbative QCD predictions differ materially from observed results even at energy scales where it should be reliable, perhaps simply because the calculations are so hard to do right.

The infrared (i.e. low energy) structure of QCD, which can be explored only with lattice QCD, is also sometimes mysterious, with different methods producing different results.  Particularly important is the question of whether the QCD potential reaches a theoretical zero at zero distance, or has a "non-trivial fixed point."  In the infrared, it also appears that gluons acquire a dynamical mass.

We still aren't sure if there are any deep reasons for the fact that no CP violation is observed in QCD despite the fact that it is a chiral theory and that there is a natural term for it in the QCD Lagrangian.  This is called the "strong CP problem."

Meanwhile, even basic QCD exercises, like estimating a hadron's properties from its components when they are well defined, suffer from issues of low precision: while it is possible to measure observable hadron properties precisely, it is very hard to do QCD calculations with enough terms to make the theoretical work highly precise, and this in turn leaves the values of input parameters like the strong coupling constant and the quark masses fuzzy as well.  Recent progress has been made, however, in using new calculation methods, like Monte Carlo methods and the amplituhedron, to reduce the computational effort associated with these calculations.

While QCD has not yet definitively failed any tests of the Standard Model theory, and instead has been repeatedly validated, it has also been subject to much less precise experimental tests than any other part of the Standard Model.  The absence of any really viable alternative to QCD has been key to its survival and to its lack of controversy in beyond the Standard Model physics discussions.

Wednesday, March 5, 2014

Is The Structure of the CKM Matrix Determinable Predominantly From The Electroweak Gauge Coupling Constants?

The Wolfenstein parameterization of the CKM matrix (articulated first in L. Wolfenstein, "Parametrization of the Kobayashi-Maskawa Matrix", 51 (21) Physical Review Letters 1945 (1983)) emphasizes the extent to which the probability of a quark flavor change when it emits a W boson depends upon a change in quark family.

* A transition from the first generation to the second generation (or vice versa) happens with a probability of lambda squared (about 5.07%-5.08%).

* A transition from the second generation to the third generation (or vice versa) happens with a probability of about A squared times lambda to the fourth power (about 0.16%-0.17%).

* A transition from the first generation to the third generation (or vice versa) happens with a probability roughly equal to the probability of a first to second generation transition, multiplied by the probability of a second to third generation transition, times an adjustment in the form of a complex number of absolute magnitude O(1) that includes a CP violating phase. In all, a first to third generation (or vice versa) quark family transition happens with a probability of about 0.0012% to 0.0075%.

* The probability that a quark will remain in the same quark generation is equal to one minus the probability that it will change generations (about 94.9202% in the first generation, about 94.7585% in the second generation, and about 99.8293% in the third generation).

* The Wolfenstein parameterization emphasizes that the slight percentage differences between CKM matrix entries Vcb and Vts, and between Vcd and Vus, flow mostly from compensation by other entries in the same row, such as the considerably more significant (roughly 6-1) difference between the tiny Vtd and Vub.  There are, in fact, more CP violating parameters in the Standard Model CKM matrix than the two shown in a simplified version of the Wolfenstein parameterization, but those effects are tiny on a percentage basis relative to the magnitude of the other, much larger CKM matrix entries.

* Of course, the decay of a quark at rest is barred by mass-energy conservation from producing a heavier quark. It can only decay to a lighter quark (something that is possible for all types of quarks except up quarks). Only quarks with sufficient kinetic energy can make a transition to a heavier quark.

In the Wolfenstein parameterization, "lambda" is roughly 0.2257 (and is another way of stating the Cabibbo angle), "A" is roughly 0.814, and the "ρ-iη" CP violating term is about 0.135 minus 0.349i.

The point that the Wolfenstein parameterization underscores is that the CKM matrix derives largely from crossing one or both of the two quark family boundaries, not just from the particular quarks involved in the transition.

The probability of a second to third generation transition is consistent with two-thirds (between 62.7% and 69.7% with a best fit value of 66.3%) of the square of the probability of a first to second generation transition.
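To make the arithmetic concrete, here is a small sketch using lambda ≈ 0.2253 and A ≈ 0.814, approximate values consistent with those quoted in this post:

```python
# Rough check of the Wolfenstein-parameterization probabilities quoted above.
lam, A = 0.22534, 0.814   # approximate PDG-era fit values

p_12 = lam**2             # first <-> second generation transition probability
p_23 = A**2 * lam**4      # second <-> third generation transition probability

print(f"P(1<->2) = {p_12:.4%}")   # about 5.08%
print(f"P(2<->3) = {p_23:.4%}")   # about 0.17%
print(f"P(2<->3) / P(1<->2)^2 = {p_23 / p_12**2:.3f}")  # equals A^2, about 0.663
```

The last line is just A squared, which is where the "consistent with two-thirds" observation comes from.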

Some CKM Matrix Structure Conjectures

It wouldn't be too hard to imagine that in some deeper "within the Standard Model" theory, before adjusting for a complex numbered CP violating term:

(1) the probability of a second to third generation quark flavor change is exactly two-thirds of the probability of a first to second generation quark flavor change squared (before considering a Wolfenstein parameterization style complex numbered CP violating term with both a real and an imaginary part), and

(2) that there is a CP violating term of similar absolute magnitude in all of the CKM matrix entries, but that it is such a tiny percentage of the larger entries that it is impossible to discern experimentally in those entries; thus, we notice CP violation only in the low probability CKM matrix entries, because only there is the effect large relative to the pre-CP violation signal, and

(3) that there is some deeper reason for both the magnitude of the Cabibbo angle and the magnitude of the CP violating term in the CKM matrix.

Assumption (1), if true, would reduce the number of experimentally measured parameters in the CKM matrix from four to three.

This also suggests that if the absolute magnitude of the probability of a first to second generation flavor change, or of a second to third generation flavor change, is a function of the quark mass matrix, then it depends on the masses of both quarks in the first generation, both quarks in the second generation, and both quarks in the third generation, respectively.

But, given this disconnect between individual quark masses and CKM matrix probabilities, it seems more likely that any causal relationship between the CKM matrix and the quark mass matrix flows from the CKM matrix to the mass matrix and not the other way around.

This implicitly disproves the possibility that I considered recently that the CKM matrix probabilities and the fermion mass matrix might be a two way street, with the parameters resulting from a mutual balance between the two.  It could very well be that the CKM matrix probabilities do drive the form of the fermion mass matrix in the manner that I had suggested previously.  But, if the CKM matrix can be derived almost entirely from the two electroweak gauge couplings of the Standard Model, then the fermion masses do not meaningfully drive the values of its parameters.

Footnote on the Cabibbo Angle and Weinberg Angle

One of the great unsolved problems in physics is to determine any deeper principles that relate the couple dozen or so experimentally measured constants of the Standard Model to each other, so that the model would have fewer experimentally measured parameters.

An achievement like this would allow for greater precision in fundamental physics, since theoretically determined values of constants that are hard to measure precisely could be computed from experimentally measured quantities that are easier to measure precisely.  It would also provide a deeper understanding of how a relative kludge of a rule book for the universe really works, much as grand unified theories hope to do.

Given the conjectures above, finding a way to derive the Cabibbo angle from first principles is a tempting prize.

The Cabibbo angle

The Cabibbo angle, also known as Euler angle Theta12 in the standard parameterization of the CKM matrix, is defined as the inverse tangent of the absolute value of CKM matrix element Vus divided by the absolute value of CKM matrix element Vud.  The currently measured sine of the Cabibbo angle is 0.2256, with a one standard deviation confidence interval range of 0.22325 to 0.2265.

This assumes the values quoted at Wikipedia, based on a PDG source, for the two CKM matrix elements: Vus = 0.22534 and Vud = 0.97427. But, late last year, a new precision measurement of the first element, 0.22290(90), was made, which also tweaks the global best fit for the second number in the Cabibbo angle formula. As a result, the sine of the Cabibbo angle is really a bit more than 0.2230.
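The definition is easy to check numerically with the Wikipedia/PDG values just quoted:

```python
import math

# Cabibbo angle from the standard-parameterization definition:
# theta_12 = arctan(|Vus| / |Vud|), using the values quoted in the text.
Vus, Vud = 0.22534, 0.97427
theta_12 = math.atan(Vus / Vud)
sin_cabibbo = math.sin(theta_12)

# Because |Vus|^2 + |Vud|^2 is very nearly 1 (first-row unitarity),
# sin(theta_12) comes out almost exactly equal to |Vus| itself.
print(f"sin(theta_C) = {sin_cabibbo:.5f}")
```

This makes plain why quoted values of sin(theta_C) track |Vus| so closely.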

The Weinberg angle

The Weinberg angle, also known as the "weak mixing angle," is defined as the inverse cosine of the mass of the W boson divided by the mass of the Z boson.

In many applications, the quantity actually used and measured is the square of the sine of the weak mixing angle, which runs with the energy scale of the interaction.  At the Z boson mass, the sine squared of the weak mixing angle is 0.23120 +/- 0.00015, while at the energy scale of 0.16 GeV, the sine squared of the weak mixing angle is 0.2397 +/- 0.0013.

The Weinberg angle and the Cabibbo angle compared

Using the old measurement, the square of the sine of the Weinberg angle was 2.48% larger than the sine of the Cabibbo angle.  These two values are inconsistent with each other at a 6.14 standard deviation level (and slightly less but still clearly different with the new value of the Cabibbo angle).
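The 2.48% figure is simple arithmetic on the two numbers quoted above:

```python
# Comparison of sin^2(theta_W) at the Z mass with the older measured
# value of sin(theta_C), both taken from the text.
sin2_weinberg = 0.23120   # sin^2 of the weak mixing angle at the Z boson mass
sin_cabibbo = 0.2256      # older measured sine of the Cabibbo angle

excess = sin2_weinberg / sin_cabibbo - 1
print(f"{excess:.2%}")    # about 2.48%
```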

The electroweak gauge coupling constants and Higgs vev

The Weinberg angle is also defined as the inverse tangent of  the bare electromagnetic force gauge coupling constant g' divided by the bare weak force gauge coupling constant g.

The magnitude of the fundamental electric charge "e", in turn, is the bare weak force gauge coupling constant g times the sine of the weak force mixing angle (and thus can be determined solely from g and g').

The Fermi coupling constant is a function of the weak force gauge coupling constant g, the reduced Planck's constant, the speed of light, and the W boson mass, or alternatively, can be computed directly from the Higgs vacuum expectation value "v", which has been experimentally measured, via the lifetime of the muon, to be 246.22 GeV.

The mass of the W and Z bosons can be computed from g, g' and v.
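As a sketch of how this works, the standard tree-level relations are M_W = g*v/2 and M_Z = (v/2)*sqrt(g^2 + g'^2), which can be inverted to recover g and g' from the measured masses and vev (the numbers below are approximate PDG-era figures, used purely as an illustration):

```python
import math

# Invert the tree-level relations M_W = g*v/2 and M_Z = (v/2)*sqrt(g^2+g'^2)
# to recover the couplings from the boson masses and the Higgs vev.
# All values in GeV; approximate PDG-era figures.
MW, MZ, v = 80.385, 91.1876, 246.22

g = 2 * MW / v
gp = math.sqrt((2 * MZ / v) ** 2 - g ** 2)

print(f"g  ~ {g:.4f}")    # weak coupling, about 0.653
print(f"g' ~ {gp:.4f}")   # hypercharge coupling, about 0.350

# On-shell weak mixing angle implied by these couplings; note that this
# differs from the MS-bar value 0.23120 quoted elsewhere in the post,
# because the two are defined in different renormalization schemes.
print(f"sin^2(theta_W) = {gp**2 / (g**2 + gp**2):.4f}")  # about 0.2229

# Fermi coupling constant from the vev alone, in natural units (GeV^-2).
GF = 1 / (math.sqrt(2) * v**2)
print(f"G_F ~ {GF:.4e} GeV^-2")   # about 1.1664e-5
```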

The square of the bare electromagnetic force gauge coupling constant g' equals four pi times the fine structure constant times the permittivity of free space.  While I don't personally know how to determine the relative magnitudes of the fine structure constant and the permittivity of free space exclusively from g, g' and v, I believe that it is possible to do so.

If the supposition is correct that the mass of the Higgs boson is simply equal to the W boson mass plus half of the Z boson mass, then its mass can be computed from g, g' and v as well, i.e. MH = (1/2)*g*v + (1/4)*v*sqrt(g^2+g'^2).  The world average measurement of the Higgs boson mass per PDG is currently 125.9 +/- 0.4 GeV, and the theoretical value of the Higgs boson mass given this assumption, using PDG world average measurement values for the W and Z boson masses, is 125.98 GeV.
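The conjecture is easy to check directly from the boson masses (approximate PDG-era values, in GeV):

```python
# The conjecture above: M_H = M_W + M_Z / 2, which is equivalent to
# M_H = (1/2)*g*v + (1/4)*v*sqrt(g^2 + g'^2) at tree level.
MW, MZ = 80.385, 91.1876   # approximate PDG-era masses, GeV

MH_conjectured = MW + MZ / 2
print(f"{MH_conjectured:.2f}")  # about 125.98, vs. measured 125.9 +/- 0.4
```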

Thus, if one accepts one experimentally plausible conjecture about the relationship of the W and Z boson masses to the Higgs boson mass, then the masses of all of the massive gauge bosons in the Standard Model, the strength of the fundamental electric charge, and the strength of the weak force can be computed from first principles using only three physical constants: g, g' and v (plus the speed of light and Planck's constant).

If one further accepts the proposition that the sum of the squares of the masses of all of the fundamental particles in the Standard Model is equal to the square of the Higgs vacuum expectation value (something that is empirically true, using the PDG mass estimates of those particles, to a precision well within the margin of error of the underlying measurements, and a very beautiful and plausible hypothesis), then one can also compute the aggregate sum of the squared masses of the fundamental fermions using only g, g' and v (plus Planck's constant and the speed of light), although not their masses relative to each other.
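A rough numerical check of this sum rule, using approximate PDG-era masses in GeV (the light fermions contribute negligibly and are omitted here):

```python
# Sum of squared fundamental-particle masses versus the squared Higgs vev.
# Approximate PDG-era values, GeV; light fermions omitted as negligible.
masses = {
    "top": 173.2, "higgs": 125.9, "Z": 91.1876, "W": 80.385,
    "bottom": 4.18, "tau": 1.77682, "charm": 1.275,
}
v = 246.22   # Higgs vacuum expectation value, GeV

total = sum(m**2 for m in masses.values())
print(f"sum of m^2 = {total:.0f} GeV^2, v^2 = {v**2:.0f} GeV^2")
print(f"ratio = {total / v**2:.4f}")   # very close to 1
```

The ratio lands within a fraction of a percent of exactly one, which is well inside the uncertainty dominated by the top quark and Higgs boson mass measurements.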

In other words, if that is true, one can compute the overall mass scale of the fundamental fermions of the Standard Model from just g, g' and v.

But, there is no current theory that explains how the CKM matrix parameters can be computed from these constants.  The CKM matrix parameters in the Standard Model are simply experimentally measured inputs.

Likewise, there is no widely accepted theory to explain the relative masses of the fundamental fermions of the Standard Model from first principles, although extended versions of Koide's formula come pretty close to approximating these relative masses with resort to just one more constant (e.g. the ratio of the muon mass to the electron mass).

Is the Cabibbo angle a function of the Weinberg angle?

It is also tempting to wonder why these two fundamental parameters of the electroweak portion of the Standard Model are so close to each other.

Is there some way that the Cabibbo angle could be computed from g and g' as well?

If the sine of the Cabibbo angle is simply the square of the sine of the weak mixing angle, this would certainly be true.

If this were possible, and if my supposition about the role of the Cabibbo angle in setting the absolute magnitude of the weak force flavor changing probabilities, apart from the ρ-iη term of the Wolfenstein parameterization of the CKM matrix discussed above, is correct (i.e. if the square of Wolfenstein parameter "A" is exactly two-thirds), then the two dominant parameters of the four CKM matrix parameters in the Standard Model could be derived from first principles solely from g and g'.

Could the two angles be reconciled at some energy scale?  No.

Is there perhaps something significant about the energy scale at which the square of the sine of the Weinberg angle is equal to the sine of the Cabibbo angle, i.e. the scale at which the square of the sine of the Weinberg angle equals roughly 0.2256, the old measurement?

The short answer is no.

The square of the sine of the Weinberg angle, consistent with the Standard Model prediction, fell by 0.0085 from the 0.16 GeV scale to the Z boson mass, and would need to fall another 0.0056 (about 65.9% of the decline over the previous range) to reach the sine of the Cabibbo angle.
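The arithmetic behind those figures, using the values quoted earlier in the post:

```python
# Running-angle comparison: how much sin^2(theta_W) has fallen, and how
# much further it would need to fall to reach sin(theta_C).
sin2_at_016GeV = 0.2397   # sin^2(theta_W) at 0.16 GeV
sin2_at_MZ = 0.23120      # sin^2(theta_W) at the Z boson mass
sin_cabibbo = 0.2256      # sine of the Cabibbo angle (old measurement)

drop_so_far = sin2_at_016GeV - sin2_at_MZ   # 0.0085
drop_needed = sin2_at_MZ - sin_cabibbo      # 0.0056
print(f"{drop_needed / drop_so_far:.1%}")   # about 65.9%
```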

But, as the linked paper above measuring the weak mixing angle at 0.16 GeV illustrates, the Standard Model expectation is that the Z boson mass is at or near the minimum value of the weak mixing angle, which grows larger again at higher energies, so there is no energy scale at which these two key electroweak parameters of the Standard Model coincide.

Could the Cabibbo angle be redefined to make the two consistent?  Yes.

Could there be, for example, some other natural way of defining an angle similar to the Cabibbo angle so that the sine of the Cabibbo angle would be equal to the square of the sine of the Weinberg angle?

In principle, the answer is yes.

For example, one could imagine redefining it as the inverse tangent of (the absolute value of CKM matrix element Vus plus the absolute value of CKM matrix element Vub) divided by the absolute value of CKM matrix element Vud, which would increase the sine of the Cabibbo angle to about 0.22867, and then multiplying this by one plus the fine structure constant (which is roughly 1/137), which would bring it to 0.23034.  This would be within one standard deviation of the square of the sine of the Weinberg angle at the Z boson energy scale given the precision of current experimental measurements (the precision of the Weinberg angle measurement is about six times greater than the precision of the Cabibbo angle measurement).
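The numerology can be reproduced in a few lines; the |Vub| value here is an assumed PDG-era figure of about 0.00351, which is not quoted elsewhere in the post:

```python
import math

# The illustrative (admittedly numerological) redefinition described above:
# tan(theta) = (|Vus| + |Vub|) / |Vud|, then scale sin(theta) by (1 + alpha).
Vud, Vus = 0.97427, 0.22534
Vub = 0.00351              # assumed approximate PDG-era value
alpha = 1 / 137.035999     # fine structure constant

theta = math.atan((Vus + Vub) / Vud)
sin_redefined = math.sin(theta) * (1 + alpha)

print(f"{math.sin(theta):.5f}")   # about 0.22867
print(f"{sin_redefined:.5f}")     # about 0.23034, vs sin^2(theta_W) = 0.23120
```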

The extension of the definition of the Cabibbo angle to include the addition of CKM matrix element Vub is very natural.  The Cabibbo angle was originally defined before the third generation of Standard Model fermions was discovered.  In a two generation Standard Model, the Cabibbo angle was simultaneously the probability of a transition to any non-first generation quark and the probability of a transition from a first to a second generation quark.  Including the CKM matrix element for a transition to a third generation quark would generalize the former interpretation of its meaning, rather than the latter, which were identical in the two generation case.

The inclusion of a factor of one plus the fine structure constant is less obvious and somewhat arbitrary.  But, given that we are talking about an electroweak process that always involves a W boson, which has both a weak force coupling and an electromagnetic coupling, it would hardly be stunning if a formula to derive the probability of quark generation transitions from first principles involved both the weak mixing angle and the electromagnetic coupling constant.

After all, electroweak theory is full of equations that involve various combinations of both the weak and electromagnetic coupling constants g and g' respectively, such as the mass of the Z boson and the fundamental electric charge "e" of the positron and the proton.

If the Cabibbo angle could be determined from first principles using a modified definition such as this one, it would follow that all of the CKM matrix elements, other than the ρ-iη parameters of the Wolfenstein parameterization, could be computed from first principles using only the two electroweak coupling constants.

It would then be possible to calculate the CKM matrix elements without any reference to the non-electroweak constants of the Standard Model or any of the mass constants of the Standard Model.

To be clear, I am not actually arguing that I have come up with the correct formula to determine a redefined Cabibbo angle of the CKM matrix from first principles using only g and g'.  I am merely arguing that doing so is well within the realm of possibility, and demonstrating, as a proof of concept, how it could be accomplished.  This particular example is probably just numerology, but it doesn't seem like an impossible task to come up with a formula that is better justified theoretically, or even to better justify the use of the fine structure constant in the way that I use it above.  But, this example does seem to indicate that it would be possible to come up with such a formula using only quantities that could be plausibly related to electroweak interactions.

R&D

Some people bite their nails when they are nervous or stressed.  I analyze data.  And, with a trial last week two hours from home and another big court deadline breathing down my neck, my spare moments have been spent on data analysis instead of blogging.

Lately, I've been looking at the available data points on hadron masses, which I have on a spreadsheet on my computer (that I fear may die in the near future), on the backs of envelopes, and in scribbled notes that are impossible for anyone but me to read on yellow pads.  The spreadsheet contains a regression model that summarizes the data with a small number of variables to an accuracy of about 99%.  Some of the process is mnemonic.  There is no better way to learn the nuances of a bunch of numbers than to play around with them.

Maybe some day, I'll blog about the topic.  But, right now, I don't have the time for a quality post and I am more inclined to simply parse and deconstruct the data set in stolen minutes here and there.