Neff, the effective number of neutrino species in the Lambda CDM Standard Model of cosmology, should theoretically have a value of 3.046 if there are exactly three neutrino flavors (each under about 10 eV in mass) and there is no "dark radiation".
The measured value of Neff, combining the most recent Planck satellite data with the other astronomy observations included in its analysis, is 3.52 +0.48/-0.45, assuming that the tensor to scalar ratio, r, is zero. This result is roughly equally consistent with the possibility of three neutrino species and with the possibility of four.
Big Bang Nucleosynthesis data point to a consistent result of 3.5 +/- 0.2, again equally consistent with three neutrino species or with four (or with three species of neutrinos plus a fractional contribution attributable, for example, to dark radiation).
The big question to date has been whether this stubborn mean value in excess of the expected 3.046, in study after study, is simply a product of experimental error, or whether it instead reflects some fundamental physical phenomenon: a light sterile neutrino that could also explain the reactor anomaly in neutrino oscillations, a light particle just on the brink of being too heavy to count as a neutrino that therefore counts only fractionally, or dark radiation. Any of these would constitute beyond the Standard Model physics.
The best fitting dark matter particle content for existing astronomy data calls for a single type of Dirac dark matter particle and a massive boson, often called a "dark photon", that mediates a U(1) force between them (the "dark photon" terminology flows from the fact that this vector boson would behave essentially like the photon of QED if the photon were massive).
So, there are reasonably well motivated, conservative extensions of the Standard Model that could accommodate a fourth neutrino species or a dark radiation component (each apparently worth a contribution to Neff of about 0.227, i.e. 7/8*(4/11)^(4/3)). Two dark radiation components would be a very nice fit to the pre-BICEP2 experimental data.
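For what it is worth, the arithmetic behind the 0.227 figure and the "two dark radiation components" remark can be checked in a few lines (a back-of-the-envelope sketch; the 3.046 baseline and the 3.5 target are the values quoted above):

```python
# Back-of-the-envelope check of the Neff arithmetic discussed above.
delta = (7 / 8) * (4 / 11) ** (4 / 3)  # per-component contribution to Neff
print(f"per-component contribution: {delta:.3f}")  # ~0.227

baseline = 3.046  # Standard Model expectation with three neutrino flavors
print(f"with two such components: {baseline + 2 * delta:.2f}")  # ~3.50, the BBN value
```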
BICEP2 has reported that r=0.20 +0.07/-0.05, which would imply Neff=4.00 +/- 0.41. Another set of unpublished preliminary results (A. Lewis, http://cosmocoffee.info/files/Antony_Lewis/bicep_planck.pdf (20 March 2014)) points to Neff=3.80 +/- 0.35.
Given the tension between the BICEP2 estimate of r=0.10-0.34 in a 95% confidence interval, and r=0-0.11 in a 95% confidence interval based on the pre-BICEP2 data reported by BICEP2, a true value of r=0.10-0.11 would not be inconsistent with the 95% confidence intervals of either of the data points that are in tension with each other. Presumably, such an intermediate number would split the difference in the tensor to scalar ratio adjustment to Neff, bringing its value to about 3.66-3.76 with error bars of about +/- 0.4.
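One crude way to make that "split the difference" guess concrete is a naive linear interpolation in r between the r=0 result (Neff=3.52) and the two r=0.20 results quoted above (4.00 and 3.80). This is only a sketch, and it ignores the real degeneracy structure of the fits:

```python
def neff_interp(r, neff_r0=3.52, neff_r_hi=4.00, r_hi=0.20):
    """Naively interpolate Neff linearly in r between two quoted fit results."""
    return neff_r0 + (neff_r_hi - neff_r0) * (r / r_hi)

for high in (4.00, 3.80):  # the BICEP2-based and Lewis preliminary values
    print(round(neff_interp(0.10, neff_r_hi=high), 2))  # prints 3.76 and 3.66
```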
Note, however, that if r is not equal to zero, the Neff associated with three neutrino species might not be 3.046 (I don't know enough to be sure).
Wednesday, March 26, 2014
Machian Numerology
Merab Gogberashvili has written a number of papers showing how many of the physical constants of cosmology and particle physics flow naturally from the Machian principle, i.e. the idea that inertia is due to the sum total of the gravitational pulls on an object from every other gravitating object in the universe.
He uses corollaries of this observation to note that the observed amount of dark energy in the universe corresponds to the aggregate gravitational energy of all of the particles in the universe acting on all of the other particles in the universe. He examines the notion of gravity as an entropic force. He derives the fine structure constant (i.e. the physical constant that determines the strength of electromagnetic interactions) from the relative proportions of the baryonic, radiative and dark energy components of the total energy density of the universe. He suggests a Machian approach to resolving the "hierarchy problem". He uses this approach to derive the Schrodinger equation and the Planck constant.
His other work is also very interesting. He uses a five dimensional standing wave braneworld model to derive the fundamental fermion masses and to examine the source of the speed of light constant and the relationship between the Higgs and Planck scales. He considers "apple shaped" extra dimensions as a possible source of the three fermion families. He explores formulations of traditional physics in octonionic forms. He sketches out a gravity-electromagnetic unification.
His prose is interesting, and the span of the concepts he explores is a breath of fresh air compared to the stale recycling of well trod approaches by other theorists.
Tuesday, March 25, 2014
Limits To Experimental Evidence For A Big Bang Singularity
Matt Strassler, rightly, takes a moment to breathe and to acknowledge that using general relativity and the Standard Model to extrapolate backward in time from the existing universe, only as far as those theories have been validated experimentally (or at least extended beyond that region by consensus, very well motivated theoretical argument), does not take you all of the way back to a Big Bang singularity.
It doesn't get you past the inflation barrier. It doesn't get you to baryogenesis and leptogenesis. It doesn't get you to matter-antimatter symmetry. It doesn't get you back to time periods when the mean energy density of the universe was much above the TeV scale. But, it really is remarkable how little of the history of the universe all of those processes that we don't understand well actually consume.
The known laws of physics do get you to nucleosynthesis from primordial protons and neutrons (ca. 0.01 to 0.00001 seconds after the Big Bang in conventional cosmologies), and maybe even to a quark epoch ca. 10^-12 seconds after the Big Bang in a conventional cosmology, when the universe held a quark-gluon plasma that had not yet settled into hadrons. This is a very long way to work back from a universe that is about 13.8 billion years old, give or take roughly 40 million years, but it isn't quite the singularity either.
This is roughly the point at which you have to plug in the baryon number of the universe, the lepton number of the universe, the size of the universe, the total mass-energy of the universe, the matter-antimatter balance of the universe, the value of the cosmological constant, and the values of all of the parameters of the Standard Model and general relativity, after which you can extrapolate pretty neatly all of the rest of the way to the present using known laws of physics (with a bare bones toy model of dark matter filling in the only bits we don't understand too well yet).
Working back to the size of the universe at the end of the inflationary epoch reputedly squeezes the universe into the size of a grain of sand (about 0.9 millimeters) at a time that ends about 10^-32 seconds after the Big Bang. Without inflation (proposed in 1980 by Alan Guth), it would have taken about 3*10^-12 seconds, instead of about 10^-32 seconds, for a Big Bang that started at time zero and expanded at the speed of light to reach that size. Essentially, what the increased speed of inflation provides is greater homogeneity at the start of the quark epoch relative to what would have been expected naively, at a lower than naively expected temperature for that proper time, at the cost of wildly violating the speed of light limit that is a bedrock of general and special relativity.
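The no-inflation timing figure is simple arithmetic, namely the light travel time across the 0.9 millimeter post-inflation size quoted above (a back-of-the-envelope sketch):

```python
c = 299_792_458.0  # speed of light in meters per second
r = 0.9e-3         # ~0.9 mm, the quoted size of the universe at the end of inflation

# Time for light-speed expansion from a point to span that radius.
print(f"{r / c:.1e} s")  # ~3.0e-12 s, versus ~1e-32 s with inflation
```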
Another philosophically problematic aspect of inflation is that it elevates ad hoc assumptions about the nature of the time zero Big Bang state, about which we have no supporting direct observational evidence, to an unreasonably high priority in choosing a hypothesis. A neutral charge, neutral color charge, pure energy, zero size starting point may be pretty, but there is nothing terribly sacred about it that actual empirical data forces us to assume. It may be a natural starting point, but we do not at all know that it is Nature's starting point.
Assuming expansion at the speed of light after inflation, at the start of the quark epoch the universe has a radius of about 10 centimeters (i.e. 0.1 meters), and at the start of hadronization the universe has a radius of about 100 kilometers (an approximately spherical object with dimensions of about the size of a largish U.S. county). It isn't hard, although I probably won't do it today, to calculate the total mass-energy density of the universe.
So, until you get all the way back to the first microsecond or even a bit less, in a big-U.S.-county-sized universe, the laws of physics as we know them seem to work just fine, and we can extrapolate to some "initial conditions" at that point. Before that point, however, we have extrapolated back past what we can explain with the laws of physics as we know them, and how the initial conditions at that point came to be is much more speculative.
Now, laws of physics that appear to hold over a 22 order of magnitude span of time and space are pretty impressive. We only have to sacrifice less than a microsecond, from a hypothetical singularity with pure energy initial conditions to a point in the history of the universe that we really and truly understand and aren't just guessing at (even if those guesses make a certain amount of reasonable sense). It is easy to focus on the controversy in cosmology over what allegedly happened in the first fraction of a microsecond after the Big Bang, while ignoring the remaining 99.99999999999999999999% of the cosmological history of the universe about which there is profound agreement among physicists.
These initial conditions aren't a singularity, but they are an extremely basic point of beginning that we can all have considerable confidence in, backed up by a lot of available data (except for dark matter, which is a bit fuzzy).
Beyond that is something more than entertainment and fancy, but less than utterly reliable science. Call it the legendary history of the universe, after the earliest periods of written history where fact and fantasy blend into each other.
As much of a scientist as I am, I don't find it terribly pressing to try to go back further before these initial conditions, although I keep my eyes and ears open for the possibility that something sufficiently solid or interesting from this earlier era may turn up. I'm not much of a mystic, but I can tolerate a sort of stochastic clockwork Creator-God who sets into place a set of initial conditions and laws of physics that prevail at that point and plays no further role in the universe once it is set in motion, or at least I can be content to be agnostic about what happens before the quark epoch or hadron epoch.
Put another way, the beginning of the quark epoch is very close to the point at which we may very well cross over from that which is knowable to that which is unknowable. But, I prefer to accentuate the positive and think about how big the knowable part is and how small the unknowable part is in space-time.
I have some theories, and I'm familiar with other theories. Maybe some insight will get us further. I certainly recognize that there is a natural link between the pre-quark epoch part of the history of the universe and the physics of energies far in excess of the LHC, and hence to the deepest layer of the laws of nature.
But, for example, it is generically true that any cosmological inflation theory, in which the universe expands at vastly more than the speed of light, creates problems as well as solving them. Is time even actually well defined in a tachyonic universe? Why should the speed of light limit on the universe's expansion, rather than some other law of nature, break down at that point? Where do physicists get off calling anything that happens (hypothetically) in less than 10^-31 seconds "slow roll inflation"? What the hell is slow about that? There is just too much that we don't know.
Short Range Constraints On Non-Newtonian Gravitational Effects
Background
For many practical purposes, e.g. aeronautical engineering applications and the dynamics of stars within a galaxy, Newton's law of gravity (i.e. F = G*M*m/r^2 in the case of two point masses at distance r from each other), proposed in the 17th century, remains perfectly accurate to the limits of experimental accuracy or engineering requirements.
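To see just how feeble this force is between laboratory sized masses (a point that matters for the experimental difficulties discussed below), here is a quick sketch using two hypothetical 1 kilogram point masses 10 centimeters apart:

```python
G = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2

def newton_force(m1, m2, r):
    """Newtonian gravitational force in newtons between two point masses."""
    return G * m1 * m2 / r**2

# Two 1 kg masses, 0.1 m apart: about 6.7e-9 N, roughly a billionth of
# the ~9.8 N weight of either mass in the Earth's gravitational field.
print(newton_force(1.0, 1.0, 0.1))
```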
General relativity modifies Newtonian gravity in circumstances where objects are moving with linear momentum or angular momentum that is non-negligible relative to the speed of light (e.g. special relativistic gravitational effects), in very strong fields such as those found in the vicinity of black holes, neutron stars, binary systems, and around the time of the Big Bang (singularities and other strong gravitational field effects, and equivalent accelerations), in the vicinity of very heavy objects in circular motion (frame dragging), and in that photons have gravitational interactions (gravitational lensing).
One of the only general relativistic effects observable in an Earth bound, experimental setting (without involving astronomy observations) is the impact that the elevation of an atomic clock has on the rate at which it ticks, due to the weakening of the Earth's gravitational field as one's distance from the center of the Earth increases.
Where it has been possible to make experimental observations of these predicted general relativistic deviations from Newtonian gravity, they have confirmed general relativity with a small cosmological constant to the limits of experimental accuracy (which, alas, is nowhere near as precise as the experimental tests of electroweak phenomena, for example), with the exception of the phenomena attributed in astronomy observations to dark matter effects, at scales from about ten meters, to Earth orbit, to galaxies, to the scale of the universe as a whole.
In general, general relativity does not predict measurable phenomenological differences from Newtonian gravity for laboratory scale objects made of matter, at laboratory distances, at velocities relative to each other that are tiny relative to the speed of light, after controlling for the Earth's gravitational field (or any other dominant gravitational field in the vicinity of the experiment).
But, for a variety of reasons, including the weakness of the gravitational force between laboratory sized masses, it is very tricky, experimentally, to measure deviations from Newtonian gravity at very short distances in laboratory experiments.
Why Look For Short Range Non-Newtonian Gravitational Effects?
Yet, as scientists, we would like to validate the correctness of the physical laws that govern gravity, one of the four fundamental forces of Nature, over as many orders of magnitude as possible. And, conceptually and theoretically, there are plausible reasons to think that at sufficiently small scales (e.g. the Planck scale of ca. 10^-35 meters, or even the nuclear scale of ca. 10^-15 meters) quantum interactions might cause gravity to behave in a non-classical manner.
Almost no realistic laboratory experiments can hope to directly measure gravitational effects between laboratory scale or smaller objects at these distance scales, other than to conclude that they do not have a measurable effect on experiments conducted at this scale relative to the Standard Model forces.
But, many beyond the Standard Model theories seek to combine the three forces of the Standard Model and the force of gravity into a single unified framework, often as manifestations of some single underlying principle or particle. This isn't easy, because the three Standard Model forces are profoundly stronger than the force of gravity at the fundamental particle or atomic scale. Gravity is roughly 10^28 times weaker than the weak nuclear force, roughly 10^34 times weaker than the electromagnetic force, and roughly 10^36 times weaker than the strong nuclear force at this scale. But, there is a way to accomplish this goal, in a mathematically elegant way that is not contradicted by any current experimental evidence.
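Before turning to that construction, one concrete illustration of the disparity (a sketch; the precise exponent depends on which particles and which forces one compares) is the ratio of the gravitational to the electrostatic force between two protons, which is independent of their separation because both forces scale as 1/r^2:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9     # Coulomb constant, N m^2 C^-2
m_p = 1.6726e-27  # proton mass, kg
e = 1.6022e-19    # elementary charge, C

# Both forces fall off as 1/r^2, so the ratio is distance independent.
ratio = (G * m_p**2) / (k_e * e**2)
print(f"{ratio:.1e}")  # ~8e-37, i.e. gravity is ~10^36 times weaker here
```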
First, assume that the Standard Model particles and interactions are confined to a space-time vacuum membrane with three dimensions of space not less than about 13 billion light years in extent, and a familiar dimension of time not less than about 13 billion years in extent. Then assume that, at every point in conventional space-time, additional dimensions (frequently seven) extend for only a finite distance, on the order of a small fraction of a meter (sometimes called, for example, "compactified" or Kaluza-Klein dimensions).
In this important class of beyond the Standard Model theories, which includes most variations on string theory among many others, gravity should start to behave in a non-Newtonian manner not at the Planck scale, but at the characteristic extent of the extra compact dimensions of the theory (often assumed, for the sake of simplicity, to all have the same size). Thus, it is not safe to assume that there will be a desert of new gravitational physics between the Planck scale and the planetary scale in the absence of experimental confirmation (although, of course, almost everyone does precisely that the vast majority of the time).
For example, if the typical extra dimension accessible only to gravitational interactions at a given point had a characteristic scale of a micrometer (i.e. 10^-6 meters), then one would expect to see non-Newtonian gravitational effects at distances smaller than a micrometer between laboratory scale or smaller objects.
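In large extra dimension models of this general type, the usual expectation is that the inverse square law steepens to an inverse (2+n) power law below the compactification scale R, for n extra dimensions. A toy sketch of that crossover, ignoring the O(1) geometric factors that are model dependent:

```python
G = 6.674e-11  # Newton's gravitational constant

def toy_force(m1, m2, r, R=1e-6, n=1):
    """Toy force law with n extra dimensions of size R (meters).

    Above R, ordinary Newtonian 1/r^2 behavior; below R, the force
    steepens to 1/r^(2+n), matched at r = R (O(1) factors ignored).
    """
    if r >= R:
        return G * m1 * m2 / r**2
    return G * m1 * m2 * R**n / r**(2 + n)

# With one micrometer-sized extra dimension, gravity at r = 0.1 micrometers
# would be 10x stronger than the Newtonian extrapolation predicts.
print(toy_force(1.0, 1.0, 1e-7) / (G * 1.0 * 1.0 / (1e-7)**2))
```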
Gravitational effects are the primary means by which we observe the topology of the universe at every scale.
Particle collider experiments such as those at the Large Hadron Collider, however, as of 2012, appear to rule out extra dimensions down to scales of about 10^-14 meters to 10^-15 meters (albeit in mildly model dependent ways), which is roughly the scale of an atomic nucleus.
This is pretty discouraging for someone looking for theoretical motivations to search for non-Newtonian gravitational effects in experiments that can only measure gravitational effects at larger distances. Non-Newtonian gravitational effects between the classical physics scale and the atomic scale would have to flow from some theory other than one predicting extra dimensions in a relatively conventional way.
This is also rather discouraging considering that extra dimensions are a characteristic feature of a huge swath of the most popular beyond the Standard Model physics theories, although the woes of these theories in the face of this evidence can often be cured mathematically with only mild tweaks to their parameters. Indeed, in the case of many models, one can even argue, with a straight face, that only smaller extra dimensions are "natural" in some sense.
Still, for whatever it is worth, I'm not terribly impressed with the notion of an eleven dimensional world with four effectively infinite dimensions and seven atomic scale or smaller dimensions, at least to the extent that all of these dimensions are space-like. This seems horribly contrived to me, as does the assumption that the universe needs a full complement of superpartner particles of which we have never seen any significant experimental evidence.
Current Short Range Constraints On Non-Newtonian Gravitational Effects
To how small a scale has the accuracy of Newtonian gravity been experimentally validated, and how precise is that validation?
For example, the correctness of Newtonian gravity (which in this context is indistinguishable from general relativity) has not been experimentally validated at the very short atomic and molecular scales where chemistry and nuclear physics play out, except to the extent that we know that gravity is small enough there to be negligible relative to the Standard Model forces.
As of early 2013, experimental evidence rules out gravitational effects more than a few orders of magnitude stronger than those predicted by Newtonian gravity at distances of less than about 10 micrometers (i.e. 10^-5 meters). The linked pre-print also explains in technical detail why it is so hard to be more precise.
By way of example, this is roughly the precision to which interchangeable motorcycle engine and firearm parts are crafted. So we know experimentally, for example, that gravity does not behave weirdly in any significantly measurable way at the scale of metal or paint powders interacting with precision crafted mechanical or electronic equipment. At these scales, only the most precisely understood force of the Standard Model (electromagnetism), and the overall strength of the background gravitational force of the Earth-Moon system (which is locally uniform at these distance scales), matter.
Gravitational effects dozens of orders of magnitude stronger than those predicted by Newtonian gravity are ruled out at distances of about 10^-9 meters or more, which is to say, basically, that at scales at least that large, gravity is still profoundly weaker than any of the Standard Model forces (although this conclusion is pretty trivial in the case of the weak and strong nuclear forces, which operate for all practical purposes only at extremely short, basically subatomic, distances in any case).
Put another way, gravity doesn't behave weirdly at almost any distance scale large enough to be dominated by classical mechanics, as opposed to quantum mechanics, with something of a gray area at distance scales where quantum mechanical effects are tiny, but not so small that they can be completely ignored without serious thought in high precision applications.
Of course, deviations from Newtonian gravity at small scales don't have to be big (i.e. the orders of magnitude differences involved in the current experimental constraints at micrometer and smaller scales) to be very interesting to scientists.
In practice, five standard deviation direct experimental proof of even a 1% deviation from Newtonian gravity/general relativity at any scale would revolutionize physics as we know it and win its discoverer a Nobel prize and fame almost on a par with Newton and Einstein. Yet, it isn't truly inconceivable, given only our experimental data, that such a subtle deviation from Newtonian gravity could take place even at the millimeter scale. So, the existing constraints on new gravitational physics at small distances are really very modest indeed.
Bonus: What Would Be The Wavelength and Amplitude of Gravitational Waves?
The range of gravitational wave frequencies that could conceivably be detected experimentally, and that would have a conceivable astronomy source, runs from about 10^-7 Hertz to 10^11 Hertz (a Hertz is one cycle per second, and gravitational waves would propagate at the speed of light in general relativity). A light second is about 186,000 miles (about 982,080,000 feet).
A 10^-7 Hertz gravity wave would have a wavelength of about 31% of a light year (about a tenth of the distance to the nearest star to Earth other than the Sun). A 10^11 Hertz gravity wave would have a wavelength of about 1/8th of an inch. In this direction, limitations on the minimum wavelength of a gravity wave come from a lack of sufficiently high energy sources for it in the universe.
The amplitude of a gravitational wave in general relativity that we might expect to encounter passing through Earth might squish or stretch the fabric of space-time by a factor on the order of one part in 10^20 between its peak and trough, which relative to the radius of the Earth would be a shift on the order of 10^-13 to 10^-14 meters, roughly the diameter of an atomic nucleus.
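These wavelength and displacement figures follow from wavelength = c/f and displacement = strain * length; a quick sketch using the numbers above:

```python
c = 299_792_458.0  # speed of light, m/s
LY = 9.4607e15     # one light year in meters
R_EARTH = 6.371e6  # radius of the Earth in meters

for f in (1e-7, 1e11):  # the frequency extremes discussed above, in Hertz
    wavelength = c / f
    print(f"{f:.0e} Hz -> {wavelength:.2e} m ({wavelength / LY:.2f} light years)")

strain = 1e-20  # order-of-magnitude peak-to-trough strain from the text
print(f"displacement over Earth's radius: {strain * R_EARTH:.1e} m")  # ~6e-14 m
```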
Friday, March 21, 2014
Higgs Boson Width Less Than 17.43 MeV
The predicted width of the Standard Model Higgs boson (another way of describing its mean lifetime: a small width implies a long lifetime, a large width implies a short lifetime) is 4.15 MeV (compared to about 1,500 MeV for a top quark and about 2,500 MeV for a Z boson).
A 4.15 MeV width implies a mean lifetime on the order of 1.6*10^-22 seconds, tens of times longer than the typical time scale at which hadronization occurs, for example, but still very ephemeral.
The Higgs boson width is hard to measure directly. The direct measurement of the width at the LHC limits it only to about 3,000 MeV or less. But, the CMS experiment at the Large Hadron Collider (LHC) has used some clever data interpretation to determine that the observed Higgs boson width cannot be more than 4.2 times the expected value with 95% confidence (i.e. not more than 17.43 MeV).
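The width-to-lifetime conversion is just tau = hbar/Gamma, and the headline bound is just 4.2 times the predicted width; both fit in a few lines (using the widths quoted above):

```python
HBAR = 6.582e-22  # reduced Planck constant in MeV * s

def lifetime(width_mev):
    """Mean lifetime in seconds from a decay width in MeV (tau = hbar / Gamma)."""
    return HBAR / width_mev

print(f"Higgs (4.15 MeV): {lifetime(4.15):.1e} s")  # ~1.6e-22 s
print(f"CMS 95% bound: {4.2 * 4.15:.2f} MeV")       # 17.43 MeV
```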
This estimate confirms two-year-old estimates based on comparing the observed cross-sections to the predicted cross-sections of a sampling of particular Higgs boson decays.
This dramatically narrows the range of possible non-Standard Model Higgs boson decays, because more possible decay paths increase the width of a particle.
Why Do We Need Beyond The Standard Model Physics?
Five Reasons That We Know The Standard Model And General Relativity Aren't Complete
1. The Standard Model of particle physics is consistent with special relativity (i.e. the adjustments to the rate at which time flows and to momentum relative to Newtonian mechanics associated with particles that move at speeds near the speed of light). But, the Standard Model is not theoretically consistent with gravity and does not provide a quantum mechanical theory of gravity.
These issues are particularly acute at very small distances, at very high energy scales, and in very strong gravitational fields. (Fortunately, in most practical circumstances, the Standard Model alone, or general relativity alone, can be deployed to analyze a question in circumstances where we can be comfortable that quantum effects or relativistic effects, respectively play an insignificant role.)
2. Phenomena attributed to dark matter are observed. No Standard Model particle, fundamental or composite, appears to be capable of providing a good fit to the inferred behavior of dark matter, and no Standard Model fermion or term in the equations of general relativity can explain these phenomena. Needless to say, to the extent that dark matter particles do exist, we don't know how they are created.
3. Neither the Standard Model nor general relativity provides an explanation for cosmological inflation in the wake of the Big Bang, despite mounting evidence, for example from BICEP2, that inflation or some other similarly remarkable thing happened.
4. There are approximately 4*10^79 baryons in the universe (approximately 1/8th are neutrons and approximately 7/8ths are protons). The ratio of anti-baryons to baryons in the universe is on the order of 10^-11 or less.
We have no Standard Model explanation for the baryon asymmetry of the universe, in other words, why there are many more quarks than anti-quarks in the universe, which in the language of quantum physics is described technically as the question of how the universe acquired a substantial non-zero baryon number (baryon number is defined as the number of quarks, minus the number of anti-quarks, divided by three). This means that we need a beyond the Standard Model baryogenesis mechanism (assuming that the baryon number and lepton number generated in the Big Bang were zero).
There is a Standard Model process that can give rise to baryogenesis, called a sphaleron process, but the consensus of theorists who have studied it is that it could not give rise to a 10^-11 anti-baryon to baryon ratio within the parameters of the mainstream cosmology theories that we are aware of at this time.
Note that baryon asymmetry itself (and likewise charged lepton asymmetry) really isn't all that remarkable in a universe that is not a vacuum. We would expect matter-antimatter annihilation to convert to energy, over time, all of the quarks in the universe that have corresponding anti-quarks. Assuming that we can estimate the total mass-energy of the universe that is not captured in pure matter, and that mass-energy conservation holds, we can even estimate what percentage of the mass-energy from the Big Bang either never entered a matter state, or generated particles and anti-particles that subsequently annihilated each other. But, finding a way for such an extreme asymmetry to arise isn't obvious when one assumes that the Big Bang starts in a pure energy state that is neutral between matter and antimatter and has no net fermion numbers.
5. We don't know whether the universe has a non-zero lepton number (i.e. whether the sum of charged leptons and neutrinos in the universe greatly exceeds the number of charged anti-leptons and anti-neutrinos in the universe), but this is very likely.
There are about 3.5*10^79 charged leptons in the universe. The ratio of charged anti-leptons to charged leptons is almost exactly the same as the ratio of anti-baryons to baryons in the universe. There are also about 1.2*10^89 neutrinos in the universe, and we don't have reliable measurements of the ratio of anti-neutrinos to neutrinos, although the early indications are that the number of anti-neutrinos exceeds the number of neutrinos by many orders of magnitude more than the baryon asymmetry in the universe. If the ratio of anti-neutrinos to neutrinos in the universe differs from 1 by even 10^-9, then we need a beyond the Standard Model explanation for leptogenesis (assuming that the baryon number and lepton number generated in the Big Bang were zero). And, if this asymmetry were sufficiently great, it could not be generated by a sphaleron process.
The mass of all of the dark matter in the universe is about 2*10^86 keV. In warm dark matter scenarios with 2 keV dark matter particles, there are about 1,200 neutrinos in the universe for every dark matter particle. In cold dark matter scenarios with dark matter particles with masses on the order of 20 GeV, there are about 12,000,000,000 neutrinos for every dark matter particle. If dark matter particles are "thermal relics" and have a mass on the order of 1 eV to 10 eV, which they would need in order to balance out any significant imbalance between neutrinos and anti-neutrinos in the universe, they would be "hot dark matter" particles, which could not reproduce the observed dark matter phenomena in the universe (in principle, such light dark matter particles are possible if they are generated non-thermally and have much lower mean velocities than thermal relic dark matter would). Thus, even if dark matter particles carried a positive lepton number, this is almost certainly not sufficient to balance the lepton number of the universe.
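Those neutrinos-per-dark-matter-particle ratios follow directly from the figures just quoted (about 1.2*10^89 neutrinos and a total dark matter mass of about 2*10^86 keV):

```python
N_NU = 1.2e89    # approximate number of neutrinos in the universe
M_DM_KEV = 2e86  # approximate total dark matter mass in keV

for label, m_kev in (("warm, 2 keV", 2.0), ("cold, 20 GeV", 2e7)):
    n_dm = M_DM_KEV / m_kev  # implied number of dark matter particles
    print(f"{label}: {N_NU / n_dm:.1e} neutrinos per dark matter particle")
# prints ~1.2e3 and ~1.2e10, i.e. about 1,200 and about 12 billion
```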
Also, the Standard Model sphaleron process, which is the only means of baryogenesis and leptogenesis in the Standard Model, conserves the quantity B-L (baryon number minus lepton number in the universe).
If there is even a 1% excess of anti-neutrinos over neutrinos in the universe (and the reality is that the excess is probably profoundly greater than that), then B is much greater than zero, L is much less than zero, and B-L is therefore very far from zero.
Ten More Reasons To Explore Beyond The Standard Model Physics Or Within The Standard Model Physics
1. There are very strong hints that the experimentally measured parameters of the Standard Model have deeper connections to each other than we understand. Understanding these relationships would both deepen our understanding of the laws of nature, and allow us to use more precisely measured experimental constants to obtain more precise values for less precisely measured experimental constants of the Standard Model.
2. The discovery of additional relationships and symmetries in the laws of physics might make it possible to greatly simplify the calculations involved in applying the Standard Model.
3. There are processes, such as the mechanism by which neutrinos acquire mass, neutrino oscillation, and the hadronic physics of large classes of mesons (and possibly some exotic baryons as well), that we do not understand well within the Standard Model, either theoretically or in an operational manner that we can use to make practical calculations. This suggests that either there are some missing or not quite correct pieces in our current understanding of neutrino physics and QCD, or that there are some subtle corollaries of the existing equations of Standard Model physics that we have not yet recognized.
4. We have not been able to experimentally validate the Standard Model and general relativity in circumstances of extremely high energies (especially those approaching the "GUT" scale of 10^16 GeV and the Planck scale of 10^18 GeV), extremely strong gravitational fields, and extremely short distances (especially at the Planck scale), where it is plausible to think, for a variety of theoretical reasons, that new physics may be lurking.
High energy scales present in the very early universe are a natural place to expect beyond the Standard Model and beyond general relativity physics that could help explain inflation, baryogenesis, leptogenesis, the creation of dark matter, dark energy, and the topology of the universe. Many of the apparent discrepancies between general relativity and the Standard Model also manifest themselves in this regime in ill understood ways.
5. Multiple decades of theoretical research into supersymmetric theories, supergravity, and string theories suggest that there are certain properties of any kind of laws of physics that could explain the Standard Model and general relativity at once in a mathematically consistent way. In general, these theories point to the strong possibility that there is a deeper reality with more than the familiar three dimensions of space and one dimension of time, and to the likelihood of additional possible particle states at high energies.
6. We have not ruled out the possibility that space-time does not actually have a smooth, continuous, local, real and causal structure. Indeed, entanglement phenomena in quantum mechanics appear to strongly imply that the laws of the universe cannot simultaneously be local, real and causal. The equations of quantum mechanics can give us results, but they don't tell us which of these properties fails in order to give rise to them, or, in the alternative, why these concepts are conceptually flawed. There is effectively a "black box" between the starting point and the observed end point in quantum mechanical equations.
7. It is very plausible that our understanding of the distinction between particles and the vacuum may be inadequate or flawed. Particles may actually be localized excited states of space-time, rather than separate objects existing within a separate background of space-time. A better understanding of this might explain, for example, why the Higgs field's vacuum expectation value does not give rise to a cosmological constant much larger than is observed, or why multiple different conservation laws (like conservation of lepton number and baryon number) flow from some deeper principle derived from particles that are excited states of the vacuum rather than objects within it.
8. We don't understand the meaning of the "arrows of time" in the laws of physics very well. At the level of the fundamental physics equations, only CP violation is not time symmetric, and it observes CPT conservation, so violations of time symmetry take place only in very narrow circumstances.
9. The path integrals that govern the propagation of particles in the Standard Model sum up how probability amplitudes evolve for every possible path that a particle could take from point A to point B. Surprisingly, these paths must include paths that would seem to be impossible in order to produce the correct answers.
For example, the path integral for the propagation of a photon must consider paths in which the photon travels at greater than, and less than, the speed of light, despite the ordinary assumption of general relativity that massless particles always travel at exactly the speed of light. Similarly, so long as mass-energy is conserved in the end state of a path, intermediate steps in the path of a propagating particle in the Standard Model can "borrow" mass-energy in what is called a "virtual particle" path. These phenomena, which include the "tunneling" that makes transistors work and the oscillations between neutral meson states, are absolutely critical to how life works as we observe it on a day to day basis.
This suggests that our concept of many of the laws of nature as "absolute" is merely a classical approximation of how the universe really works. The disparities between classically permitted paths and those that must be considered in quantum physical path integrals hint at what kind of deeper structure the universe might hold, which we usually ignore because these effects usually average out.
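For those who like to see the idea in concrete form, here is a toy numerical sketch of a discretized path integral for a free particle (my own illustration, not anything from the literature): the amplitude is a sum of exp(iS) over sampled paths, and nothing restricts the sampled paths to classically sensible speeds.

```python
import numpy as np

# Toy discretized path integral for a free particle in one dimension, in units
# where m = hbar = 1. We sum exp(i*S) over a crude Monte Carlo sample of paths
# from x = 0 at t = 0 to x = X at t = T. This only illustrates the idea above;
# a real path integral requires careful continuum limits and normalization.

rng = np.random.default_rng(0)
N_STEPS = 8          # time slices per path
N_PATHS = 100_000    # number of randomly sampled paths
T, X = 1.0, 1.0      # total time and fixed endpoint
dt = T / N_STEPS

def action(path):
    """Discretized free-particle action S = sum over slices of 0.5*(dx/dt)**2 * dt."""
    dx = np.diff(path)
    return 0.5 * np.sum((dx / dt) ** 2) * dt

amplitude = 0j
for _ in range(N_PATHS):
    # Endpoints are fixed; interior points are sampled from an arbitrary box.
    # Wildly "impossible" paths (far faster than the classical speed X/T)
    # contribute too; their rapidly varying phases largely cancel.
    interior = rng.uniform(-2.0, 3.0, N_STEPS - 1)
    path = np.concatenate(([0.0], interior, [X]))
    amplitude += np.exp(1j * action(path))

print("unnormalized amplitude:", amplitude / N_PATHS)
```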
10. It is not clear that the concept of an "observation" that collapses the wave function of a quantum mechanical particle is rigorously defined in the leading Copenhagen interpretation of quantum physics.
Justifications For Beyond The Standard Model Physics That Don't Impress Me
1. Many theorists consider issues in quantum physics such as the "hierarchy problem", the "strong CP problem", "fine tuning" and "naturalness" to be important motivations for further research into beyond the Standard Model physics, and sometimes seek explanations of the parameters of the Standard Model or general relativity based upon concepts like the anthropic principle (i.e. the laws of nature must be such that we can exist to observe them) and the multiverse (i.e. that our universe is one of an ensemble of all conceptually possible universes).
Neither these motivations, nor these explanations, impress me.
These motivations, essentially, presume that we have some way of knowing what values of Standard Model parameters (or BSM parameters) to expect.
If Nature decides that the Higgs boson mass should be just so, or that the CP violating parameter of the QCD equations should be zero, or that neutrinos should have masses wildly smaller than the masses of other fermions, that is Her prerogative, and She doesn't need any additional laws of physics to set them at those values. If these choices seem "unnatural" or "fine tuned" then clearly the problem is with the way we are looking at the situation, since what is, is.
It would be really cool if the gauge couplings of the Standard Model unified, but all available evidence suggests that they do not; maybe that bit of numerology is just barking up the wrong tree and seeing deep meaning in a mere near coincidence. If the Higgs boson mass seems wildly fine tuned, then maybe our hypothesis about how it is generated is wrong, and it can instead be derived from one or more much simpler mechanisms in whose context its value seems far more natural. The unnatural aspect may have much more to do with the highly contorted higher order loop approach we use to determine its mass than with anything else.
Looking for deeper relationships between parameters we know is one thing, assuming that there must be new pieces to the puzzle based purely on a desire to make Nature fit some arbitrary notion of mathematical beauty is another.
These explanations (the anthropic principle and the multiverse), meanwhile, are basically unscientific ways of generating just-so stories.
2. I am similarly unimpressed with those who believe that M-Theory is the only possible path to fundamental physics truth.
In essence, M-Theory and its low energy supergravity approximations make assumptions about the right way to merge gravity with the rest of quantum physics that have not been very fruitful, beyond the observation (which is not particular to string theory) that a fundamental massless spin-2 gauge boson has the right properties and the right number of degrees of freedom to largely reproduce the gravitational attributes of general relativity.
Efforts to find specific variations of M-theory that reproduce the particles and interactions that we observe, while not predicting particles and interactions that we do not observe, have not been successful after several decades of theoretical work by a large share of the entire theoretical physics community. None of the predictions particular to what supersymmetry or string theory adds to the Standard Model have been borne out.
Many aspects of M-theory, such as its infamous eleven dimensions, appear to be artifacts of trying to integrate gravity into a unified "theory of everything" in a manner that dilutes it relative to the other Standard Model forces. Thus, some of the immense complexity of M-theories basically flows from a desire to tweak the magnitude of one coupling constant so that a unified approach can be taken. The price of consistency on this front is high relative to its benefits.
Thursday, March 20, 2014
Precision Of Top Quark Mass Measurement Improved
A new analysis combines all of the top quark mass measurement data from the CDF and D0 experiments at the now closed Tevatron collider and the ATLAS and CMS experiments at the Large Hadron Collider (LHC).
Bottom line: the mass of the top quark is 173,340 ± 760 MeV (the combined error is ± 270 MeV statistical and ± 710 MeV systematic). This is a precision of one part in 228 (i.e. ± 0.044%).
The top quark mass is the only quark mass that can be directly measured. All other quark masses must be inferred from the masses of hadrons believed to contain those quarks in a model dependent manner.
The top quark mass is the most precisely known quark mass on a percentage basis. It is known with only slightly less precision than the Higgs boson mass, and more precisely than any of the neutrino masses (the masses of the W boson, Z boson and charged leptons, as well as the Higgs vacuum expectation value ("vev"), are known more precisely).
But, as explained below, about 61% of the uncertainty in the sum of the absolute values of the Standard Model fundamental particle masses (including the Higgs vev), and about 72% of the uncertainty in the sum of the squares of the Standard Model fundamental particle masses (including the square of the Higgs vev), is due to uncertainty in the mass of the top quark. All but 6.3% and 0.85%, respectively, of the balance of these uncertainties is due to uncertainty in the mass of the Higgs boson.
Thus, even a modest improvement in the precision of the top quark mass considerably improves the overall precision of the measurements of the Standard Model fundamental particle masses.
Previous Estimates of the Top Quark Mass
Previous Direct Estimates of the Top Quark Mass
The previous best estimate based upon direct measurements, from the Particle Data Group, had been 173,070 ± 888 MeV (combined; ± 520 MeV statistical, ± 720 MeV systematic).
The new result is 270 MeV higher than the previous best estimate (0.3 standard deviations from it, which is unsurprising since the new result uses most of the same data as the old one and merely analyzes it more rigorously and precisely) and has a 14% smaller margin of error (with almost all of the improvement coming from a much reduced statistical error in the pooled data set).
The final estimate of the top quark mass from the Tevatron alone (CDF and D0 combined) had been 173,200 ± 600 (statistical) ± 800 (systematic) MeV.
Previous Top Quark Mass Global Fits
A previous global fit based on electroweak observables, including early LHC data in 2012, had come up with a top quark mass of 173,520 ± 880 MeV, which was about 450 MeV more than the previous best estimate and about 180 MeV more than the current combined analysis. These global fits are based upon Standard Model relationships between the Higgs boson mass, top quark mass and W boson mass. The fits are most sensitive to the W boson mass, then to the top quark mass, and are least sensitive to the precise value of the Higgs boson mass.
The diagonal line in the global fit plot shows the combinations of the top quark mass and W boson mass that are expected in the Standard Model at a Higgs boson mass of about 125,000 MeV, with the thickness of the line reflecting the range of uncertainty in that measurement. The latest estimate of the Higgs boson mass shifts that line imperceptibly to the right, while the latest estimate of the top quark mass shifts the center point of the one standard deviation confidence interval ellipse to the right by about a third of a hash mark. The greatest impact of a global fit is to favor a low end estimate of the W boson mass of about 80,362 MeV, rather than the current best estimate from the Particle Data Group of 80,385 ± 15 MeV.
The Extended Koide's Rule Fit To the Top Quark Mass (and Other Quark Masses)
An extended Koide's rule estimate of the top quark mass using only the electron and muon masses as inputs, predicted a top quark mass of 173,263.947 ± 0.006 MeV, which is about 80 MeV less than the latest direct measurement. This is within 0.1 standard deviations from the new directly measured value.
Prior to the new combined measurement, the extended Koide's rule estimate was 0.22 standard deviations from the measured value.
The fact that the extended Koide's rule estimate has grown more accurate as the experimental value has been measured more precisely is impressive. Indeed, the extended Koide's rule estimate is closer to the new measurement than it was to the old one.
On the other hand, new precision estimates of the bottom and charm quark masses, mentioned below, increase the tension between the experimentally measured values of these masses and the extended Koide's rule estimates for them (from 0.58 sigma to 3.57 sigma for the bottom quark, and from 3.38 sigma to 14.4 sigma for the charm quark). In the case of the bottom quark, the extended Koide's rule estimate is about 0.7% too high. In the case of the charm quark, the extended Koide's rule estimate is about 6.8% too high.
The t-b-c triple's Koide ratio, which is predicted to be 2/3 (0.6667 to four significant digits), is 0.6695 at the PDG values and 0.6699 using the new combined value for the t quark mass and the new precision values for the b quark and c quark masses (a short script below reproduces these figures). This is still a better fit than any of the other quark triples, although it is a less good fit than it was before the new precision measurements were reported.
The accuracy of the extended Koide's rule estimates for the strange, down and up quark masses is unchanged, since there are no new estimates of these masses. The strange quark estimate is 2.9% low (0.55 standard deviations), the down quark estimate is 10.8% high (1.3 standard deviations), and the up quark estimate is 98.5% low (2.26 standard deviations).
The original Koide's rule predicts a mass of the tau lepton from the electron and muon masses that is within 0.93 standard deviations of the currently measured value.
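Since the Koide ratio comes up repeatedly above, here is a minimal sketch of the calculation, using the mass values quoted in this post; the function simply evaluates Q = (m1 + m2 + m3)/(√m1 + √m2 + √m3)², which Koide-type rules predict to be exactly 2/3.

```python
from math import sqrt

def koide_ratio(m1, m2, m3):
    """Koide ratio Q = (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))**2.

    Koide-type mass rules predict Q = 2/3 exactly for certain triples.
    """
    return (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3)) ** 2

# Charged leptons (MeV): Q is strikingly close to 2/3.
print(koide_ratio(0.510998928, 105.6583715, 1776.82))  # ~0.6667

# t-b-c triple with the new combined top mass and new precision b and c masses (MeV).
print(koide_ratio(173_340, 4_169, 1_273))              # ~0.6699

# t-b-c triple at the previous PDG values (MeV).
print(koide_ratio(173_070, 4_180, 1_275))              # ~0.6695
```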
Other Standard Model Fundamental Particle Mass Measurements
Other Quark Mass Uncertainties
All of the other measured values of the Standard Model quark masses are model dependent estimates based on QCD and hadron masses, and definitional issues arise because a quark's mass is a function of the energy scale at which it is measured. The pole mass of a quark is, roughly speaking, the mass of a quark at an energy scale equal to its own rest mass, and this is the number quoted for the bottom quark and charm quark, as well as for the top quark. The pole masses of the strange quark, down quark and up quark are ill defined, so their masses in what is known as the MS scheme at an energy scale of 2 GeV (i.e. slightly more than the mass of two protons, two neutrons, or one proton and one neutron) are normally used instead.
In the case of the bottom quark, the second heaviest of the quarks, the PDG estimate of its mass has an uncertainty of 30 MeV (4,180 ± 30 MeV), but a new and consistent precision estimate using improved QCD approaches claims an uncertainty of just 8 MeV (4,169 ± 8 MeV). Similarly, in the case of the charm quark, the third heaviest of the quarks, the PDG estimate has an uncertainty of 25 MeV (1,275 ± 25 MeV), but a new and consistent precision estimate using improved QCD approaches claims an uncertainty of just 6 MeV (1,273 ± 6 MeV). The best estimates of the up and down quark masses are about 2.3 +0.7/-0.5 MeV (about 25% precision) and 4.8 +0.5/-0.3 MeV (about 8% precision) respectively, but in each case the uncertainty in absolute terms is less than 1 MeV.
The strange quark mass (95 ± 5 MeV per the Particle Data Group) is known only to a roughly 5% precision. The QCD approaches used to reduce uncertainties in bottom quark and charm quark mass produce estimates of only about 10% precision in the case of the strange quark mass.
Charged Lepton Mass Uncertainties
The measured tau lepton mass is 1776.82 MeV, with an uncertainty of 0.16 MeV. The measured muon mass is 105.6583715 MeV, with an uncertainty of 0.0000035 MeV. The measured electron mass is 0.510998928 MeV, with an uncertainty of 0.000000011 MeV.
Standard Model Neutrino Mass Uncertainties
The two mass differences between the neutrino mass states are known to a precision of better than 10^-5 MeV and 10^-4 MeV respectively (the smaller one is about 0.007 eV, while the larger one is about 0.046 eV).
The absolute value of the lightest neutrino mass has been directly measured to be less than 2*10^-6 MeV (i.e. 2 eV), and is constrained in a model dependent way by cosmic background radiation and other astronomy measurements to be less than 10^-7 MeV (i.e. 0.1 eV). If the neutrinos have a "normal" rather than an "inverted" mass hierarchy, the lightest neutrino mass is probably on the order of 0.001 eV.
Gauge Boson Mass and Higgs vev Uncertainties
The newly discovered Higgs boson has a global best fit measured mass of 125,900 MeV with an uncertainty of about ± 400 MeV, which is just over half of the absolute value of the uncertainty in the top quark mass. This is 79 MeV lower than the expectation if the Higgs boson mass is exactly equal to the W boson mass plus 1/2 of the Z boson mass (about 0.2 standard deviations from the measured value).
The accepted value of the Higgs vev is 246,227.9579 MeV, with an uncertainty on the order of 0.001 MeV. It is determined via measurements of the mean lifetime of the muon, since the Higgs vev is equal to (√2 G_F)^(-1/2), where G_F is the Fermi coupling constant. Both the Higgs vev and the Fermi coupling constant are functions of the W boson mass and the weak force coupling constant, but the combined impact of these two factors is known much more precisely than the exact value of either of them.
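As a sanity check, the vev can be computed in one line from the Fermi coupling constant; here I assume the PDG value of G_F, and any small discrepancy with the figure quoted above reflects the precise G_F value and conventions used.

```python
from math import sqrt

G_F = 1.1663787e-5  # Fermi coupling constant in GeV^-2 (PDG value; my input here)

# Higgs vev v = (sqrt(2) * G_F)**(-1/2), converted from GeV to MeV.
vev_mev = (sqrt(2) * G_F) ** -0.5 * 1000
print(f"Higgs vev = {vev_mev:,.1f} MeV")  # ~246,220 MeV
```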
As noted above, the uncertainty in the measured mass of the W boson is about 15 MeV (from a base number 80,385 MeV). The uncertainty in the Z boson mass is 2.1 MeV (from a base number of 91,187.6 MeV).
Why Is The Accurate Determination Of The Top Quark Mass So Important?
The Absolute Value of the Top Quark Mass Uncertainty Is The Largest In The Standard Model
In evaluating relationships between Standard Model fundamental particle masses, the uncertainty in the top quark mass is one of the dominant sources of uncertainty: the top quark is the heaviest fundamental particle in the Standard Model, and the absolute value of the uncertainty in its mass is greater than that for any other Standard Model particle.
For example, the strange quark mass measurement is about 100 times less precise on a percentage basis than the top quark mass measurement, but the absolute uncertainty in the top quark mass is still 760 MeV, compared to just 5 MeV for the strange quark mass.
The absolute value of the uncertainty in the top quark mass (760 MeV, which is about 61% of the total) is greater than the sum of the absolute values of the uncertainties of all of the other Standard Model fundamental particles combined (478.261 MeV, which is about 39% of the total), of which 400 MeV is the uncertainty in the Higgs boson mass and 78.261 MeV (about 6.3% of the total) is everything else.
Thus, two fundamental particle mass measurements, one of which is just a couple of years old, account for 93.7% of all of the absolute value of the uncertainty in Standard Model particle mass measurements.
The Top Mass Squared Uncertainty Is Even More Dominant
The dominance of any imprecision in the top quark mass in overall model fits is further amplified in cases where the quantities compared are the squares of the masses rather than the masses themselves (e.g. comparing the sum of the squares of the Standard Model particle masses to the almost precisely identical square of the vacuum expectation value of the Higgs field).
About 72% of this imprecision is due to the top quark mass and about 99.15% of the imprecision is due to the top quark mass and Higgs boson masses combined.
The difference between the high end and the low end of the square of the current combined estimate of each fundamental particle mass (i.e. its uncertainty, which for symmetric error bars works out to (m + σ)² − (m − σ)² = 4mσ) is as follows (in MeV^2, using PDG error bars for cases other than the top quark; a short script after the list reproduces these figures):
t quark mass squared: 526,953,600
All other Standard Model fundamental particles mass squared combined: 207,683,572 (about 28% of the total).
Higgs boson mass squared: 201,440,000
All other masses square except the t quark and Higgs boson masses: 6,243,572 (about 0.85% of the total).
W boson mass squared: 4,823,100
Z boson mass squared: 765,976
b quark mass squared: 501,600
c quark mass squared: 127,500
tau lepton mass squared: 22,497
s quark mass squared: 1,900
Higgs vev squared: 984.912
d quark mass squared: 7.84
u quark mass squared: 5.76
muon mass squared: 0.0015
electron mass squared: 0.0000000225
neutrino masses squared (combined): < 0.0000000003
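The table and the percentage shares can be reproduced with a few lines of Python, using the central values and error bars quoted in this post. (The tau entry above appears to use a larger error bar than the PDG's 0.16 MeV, so that row comes out smaller here, without materially changing the shares.)

```python
# Reproduce the squared-mass uncertainty table: each entry is
# (m + sigma_plus)**2 - (m - sigma_minus)**2, in MeV^2, using the central
# values and error bars quoted in this post.
particles = {
    # name: (central mass in MeV, +error, -error)
    "t quark":     (173_340.0, 760.0, 760.0),
    "Higgs boson": (125_900.0, 400.0, 400.0),
    "W boson":     (80_385.0, 15.0, 15.0),
    "Z boson":     (91_187.6, 2.1, 2.1),
    "b quark":     (4_180.0, 30.0, 30.0),
    "c quark":     (1_275.0, 25.0, 25.0),
    "tau lepton":  (1_776.82, 0.16, 0.16),
    "s quark":     (95.0, 5.0, 5.0),
    "Higgs vev":   (246_227.9579, 0.001, 0.001),
    "d quark":     (4.8, 0.5, 0.3),
    "u quark":     (2.3, 0.7, 0.5),
    "muon":        (105.6583715, 0.0000035, 0.0000035),
    "electron":    (0.510998928, 0.000000011, 0.000000011),
}

spread = {n: (m + p) ** 2 - (m - q) ** 2 for n, (m, p, q) in particles.items()}
total = sum(spread.values())

for name, s in sorted(spread.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {s:20,.4f} MeV^2  ({100 * s / total:7.3f}% of total)")

top_and_higgs = spread["t quark"] + spread["Higgs boson"]
print(f"t quark share:   {100 * spread['t quark'] / total:.1f}%")  # ~72%
print(f"t + Higgs share: {100 * top_and_higgs / total:.2f}%")      # ~99.15%
```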
A Final Prediction Re Top Quark Mass
1. Assume that the W boson mass has its global fit value of 80,362 MeV, rather than its best fit measured value of 80,385 MeV (a 1.53 standard deviation shift).
2. Assume that the Higgs boson mass is exactly equal to the W boson mass plus one half of the Z boson mass (i.e. that it is 125,955.8 MeV) (a 0.14 standard deviation shift).
3. Assume that the Tau lepton mass has its Koide's rule predicted value of 1776.97 MeV, rather than the 1776.82 MeV value that it is measured at today (a 0.93 standard deviation shift).
4. Assume that the bottom quark has the precision value of 4,169 MeV (a 0.37 standard deviation shift).
5. Assume that the charm quark has the precision value of 1,273 MeV (a 0.08 standard deviation shift).
6. Assume that the sum of the squares of the masses of the fundamental particles in the Standard Model equals the square of the Higgs vev.
7. Assume that the electron, muon, up quark, down quark, and strange quark have their PDG values and that the neutrinos have masses of less than 2 eV each.
8. Assume that there are no fundamental particles beyond the Standard Model that contribute to the Higgs vev.
What is the best fit value for the top quark mass?
Answer: 173,112.5 ± 2.5 MeV (a 0.29 standard deviation shift of 227.5 MeV from the newly announced value).
N.B. The value of the top quark mass necessary to make the sum of the squares of the fermion masses equal to the sum of the squares of the boson masses would be about 174,974 MeV under the same set of assumptions, about 174,646 MeV with a Higgs boson mass at the 125,500 MeV low end of the current 68% confidence interval for the Higgs boson mass, and about 175,222 MeV with a Higgs boson mass at the 126,300 MeV high end of the 68% confidence interval for the Higgs boson mass. These are about 2.07, 1.65, and 2.38 standard deviations, respectively, from the measured value of the mass of the top quark, and thus are not grossly inconsistent with the evidence, despite being a less good fit than the Higgs vev contribution hypothesis (which I also find more compelling theoretically). But, in order to achieve this, the sum of the squares of all of the fundamental Standard Model particle masses must be about 0.7% or more in excess of the square of the Higgs vev.
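The arithmetic behind both of these numbers is just solving m_t² = v² − Σ(other masses²), or m_t² = Σ(boson masses²) − Σ(other fermion masses²). A minimal sketch, treating the neutrino masses as zero (which is negligible at this precision) and using the assumed input values listed above:

```python
from math import sqrt

# Inputs (MeV) under the assumptions listed above.
vev   = 246_227.9579
w     = 80_362.0              # global fit W boson mass (assumption 1)
z     = 91_187.6
higgs = w + z / 2             # assumption 2: 125,955.8 MeV
fermion_masses = [
    1_776.97,                 # tau at its Koide's rule value (assumption 3)
    4_169.0, 1_273.0,         # precision b and c values (assumptions 4 and 5)
    95.0, 4.8, 2.3,           # s, d, u at PDG values (assumption 7)
    105.6583715, 0.510998928, # muon and electron (assumption 7)
]                             # neutrinos (< 2 eV each) are neglected

boson_sq   = higgs ** 2 + w ** 2 + z ** 2
fermion_sq = sum(m ** 2 for m in fermion_masses)

# Assumption 6: the sum of the squared masses equals the squared Higgs vev.
m_top = sqrt(vev ** 2 - boson_sq - fermion_sq)
print(f"vev-based best fit:  {m_top:,.1f} MeV")      # ~173,112.5

# Alternative fit from the N.B.: squared fermion masses = squared boson masses.
m_top_fb = sqrt(boson_sq - fermion_sq)
print(f"fermion = boson fit: {m_top_fb:,.1f} MeV")   # ~174,974
```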
While both relationships are possible within experimental error, they cannot both be true simultaneously, despite being quite similar, at least for the pole masses. It is possible to imagine some running mass scale where they might coincide, however, and if there is some energy scale at which this happens, it might be viewed as the energy scale at which fermion-boson symmetry (much like that of supersymmetry, but without the extra particles) breaks down.
The running of the charged lepton masses is almost 2% up to the top quark mass, and 3.6% over fourteen orders of magnitude. In contrast, the Higgs boson self-coupling runs to zero at the GUT scale, and the W and Z boson masses at high energies appear to be functions of the running of the electromagnetic and weak force coupling constants. The electromagnetic coupling constant gets stronger (from about 1/137 to about 1/125 up to the electroweak scale), while the weak force coupling constant gets weaker. This running appears to be more dramatic than the running of the fermion masses, so the equalization of masses between bosons and fermions shouldn't require too high an energy scale.
Footnote: The measured Higgs boson mass is very nearly the mass that minimizes the two-loop corrections necessary to convert the mass of a gauge boson from the MS scheme to a pole mass scheme.
Tuesday, March 18, 2014
Tensor-To-Scalar Ratio From Primordial B Waves Is 0.2 +0.07/-0.05
A core concept of cosmology is "inflation", which has been a feature of mainstream cosmology theory for three or four decades. The notion of inflation is that the universe expanded at a rate faster than the speed of light for a fraction of a second, at a time when the entire post-Big Bang universe was smaller than downtown Denver, and that this explains why the universe is unnaturally homogeneous.
But, there are several hundred different varieties of inflation theories out there, at most one of which is actually true. There are several key parameters describing the way that inflation unfolded in the first moments after the Big Bang that allow one to distinguish between many of these several hundred theories.
Planck data and previous cosmic background radiation studies established that one key parameter, the exponent describing scale dependence in cosmological evolution (the scalar spectral index), is on the low side of the range between 0.96 and 0.97, where 1.0 is scale invariance.
A second key parameter, the number of "e-folds", roughly speaking measures how long the early phase of the expansion of the universe called "inflation" lasted; previous data have favored a value for this parameter of somewhere from 50 to 60.
A third key parameter, a measurement of which was announced yesterday, is the tensor-to-scalar ratio, "r", which can be measured by observing the polarization of the cosmic background radiation. Basically, r is important in distinguishing between different proposed versions of cosmological inflation, and between substitutes for it that explain the features of the universe that cosmological inflation theories try to explain.
Earlier combined results from data including the preliminary Planck satellite data (without the polarization data, which will be released later this year) found that the value of r was not inconsistent with zero (with a best fit of about 0.01) and determined that r was less than 0.11 at the two sigma level.
A very low value of r favored the simplest inflation scenarios, such as Higgs inflation, where inflation arises from the evolution of the Higgs field of the Standard Model.
A report released yesterday by the BICEP-2 observatory at the South Pole found that r=0.20 + 0.07/-0.05.
I had predicted the weekend before, based on an analysis of the rumors and the previous science in the field, that the result would be r=0.20±0.07, plus or minus 0.01 in either of the numbers. I was within my margin of error on the average of the two error bars, and was exactly right except for a difference of two hundredths in the downside error bar.
In particular, this report was based on an observation of "primordial B waves" at roughly 4 sigma significance relative to the null hypothesis of zero. The news reports claim this as the first experimental evidence of gravitational waves, which are predicted by general relativity, but this isn't really true. There was already evidence of gravitational waves from observations of binary star systems. But, this does confirm the existence of gravitational waves with a second independent data point.
The BICEP-2 figure, taken at face value, naively implies that the energy scale of inflation was on the order of 2*10^16 GeV, which is about the Grand Unified Theory scale at which the coupling constants of SUSY-like theories converge, and about 1% of the Planck scale (this inference is highly model dependent). The BICEP-2 data is the first experimental evidence for the existence of some key scale at which new physics emerges between the electroweak scale (ca. 200 GeV) and the Planck scale (ca. 10^18 GeV).
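For the curious, the standard slow-roll relation behind that 2*10^16 GeV figure can be sketched in a few lines; the normalization constant below is a commonly quoted value, assumed here rather than derived:

```python
# Hedged sketch of the standard (and highly model dependent) inference quoted
# above: the inflationary energy scale implied by a tensor-to-scalar ratio r.
V_QUARTER_AT_R001 = 1.06e16   # GeV, assumed slow-roll normalization at r = 0.01

def inflation_scale(r):
    """Fourth root of the inflaton potential energy density, in GeV."""
    return V_QUARTER_AT_R001 * (r / 0.01) ** 0.25

print(f"V^(1/4) at r=0.20: {inflation_scale(0.20):.2e} GeV")  # ~2e16 GeV
```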
Thus, for a particle physicist, the BICEP-2 result favors the conclusion that the Standard Model does not hold without modification all of the way up to the Planck scale, although if the Standard Model is merely a low energy effective field theory of the laws of nature, the first observable deviations from it could still be far above the scale at which any deviations can be measured in man-made laboratories or experiments.
Given the previous data, the best combined fit is now r=0.10 to 0.11, assuming that the BICEP-2 result isn't flawed in its methodology in some way, an entirely plausible possibility that will look even more likely if the result is not confirmed by the Planck polarization data later this year and by the several other experiments that will be reporting their results within the next year or two. Skepticism about the result, in the absence of independent confirmation by another experiment (Jester puts the odds that this result is right at only 50-50), flows from the fact that the value reported is so different from the consensus value from all previous experiments, with the results in a roughly three standard deviation tension with each other.
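As a rough illustration of where the r=0.10 to 0.11 combined figure comes from, a naive inverse-variance average of the two results in tension (with symmetrized error bars assumed by me; real joint analyses are more subtle) lands in the same neighborhood:

```python
# Hedged illustration: naive inverse-variance average of the pre-BICEP2
# constraint (treated, as an assumption, as r = 0 +/- 0.055, i.e. r < 0.11
# at two sigma) with the BICEP2 value (symmetrized to r = 0.20 +/- 0.06).
measurements = [(0.0, 0.055), (0.20, 0.06)]

weights = [1 / s**2 for _, s in measurements]
r_combined = sum(w * r for (r, _), w in zip(measurements, weights)) / sum(weights)
sigma_combined = (1 / sum(weights)) ** 0.5
print(f"combined r = {r_combined:.3f} +/- {sigma_combined:.3f}")  # ~0.09 +/- 0.04
```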
Any set of the three key parameters (r, scale exponent, and e-folds) that is consistent with both the BICEP-2 data and the previous observational evidence has to fall in a quite narrow range of the parameter space, leaving only a small number of inflation theories in the viable category.
If BICEP-2 is correct, then some of the simplest theories of inflation in the early universe, such as "Higgs inflation", are ruled out, in favor of theories where the geometry of the inflating universe is exactly or very nearly flat, rather than concave or convex, and in particular, inflation theories such as "natural inflation." In a "natural inflation" scenario, the BICEP-2 data also favor a value at the high end of the range for the number of "e-folds" during the inflation process, about 60 (the low end of the range from other data is about 50).
UPDATE: A new paper in the wake of the BICEP-2 data argues that r=0.2 implies Neff=4.00 +/- 0.41, thus strongly favoring the existence of a light singlet sterile neutrino.
Monday, March 10, 2014
The Anomalous Magnetic Moment Of The Tau Lepton
The anomalous magnetic moment of the muon, the second generation charged lepton, has generated enormous interest because the calculated value is 3-4 standard deviations from the experimental value, a difference of about one part per million. A variety of theories have been proposed to explain this discrepancy.
The situation is different in the case of the anomalous magnetic moment of the tau, the third generation charged lepton. It has a theoretically predicted value, as of January 30, 2007, of 117721(5)*10^-8.
This value is within the current experimental bounds on a_tau, expressed as a 95% confidence interval:
-0.052 < a_tau < 0.013
although tighter current experimental bounds, based on a reanalysis of the data in the linked paper, are argued to be:
-0.007 < a_tau < 0.005
(The predicted value, a_tau = 0.00117721(5), sits comfortably within both ranges.)
Unlike the case of the muon, the theoretical calculation of the tau anomalous magnetic moment is so much more precise than the experimental measurement that it is, for all intents and purposes, an exact value until such time as experimental measurements of the tau anomalous magnetic moment are 100,000+ times more precise than they are today in 2014.
For example, while uncertainty in the QCD contribution to the muon anomalous magnetic moment is critical (accounting for 99.1% of the uncertainty in the theoretical estimate), in the case of the tau that contribution is on the order of 3.5*10^-6, i.e. 0.0000035, which will not be relevant for a long time to come. The electroweak contribution is on the order of 0.5*10^-6, i.e. 0.0000005, and the balance is from the QED contribution, which is known with about 2.5 times as much precision as either the QCD or the electroweak contribution.
Even if the tau anomalous magnetic moment had a discrepancy between the theoretical value and the experimental value similar in relative size to that of the muon, the experimental value would still be identical to the theoretical one at the precision of 0.001177.
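A quick numerical sanity check, using only the textbook leading-order QED (Schwinger) term alpha/2pi (a standard result, not taken from the linked paper), shows why the predicted value is so stable:

```python
# Minimal cross-check: the one-loop QED Schwinger term alpha/2pi dominates
# the anomalous magnetic moment of any charged lepton, which is why the
# predicted a_tau is so close to 0.001177. The QCD and electroweak sizes
# cited in the comments below are the ones quoted in the text.
import math

alpha = 1 / 137.036
schwinger = alpha / (2 * math.pi)       # leading QED contribution
a_tau_predicted = 117721e-8             # full prediction quoted above

print(f"alpha/2pi          = {schwinger:.8f}")          # ~0.00116141
print(f"a_tau (predicted)  = {a_tau_predicted:.8f}")    # 0.00117721
print(f"non-leading piece  = {a_tau_predicted - schwinger:.2e}")  # ~1.6e-5
# Of that small remainder, QCD contributes ~3.5e-6 and the electroweak
# piece ~0.5e-6 per the text; the rest is higher order QED.
```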
A co-author compares the theoretical predictions for the electron, muon and tau with the experimental results at once in a companion article.
Limits on New Physics From The Muon Anomalous Magnetic Moment
The muon and electron measurements, because they are so exquisitely precise, impose strict bounds on beyond the Standard Model physics that would tweak these measurements in any way. But, the tau lepton measurement will be an exercise in precise experimental efforts to confirm a foregone conclusion for the foreseeable future. A new pre-print spells out those new physics constraints (or hints, as the case may be):
We consider the contributions of individual new particles to the anomalous magnetic moment of the muon, utilizing the generic framework of simplified models. We also present analytic results for all possible one-loop contributions, allowing easy application of these results for more complete models which predict more than one particle capable of correcting the muon magnetic moment. . . . Furthermore, we derive bounds on each new particle considered, assuming either the absence of other significant contributions to aμ or that the anomaly has been resolved by some other mechanism.
These limits, on new boson masses, are particularly relevant to SUSY theories, which imply that there are at least eleven new spin-0 charged bosons (two charged Higgs bosons and nine spin-0 partners of the charged fermions) and about five new spin-0 neutral bosons (an extra scalar Higgs boson, an extra pseudo-scalar Higgs boson, and up to three partners to the neutrinos which may mix with each other).
In summary we found the following particles capable of explaining the current discrepancy, assuming unit couplings: 2 TeV (0.3 TeV) neutral scalar with pure scalar (chiral) couplings, 4 TeV doubly charged scalar with pure pseudoscalar coupling, 0.3−1 TeV neutral vector boson depending on what couplings are used (vector, axial, or mixed), 0.5−1 TeV singly-charged vector boson depending on which couplings are chosen, and 3 TeV doubly-charged vector-coupled bosons.
We also derive the following 1σ lower bounds on new particle masses assuming unit couplings and that the experimental anomaly has been otherwise resolved: a doubly charged pseudoscalar must be heavier than 7 TeV, a neutral scalar than 3 TeV, a vector-coupled new neutral boson 600 GeV, an axial-coupled neutral boson 1.5 TeV, a singly-charged vector-coupled W′ 1 TeV, a doubly-charged vector-coupled boson 5 TeV, scalar leptoquarks 10 TeV, and vector leptoquarks 10 TeV.
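To see roughly where such TeV-scale bounds come from, here is a hedged order-of-magnitude sketch (my own generic scaling estimate, not the paper's calculation) of a new particle's one-loop contribution to the muon anomaly:

```python
# Hedged order-of-magnitude sketch: a one-loop contribution of a new particle
# of mass M and coupling g to the muon magnetic moment generically scales as
#   delta_a ~ (g^2 / 8 pi^2) * (m_mu / M)^2,
# up to a model dependent O(1)-to-O(10) factor. Inverting this for the
# observed ~2.9e-9 discrepancy gives the characteristic mass scale.
import math

M_MU = 0.1057          # muon mass in GeV
DELTA_A = 2.9e-9       # approximate muon g-2 discrepancy

def mass_scale(g=1.0, fudge=1.0):
    """Mass (GeV) at which the generic one-loop term matches DELTA_A."""
    return M_MU * math.sqrt(fudge * g**2 / (8 * math.pi**2 * DELTA_A))

print(f"M ~ {mass_scale():.0f} GeV for unit coupling")  # a few hundred GeV
# The multi-TeV cases in the quoted paper correspond to chirally enhanced or
# otherwise larger effective "fudge" factors.
```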
I've also noted that the small value of the electric dipole moment (EDM) of the electron (and perhaps other particles) greatly constrains SUSY as well.
Footnote on SUSY Fermions
SUSY theories also have new spin-1/2 fermion partners to the Standard Model fundamental bosons, all but one of which (the wino) are electrically neutral. These include partners to the gluon (neutral gluinos), electroweak bosons (the charged wino and neutral bino and/or zino, the photino), and Higgs boson (often two neutral "Higgsinos", sometimes becoming a charged "chargino" and a neutral "neutralino" after electroweak symmetry breaking). But, limits on their masses apparently aren't implicated by the anomalous magnetic moment of the muon.
Friday, March 7, 2014
The Trouble With QCD (updated March 9, 2014)
A recent pre-print by Stephen Lars Olsen of Seoul National University sums up a major unsolved problem in quantum chromodynamics (QCD), the Standard Model theory that explains how quarks interact with each other via the strong force as mediated by gluons.
Most of QCD is just fine. The spectrum of observed three quark baryon states, and of two quark meson states involving quarks of different flavors, largely matches naive QCD expectation. The proton mass has been calculated from first principles to better than 1% precision. The strong force coupling constant at the Z boson mass is known to four significant digits, and estimates of the masses of the top, bottom, and charm quarks are improving greatly in precision compared to just a few years ago. As discussed at greater length in the section on meson oscillation below, where I discuss baryon number violating neutron oscillation, the Standard Model QCD rule that baryon number is conserved has held up to intense experimental scrutiny (as has the Standard Model principle that lepton number is conserved).
There is also no trouble on the horizon with our understanding of how QCD gives rise to the nuclear binding force within atoms. There are, however, theoretical discussions of a couple of alternative understandings of it that implicate the meson spectrum issues discussed below. Traditionally, pions have been tapped as the force carriers between protons and neutrons, but other light mesons, such as the scalar meson f(500), have now been suggested as alternative carriers of the residual nuclear force between nucleons in an atom.
But, as explored below at greater length, there are also areas where QCD is falling short. It predicts that exotic hadrons which are not observed should be possible, while not easily explaining a variety of neutral quark-anti-quark states (and hadrons that look like them), called mesons, in which different combinations of particular kinds of quarks and antiquarks seem to blend into each other.
Missing Exotic States Predicted By QCD
Current experiments have allowed us to observe hadrons up to 10 GeV. But, many "exotic" states that QCD naively seems to allow in the mass range where observations should be possible (including the less exotic predicted quarkonium states discussed in the next section) have not yet been detected.
There have still not been definitive sightings of glueballs, tetraquarks, pentaquarks, or H-dibaryons. The implication of our failure to see them, despite the fact that QCD predicts their existence and properties with considerable precision, is that we may be missing a solid understanding of why QCD discourages or highly suppresses these states. Such QCD rules might be emergent from the existing QCD rules of the Standard Model in a way that we have not yet understood, or the absence of these states could reflect something missing in those equations or in the other rules of the Standard Model that are used to apply them.
Similarly, no well established resonances have JPC quantum number combinations (total angular momentum, parity and, in the case of electrically neutral mesons, charge parity) that have no obvious source in any kind of quark model with purely qq mesons. For hypothetical mesons with J=0, 1 or 2, these are the JPC combinations 0--, 0+-, 1-+ and 2+-. As one professor explains: "These latter quantum numbers are known as explicitly exotic quantum numbers. If a state with these quantum numbers is found, we know that it must be something other than a normal, qq¯ meson." At higher integer values of J, the combination +- is prohibited for even values and -+ is prohibited for odd values. These combinations might be created by bound states of a quark, an antiquark and a gluon, each of which contributes to the J, P and C of the overall composite particle; such states are called "hybrid mesons" and are not observed. Lattice QCD has calculated masses, widths and decay channels for these hybrids, just as it has for glueballs (aka gluonium).
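The counting behind those exotic combinations follows mechanically from the standard quark model rules P = (-1)^(L+1) and C = (-1)^(L+S); a short enumeration (a sketch of the textbook counting, not taken from the paper under discussion) reproduces the exotic list:

```python
# Sketch of the quark model counting behind "exotic" quantum numbers for a
# quark-antiquark pair: P = (-1)^(L+1), C = (-1)^(L+S), J from |L-S| to L+S.
# Any JPC not generated this way cannot be a plain q-qbar meson.
allowed = set()
for L in range(5):                 # orbital angular momentum
    for S in (0, 1):               # total quark spin
        P = (-1) ** (L + 1)
        C = (-1) ** (L + S)
        for J in range(abs(L - S), L + S + 1):
            allowed.add((J, P, C))

def label(j, p, c):
    return f"{j}{'+' if p > 0 else '-'}{'+' if c > 0 else '-'}"

exotics = [
    (J, P, C)
    for J in range(3) for P in (1, -1) for C in (1, -1)
    if (J, P, C) not in allowed
]
print("exotic JPC for J <= 2:", sorted(label(*x) for x in exotics))
# -> ['0+-', '0--', '1-+', '2+-'], matching the list in the text
```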
But, these well defined and predicted resonances are simply not observed at those masses in experiments, suggesting that for some unknown reason, there are emergent or unstated rules of QCD that prohibit or highly suppress resonances that QCD naively permits such as gluonium (aka glueballs), or hybrid mesons, or true tetraquarks or true pentaquarks, or H-dibaryon states (at least in isolation, as opposed to blended with other states in linear combinations that produce qq model consistent aggregate states).
A few resonances have been observed that are probably "meson molecules", in which two mesons are bound by the residual strong force much like protons and neutrons in an atomic nucleus. This is the least exotic and least surprising of the QCD structures other than plain vanilla mesons, baryons and atomic nuclei observed to date, since it follows obviously from the same principles that explain the nuclear binding force, which derives from the strong force mediated by gluons between quarks based on their "color charge."
Not very surprisingly, because top quarks have a mean lifetime an order of magnitude shorter than the mean strong force interaction time, mesons or baryons that include top quarks have not been observed. Still, the mean lifetime of the top quark is not so short that one wouldn't expect at least a few top quarks, in the rare cases where they live much longer than the mean, to briefly hadronize; so while the suppression of top hadrons is unsurprising, the magnitude of that suppression is a bit of a surprise.
Surprising Meson Spectra
Meanwhile, many mesons have been observed whose quantum numbers, decay patterns, and masses taken together are not a good fit for simple models in which mesons are made up of a particular quark and a particular anti-quark which have either aligned spins (and hence total angular momentum J=1, called vector mesons) or oppositely aligned spins (and hence total angular momentum J=0, called pseudoscalar mesons).
Standard Model QCD is sophisticated enough to deal, to great precision, with the variations from a simpler model seen in neutral mesons that can experience matter-antimatter oscillations. But, Standard Model QCD still struggles to explain a spectrum of mesons that appear to be made up of various forms of "quarkonium" (i.e. mesons made up of a quark and an anti-quark of the same quark flavor) which blend into each other in ways not fully understood or easily predicted. A variety of competing theories seek to explain these phenomena after the fact within the context of QCD, but nobody predicted in advance how this would happen.
Matter-Antimatter Meson Oscillations: Weird But Understood
Oscillations of neutral mesons with their anti-particles is a phenomenon of quantum physics that has been known since 1955. M. Gell-Mann and A. Pais, “Behavior of Neutral Particles under Charge Conjugation,” Phys. Rev. 97, 1387 (1955). This seminal paper observed that:
[W]ithin the framework of the tentative schemes under consideration, the θ0 must be considered as a "particle mixture" exhibiting two distinct lifetimes, that each lifetime is associated with a different set of decay modes[.]
This prediction was experimentally confirmed for neutral kaons in 1956. K. Lande, E. Booth, J. Impeduglia, L. Lederman, and W. Chinowsky, "Observation of Long Lived Neutral V Particles", Phys. Rev. 103, 1901 (1956).
It turns out that there are at least two dominant means of blending the particle and anti-particle states of the neutral kaon, K0. One is a mixed bound state that is the sum of the particle and anti-particle states (divided by the square root of two), which is called the long kaon, or KL; the other is a mixed bound state that is the difference between the particle and anti-particle states (divided by the square root of two), which is called the short kaon, or KS.
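In the usual notation, and ignoring for now the small CP violation discussed below, these two states are:

```latex
% Long and short neutral kaons as equal-weight mixtures (CP violation neglected)
|K_L\rangle \approx \frac{|K^0\rangle + |\bar{K}^0\rangle}{\sqrt{2}},
\qquad
|K_S\rangle \approx \frac{|K^0\rangle - |\bar{K}^0\rangle}{\sqrt{2}}
```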
It isn't clear to me to what extent the particle and anti-particle states of D0, B0, and B0s mesons mix into long and short bound states with distinct lifetimes and decays, in the way that the K0 meson does. It could be that this is discouraged because the mass gaps between the up and charm quarks in the neutral D meson (a ratio of about 554:1), between the down and bottom quarks in the neutral B meson (about 871:1), and between the strange and bottom quarks in the neutral strange B meson (about 44:1), relative to the gap between the down and strange quarks in the neutral kaon (a ratio of about 17:1 to 22:1 according to the PDG), lead these neutral mesons to mix their particle and antiparticle states less strongly than the K0 meson does.
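The ratios above can be checked quickly from approximate PDG-style current quark masses (central values assumed here; the light quark masses are quite uncertain, which is why the kaon ratio is quoted as a range):

```python
# Quick check of the mass-ratio arithmetic above, using approximate PDG-style
# current quark masses in MeV (assumed central values).
QUARK_MASS = {"u": 2.3, "d": 4.8, "s": 95.0, "c": 1275.0, "b": 4180.0}

for meson, (q1, q2) in {
    "D0 (c/u)": ("c", "u"),
    "B0 (b/d)": ("b", "d"),
    "Bs0 (b/s)": ("b", "s"),
    "K0 (s/d)": ("s", "d"),
}.items():
    ratio = QUARK_MASS[q1] / QUARK_MASS[q2]
    print(f"{meson}: ~{ratio:.0f}:1")
# -> ~554:1, ~871:1, ~44:1, and ~20:1 (within the quoted 17-22 range)
```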
In addition to the K0 meson, there are three other mesons which are bound states of a quark and an anti-quark of different flavors, and a neutral electric charge, which oscillate between matter and anti-matter states of the meson: the D0 (i.e. neutral D meson, made up of a charm quark and an anti-up quark), the B0 (i.e. neutral B meson, made up of a down quark and an anti-bottom quark)(reported in H. Albrecht et al. (ARGUS collaboration), Phys. Lett. B 192, 245 (1987)), and the B0s (i.e. neutral strange B meson, made up of a strange quark and an anti-bottom quark)(reported in A. Abulencia et al. (CDF collaboration), Phys. Rev. Lett. 97, 242003 (2006)). The last of these oscillations to be observed experimentally was in the D0, which was announced in a March 5, 2013 paper from LHCb. This announcement was the culmination of prior studies starting in 2007. As an author of that paper explained: "First evidence came from both the BaBar and Belle Collaborations in 2007, with further proof soon supplied by the CDF Collaboration and other additional measurements. A global combination of these pioneering results established the existence of these oscillations. Now, LHCb has presented the first clear observation based on a single measurement."
While these oscillations appear at the hadron level as a case of matter turning into anti-matter, the conventional explanation for this phenomenon, as explained by an LHCb investigator, is that this is actually a second order weak force interaction.
For example, in the case of the D0 oscillation, what happens is that the charm quark oscillates into an up quark via an intermediate virtual down, strange or bottom quark, while the anti-up quark oscillates into an anti-charm quark via a virtual anti-down, anti-strange, or anti-bottom quark. Usually, when a quark emits a W boson causing a flavor change, the W boson decays democratically (i.e. with equal probability into all quantum number neutral, electric-charge-one-shifting pairs of decay products that are energy permitted) before it can be absorbed by another quark. But, in this process there are two W boson exchanges within the neutral meson between the chain of matter particles and the chain of anti-matter particles, so there are no W boson decay products.
In neutral meson oscillation, the sums of the masses of the particles in the initial state and end state hadrons are exactly the same, because quarks and anti-quarks have the same mass. Also, the initial and end state charges of the particles in the matter chain and in the anti-matter chain are the same. Likewise, no matter actually gets changed into anti-matter at any point in the process (even though it naively looks like it does), so each chain of interactions preserves baryon number perfectly, although the two chains do change the quark flavor numbers of the meson by +1 and -1, respectively, for the quarks in question.
These meson to anti-meson oscillations are "weird", but they do occur at rates predicted by the Standard Model using constants derived from other weak force interactions with quarks pursuant to the relevant Feynman diagrams for the interaction.
As I have discussed previously, it isn't entirely clear (at least to me) if neutrino oscillation similarly involves a second order weak force process through a virtual charged lepton and a virtual pair of W bosons that can be illustrated with a Feynman diagram, much like the interactions involved in meson to anti-meson oscillations, or if this occurs by some other mechanism.
Charged mesons (and baryons) don't mix in this manner, although charged mesons (and baryons) can, in principle, exhibit CP violation (see this review article at 8), just as neutral mesons are observed to do experimentally.
An Aside Re Baryon to Anti-Baryon Oscillations: They Don't Happen
In contrast, neutral baryons do not appear to oscillate between particle and anti-particle states.
Unlike neutral meson oscillations, which can be understood to involve mere quark flavor changes, rather than true matter-antimatter oscillations, and oscillations of neutrinos to other flavors of neutrinos (but not to anti-neutrinos), there is no possible weak force interaction that could turn the three matter particles of a matter baryon into the three anti-matter particles of an anti-matter baryon. (There are no baryons which are their own anti-particles, so there are no simple baryon equivalents to quarkonium mesons discussed below.)
Thus, while meson to anti-meson oscillations appear to occur according to Standard Model flavor changing W boson processes, and observed neutrino oscillations could occur via the same processes, baryon to anti-baryon oscillations, even if they occur, could not occur in that manner.
The data tend to confirm the naive Standard Model prediction that neutral baryons (there are no baryons that are their own antiparticles) do not oscillate with their neutral baryon anti-matter counterparts, a process that would violate baryon number conservation.
A 1994 study determined that if neutron-antineutron oscillations occurred at all, then the oscillation period was greater than 8.6*10^7 seconds (i.e. about two and a half years), even though the mean lifetime of a free neutron is about 881.5 +/- 1.5 seconds (i.e. about 14 minutes and 42 seconds). Thus, if the neutron oscillates into an anti-neutron state at all, it oscillates at a rate about 100,000 times as slow as the rate at which it decays in a free state (neutrons bound in atomic nuclei are stable). More recent studies have dramatically strengthened that bound, such as a 2002 study that concluded that oscillations of bound neutrons had a mean period of more than 1.3*10^8 seconds, which was increased to 2.7*10^8 seconds in 2007 at the SuperK experiment. In contrast, as I understand the matter, all four of the oscillating neutral mesons have mean oscillation times on a similar order of magnitude to their mean lifetimes, which are 10^-8 seconds or less.
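The arithmetic behind that comparison checks out:

```python
# Arithmetic check: the 1994 lower bound on the neutron oscillation period
# versus the measured free neutron lifetime.
OSC_PERIOD_BOUND = 8.6e7      # seconds (1994 bound)
NEUTRON_LIFETIME = 881.5      # seconds

print(f"bound in years:  {OSC_PERIOD_BOUND / (3600 * 24 * 365.25):.1f}")  # ~2.7
print(f"period/lifetime: {OSC_PERIOD_BOUND / NEUTRON_LIFETIME:.0f}")      # ~97,600
```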
Neutron oscillation, like neutrino-less double beta decay, proton decay, and other baryon number violating decays (along with flavor changing neutral currents), is a process that is theoretically attractive in new physics theories for a variety of reasons (particularly related to cosmology). But, very precise tests have again and again demonstrated that baryon number violating phenomena don't happen, to the highest modern limits of experimental accuracy.
CP Violation In Oscillating Neutral Mesons: A More Weird Aspect Of Already Weird Particles
Even the rather complicated notion of neutral mesons and their anti-particles forming mixed oscillation states with equal contributions of matter and antimatter components is actually not quite right.
For example, neutral kaons do not actually oscillate from a matter to an anti-matter state at exactly the same rate as they oscillate back from an anti-matter state to a matter state. So, the already hard to fathom "canonical" description of the kaon as a blend of the "short kaon", which is the difference between the matter and anti-matter states, and the "long kaon", which is the sum of the matter and anti-matter states, while much closer to the truth, is not quite right. The real blend is not 50% matter and 50% anti-matter, taken as either a sum or a difference, but a bit more than 50% matter and a bit less than 50% anti-matter.
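In the standard parameterization, a small complex parameter epsilon captures this asymmetry (the magnitude below is a commonly quoted value for kaons, assumed here, and the sign conventions follow the text):

```latex
% CP-violating admixture in the neutral kaon system (the upper sign is K_S)
|K_{S,L}\rangle \;\propto\; (1+\epsilon)\,|K^0\rangle \mp (1-\epsilon)\,|\bar{K}^0\rangle,
\qquad |\epsilon| \approx 2.2\times 10^{-3}
```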
This phenomenon, called CP violation, which is quantified in the Standard Model by the CP violating phase of the CKM matrix, was first observed indirectly in neutral kaons (in 1964) and has also been observed directly in each of the oscillating mesons: in neutral kaons since 1999, in neutral B mesons since 2001, and in neutral D mesons since 2011. Reasonably accurate estimates of the magnitude of CP violation in B mesons (within a factor of three), based on the Standard Model equations and the CP violation parameter determined from indirect measurements involving neutral kaon decays, were in existence by 1980, if not sooner. A 2008 measurement by the Belle experiment that suggested a modest but statistically significant difference in the CP violation parameter for charged and neutral B mesons has not been borne out by further experiments.
CP violation via the CKM matrix is the only process in the Standard Model in which an arrow of time is present fundamentally (CP violation is equivalent to treating processes that go forward and backward in time differently), as opposed to in an emergent statistical manner (the Second Law of Thermodynamics). And, even at the level of the most suppressed and rare types of D meson decays and CP violations observed as of 2013, the Standard Model prediction of their frequency has been confirmed.
As a 2013 review article at the Particle Data Group site explains, CP violation has been observed experimentally only at slight levels and only in the decays of a small subset of mesons.
A 2012 effort to detect CP violation in charged D meson decays did not find it at even a two sigma level, with a result dominated by statistical uncertainty (i.e. basically, by an insufficiently large data set of charged D meson decays). Efforts to find CP violation in the decays of baryons with charm quarks have likewise failed to reach statistical significance as of 2012.
CP violation has not been observed at this time in the decays of scalar, vector, or axial-vector mesons, or in excited meson states such as tensor mesons, or in any charged mesons other than pseudo-scalar charged B mesons.
In neutral kaons, the decay-rate asymmetry is only at the 0.003 level, although it is greater in neutral B mesons, where it is about 0.7 (it is also present in neutral D mesons, but that CP violation had not been documented definitively in time for inclusion in the article). In charged B mesons it is about 0.2.
CP violation in baryons with bottom quarks, which has not yet been observed, is predicted to be on the order of one part per 10^5 or less.
It is possible that there is CP violation in neutrino oscillation as well, but this has not yet been definitively observed although the current experimental hints disfavor zero CP violation in neutrino oscillation and suggest that neutrinos may engage in CP violation more strongly than quarks.
The largest CP violating term in the Wolfenstein parameterization of the CKM matrix is multiplied by A*(lambda^3) in Vtd and Vub elements, by A*(lambda^4) in the Vts element, and by A^2*(lambda^5) in the Vcd element. The CP violating terms in other elements are at the lambda^6 order or less (lambda is roughly 0.23 and A is roughly the square root of two-thirds). The fit of the single CP violating parameter to the observed CP violation in experiments is sufficiently tight to show on a model-independent basis that even if there is some other additional new physics source of CP violation in meson decays, that the CKM matrix CP violating phase is the dominant source of CP violation in those systems (from here at 2).
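For reference (a standard result, not from the linked source), the leading-order Wolfenstein form of the CKM matrix shows where these powers of lambda sit: the single CP violating parameter eta appears at order A*lambda^3 in the corner elements, with the A*lambda^4 and A^2*lambda^5 imaginary parts mentioned above arising at the next orders of the expansion:

```latex
% Wolfenstein parameterization of the CKM matrix, to leading order
V_{CKM} \approx
\begin{pmatrix}
1-\lambda^2/2 & \lambda & A\lambda^3(\rho - i\eta) \\
-\lambda & 1-\lambda^2/2 & A\lambda^2 \\
A\lambda^3(1-\rho - i\eta) & -A\lambda^2 & 1
\end{pmatrix}
+ \mathcal{O}(\lambda^4)
```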
An Aside Re CP Violation and Matter-Antimatter Asymmetry In the Universe.
The observed level of CP violation is mysterious, however, because, while it exists in the Standard Model, the measured magnitude of CP violation in the Standard Model seems to be too small to explain the matter-antimatter asymmetry of the universe (at least involving baryons and charged leptons) assuming that (1) the universe began at the Big Bang in a pure energy state with no matter or anti-matter bias, (2) baryon number and lepton number are conserved as they are in the Standard Model except in rare high energy sphaleron processes, and (3) the basic outlines of the standard model of cosmology are correct (also here).
The Standard Model also cannot explain how the aggregate baryon number in the universe became non-zero or reached its currently estimated value, which is known to about one significant digit. We don't know the aggregate lepton number of the universe, the sum B+L, or the difference B-L, because we don't know the relative proportion of neutrinos and anti-neutrinos in the universe. But, if there are more than a tiny fraction of a percent more neutrinos than anti-neutrinos in the universe, then L is not equal to zero, B+L and B-L have large absolute values, and the absolute value of L is much greater than the absolute value of B. Very limited experimental hints to date tend to favor the existence of far more anti-neutrinos than neutrinos, which would imply a large negative value of L and of B+L, and a very large positive value of B-L.
The Quarkonium Spectrum
The spectrum of mesons with a quark and an anti-quark of the same flavor, called quarkonium, is particularly problematic.
These states were already subject to an exception to the usual QCD rules governing hadron decay. We know that quarkonium states are usually suppressed in hadronic decays, due to the Zweig rule, also known as OZI suppression, which can also be stated in the form that "diagrams that destroy the initial quark and antiquark are strongly suppressed with respect to those that do not."
Quarkonium mesons are also notable for being particles that are their own anti-particles. If they did oscillate between particle and anti-particle states, the two states would be indistinguishable.
Quarkonium mesons easily blend into linear combinations with each other because (1) bosons can be in the same place at the same time, and (2) they have similar quantum numbers because all quarkonium mesons have zero electric charge, baryon number (quarks minus antiquarks), isospin (net number of up and down quarks and up and down antiquarks), strangeness (net number of strange and antistrange quarks), charm number (charm quarks minus charm antiquarks) and bottom number (bottom quarks minus bottom antiquarks).
There are no mesons that appear to have a purely uu, dd or ss composition. The neutral pion, the neutral rho meson, and the neutral omega meson are believed to be linear combinations of uu and dd mesons (the omitted one of four simple combinations of uu and dd may be a scalar meson). The eta meson and the eta prime meson are believed to be linear combinations of the uu, dd and ss mesons. Many of the lighter scalar and axial-vector meson states without charm or bottom quarks are also presumed to include linear combinations of uu, dd, and ss quarkonium mesons. There have been proposed nonets of scalar mesons that are chiral partners of the pseudoscalar mesons, for example, although the issue of the quark composition of these true scalar mesons is not well resolved.
The large masses of the charm and bottom quarks relative to the non-quark "glue" mass of mesons make it harder for the quark content of charmonium and bottomonium-like states to remain ambiguous. These mesons are called XYZ mesons. But, they continue to show signs that they may be mixings of quarkonium states, rather than always being composed of a simple quark-antiquark pair of the same flavor of quark. There are about seven charmonium-like states that have been discovered that were not predicted by QCD, and a like number of such states that are predicted to exist but have not been observed. Bottomonium states present similar issues. The XYZ mesons have JPC quantum numbers of 0-+ (pseudo-scalar), 0++ (scalar), 1-- (vector), 1+- (pseudo-vector) and 2++ (tensor) in their J=0, 1 and 2 states. Mesons with the combinations 1++ (axial vector), 2-+ and 2-- are also theoretically permitted.
There are even some indications that there are resonances that are made up of bound states of oscillating mesons and anti-mesons, or of proton-antiproton pairs, that act like quarkonium mesons.
There are competing theories to describe and predict these quarkonium dominated meson spectra.
Mr. Olsen concludes by stating that:
Time For A Breakthrough?
Implicit in Olsen and Choi's discussion is the recognition that we have a sufficiently large body of non-conforming experimental evidence that we may be close to the critical moment where some major theoretical breakthrough could, in one fell swoop, explain almost all of the data that is not a clean fit for existing QCD models with some sort of paradigm shift.
Other Issues with QCD
There are other outstanding issues in QCD beyond those identified by Olsen's paper. A few of these follow.
Infrared QCD
The infrared (i.e. low energy) structure of QCD, which can be explored only with lattice QCD, is also sometimes mysterious, with different methods producing different results. Particularly important is the question of whether the QCD potential reaches a theoretical zero at zero distance, or has a "non-trivial fixed point." The issue is closely related to one of the current unsolved "Millennium Problems" in mathematics.
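The reason lattice methods are needed in the infrared can be seen from the running of the strong coupling itself; here is a minimal sketch, assuming the textbook one-loop beta function with a fixed number of flavors (both simplifications of mine; real analyses use higher orders and flavor thresholds):

```python
# Hedged sketch of why QCD "blows up" in the infrared: one-loop running of
# the strong coupling, anchored at alpha_s(MZ) ~ 0.118.
import math

ALPHA_S_MZ = 0.118
M_Z = 91.19   # GeV

def alpha_s(q, nf=5):
    """One-loop running strong coupling at scale q (GeV), nf active flavors."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return ALPHA_S_MZ / (1 + b0 * ALPHA_S_MZ * math.log(q**2 / M_Z**2))

for q in (1000.0, 91.19, 10.0, 2.0, 1.0):
    print(f"alpha_s({q:>7.2f} GeV) ~ {alpha_s(q):.3f}")
# The coupling grows as q falls and the one-loop formula eventually diverges
# (the Landau pole); that strongly coupled regime is where perturbation
# theory fails and lattice methods take over.
```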
More generally, there are various competing models to explain the internal structure of hadrons, most of which are adequate for some purposes, but not others. We still don't have a definitive single picture that is superior in all circumstances to describe hadron structure.
Similarly, while experiment confirms that quarks are fundamental and point-like down to scales as small as 10^-20 meters, extrapolating this implicitly contradicts general relativity at sub-Planckian distances well below 10^-34 meters, and we can't rule out the possibility that there is some sort of composite structure to the fundamental particles of the Standard Model, or that particles are something distinct from the space-time background, or that there might be discrete structure to space-time itself, at the quantum gravity Planck scale or smaller scales.
Right now, there is a "desert" of new physics both over many orders of magnitude below the scale of QCD, and also for many orders of magnitude above the electroweak scale up to the "GUT scale".
It also appears that in the infrared, gluons, which have zero rest mass in the Standard Model, acquire dynamical mass in an amount that is a function of their momentum (higher momentum gluons in low energy QCD have less dynamical mass).
This is quite odd, because usually, in the world of general and special relativity, particles with higher momentum acquire greater relativistic mass. The behavior that we observe in the case of massive gluons seems more like the macro-scale phenomenon in which friction falls for faster moving objects, thereby making them easier to push.
The Strong CP Problem
We still aren't sure if there are any deep reasons for the fact that no CP violation is observed in QCD interactions (the CP violations in neutral mesons described above involve weak force interactions), despite the fact that it is a chiral theory and that there is a natural term for it in the QCD Lagrangian. This is called the "strong CP problem." Experimentally, the strongest bound on the QCD constant that would give rise to strong CP violation comes from the measured value of the electric-dipole moment of the neutron. The measured value is tiny and consistent with zero CP violation in strong force interactions.
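For reference, the natural term in question is the so-called theta term, and the neutron electric-dipole-moment bound translates into a startlingly small limit on its coefficient; the conversion factor below is a commonly quoted order-of-magnitude estimate, not a precise calculation:

```latex
% The QCD theta term and the rough neutron-EDM constraint on theta
\mathcal{L}_{\theta} = \theta\,\frac{g_s^2}{32\pi^2}\,G^{a}_{\mu\nu}\tilde{G}^{a\,\mu\nu},
\qquad
d_n \sim 10^{-16}\,\theta\; e\cdot\mathrm{cm}
\;\;\Rightarrow\;\;
|\theta| \lesssim 10^{-10}
```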
Of course, like the hierarchy problem, the Strong CP problem is to some extent a presumptuous and philosophical problem. We know what the laws of nature are in this case and can express that by stating that the value of a particular QCD constant is zero or very nearly zero, rather than a number on the order of one that those who view this as a problem would have presumed it would be. Those who see this as a problem presume that there is some good reason that Nature shouldn't have made this choice and are arguing with Her over it. Maybe there is a deep reason for this, but maybe the value of this physical constant is just one more law of nature and we are really just exposing our ignorance of the overall pattern when our expectations don't line up with reality in this case.
Very Difficult Calculations
Meanwhile, even basic QCD exercises, like estimating a hadron's properties from its components when they are well defined, suffer from issues of low precision, because while it is possible to measure observable hadron properties precisely, it is very hard to do QCD calculations with enough terms to make the theoretical work highly precise. This, in turn, leads the values for input parameters like the strong coupling constant and quark masses to be fuzzy as well.
Recent progress has been made, however, in using new calculation methods, like Monte Carlo methods and the amplituhedron, to reduce the computational effort associated with these calculations. Often, a variety of other simplifying tricks, from using pure "Yang-Mills theory" rather than the specific real world case of Standard Model QCD, to estimating physical outcomes by extrapolating from models using different masses or numbers of quark flavors than we see in the real world, are also used to approximate the equations of Standard Model QCD, which are impossible to solve exactly and directly.
Discrepancies Between Theory and Experiment.
As I've noted previously, sometimes perturbative QCD predictions differ materially from observed results even at energy scales where perturbative QCD should be reliable. This may simply be because the calculations are so hard to do right, as explained above. None of the discrepancies that seem to be present at this point involve cases where we are sufficiently confident of our calculation of the exact QCD prediction for the discrepancy to be newsworthy.
Other Lattice QCD points
Another review of the status of QCD, and in particular lattice QCD, can be found here. It notes success in estimating hadron masses, and the problem that theoretical uncertainty in QCD contributions is the biggest contributor to the uncertainty regarding the magnetic moment of the muon, which differs from the theoretically predicted value by quite a bit in terms of standard deviations, but very little in absolute terms.
Conclusion
While QCD has not yet definitively failed any tests of the Standard Model theory, and instead has been repeatedly validated, it has also been subject to much less precise experimental tests than any other part of the Standard Model. The absence of any really viable alternative to QCD has been key to its survival and to the lack of controversy about it in beyond the Standard Model physics discussions. But, few areas of the Standard Model have more wiggle room for beyond the Standard Model new physics.
Most of QCD is just fine. The spectrum of observed three quark baryon states, and of two quark meson states involving quarks of different flavors, largely matches naive QCD expectation. The proton mass has been calculated from first principles to more than 1% accuracy. The strong force coupling constant at the Z boson mass is known to a four significant digit accuracy, and estimates of the masses of the top, bottom, and charm quark are improving in precision greatly compared to just a few years ago. As discussed at greater length in the section on meson oscillation below where I discuss baryon number violating neutron oscillation, the Standard Model QCD rule that baryon number is conserved has held up to intense experimental scrutiny (as has the Standard Model principle that lepton number is conserved).
There is also no trouble on the horizon with our understanding of how QCD is involved in the nuclear binding force within atoms. There are, however, theoretical discussions of a couple of alternative understandings of it that implicate the meson spectrum issues discussed below. Traditionally pions have been tapped as the force carrier between protons and neutrons, but now, other light scalar mesons such as scalar meson f(500) have been suggested as alternative carriers of the residual nuclear force between nucleons in an atom.
But, as explored below at greater length, there are also areas where QCD is falling short. It predicts that exotic hadrons which are not observed are possible, while not easily explaining a variety of neutral quark-anti-quark states (and hadrons that look like them) called mesons where different combinations of particular kinds of quarks and antiquarks seem to blend ito each other.
Missing Exotic States Predicted By QCD
Current experiments have allowed us to observe hadrons up to 10 GeV. But, many "exotic" states that QCD naively seems to allow in the mass range where observations should be possible (including less exotic predicted quarkonium states discussed in the next section), have not yet been detected.
There have still not been definitive sighting of glueballs, of tetraquarks, of pentaquarks, or of H-dibaryons. The implication of our failure to see them despite the fact that QCD predicts their existence and properties with considerable precision, is that we may be missing a solid understanding of why QCD discourages or highly suppress these states. Such QCD rules might be emergent from the existing QCD rules of the Standard Model in a way that we have not yet understood, or it could reflect something missing in those equations or in the other rules of the Standard Model that are used to apply them.
Similarly, no well established resonances have JPC quantum number combinations (total angular momentum, parity and in the case of electrically neutral mesons, charge parity) that have no obvious source in any kind of quark model with purely qq mesons. In the case of hypothetical mesons with J=0, 1 or 2, these are the JPC quantum numbers: O--, O+-, 1-+ and 2+- to name just those with J=0, 1 or 2. As one professor explains: "These latter quantum numbers are known as explicitly exotic quantum numbers. If a state with these quantum numbers is found, we know that it must be something other than a normal, qq¯ meson." At higher levels of integer J, the combination +- is prohibited for even integer values and -+ is prohibited for odd integer values. These combinations might be created by bound states of a quark, antiquark and a gluon, each of which contribute to the J, P and C of the overall composite particle that are called "hybrid mesons" and are not observed. Lattice QCD has calculated masses, widths and decay channels for these hybrids, just as it has for glueballs (aka gluonium).
But, these well defined and predicted resonances are simply not observed at those masses in experiments, suggesting that for some unknown reason, there are emergent or unstated rules of QCD that prohibit or highly suppress resonances that QCD naively permits such as gluonium (aka glueballs), or hybrid mesons, or true tetraquarks or true pentaquarks, or H-dibaryon states (at least in isolation, as opposed to blended with other states in linear combinations that produce qq model consistent aggregate states).
A few resonances have been observed that are probably "meson molecules" in which two mesons are bound by residual strong force much like protons and neutrons in an atomic nucleus, however, have been observed. This is the least exotic and least surprising of the QCD structures other than plain vanilla mesons, baryons and atomic nuclei observed to date, since it follows obviously from the same principles that explain the nuclear binding force that derives from the strong force mediated by gluons between quarks based on their "color charge."
Not very surprisingly, because top quarks have a mean lifetime an order of magnitude shorter than the mean strong force interaction time, mesons or baryons that include top quarks have not been observed. Still, the mean lifetime of the top quark is not so short that one wouldn't expect at least some highly suppressed top quarks to briefly hadronize when they end up in rare cases having lives much longer than the mean lifetime, so while the suppression of top hadrons is unsurprising, the magnitude of that suppression is a bit of a surprise.
Surprising Meson Spectrums
Meanwhile, many mesons have been observed whose quantum numbers, decay patterns, and masses taken together are not a good fit for simple models in which mesons are made up of a particular quark and a particular anti-quark which have either aligned spins (and hence have total angular moment J=1 called vector mesons) or oppositely aligned spins (and hence have total angular momentum J=0 called pseudoscalar mesons).
Standard Model QCD is sophisticated enough to deal with the variations from a simpler model seen in neutral quarks that can experience matter-antimatter oscillations to great precision. But, Standard Model QCD still struggled to explain a spectrum of mesons that appear to be made up of various forms of "quarkonium" (i.e. mesons made up of a quark and an anti-quark of the same quark flavor) which blend into each other in ways not fully understood or easily predicted. A variety of competing theories seek to explain these phenomena after the fact within the context of QCD, but nobody predicted how this would happen in advance.
Matter-Antimatter Meson Oscillations: Weird But Understood
Oscillations of neutral mesons with their anti-particles is a phenomena of quantum physics that has been known since 1955. M. Gell-Mann and A. Pais, “Behavior of Neutral Particles under Charge Conjugation,” Phys. Rev. 97, 1387 (1955). This seminal paper observed that:
[W]ithin the framework of the tentative schemes under consideration, the θ0 must be considered as a "particle mixture" exhibiting two distinct lifetimes, that each lifetime is associated with a different set of decay modes[.]This prediction was experimentally confirmed for neutral kaons in 1956. K. Lande, E. Booth, J. Impeduglia, L. Lederman, and W. Chinowsky, "Observation of Long Lived Neutral V Particles", Phys. Rev. 103, 1901 (1956).
It turns out that there are at least two dominant means of blending the particle and anti-particle state of the neutral kaon, K0. One have a mixed bound state that is the sum of the particle and anti-particle state (divided by the square root of two), which is called the long kaon, or KL, or a mixed bound state that is the difference between the particle and the anti-particle state (divided by the square root of two), which is called the short kaon, or KS.
It isn't clear to me the extent to which particle and anti-particle states of D0, B0, and B0s mesons mix into long and short bound states with distinct lifetimes and decays, in the way that the K0 meson does. It could be that this is discouraged by the mass gap between the up and charm quarks in the neutral D meson (about 554-1), between the down and bottom quarks in the neutral B meson (about 871-1), and between the strange and bottom quark in the neutral strange B meson (about 44-1) relative to the gap between the down and strange quark in the neutral kaon (a ratio of about 17-22 according to the PDG) lead these neutral mesons to mix their particle and antiparticle states less strongly than the K0meson.
In addition to the K0 meson, there are three other mesons which are bound states of a quark and an anti-quark of different flavors, and a neutral electric charge, which oscillate between matter and anti-matter states of the meson: the D0 (i.e. neutral D meson, made up of a charm quark and anti-strange quark), the B0(i.e. neutral B meson, made up of a down quark and anti-bottom quark)(reported in H. Albrecht et al. (ARGUS collaboration), Phys. Lett. B 192, 245 (1987)), and the B0s(i.e. neutral strange B meson, made up of a strange quark and anti-bottom quark)(reported in A. Abulencia et al. (CDF collaboration), Phys. Rev. Lett. 97, 242003 (2006)). The last of these oscillations to be observed experimentally was in the D0, which was announced in a March 5, 2013 paper from LHCb. This announcement was the culmination of prior studies starting in 2007. As an author of that paper explained: "First evidence came from both the BaBar and Belle Collaborations in 2007, with further proof soon supplied by the CDF Collaboration and other additional measurements. A global combination of these pioneering results established the existence of these oscillations. Now, LHCb has presented the first clear observation based on a single measurement."
While these oscillations appear at the hadron level to be a case of matter turning into anti-matter, the conventional explanation for this phenomenon, as explained by an LHCb investigator, is that it is actually a second order weak force interaction.
For example, in the case of the D0 oscillation, what happens is that the charm quark oscillates into an up quark via an intermediate virtual down, strange or bottom quark, while the anti-up quark oscillates into an anti-charm quark via a virtual anti-down, anti-strange, or anti-bottom quark. Usually, when a quark emits a W boson causing a flavor change, the W boson decays democratically (i.e. with equal probability into each energetically permitted pair of decay products with the right quantum numbers and a net one unit shift in electric charge) before it can be absorbed by another quark. But, in this process there are two W boson exchanges within the neutral meson between the chain of matter particles and the chain of anti-matter particles, so there are no W boson decay products.
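To give a rough sense of why this second order process is so slow, here is a minimal sketch (using approximate PDG magnitudes of the CKM elements only; this is bookkeeping, not a real amplitude calculation) of the CKM factors entering the D0 box diagram for each possible intermediate quark:

```python
# Rough sketch: CKM factor magnitudes for each intermediate down-type quark
# in the D0 mixing box diagram. The |V_ij| values below are approximate
# PDG magnitudes; signs and loop functions are omitted.
ckm_mag = {
    ("c", "d"): 0.225, ("u", "d"): 0.974,
    ("c", "s"): 0.973, ("u", "s"): 0.225,
    ("c", "b"): 0.041, ("u", "b"): 0.004,
}

for q in ("d", "s", "b"):
    factor = ckm_mag[("c", q)] * ckm_mag[("u", q)]
    print(f"intermediate {q} quark: |V_c{q}| * |V_u{q}| ~ {factor:.4f}")

# The d and s contributions are each ~0.22 but enter with opposite sign
# (CKM unitarity), and the b contribution is tiny (~2e-4), so the net box
# amplitude is heavily suppressed (the GIM mechanism) -- one reason a clean
# observation of D0 oscillation took until 2013.
```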
In neutral meson oscillation, the sums of the masses of the particles in the initial state and end state hadrons are exactly the same, because quarks and anti-quarks have the same mass. Also, the initial and end state charges of the particles in the matter chain and in the anti-matter chain are the same. Likewise, no matter actually gets changed into anti-matter at any point in the process (even though it naively looks like it does), so each chain of interactions preserves baryon number perfectly, although the two chains do change the quark flavor numbers of the meson by +1 and -1 respectively, for the quarks in question.
These meson to anti-meson oscillations are "weird", but they do occur at rates predicted by the Standard Model using constants derived from other weak force interactions with quarks pursuant to the relevant Feynman diagrams for the interaction.
As I have discussed previously, it isn't entirely clear (at least to me) if neutrino oscillation similarly involves a second order weak force process through a virtual charged lepton and a virtual pair of W bosons that can be illustrated with a Feynman diagram, much like the interactions involved in meson to anti-meson oscillations, or if this occurs by some other mechanism.
Charged mesons (and baryons) don't mix in this manner, although charged mesons (and baryons) can, in principle, exhibit CP violation (see this review article at 8), just as neutral mesons are experimentally observed to do.
An Aside Re Baryon to Anti-Baryon Oscillations: They Don't Happen
In contrast, neutral baryons do not appear to oscillate between particle and anti-particle states.
Unlike neutral meson oscillations (which can be understood to involve mere quark flavor changes rather than true matter-antimatter oscillations) and neutrino oscillations (in which neutrinos change into other flavors of neutrinos, but not into anti-neutrinos), there is no possible weak force interaction that could turn the three matter particles of a matter baryon into the three anti-matter particles of an anti-matter baryon. (There are no baryons which are their own anti-particles, so there are no simple baryon equivalents to the quarkonium mesons discussed below.)
Thus, while meson to anti-meson oscillations appear to occur according to Standard Model flavor changing W boson processes, and observed neutrino oscillations could occur via the same processes, baryon to anti-baryon oscillations, even if they occur, could not occur in that manner.
The data tend to confirm the naive Standard Model prediction that neutral baryons (there are no baryons that are their own antiparticles) do not oscillate with their neutral baryon anti-matter counterparts, a process that would violate baryon number conservation.
A 1994 study determined that if neutron-antineutron oscillations occur at all, the oscillation period is greater than 8.6*10^7 seconds (i.e. about 2.7 years), even though the mean lifetime of a free neutron is about 881.5 +/- 1.5 seconds (i.e. about 14 minutes and 42 seconds). Thus, if the neutron oscillates into an anti-neutron state at all, it does so about 100,000 times more slowly than it decays in a free state (neutrons bound in atomic nuclei are stable). More recent studies have dramatically tightened that bound, such as a 2002 study that concluded that oscillations of bound neutrons had a mean period of more than 1.3*10^8 seconds, which was increased to 2.7*10^8 seconds in 2007 at the SuperK experiment. In contrast, as I understand the matter, all four of the oscillating neutral mesons have mean oscillation times on a similar order of magnitude to their mean lifetimes, which are 10^-8 seconds or less.
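A quick arithmetic check of the ratio quoted above, using the values as stated in the text:

```python
# Check: 1994 lower bound on the n -> nbar oscillation period versus the
# mean lifetime of a free neutron (both values as quoted above).
oscillation_bound = 8.6e7   # seconds
neutron_lifetime = 881.5    # seconds
print(oscillation_bound / neutron_lifetime)  # ~97,600, i.e. "about 100,000"
```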
Neutron oscillation, like neutrino-less double beta decay, proton decay, flavor changing neutral currents, and other baryon number or lepton number violating decays, is a process that is theoretically attractive in new physics theories for a variety of reasons (particularly related to cosmology). But, very precise tests have again and again demonstrated that baryon number violating phenomena don't happen, to the highest modern limits of experimental accuracy.
CP Violation In Oscillating Neutral Mesons: An Even Weirder Aspect Of Already Weird Particles
Even the rather complicated notion that neutral mesons with anti-particles involve mixed oscillation states with equal contributions of matter and antimatter components is not quite right.
For example, neutral kaons do not actually oscillate from a matter to an anti-matter state at exactly the same rate as they oscillate back from an anti-matter state to a matter state. So, the already hard to fathom "canonical" description of the kaon as a blend of the "short kaon", which is the difference between the matter and anti-matter states, and the "long kaon", which is the sum of the matter and anti-matter states, while much closer to the truth, is not quite right. The real blend is not 50% matter and 50% anti-matter, taken as either a sum or a difference, but a bit more than 50% matter and a bit less than 50% anti-matter.
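In the standard parameterization (again, sign conventions vary between references), this small asymmetry is captured by a complex parameter ε, whose measured magnitude is about 2.2*10^-3:

\[ |K_L\rangle \propto (1+\epsilon)\,|K^0\rangle + (1-\epsilon)\,|\bar{K}^0\rangle, \qquad |K_S\rangle \propto (1+\epsilon)\,|K^0\rangle - (1-\epsilon)\,|\bar{K}^0\rangle \]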
This phenomenon, called CP violation, which is quantified in the Standard Model by the CP violating phase of the CKM matrix, was first observed indirectly in neutral kaons (in 1964) and has also been observed directly, in each of the oscillating mesons: in neutral kaons since 1999, in the neutral B mesons since 2001, and in the neutral D mesons since 2011. Reasonably accurate estimates of the magnitude of CP violation in B mesons (within a factor of three) based on the Standard Model equations and the CP violation parameter determined from indirect measurements involving neutral kaon decays were in existence by 1980, if not sooner. A 2008 measurement by the Belle experiment that suggested a modest but statistically significant difference in the CP violation parameter for charged and neutral B mesons has not been borne out by further experiments.
CP violation via the CKM matrix is the only process in the Standard Model in which an arrow of time is present fundamentally (CP violation is equivalent to treating processes that go forward and backward in time differently), as opposed to in an emergent statistical manner (the Second Law of Thermodynamics). And, even at the level of the most suppressed and rare types of D meson decays and CP violations observed as of 2013, the Standard Model prediction of their frequency has been confirmed.
As a 2013 review article at the Particle Data Group site explains, CP violation has been observed experimentally only at slight levels and only in the decays of a small subset of mesons:
CP violation has not yet been observed in the decay of any baryon, nor in the decay of any unflavored meson (such as the η), nor in processes involving the top quark, nor in flavor-conserving processes such as electric dipole moments, nor in the lepton sector.

The article notes that in addition to the four types of neutral pseudo-scalar meson decays where CP violation has been observed, CP violation has also been detected at the five sigma level, at approximately the predicted amounts, in the decays of charged pseudo-scalar B mesons (i.e. mesons made of a bottom quark and an anti-up quark, and vice versa). The LHCb experiment first reported CP violation in charged B meson decays in 2012, a result that has been confirmed by the BaBar experiment in both existence and magnitude.
A 2012 effort to detect CP violation in charged D meson decays did not find it at even a two sigma level, with a result dominated by statistical uncertainty (i.e. basically, by an insufficiently large data set of charged D meson decays). Efforts to find CP violation in the decays of baryons with charm quarks have likewise failed to reach statistical significance as of 2012.
CP violation has not been observed at this time in the decays of scalar, vector, or axial-vector mesons, or in excited meson states such as tensor mesons, or in any charged mesons other than pseudo-scalar charged B mesons.
In neutral kaons, the decay-rate asymmetry is only at the 0.003 level, although it is greater in neutral B mesons, where it is about 0.7 (it is also present in neutral D mesons, but that CP violation had not been documented definitively in time for inclusion in the article). In charged B mesons it is about 0.2.
CP violation in baryons with bottom quarks, which has not yet been observed, is predicted to be on the order of one part per 10^5 or less.
It is possible that there is CP violation in neutrino oscillation as well, but this has not yet been definitively observed, although the current experimental hints disfavor zero CP violation in neutrino oscillation and suggest that neutrinos may engage in CP violation more strongly than quarks do.
The largest CP violating term in the Wolfenstein parameterization of the CKM matrix is multiplied by A*(lambda^3) in the Vtd and Vub elements, by A*(lambda^4) in the Vts element, and by A^2*(lambda^5) in the Vcd element. The CP violating terms in the other elements are at the lambda^6 order or less (lambda is roughly 0.23 and A is roughly the square root of two-thirds, i.e. about 0.81). The fit of the single CP violating parameter to the observed CP violation in experiments is sufficiently tight to show, on a model-independent basis, that even if there is some other additional new physics source of CP violation in meson decays, the CKM matrix CP violating phase is the dominant source of CP violation in those systems (from here at 2).
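For reference, the standard leading order Wolfenstein form of the CKM matrix is shown below; the imaginary η terms carry all of the CP violation at this order:

\[ V_{CKM} \approx \begin{pmatrix} 1-\lambda^2/2 & \lambda & A\lambda^3(\rho - i\eta) \\ -\lambda & 1-\lambda^2/2 & A\lambda^2 \\ A\lambda^3(1-\rho-i\eta) & -A\lambda^2 & 1 \end{pmatrix} + O(\lambda^4) \]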
An Aside Re CP Violation and Matter-Antimatter Asymmetry In the Universe
The observed level of CP violation is mysterious, however, because, while it exists in the Standard Model, the measured magnitude of CP violation in the Standard Model seems to be too small to explain the matter-antimatter asymmetry of the universe (at least involving baryons and charged leptons) assuming that (1) the universe began at the Big Bang in a pure energy state with no matter or anti-matter bias, (2) baryon number and lepton number are conserved as they are in the Standard Model except in rare high energy sphaleron processes, and (3) the basic outlines of the standard model of cosmology are correct (also here).
The Standard Model also cannot explain how the aggregate baryon number in the universe became non-zero, or how it reached its current estimated value, which is known to about one significant digit. We don't know the aggregate lepton number of the universe, the sum of B+L, or the difference B-L, because we don't know the relative proportion of neutrinos and anti-neutrinos in the universe. But, if there are more than a tiny fraction of a percent more neutrinos than anti-neutrinos in the universe, then L is not equal to zero, B+L and B-L have a large absolute value, and the absolute value of L is much greater than the absolute value of B. Very limited experimental hints to date tend to favor the existence of far more anti-neutrinos than neutrinos, which would imply a large negative value of L and B+L, and a very large positive value of B-L.
The Quarkonium Spectrum
The spectrum of mesons made up of a quark and an anti-quark of the same flavor, called quarkonium, is particularly problematic.
These states were already subject to an exception to the usual QCD rules governing hadron decay. We know that quarkonium states are usually suppressed in hadronic decays, due to the Zweig rule, also known as OZI suppression, which can also be stated in the form that "diagrams that destroy the initial quark and antiquark are strongly suppressed with respect to those that do not."
Quarkonium mesons are also notable for being particles that are their own anti-particles. If they did oscillate between particle and anti-particle states, the two states would be indistinguishable.
Quarkonium mesons easily blend into linear combinations with each other because (1) bosons can be in the same place at the same time, and (2) they have similar quantum numbers: all quarkonium mesons have zero electric charge, zero baryon number (quarks minus antiquarks), zero isospin (net number of up and down quarks and up and down antiquarks), zero strangeness (net number of strange and antistrange quarks), zero charm number (charm quarks minus charm antiquarks), and zero bottom number (bottom quarks minus bottom antiquarks).
There are no mesons that appear to have a purely uu, dd or ss composition. The neutral pion, the neutral rho meson, and the neutral omega meson are believed to be linear combinations of uu and dd mesons (the omitted one of four simple combinations of uu and dd may be a scalar meson). The eta meson and the eta prime meson are believed to be linear combinations of the uu, dd and ss mesons. Many of the lighter scalar and axial-vector meson states without charm or bottom quarks are also presumed to include linear combinations of uu, dd, and ss quarkonium mesons. There have been proposed nonets of scalar mesons that are chiral partners of the pseudoscalar mesons, for example, although the issue of the quark compositions of these true scalar mesons is not well resolved.
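For concreteness, the standard quark-model assignments are as follows (the pseudoscalar mixing angle θP is not precisely pinned down; values of very roughly -10° to -25° are commonly quoted):

\[ \pi^0 = \frac{u\bar{u} - d\bar{d}}{\sqrt{2}}, \qquad \eta = \eta_8 \cos\theta_P - \eta_1 \sin\theta_P, \qquad \eta' = \eta_8 \sin\theta_P + \eta_1 \cos\theta_P \]

\[ \text{where} \quad \eta_8 = \frac{u\bar{u} + d\bar{d} - 2s\bar{s}}{\sqrt{6}}, \qquad \eta_1 = \frac{u\bar{u} + d\bar{d} + s\bar{s}}{\sqrt{3}} \]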
The large masses of charm and bottom quarks relative to the non-quark "glue" mass of mesons make it harder for the quark content of charmonium and bottomonium-like states to remain ambiguous. These mesons are called XYZ mesons. But, they continue to show signs that they may be mixings of quarkonium states, rather than always being composed of a simple quark-antiquark pair of the same flavor of quark. There are about seven charmonium-like states that have been discovered that were not predicted by QCD, and a like number of such states that are predicted to exist but have not been observed. Bottomonium states present similar issues. The XYZ mesons have JPC quantum numbers of 0-+ (pseudo-scalar), 0++ (scalar), 1-- (vector), 1+- (pseudo-vector) and 2++ (tensor) in their J=0, 1 and 2 states. Mesons with the combinations 1++ (axial vector), 2-+ and 2-- are also theoretically permitted.
There are even some indications that there are resonances that are made up of bound states of oscillating mesons and anti-mesons, or of proton-antiproton pairs, that act like quarkonium mesons.
There are competing theories to describe and predict these quarkonium dominated meson spectra.
Mr. Olsen concludes by stating that:
The QCD exotic states that are much preferred by theorists, such as pentaquarks, the H-dibaryon, and meson hybrids with exotic JPC values continue to elude confirmation even in experiments with increasingly high levels of sensitivity. On the other hand, a candidate pp bound state and a rich spectroscopy of quarkoniumlike states that do not fit into the remaining unassigned levels for cc charmonium and bb bottomonium states has emerged.

No compelling theoretical picture has yet been found that provides a compelling description of what is seen, but, since at least some of these states are near D(*)D* or B(*)B* thresholds and couple to S-wave combinations of these states, molecule-like configurations have to be important components of their wavefunctions. This has inspired a new field of "flavor chemistry" that is attracting considerable attention both by the experimental and theoretical hadron physics communities.

A recent review pre-print by Choi reaches the same conclusion as Olsen: evidence for exotic hadrons predicted by QCD is absent; evidence for hadronic states that QCD has not anticipated (essentially the same ones, with some minor differences) is abundant and has not yet been well explained theoretically.
Time For A Breakthrough?
Implicit in Olsen's and Choi's discussions is the recognition that we now have a sufficiently large body of non-conforming experimental evidence that we may be close to the critical moment when some major theoretical breakthrough could, in one fell swoop, explain almost all of the data that is not a clean fit for existing QCD models with some sort of paradigm shift.
Other Issues with QCD
There are other outstanding issues in QCD beyond those identified by Olsen's paper. A few of these follow.
Infrared QCD
The infrared (i.e. low energy) structure of QCD, which can be explored only with lattice QCD, is also sometimes mysterious, with different methods producing different results. Particularly important is the question of whether the QCD coupling ultimately runs to zero, or instead has a "non-trivial fixed point." The issue is closely related to one of the current unsolved "Millennium Problems" in mathematics (the Yang-Mills existence and mass gap problem).
More generally, there are various competing models to explain the internal structure of hadrons, most of which are adequate for some purposes, but not others. We still don't have a definitive single picture that is superior in all circumstances to describe hadron structure.
Similarly, while experiment confirms that quarks are fundamental and point-like down to scales as small as 10^-20 meters, that point-like assumption implicitly contradicts general relativity at sub-Planckian distances well below 10^-34 meters. And we can't rule out the possibility that there is some sort of composite structure to the fundamental particles of the Standard Model, that particles are something distinct from the space-time background, or that there is discrete structure to space-time itself, at the quantum gravity Planck scale or smaller scales.
Right now, there is a "desert" of new physics, both over many orders of magnitude below the scale of QCD, and for many orders of magnitude above the electroweak scale up to the "GUT scale".
It also appears that, in the infrared, gluons, which have zero rest mass in the Standard Model, acquire a dynamical mass in an amount that is a function of their momentum (higher momentum gluons in low energy QCD have less dynamical mass).
This is quite odd because, usually, in the world of general and special relativity, particles with higher momentum acquire greater relativistic mass. The behavior that we observe in the case of massive gluons seems more like a macro-scale phenomenon in which friction diminishes for faster moving objects, thereby making them easier to push.
The Strong CP Problem
We still aren't sure whether there are any deep reasons for the fact that no CP violation is observed in QCD interactions (the CP violations in neutral mesons described above involve weak force interactions), despite the fact that it is a chiral theory and that there is a natural term for it in the QCD Lagrangian. This is called the "strong CP problem." Experimentally, the strongest bound on the QCD constant that would give rise to strong CP violation comes from the measured value of the electric-dipole moment of the neutron. The measured value is tiny and consistent with zero CP violation in strong force interactions.
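The term in question is the θ-term of the QCD Lagrangian, whose coefficient the neutron electric-dipole moment limit constrains to roughly |θ| < 10^-10 (a commonly quoted bound, not a precise figure):

\[ \mathcal{L}_{\theta} = \theta \, \frac{g_s^2}{32\pi^2} \, G^{a}_{\mu\nu} \tilde{G}^{a\,\mu\nu} \]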
Of course, like the hierarchy problem, the Strong CP problem is to some extent a presumptuous and philosophical problem. We know what the laws of nature are in this case and can express that by stating that the value of a particular QCD constant is zero or very nearly zero, rather than a number on the order of one that those who view this as a problem would have presumed it would be. Those who see this as a problem presume that there is some good reason that Nature shouldn't have made this choice and are arguing with Her over it. Maybe there is a deep reason for this, but maybe the value of this physical constant is just one more law of nature and we are really just exposing our ignorance of the overall pattern when our expectations don't line up with reality in this case.
Very Difficult Calculations
Meanwhile, even basic QCD exercises, like estimating a hadron's properties from its components when they are well defined, suffer from issues of low precision, because while it is possible to measure observable hadron properties precisely, it is very hard to do QCD calculations with enough terms to make the theoretical work highly precise. This, in turn, leads the values for input parameters like the strong coupling constant and the quark masses to be fuzzy as well.
Recent progress has been made, however, in using calculation methods like Monte Carlo methods and the amplituhedron to reduce the computational effort associated with these calculations. Often, a variety of other simplifying tricks, from using pure "Yang-Mills theory" rather than the specific real world case of Standard Model QCD, to estimating physical outcomes by extrapolating from models using different masses or numbers of quark flavors than we see in the real world, are also used to approximate the equations of Standard Model QCD that are impossible to solve exactly and directly.
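To illustrate the flavor of the Monte Carlo approach (a toy model only, nothing like a production lattice QCD code), a minimal Metropolis sampler for the Euclidean path integral of a one-dimensional harmonic oscillator, the standard textbook warm-up for lattice field theory, might look like this; all parameter values are arbitrary illustrative choices:

```python
# Toy Metropolis Monte Carlo for the Euclidean path integral of a 1D
# harmonic oscillator (units where m = omega = 1). Estimates <x^2>.
import math
import random

N = 100           # number of Euclidean time slices (lattice sites)
a = 0.5           # lattice spacing (illustrative choice)
n_sweeps = 20000  # Metropolis sweeps over the whole lattice
n_therm = 2000    # sweeps discarded for thermalization
step = 1.0        # proposal width for position updates

x = [0.0] * N     # the "field": particle position at each time slice

def local_action(x, i):
    """Pieces of the Euclidean action that depend on site i:
    kinetic couplings to both neighbors plus the potential term."""
    ip, im = (i + 1) % N, (i - 1) % N   # periodic boundary conditions
    kinetic = ((x[ip] - x[i]) ** 2 + (x[i] - x[im]) ** 2) / (2.0 * a)
    potential = a * 0.5 * x[i] ** 2
    return kinetic + potential

x2_sum, n_meas = 0.0, 0
for sweep in range(n_sweeps):
    for i in range(N):
        old_x, old_s = x[i], local_action(x, i)
        x[i] = old_x + random.uniform(-step, step)   # propose a change
        delta_s = local_action(x, i) - old_s
        if delta_s > 0 and random.random() > math.exp(-delta_s):
            x[i] = old_x                             # reject; restore
    if sweep >= n_therm:
        x2_sum += sum(v * v for v in x) / N
        n_meas += 1

# Roughly 0.45 at this lattice spacing (continuum value is 0.5).
print("<x^2> estimate:", x2_sum / n_meas)
```

Real lattice QCD replaces this one-dimensional chain with a four-dimensional grid of gluon field variables and quark fields, which is why it consumes supercomputer time rather than seconds on a laptop, but the stochastic sampling logic is the same.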
Discrepancies Between Theory and Experiment
As I've noted previously, sometimes perturbative QCD predictions differ materially from observed results even at energy scales where perturbative QCD should be reliable. This may simply be because the calculations are so hard to do right, as explained above. None of the discrepancies that seem to be present at this point involve cases where we are sufficiently confident of our calculations of the exact QCD prediction for the discrepancy to be newsworthy.
Other Lattice QCD points
Another review of the status of QCD, and in particular of lattice QCD, can be found here. It notes success in estimating hadron masses, and the problem that theoretical uncertainty in QCD contributions is the biggest contributor to uncertainty regarding the magnetic moment of the muon, which differs quite a bit in terms of standard deviations, but very little in absolute terms, from the theoretically predicted value.
Conclusion
While QCD has not yet definitively failed any tests of the Standard Model theory, and instead has been repeatedly validated, it has also been subject to much less precise experimental tests than any other part of the Standard Model. The absence of any really viable alternative to QCD has been key to its survival and to the lack of controversy over it in beyond the Standard Model physics discussions. But, few areas of the Standard Model have more wiggle room for beyond the Standard Model new physics.