What is the universe made of?
According to the six-parameter lambda CDM model, which predicts the patterns of the cosmic microwave background radiation we observe at about 2.7 kelvins, a very large part of the mass-energy of the universe is attributable to "dark energy" and "dark matter", with a tiny bit in principle (which the model disregards) attributable to ambient radiation. The rest is "ordinary" matter of the kind described by the Standard Model of Particle Physics, which is the subject of this post.
The best evidence available to us suggests that the universe, at a very fine-grained level, is almost perfectly electromagnetically neutral. Since net charge is conserved in all interactions, this has always been true going back as far as the laws of physics hold.
The best evidence also suggests that protons, neutrons and electrons make up virtually all of the ordinary matter in the universe (i.e. other than dark matter and dark energy), with only an infinitesimal share of it at any one time consisting of mesons, hadrons other than protons and neutrons, muons and taus, all of which are extremely unstable.
Thus, there is almost exactly one electron for every proton in the universe.
Each proton is made up of two up quarks and one down quark. It is possible to estimate the number of neutrons relative to the number of protons as well. About 90% of all known (non-dark matter) atoms are protium (H-1), with one proton and one electron but no neutron, and over 98% of the remainder is helium-4, with two protons, two neutrons, and two electrons. Heavier atoms have roughly as many neutrons as protons (or slightly more) in their naturally occurring isotopes, but they are rare in the proportions in which elements appear in nature. So, the overall ratio of protons to neutrons is probably between 19-to-1 and 20-to-1.
Thus, about 65% of the quarks in the universe are up quarks, about 35% are down quarks, a tiny fraction of a percent at any given moment are strange, charm, bottom or top quarks, and an even smaller fraction of a fraction of a percent at any given time are antiquarks.
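To make the arithmetic behind these percentages explicit, here is a minimal back-of-the-envelope sketch in Python, taking the roughly 19-to-1 proton-to-neutron ratio suggested above as an assumed input rather than a measured fact:

# Quark fractions implied by an assumed ratio of about 19 protons per neutron.
protons_per_neutron = 19  # assumption taken from the isotope abundance argument above

# Quark content: proton = 2 up + 1 down, neutron = 1 up + 2 down.
up_quarks = 2 * protons_per_neutron + 1
down_quarks = 1 * protons_per_neutron + 2
total = up_quarks + down_quarks

print(f"up fraction   ~ {up_quarks / total:.0%}")    # roughly 65%
print(f"down fraction ~ {down_quarks / total:.0%}")  # roughly 35%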
In the Standard Model, baryon number, which is the number of quarks minus the number of antiquarks, divided by three, is perfectly conserved. Likewise, lepton number, which is the number of leptons (i.e. electrons, muons, taus and neutrinos) minus the number of antileptons, is perfectly conserved.
How are neutrinos created?
Neutrinos can be created in two known ways.
A neutrino-antineutrino pair can be created from the decay of a Z boson. A Z boson is a heavy, electromagnetically neutral weak force boson that couples to all massive fundamental particles in the Standard Model, in proportion to the weak force coupling constant and the particle's weak isospin, a bit like a heavy photon.
Far more commonly, neutrinos are created when a W boson decays to a charged lepton and a neutrino or antineutrino. When the W+ boson decays, it often decays to a positron and an electron neutrino, to an antimuon and a muon neutrino, or to an antitau and a tau neutrino. In the more common situation, the decay of a W- boson emitted in connection with nuclear beta decay, the W- boson decays to an electron and an electron antineutrino, to a muon and a muon antineutrino, or to a tau and a tau antineutrino.
Of course, when a muon or tau is produced, it decays with high probability by emitting a virtual W- boson and leaving behind a neutrino of its own flavor; the W- boson, in turn, decays to a lighter charged lepton and an antineutrino (a tau can also decay hadronically). The net result of, for example, a muon decay is an electron, a muon neutrino and an electron antineutrino, and the resulting charged lepton may then annihilate with a charged antilepton or decay further.
The beta decay channel is by far the most common means by which neutrinos are created. It is fair to assume that there is roughly one antineutrino in existence for every electron (and for each muon and tau) in existence, plus an additional antineutrino for every ordinary neutrino that does not have a positron, antimuon or antitau counterpart.
Thus, the vast majority of neutrinos are actually antineutrinos. Likewise, the vast majority of antimatter particles in the universe are antineutrinos.
The mass proportions of ordinary matter in the universe
Protons and neutrons each have masses about 2,000 times that of the electron. So about 99.95% of the non-dark matter in the universe is made up of protons and neutrons (and less than 1% of that is attributable to the rest mass of the quarks in those hadrons; the rest arises dynamically from the strong nuclear force exchange of gluons between them, which is mostly localized in the central third of a proton or neutron's diameter).
Almost all of the rest of the non-dark matter in the universe comes from electrons. Electrons, in turn, have masses of roughly 1,000,000 to 1,000,000,000 times those of the three known kinds of neutrinos and antineutrinos. Thus, antimatter makes up something on the order of one part in two billion to one part in two trillion of the non-dark matter, non-dark energy content of the universe by weight, although it is hard to know how many neutrino-antineutrino pairs have been created through sequences of W+ boson decays or Z boson decays and have not annihilated each other. This gross asymmetry of matter and antimatter in the universe is one of the great unsolved questions of physics.
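As a rough, hedged illustration of this bookkeeping, the sketch below assumes a 75%/25% hydrogen/helium mix by mass, roughly one antineutrino per electron, and antineutrino masses between about one millionth and one billionth of the electron mass; all of these inputs are illustrative assumptions, not precise measurements:

# Rough mass bookkeeping for ordinary matter; all inputs are illustrative assumptions.
m_e = 0.511e6        # electron mass in eV
m_nucleon = 938.9e6  # approximate nucleon mass in eV

frac_H, frac_He = 0.75, 0.25  # assumed hydrogen/helium mix by mass

# Electron mass share: hydrogen has 1 electron per nucleon, helium-4 has 2 electrons per 4 nucleons.
electron_share = (frac_H * m_e / (m_e + m_nucleon)
                  + frac_He * 2 * m_e / (2 * m_e + 4 * m_nucleon))
print(f"electrons:        ~{electron_share:.3%} of ordinary matter by mass")
print(f"protons+neutrons: ~{1 - electron_share:.2%}")

# With ~one antineutrino per electron and antineutrino masses 1e-6 to 1e-9 of the
# electron mass, the antimatter share is correspondingly tiny.
for mass_ratio in (1e-6, 1e-9):
    print(f"antineutrino share ~ {electron_share * mass_ratio:.0e} of ordinary matter by mass")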
Quarks and charged leptons have a powerful tendency to rapidly decay to the first generation versions of these particles (up quarks, down quarks, and electrons). But, once you have an antineutrino of a particular type, it oscillates between the three different kinds of antineutrinos and the parameters of those oscillations are just on the brink of being determined. So, we don't know very accurately what proportions of the different antineutrino types are in the universe.
A few personal conjectures on neutrino mass and matter-antimatter asymmetry
One of the other great unsolved questions in physics is why neutrinos are so much less massive than all of the other Standard Model fermions.
My intuition is that the answer to this question has a deep connection to the matter-antimatter asymmetry in the universe, and probably also to the fact that the up quark is stable while the down quark is not, except in neutrons bound in atomic nuclei.
Since neutrons decay into protons, this decay must be balanced by a negatively charged lepton and, in order to conserve lepton number and electromagnetic charge, an electromagnetically neutral antilepton. If protons instead decayed into neutrons, it would take a charged antilepton and an electromagnetically neutral lepton to balance the books.
One of the reasons I doubt that neutrinos are their own antiparticles and have Majorana mass is that their essential function in beta decay is to be antileptons that can balance lepton number. If a neutrino and an antineutrino were the same thing, this wouldn't work. Their intrinsic antimatter character is critical to the role that antineutrinos play in particle physics.
One way to describe an antiparticle is as an ordinary particle going backward in time.
One way to interpret an annihilation event, when a charged particle and a charged antiparticle come into contact and give rise to a photon with energy equal to their combined mass-energy, is that a single particle moving forward in time is knocked backward in time by the incredibly powerful punch of a superenergetic photon. In this interpretation, the amount of energy necessary to make a particle moving forward in time reverse direction is equal to two times its rest mass times the speed of light squared, plus an adjustment for its momentum. The energy released in a matter-antimatter annihilation is roughly two and a half orders of magnitude greater (a factor of a few hundred) than the energy released in a nuclear fusion reaction involving the same mass of reactants.
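For a sense of scale, here is a rough sketch comparing the two, assuming complete conversion of rest mass for annihilation and the standard deuterium-tritium fusion yield of about 17.6 MeV per reaction; the figures are order-of-magnitude estimates only:

# Energy per kilogram of reactants: annihilation vs. deuterium-tritium fusion (rough).
c = 2.998e8          # speed of light, m/s
MeV = 1.602e-13      # joules per MeV
u = 1.661e-27        # kilograms per atomic mass unit

annihilation_J_per_kg = c**2                  # all rest mass converted to energy
fusion_J_per_kg = 17.6 * MeV / (5.03 * u)     # 17.6 MeV per ~5.03 u of D + T reactants

print(f"annihilation: {annihilation_J_per_kg:.2e} J/kg")
print(f"D-T fusion:   {fusion_J_per_kg:.2e} J/kg")
print(f"ratio:        roughly {annihilation_J_per_kg / fusion_J_per_kg:.0f} to 1")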
If you apply the intuition of this interpretation to W- boson decay, you would reason heuristically that a W- boson wants to decay into two particles of roughly equal mass-energy. On one side of the balance is the mass-energy necessary to create an electron. On the other side of the balance is the mass-energy necessary to create a neutrino and then convert it from a particle into the antiparticle that is necessary to keep the interaction's lepton number balanced. The feat of creating even a tiny amount of antimatter counterbalances the much easier act of creating ordinary matter in the form of an electron on the other side of the balance.
The neutrinos then seek a hierarchy of masses between the three generations of neutrinos in a manner similar to that of the charged leptons and quarks, but the need to cross the matter-antimatter barrier profoundly suppresses the amount of mass transmitted from charged leptons to antineutrinos via W boson exchange relative to the parallel process for quarks (outlined as a conjecture here). Effectively, because of this matter-antimatter barrier, neutrinos receive mass contributions only from other neutrinos, and charged leptons receive contributions only from other charged leptons, unlike up-like quarks, which receive contributions from all of the down-like quarks they can interact with, and down-like quarks, which receive contributions from all of the up-like quarks they can interact with.
I also suspect that the matter-antimatter imbalance in the universe has been with us since not long at all after the Big Bang, probably at the very least by the end of the inflationary era. Our matter-dominated universe is an arrow of time. I suspect, but can't prove, that there is another universe that exists in the time before the Big Bang, in which causality runs in the other direction and what we call antimatter is just as predominant as what we call ordinary matter is in our universe. Our universe is rushing away from the Big Bang in one direction in time, and the other universe is rushing away from the Big Bang in the other direction in time. At the "time zero" boundary within the Big Bang, pure energy condenses into matter-antimatter particle pairs, with the matter particles ending up on our side of t=0 and the antimatter particles ending up on their side of t=0, because the fundamental essence of matter is that it moves forward in time (as we reckon it) and the fundamental essence of antimatter is that it moves backward in time.
Once you start with a matter-dominated universe, annihilation of stray particles of charged antimatter, together with ordinary W boson and Z boson decays, perpetuates a matter-dominated universe, with the sole residual exception being roughly one part per two billion to two trillion of the mass-energy of the universe in the form of antineutrinos.
These questions aren't answered by the Standard Model itself. They may not even be answerable questions, except to the extent that the observed masses of particles and their frequencies coincide, or do not coincide, with a more rigorous version of these heuristic ideas, or to the extent that this kind of thinking fosters a train of thought that leads to other conclusions that are somehow more rigorously testable.
But focusing on the antimatter character of most neutrinos in accounting for their tiny mass, rather than on their lack of electromagnetic charge, may be a useful exercise.
An alternative, although not entirely independent, heuristic could also play a role. The constant process of alternating between left parity and right parity modes, while retaining the same character on the particle-antiparticle dimension, possibly due to the Higgs field, may be an important process in the generation of the rest masses of the fundamental particles. Since neutrinos can only change between a left parity and a right parity mode by simultaneously changing from a particle to an antiparticle mode, which poses a much greater barrier to that transition, their masses are suppressed.
Thursday, March 28, 2013
Wednesday, March 27, 2013
Strassler On Talking About Science
I think it very important for scientific experts to be clear, when they speak in public, about what is known and well-established, what is plausible and widely believed but still needs experimental checks, and what is largely speculative and could very well be false. (For example: The Higgs particle and field are nearly established; inflation is increasingly plausible; any connection between them is speculative.) . . .
Just as we widely agree the Higgs particle must have zero spin, and that the inflaton is quite likely to have zero spin, I’d like to see a consensus emerge that public communication of particle physics, string theory and cosmology should also have zero spin. Too bad that’s still a rather speculative idea.
From Matt Strassler's blog.
His point is well taken. I would draw the lines between some of the categories he identifies in moderately different places than he does, but his framework is a sound one.
I would suggest, however, that in many cases there are two or three competing positions which are plausible and widely believed by a subset of the scientific community which are not "largely speculative", and that in those cases (e.g. SUSY, MOND and loop quantum gravity), it would be helpful to acknowledge that there are competing theories and to explain their relative levels of acceptance.
Often there will be a majority or plurality position, and one or more minority views held by significant numbers of respectable mainstream scientists, which have not been resolved and may be impossible to resolve for extended periods of time due to limited experimental data. In these cases, all of the competing theories generally produce very similar predicted phenomenological outcomes within the range of experimental data whose accuracy is not seriously subject to question. Indeed, sometimes these differing positions arise from disputes over the validity of alternative experimental methods.
Likewise, it is often important to distinguish between largely speculative ideas that are professionally respectable ideas, even if they are not widely believed by any subset of the scientific community at this point, and "crackpot" ideas that are starkly contradicted by widely accepted empirical evidence and are contrary to widely accepted physical laws, or are internally flawed in deep ways (e.g. they are not mathematically or dimensionally consistent).
This is particularly important in the area of fundamental physics, where the proportion of all published work of professional physicists that is largely speculative is much larger than in many other academic disciplines.
A huge share of the published work in theoretical physics analyzes toy models of possible laws of nature that are largely speculative or even known to be contrary to empirical evidence, but are not "crackpot" ideas. They are published with an eye towards understanding the implications of a class of mathematical models, to see if they could possibly be made to correspond to empirical evidence if further developed, and to determine what implications the "new physics" in those models beyond the Standard Model and general relativity might have. These papers are intermediate steps in the massive undertaking of looking for a final and complete set of the laws of nature, and they are driven by the known imperfections of the status quo, at least from the point of view of seeking a comprehensive and rigorous set of laws of nature. But, for the most part, they don't even pretend to be plausible and widely believed evidence-based inferences about the way that the world already is right now.
A person not familiar with this state of the published and peer reviewed literature in theoretical physics could easily be led astray into thinking that largely speculative ideas have more importance than they actually do, because these kinds of published and peer reviewed papers setting forth largely speculative ideas are far more rare in many other academic disciplines.
I'd also note that it is often the case that we know with a great deal of confidence that some scientific proposition is X or Y, but don't know which one is correct. The range of possibilities may be "known and well established", but statements about which of the possibilities is actually right may be "largely speculative."
Monday, March 25, 2013
The Lambda CDM Model Says Little About CDM
Cosmic microwave background radiation studies, culminating in an as-good-as-it-gets one-time measurement by the Planck satellite (i.e. one in which inherent theoretical limits on measurement, rather than experimental imprecision, dominate), which released all of its data except the polarization data last week, produce results that are fitted to the "Standard Model of Cosmology", the six-parameter lambda CDM model, with a few additional parameters considered.
One often overlooked, but absolutely key point to understand about the lambda CDM model is that it doesn't really meaningfully specify much about the CDM part. It establishes that there must be a certain amount of very generally described non-hot dark matter, but very little more.
Thursday, March 21, 2013
Precision Cosmic Background Radiation Results In
The Planck satellite team has released the most detailed data ever on the cosmic background radiation of the universe, which tells us a great deal about the basic parameters of cosmology (more papers here).
The new data shrink the estimated proportion of dark energy relative to matter (i.e. the best fit value for the cosmological constant is a bit smaller).
Taken together with other data, the results strongly favor a cosmology with just three generations of neutrinos, with the sum of the three respective mass eigenvalues between 0.06 and 0.23 eV and the best fit at the bottom of that range. A sterile neutrino of the kind suggested by the anomalies at two Earth-bound neutrino experiments (LSND and MiniBooNE) is not a good fit, and if there were one it would have to have less than 0.5 eV of mass, which is considerably smaller than the roughly 1.3 eV estimate from the anomalies observed to date and other data. In a nutshell, the Standard Model triumphs once again. See posts on the state of these measurements pre-Planck here and here based on the 9-year WMAP data.
It is also important to note that the lambda CDM model doesn't say very much about the nature of its cold dark matter component at all. It does not specify any particular dark matter model.
Scientific results include robust support for the standard, six parameter lambda CDM model of cosmology and improved measurements for the parameters that define this model, including a highly significant deviation from scale invariance of the primordial power spectrum. The values for some of these parameters and others derived from them are significantly different from those previously determined. Several large scale anomalies in the CMB temperature distribution detected earlier by WMAP are confirmed with higher confidence. Planck sets new limits on the number and mass of neutrinos, and has measured gravitational lensing of CMB anisotropies at 25 sigma. Planck finds no evidence for non-Gaussian statistics of the CMB anisotropies. There is some tension between Planck and WMAP results; this is evident in the power spectrum and results for some of the cosmology parameters. In general, Planck results agree well with results from the measurements of baryon acoustic oscillations.
The cosmological parameters paper is probably the most interesting in terms of providing concrete results. With regard to neutrinos it finds that "Using BAO and CMB data, we find Neff = 3.30 +/- 0.27 for the effective number of relativistic degrees of freedom, and an upper limit of 0.23 eV for the sum of neutrino masses. Our results are in excellent agreement with big bang nucleosynthesis and the standard value of Neff = 3.046. . . . Since the sum of neutrino masses must be greater than approximately 0.06 eV in the normal hierarchy scenario and 0.1 eV in the degenerate hierarchy (Gonzalez-Garcia et al. 2012), the allowed neutrino mass window is already quite tight and could be closed further by current or forthcoming observations (Jimenez et al. 2010; Lesgourgues et al. 2013)." The best fit value for the sum of neutrino masses in the Planck data is 0.06 eV, but the data are not terribly precise.
We find no evidence for extra relativistic species, beyond the three species of (almost) massless neutrinos and photons. The main effect of massive neutrinos is a suppression of clustering on scales larger than the horizon size at the non-relativistic transition. . . . Using Planck data in combination with polarization measured by WMAP and high-ℓ anisotropies from ACT and SPT allows for a constraint on the sum of the neutrino species masses of < 0.66 eV (95% CL) based on the [Planck+WP+highL] model. Curiously, this constraint is weakened by the addition of the lensing likelihood, to < 0.85 eV (95% CL), reflecting mild tensions between the measured lensing and temperature power spectra, with the former preferring larger neutrino masses than the latter. Possible origins of this tension are explored further in Planck Collaboration XVI (2013). . . . The signal-to-noise on the lensing measurement will improve with the full mission data, including polarization, and it will be interesting to see how this story develops.
– using a likelihood approach that combines Planck CMB and lensing data, CMB data from ACT and SPT at high ℓs, and WMAP polarized CMB data at low ℓs, we have estimated the values of a “vanilla” 6-parameter lambda CDM model with the highest accuracy ever. These estimates are highly robust, as demonstrated by the use of multiple methods based both on likelihood and on component-separated maps.
– The parameters of the Planck best-fit 6-parameter lambda CDM are significantly different than previously estimated. In particular, with respect to pre-Planck values, we find a weaker cosmological constant (by 2 %), more baryons (by 3 %), and more cold dark matter (by 5 %). The spectral index of primordial fluctuations is firmly established to be below unity, even when extending the CDM model to more parameters.
– we find no significant improvements to the best-fit model when extending the set of parameters beyond 6, implying no need for new physics to explain the Planck measurements.
– The Planck best-fit model is in excellent agreement with the most current BAO data. However, it requires a Hubble constant that is significantly lower (≈ 67 km s^-1 Mpc^-1) than expected from traditional measurement techniques, raising the possibility of systematic effects in the latter.
– An exploration of parameter space beyond the basic set leads to: (a) firmly establishing the effective number of relativistic species (neutrinos) at 3; (b) constraining the flatness of space-time to a level of 0.1%; (c) setting significantly improved constraints on the total mass of neutrinos, the abundance of primordial Helium, and the running of the spectral index of the power spectrum.
– we find no evidence at the current level of analysis for tensor modes, nor for a dynamical form of dark energy, nor for time variations of the fine structure constant. . . .
– we find important support for single-field slow-roll inflation via our constraints on running of the spectral index, curvature and fNL.
– The Planck data squeezes the region of the allowed standard inflationary models, preferring a concave potential: power law inflation, the simplest hybrid inflationary models, and simple monomial models with n > 2, do not provide a good fit to the data.
– we find no evidence for statistical deviations from isotropy at ℓ > 50, to very high precision.
– we do find evidence for deviations from isotropy at low ℓs. In particular, we find a coherent deficit of power with respect to our best-fit lambda CDM model at ℓs between 20 and 30.
– We confirm the existence of the so-called WMAP anomalies.
The Planck team's analysis of the possibility of a sterile neutrino finds that it is not a good fit and imposes a mass limit of about 0.5 eV on the sterile neutrino species, which is considerably less than the mass suggested by reactor anomaly data.
UPDATE: I posted the following as a comment at the Not Even Wrong Blog without links.
The result I read in paper sixteen was Neff = 3.30 +/- 0.27 vs. Neff = 3.046 for the three Standard Model neutrinos. So, their result is a little less than one sigma from the Standard Model value. A four neutrino model would have an Neff of a bit more than 4.05, which is about three sigma from the measured value, roughly a 99% exclusion, and a confirmation of the Standard Model.
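A quick sanity check of those sigma figures, treating the quoted 0.27 uncertainty as Gaussian (a simplifying assumption):

# Sigma distances of the measured Neff from the two hypotheses discussed above.
measured_neff, sigma = 3.30, 0.27
standard_model_neff = 3.046   # three Standard Model neutrino species
four_species_neff = 4.05      # approximate value with a fourth light species

print(f"from the Standard Model value: {(measured_neff - standard_model_neff) / sigma:.2f} sigma")
print(f"from a four-species value:     {(four_species_neff - measured_neff) / sigma:.2f} sigma")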
Planck, combining data from multiple sources, also puts a cap on the sum of the three neutrino masses in a three Standard Model neutrino scenario of 0.24 eV (at 95% CI), with a best fit value of 0.06 eV. The floor from non-astronomy experiments is 0.06 eV in a normal neutrino mass hierarchy (based on the difference between mass one and mass two, and between mass two and mass three, which are both known to about two significant digits) and 0.1 eV in an inverted neutrino mass hierarchy. In a normal neutrino mass hierarchy, this puts the mass of the electron neutrino at between 0 and 0.06 eV, with the low end preferred (I personally expect that the electron neutrino mass is significantly less than the roughly 0.009 eV mass difference between the first and second neutrino types).
Note that a particle with a mass in the hundreds or thousands of eVs would not count towards Neff because it is not light enough to be relativistic at 380,000 years after the Big Bang. So, this really only rules out a light sterile neutrino, rather than a heavy one. The LSND and MiniBooNE anomalies have hinted at a possible fourth generation sterile-ish neutrino of about 1.3 eV +/- about 30%, so the Planck people did a study of the mass limits if there were four, rather than the favored three, relativistic species, and came up with a cap on the sterile neutrino mass in that scenario of about 0.5 eV +/- 0.1 eV, which is about 2.5 sigma away from the LSND/MiniBooNE anomaly estimates considering the combined uncertainties.
LEP ruled out a fourth species of fertile neutrino of under 45 GeV, and I wouldn’t be going out on a limb to say, without actually doing the calculations, that a fertile neutrino of 45 GeV to 63 GeV, if it existed, would have wildly thrown off all of the Higgs boson decay cross-sections observed (since a decay to a 45 GeV to 63 GeV neutrino-antineutrino pair from a 125.7 GeV Higgs boson would have been a strongly favored decay path if it existed) and is therefore in fact excluded by the latest round of LHC data.
The LEP data already excluded fertile neutrinos in the 6 GeV to 20 GeV mass range where there are contradictory direct dark matter detection experiment results at different experiments.
But, a particle that we would normally call a sterile neutrino for other purposes, in the Warm Dark Matter mass range of keV or the Cold Dark Matter mass range of GeV to hundreds of GeV, or anything in between (including any of the possible direct dark matter detection signals or anything that would generate the Fermi line at 130 GeV), would not be a relativistic particle within the meaning of Neff, which only counts particles that would move at relativistic speed given their masses at the relevant time.
ADDITIONAL UPDATE: The mass difference between neutrino mass one and neutrino mass two is about 0.009 eV (usually reported squared, at about 7.5 * 10^-5 eV^2), and the difference between mass two and mass three is about 0.05 eV (usually reported squared, at about 2.5 * 10^-3 eV^2), for a combined 0.059 eV. If the neutrino mass hierarchy is broadly similar to that of the quarks and the charged leptons (it is impossible to fit the values already known to a perfect Koide triple), one would expect an electron neutrino mass on the order of 0.001 eV (i.e. 1 meV) or less.
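The numbers above follow from taking square roots of the measured squared splittings; a short sketch, assuming a normal hierarchy with a nearly massless lightest state:

from math import sqrt

dm21_squared = 7.5e-5   # eV^2, the "solar" splitting quoted above
dm32_squared = 2.5e-3   # eV^2, the "atmospheric" splitting quoted above

dm21 = sqrt(dm21_squared)   # about 0.009 eV
dm32 = sqrt(dm32_squared)   # about 0.05 eV

print(f"m2 - m1 ~ {dm21:.3f} eV")
print(f"m3 - m2 ~ {dm32:.3f} eV")
# With the lightest mass near zero, the sum of the three masses is roughly:
print(f"approximate minimum sum of masses: {dm21 + dm32:.3f} eV")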
Planck is the beginning and to a great extent the end of cosmic background radiation physics.
Also, the precision of the Planck data is so much better than anything that has come before it, including the previously state-of-the-art 9-year WMAP data released earlier this year, that you can basically ignore any pre-Planck data on cosmic background radiation in any respect that the Planck data address. If you use the Particle Data Group approach of computing global averages with weights inversely proportional to the square of each measurement's margin of error, the relative weights are perhaps 9-to-1 or more.
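For readers unfamiliar with the procedure, here is a minimal sketch of inverse-variance weighting; the two measurements and their uncertainties are made-up numbers chosen only to show that a three-fold improvement in precision translates into roughly nine times the weight:

# Inverse-variance weighted average of two hypothetical measurements.
def weighted_average(values, errors):
    weights = [1.0 / e**2 for e in errors]
    average = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    combined_error = (1.0 / sum(weights)) ** 0.5
    return average, combined_error

# Hypothetical "Planck-like" and "WMAP-like" measurements of the same quantity.
values, errors = [0.960, 0.970], [0.005, 0.015]
average, combined_error = weighted_average(values, errors)

print(f"combined value:   {average:.4f} +/- {combined_error:.4f}")
print(f"relative weights: {(errors[1] / errors[0])**2:.0f} to 1")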
Realistically, Planck and successor cosmic background radiation experiments may be the only way to experimentally probe this truly high energy physics regime of the early universe for decades and possibly ever. There are good theoretical reasons why we can't directly observe anything older (e.g. star formation happened after the cosmic background radiation arose, so there was nothing to make coherent light emitting objects). And, almost nothing in the current universe or any experiment we could create has higher energies than the pre-cosmic background radiation universe we are probing with this data.
Planck is measuring the entire universe-wide cosmic background radiation, a data set of one. We can't measure some other universe's cosmic background radiation outside of computer simulations, and there is no reason that the cosmic background radiation observable from our solar system, or from anywhere nearby we could send a space probe, should change noticeably in my lifetime or the lifetimes of my children or grandchildren. Future experiments can be more precise, but we understand electromagnetism almost perfectly, we know all of the properties of cosmic background radiation that it is even theoretically possible to measure, and we have measured almost all of them already (or are on the verge of doing so in the next few years) with Planck. Details can be refined, but the big picture won't change. Really:
In the early 1990s, the COBE satellite gave us the first precision, all-sky map of the cosmic microwave background, down to a resolution of about 7 degrees. About a decade ago, WMAP managed to get that down to about half-a-degree resolution. But Planck? Planck is so sensitive that the limits to what it can see aren’t set by instruments, but by the fundamental astrophysics of the Universe itself! In other words, it will be impossible to ever take better pictures of this stage of the Universe than Planck has already taken.
Inflation and Cosmology Findings
I'll have to leave for a future post an in depth analysis of the constraints that the Planck findings place on cosmology apart from dark energy proportions, dark matter amounts, and neutrino generations and masses, but I'll discuss a few briefly in this post. There are several really interesting things going on there.
* First, the new Planck data provide much more meaningful experimental constraints on theories of "inflation" shortly after the Big Bang, which, after dark matter, is probably the second biggest body of experimental evidence screaming out for new physics.
Because inflation takes place in the extremely high energy, extremely early universe (when it was smaller than one meter across and only a tiny fraction of a second old), it is hard to make inferences about it from experiments like the LHC and from observable astronomy, which are many orders of magnitude below the energy densities present in the proposed inflationary era. So "new physics" in this area, outside the range of our experience or likely future, is far less consequential than dark matter, which affects the world we see today. But such "new physics" is still a big deal and may be important to the structure of a "Theory of Everything" or a quantum gravity theory (e.g. string theory vacua), at the very least by ruling out theories that have high energy behavior inconsistent with the experimental boundaries of inflation scenarios.
A lot of inflation theories that had been viable candidates, receiving serious discussion almost ever since the need for inflation in a cosmology theory was recognized in the 1970s (around the same time that the Standard Model was formulated), have been ruled out by the latest round of Planck data. Power law inflation, the simplest hybrid inflationary models, simple monomial models with n > 2, single fast roll inflation scenarios, multiple stage inflation scenarios, inflation scenarios with flat or convex potentials, dynamical dark energy, and time variations of the fine structure constant are all strongly disfavored. Any theory that would produce non-Gaussian statistics of the CMB anisotropies, a non-flat universe, tensor modes, or statistically discernible deviations from isotropy at ℓ > 50 is ruled out.
If your theory was phenomenologically distinct from a "single slow roll inflation scenario with a concave potential" in any non-subtle way, you were wrong; thanks for playing.
I will need to read more to fully understand these implications myself, but more inflation theories have died today than on any previous day in history, and more than will die on any day to come in the future (since there are fewer inflation theories left than the number killed today). A book-length catalog (300 pages) of the pre-March 21 ranks of inflation theories is available at arXiv. What is inflation?
Dark energy is broadly similar to inflation, and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, 10^-12 GeV, roughly 27 orders of magnitude less than the scale of inflation.
Basically, inflation involves a scalar field called the inflaton that is dissipated in the inflation process.
According to inflation theory, the inflaton field provided the mechanism to drive a period of rapid expansion from 10^-35 to 10^-34 seconds after the initial expansion that formed the universe.
The inflaton field's lowest energy state may or may not be a zero energy state. This depends on the chosen potential energy density of the field. Prior to the expansion period, the inflaton field was at a higher-energy state. Random quantum fluctuations triggered a phase transition whereby the inflaton field released its potential energy as matter and radiation as it settled to its lowest-energy state. This action generated a repulsive force that drove the portion of the universe that is observable to us today to expand from approximately 10^-50 metres in radius at 10^-35 seconds to almost 1 metre in radius at 10^-34 seconds.
Inflaton conforms to the convention for field names, and joins such terms as photon and gluon. The process is "inflation"; the particle is the "inflaton".
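Taking the figures in the quoted description at face value (an assumption; different inflation models give different numbers), one can translate that expansion into e-folds, the unit usually used to describe inflation:

from math import log

# Radii quoted above for the observable patch of the universe (illustrative figures).
radius_before = 1e-50   # metres, at about 1e-35 seconds
radius_after = 1.0      # metres, at about 1e-34 seconds

expansion_factor = radius_after / radius_before
e_folds = log(expansion_factor)

print(f"expansion factor: {expansion_factor:.0e}")
print(f"e-folds of inflation: about {e_folds:.0f}")  # ~115, comfortably above the ~60 usually cited as needed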
* Second, the lack of scale invariance in the power spectrum of the cosmic background radiation has been confirmed, parameterized at a value of about 0.96, with 1.00 being a purely scale invariant power law. The lambda CDM model has a parameter to describe this deviation, but no mechanism to make it happen. This is a prediction of many inflation models.
* Third, something weird seems to be going on between ℓ's 20 and 30. This is the only material respect in which the Planck data deviate from the lambda CDM model. Intuitively, it seems very plausible that the source of the ℓ = 20 to 30 deviation and the source of the lack of scale invariance could be the same. Some small second order effect not captured by the six-parameter lambda CDM model appears to be involved here.
For example, both the lack of scale invariance and the weirdness from ℓ = 20 to 30 are plausible consequences of where on the spectrum from hot dark matter to cold dark matter a dark matter particle resides.
Roughly speaking, in simple single dark matter particle models, the mass of the particle (or the dominant particle if there are multiple kinds but one has a predominant impact on phenomenology in the way the first generation fermions that form protons, neutrons and atoms in the Standard Model do) governs where deviations in large scale structure related to scale arise. Hot dark matter has almost no large scale structure. Warm dark matter gives rise to roughly the amount of large scale structure we observe. Cold dark matter gives rise to more dwarf galaxies and large scale structure that is more finely grained than we observe.
All of this, of course, is model dependent, and the generalizations are based on a simple, almost completely non-interacting dark sector with just one kind of particle and no significant new forces beyond those known to us already. A single instance of inflation alone is enough to get the observed near scale invariance in a lambda CDM model, but it doesn't explain the ℓ = 20 to 30 anomaly, which could have an entirely different source (or simply be random variation that is improbable but has no deeper cause, or experimental error).
Some perspective on this anomaly from this blog:
Planck sees the same large scale anomalies as WMAP, thus confirming that they are real rather than artifacts of some systematic error or foreground contamination (I believe Planck even account for possible contamination from our own solar system, which WMAP didn't do). These anomalies include not enough power on large angular scales (ℓ ≤ 30), an asymmetry between the power in two hemispheres, a colder-than-expected large cold spot, and so on.
The problem with these anomalies is that they lie in the grey zone between being not particularly unusual and being definitely something to worry about. Roughly speaking, they're unlikely at around a 1% level. This means that how seriously you take them depends a lot on your personal prejudices, er, priors. One school of thought – let's call it the "North American school" – tends to downplay the importance of anomalies and question the robustness of the statistical methods by which they were analysed. The other – shall we say "European" – school tends instead to play them up a bit: to highlight the differences with theory and to stress the importance of further investigation. Neither approach is wrong, because as I said this is a grey area. But the Planck team, for what it's worth, seem to be in the "European" camp.
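To see why a 1% chance sits in that grey zone, it helps to convert it into the sigma language particle physicists use; a small sketch, treating the quoted 1% as a two-sided Gaussian tail probability (an assumption about how the figure was meant):

from statistics import NormalDist

p_value = 0.01  # the "around a 1% level" quoted above
sigma_equivalent = NormalDist().inv_cdf(1 - p_value / 2)

print(f"p = 1% corresponds to about {sigma_equivalent:.1f} sigma (two-sided)")
# Roughly 2.6 sigma: notable, but well short of the 5 sigma discovery standard.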
* Fourth, the fact that space-time is "flat" to a precision of 0.1% is remarkable given that we conceive of general relativity as a warping of space-time. Overwhelmingly, this warping of space-time due to gravity is local rather than global.
What drives the conclusions about inflation?
The preference for a simple model is driven by several factors:
(1) The data are a good fit to a simple power law with a not quite scale invariant exponent of about 0.96 rather than 1.0 (a roughly five sigma difference from a value of 1.0) that shows no statistically significant tendency to run with scale (i.e. the best fit value for the running of the spectral index, -0.0134 +/- 0.0090, is only about 1.5 sigma from zero); a quick numerical check of these figures appears after item (3) below.
(2) The best fit value for a tensor contribution is at or nearly at zero. The absence of any indication of a tensor mode in the inflaton, as opposed to a merely scalar inflaton, seems to be another important factor driving the exclusion of other models. "In a model admitting tensor fluctuations, the 95% CL bound on the tensor-to-scalar ratio is r_0.002 < 0.12 (< 0.11) using Planck+WP (plus high-ℓ). This bound on r implies an upper limit for the inflation energy scale of 1.9*10^16 GeV . . . at 95% CL."
(3) The best fit inflation scenarios are likewise almost maximally concave (i.e. the potential drops more in the early part of the decline in the inflaton potential than later on). The Planck report concludes by noting that: "The simplest inflationary models have passed an exacting test with the Planck data. The full mission data including Planck’s polarization measurements will help answer further fundamental questions, including the possibilities for nonsmooth power spectra, the energy scale of inflation, and extensions to more complex models."
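A quick numerical check of the figures in item (1): the running of the spectral index is quoted in the text, while the spectral index uncertainty of about 0.0073 is an assumed illustrative value, not a number taken from the post:

# Significance of the departure from exact scale invariance and of the running.
n_s, n_s_error = 0.9603, 0.0073               # assumed illustrative Planck-era values
running, running_error = -0.0134, 0.0090      # quoted in item (1) above

print(f"n_s versus 1.0:     about {(1.0 - n_s) / n_s_error:.1f} sigma")       # roughly 5 sigma
print(f"running versus 0.0: about {abs(running) / running_error:.1f} sigma")  # roughly 1.5 sigma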
Evidence for a GUT?
The coincidence of the Planck upper limit on the inflation energy scale with the completely independently derived grand unification scale, based upon the running of the Standard Model (or SUSY) coupling constants, is impressive. Even if SUSY is not the way the coupling constants converge, the notion of a grand unification at inflation energies by some means (perhaps via quantum gravity theories) is aesthetically very tempting.
Mostly Off Topic Other Items Of Interest:
More On Why We Don't Need SUSY
Woit has an interesting post on a talk by LHC physicist Joe Lykken on why the "hierarchy problem" that SUSY seeks to solve isn't actually a problem with anything but how theoretical physicists are thinking about the issue.
Dark Matter and MOND
* Somewhat off topic, in January of this year, an interesting new MOND paper by MOND inventor Milgrom and two co-authors was published, setting forth a MOND cosmology (MOND argues that much of what is attributed to dark matter is due to a modification of the law of gravity rather than dark matter particles).
* The dominant unresolved question in physics remains the need to understand dark matter phenomena. As I've said before, and as Planck confirms once again, a simple cosmological constant completely explains dark energy within the context of the same theory of General Relativity that we've had for a century now - dark energy, rather than being mysterious, is a solved problem.
General relativity does not explain dark matter phenomena, which are operationally defined as deviations from the predictions of general relativity observed by astronomers that don't relate to "inflation" in cosmology. The Standard Model provides no dark matter candidates and the LHC is foreclosing more of them. The lambda CDM model separately accounts for mass from baryons, neutrinos, radiation and effective mass-energy from the cosmological constant and has dark matter left over, but this six parameter fit to cosmic background radiation data collected by WMAP and Planck, for example, does very little to specify the nature of the dark matter component. Direct dark matter searches that have shown any dark matter signals contradict each other and are contradicted by searches that have come up empty in roughly the 10 GeV to 100 GeV range for all but the very lowest cross-sections of interaction (well below that of neutrinos).
Simulations have shown that WIMPs or other simple Cold Dark Matter scenarios produce more dwarf galaxies than we observe, and none of the Cold Dark Matter models can rival MOND in closely approximating almost all galaxy level dark matter effects in a predictive manner with just a single experimentally measured constant. The cuspy dark matter halos predicted by Cold Dark Matter models are likewise contrary to what we observe, which is inferred halo distributions of dark matter that look more like rugby balls with their long axis passing through a galaxy's central black hole and poking up out of the plane of the galaxy's rotation.
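For context on the "single experimentally measured constant" remark, the constant is Milgrom's acceleration scale a_0; a minimal statement of the MOND limits (my summary, not taken from the linked paper) is:

```latex
% a_N is the Newtonian acceleration and a_0 ~ 1.2e-10 m/s^2.
% The deep-MOND limit gives asymptotically flat rotation curves
% with v^4 = G M_b a_0 (the baryonic Tully-Fisher relation).
a \simeq a_N \ \ (a_N \gg a_0), \qquad
a \simeq \sqrt{a_N\, a_0} \ \ (a_N \ll a_0), \qquad
v_{\mathrm{flat}}^4 = G\, M_b\, a_0
```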
Monday, March 18, 2013
Higgs boson global fit to Standard Model is tight
There are many indirect and direct experimental indications that the new particle H discovered by the ATLAS and CMS Collaborations has spin zero and (mostly) positive parity, and that its couplings to other particles are correlated with their masses. Beyond any reasonable doubt, it is a Higgs boson, and here we examine the extent to which its couplings resemble those of the single Higgs boson of the Standard Model. Our global analysis of its couplings to fermions and massive bosons determines that they have the same relative sign as in the Standard Model. We also show directly that these couplings are highly consistent with a dependence on particle masses that is linear to within a few %, and scaled by the conventional electroweak symmetry-breaking scale to within 10%. We also give constraints on loop-induced couplings, on the total Higgs decay width, and on possible invisible decays of the Higgs boson under various assumptions. . .
The data now impose severe constraints on composite alternatives to the elementary Higgs boson of the Standard Model. However, they do not yet challenge the predictions of supersymmetric models, which typically make predictions much closer to the Standard Model values.
From here.
[They find] "the combined Higgs signal strength to be very close to the Standard Model value: mu equals 1.02 +0.11/-0.12." The best fit to the Higgs boson's decay width is likewise almost the same as the Standard Model value (i.e. a one sigma confidence interval of about 3.7 MeV to 4.5 MeV).
The best experimental fit to the Higgs vacuum expectation value, which is theoretically 246.22 GeV in the Standard Model, is currently 244 +20/-10 GeV.
New Neutrinoless Double Beta Decay Limits
The Neutel 13 conference did not report any confirmed detections of neutrinoless double beta decay and updated the minimum half-life that is experimentally excluded based on results from the Kamland-Zen experiment.
Efforts are underway to improve the precision of the observation at Kamland-Zen significantly (perhaps even by a factor of 100) in the next few years.
The new combined limit is about twice that of the EXO-200 result of a year ago, and the exclusion of the claimed Heidelberg-Moscow neutrinoless double beta decay detection (called KK for lead investigator H. V. Klapdor-Kleingrothaus) now exceeds two sigma. The Heidelberg-Moscow experiment claimed a finding at a six sigma level that would imply an effective Majorana mass of 0.2-0.6 eV (reference here), but rather than being replicated, it has been contradicted by two other experiments.
Neutrinoless double beta decay measurements are important within neutrino physics because they are necessary to determine whether the neutrino has Majorana mass or only Dirac mass, and because they are predicted in a wide variety of beyond the Standard Model physics proposals including most supersymmetry (SUSY) theories. See also here.
Generically, SUSY models with larger characteristic energy scales show greater levels of neutrinoless double beta decay. And, the latest LHC results suggest that if SUSY does exist, it must have a high characteristic energy scale. So, the combination of lower bounds on the SUSY energy scale from the LHC which are rising (to the mid-hundreds of GeV), and upper bounds on the SUSY energy scale from the neutrinoless double beta decay rate which are falling (to less than one TeV), put the squeeze on SUSY parameter space for all SUSY models except those that are fine tuned to suppress neutrinoless double beta decay.
Increasingly strict boundaries on the parameters of a Majorana mass other than the mass itself from experimental data are explored here.
The half-life is measured to be larger than 1.9E25 years at 90% CL. If they combine this with EXO-200 results, the lower limit goes up to 3.4E25 years. This corresponds to 120-250 meV for the Majorana neutrino mass. From this combined result one sees that there is inconsistency with the earlier claim of observing a signal in KK. . . . They . . . obtained lower limits on the zero-neutrino mode, which combined with EXO-200 gives an inconsistency with the KKDC claim at more than 97.5% confidence level. This compares to a limit reported on this blog on June 5, 2012 of 1.6*10^25 years at EXO-200 (contradicting the KKDC claim at a 68% CI), and 2.6*10^24 years at Kamland-Zen.
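For context, the conversion from a half-life limit to the quoted effective Majorana mass range follows schematically from the standard decay rate formula (the spread in the mass range is dominated by the nuclear matrix element uncertainty):

```latex
% Neutrinoless double beta decay rate: G^{0\nu} is a phase space factor,
% M^{0\nu} the nuclear matrix element, m_e the electron mass.
\left(T_{1/2}^{0\nu}\right)^{-1} = G^{0\nu}\,\left|M^{0\nu}\right|^{2}\,
\frac{\langle m_{\beta\beta}\rangle^{2}}{m_e^{2}}
```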
Friday, March 15, 2013
How wide is the Higgs boson?
In particle physics, two numbers are used to describe a particle's resonance. One is its mass, usually stated in electron-volt units, and the other is its width, which uses the same units and describes the rate at which the particle decays (with larger widths corresponding to faster decays and shorter lifetimes). On a chart of collider results, a resonance with a narrow width is a very sharp peak.
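As a rough illustration of the relationship between width and lifetime (a sketch of my own; the hbar value is standard, and the widths are the ones quoted in this post):

```python
# A sketch of the width-lifetime relationship, tau = hbar / Gamma,
# with hbar in MeV*s and widths in MeV.
HBAR_MEV_S = 6.582e-22

def lifetime_seconds(width_mev):
    """Mean lifetime implied by a total decay width in MeV."""
    return HBAR_MEV_S / width_mev

print(lifetime_seconds(4.07))     # ~1.6e-22 s for a 125 GeV SM Higgs (4.07 MeV width)
print(lifetime_seconds(2091.0))   # ~3.1e-25 s for the W boson (2.091 GeV width)
print(lifetime_seconds(2495.4))   # ~2.6e-25 s for the Z boson (2.4954 GeV width)
```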
Particle width is one fairly global way to distinguish a Standard Model Higgs boson from similar beyond the Standard Model impostors without getting too far into the weeds of looking at every possible individual decay cross-section data comparison. Analysis of experimental data involving data including the estimated Higgs boson width, for example, has helped to rule out a simple fourth generation Standard Model variant.
The Standard Model Higgs boson, if it has a 125 GeV mass, would be predicted to have a 4.07 MeV width with an uncertainty of +/- 4%, which is derived from the sum of the partial widths of its particular decay modes. (The width of the Standard Model Higgs boson is a non-linear function of its mass.)
By comparison, in the latest global fits made with the data announced this week, the width of the W boson is 2.091 +/- 0.001 GeV and the width of the Z boson is 2.4954 +/- 0.0014 GeV. The width of the top quark is 2.0 (+0.7-0.6) GeV. The width of the Standard Model Higgs boson is about 1/500th of these amounts. If the Standard Model Higgs boson had instead had a mass of 200 GeV its width would have been about 2 GeV.
Those arise from a combination of Standard Model Higgs boson branching fractions (bottom-bottom 58%, WW 21.6%, digluon 8.5%, tau-tau 6.4%, ZZ 2.7%, charm-charm 2.7%, and diphoton 0.22%), and production mechanism cross-sections (gluon fusion 15.3 +/- 2.6 pb, vector boson fusion 1.2 pb, W boson associated production 0.57 pb, and Z boson associated production 0.32 pb) (all per the linked article in the following sentence). The width is too small to measure directly at the LHC or any other existing collider.
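As a back-of-the-envelope illustration (my own arithmetic, not from the linked articles), multiplying the branching fractions quoted above by the ~4.07 MeV total width gives approximate partial widths for each decay mode:

```python
# Partial width = branching fraction * total width; purely illustrative rounding.
total_width_mev = 4.07
branching_fractions = {
    "bb": 0.58, "WW": 0.216, "gg": 0.085, "tau tau": 0.064,
    "ZZ": 0.027, "cc": 0.027, "diphoton": 0.0022,
}
for mode, frac in branching_fractions.items():
    print(f"{mode}: {frac * total_width_mev:.3f} MeV")
print("sum of listed fractions:", sum(branching_fractions.values()))  # ~1.00
```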
A June 2012 paper estimates the width of the 125 GeV Higgs boson observed at the LHC, in light of observations at Tevatron, to be 6.1 (+7.7/-2.9) MeV (i.e. 3.2-13.8 MeV), about 50% more than the Standard Model expectation and possibly as much as three times the Standard Model expectation. This estimate uses the Standard Model Higgs boson's predicted cross-sections and decays as a benchmark, compares the measured values of those cross-sections to the Standard Model predictions, and adjusts the total width accordingly.
A February 26, 2013 paper provided an analysis with slightly more up to date numbers and somewhat more involved analysis that placed an approximately 14% bound on exotic decays at the one sigma level, using the same methods.
I have not yet seen a published post-Moriond paper with an updated Higgs boson width estimate. This month's announcements from the Moriond Conference, and in particular the greatly reduced diphoton cross section relative to the Standard Model expectation at CMS which previously showed a large excess in this decay cross-section, have demonstrated cross sections closer to the Standard Model expectation. The new data has also refined the mass estimate upwards somewhat.
Taken together, this new data should reduce the estimated width of the Higgs boson from 6.1 MeV towards 4.1 MeV and narrow the error bars around this mean value.
Global Electroweak Fits Refined by LHC
Many observable Standard Model predictions have both experimental values and theoretically predicted values based on other Standard Model constants. The Higgs boson mass was the last (non-neutrino) constant in the Standard Model to be observed and this has allowed for a global fit of all of the electroweak force influenced observables in the Standard Model in a way that minimizes overall experimental measurement deviation from Standard Model predictions.
This analysis has now been done. All of the dozens of Standard Model observables in the fit are consistent with each other. A global fit nudges some of the mean values of the constants while reducing the uncertainty in many of them significantly. Using a Higgs boson mass of 125.7 +/- 0.4 GeV (which is assumed to be a Standard Model Higgs boson), some of the values most notably nudged by the inclusion of the Higgs boson mass in the fits are:
W boson mass: 80.385 +/- 0.015 GeV => 80.367 +/- 0.007 GeV (change is 1.2 sigma)
Z boson mass: 91.1875 +/- 0.0021 GeV => 91.1878 +/- 0.0021 GeV (change is 0.14 sigma)
Top quark mass: 173.18 +/- 0.94 GeV => 173.52 +/- 0.88 GeV (change is 0.36 sigma).
The best fit for the strong force coupling constant at an energy scale equal to the Z boson mass is 0.1191 +/- 0.0028.
The inclusion of the Higgs boson mass in the fits did not alter estimates of the charm quark mass or bottom quark mass.
The greatest tensions in the fit relate to b meson observables. One is 2.5 sigma above the Standard Model prediction and another is 2.4 sigma below it. The nineteen other observables included in the global fit were all within two sigma of the Standard Model prediction, and mostly much closer. These tensions are present with or without including the measured Higgs boson mass, which slightly reduces them.
Implications for possible Higgs boson mass formulas:
This makes the mean value of the quantity (2W+Z)/2 equal to 125.96 +/- 0.004 GeV and the value of (W + top quark mass)/2 equal to 126.94 +/- 0.44 GeV. Both combinations have been suggested as Higgs boson mass formulas.
The former is 0.26 GeV from the measured Higgs boson mass, which is about 0.65 sigma, whether considering only the uncertainty in the Higgs boson mass or considering all uncertainties combined, since the W and Z boson masses are known so precisely.
The latter is 1.24 GeV from the measured Higgs boson mass, about 3.1 sigma considering only the uncertainty in the Higgs boson measurement, and about 2.1 sigma considering all of the uncertainties, which are similar in size for the top quark and Higgs boson mass values.
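A minimal numerical sketch of those two comparisons, using the post-fit values quoted above (the error propagation here is naive and my own, not the paper's):

```python
# Compare the two proposed Higgs mass formulas to the measured Higgs mass.
from math import sqrt

m_w, s_w = 80.367, 0.007     # GeV, global fit W boson mass
m_z = 91.1878                # GeV, global fit Z boson mass
m_t, s_t = 173.52, 0.88      # GeV, global fit top quark mass
m_h, s_h = 125.7, 0.4        # GeV, measured Higgs boson mass used in the text

m1 = (2 * m_w + m_z) / 2                     # (2W + Z)/2 ~ 125.96 GeV
m2 = (m_w + m_t) / 2                         # (W + top)/2 ~ 126.94 GeV
s2 = sqrt(s_w ** 2 + s_t ** 2) / 2           # ~0.44 GeV

print(m1, m2)
print(abs(m1 - m_h) / s_h)                   # ~0.65 sigma (Higgs uncertainty only)
print(abs(m2 - m_h) / s_h)                   # ~3.1 sigma (Higgs uncertainty only)
print(abs(m2 - m_h) / sqrt(s_h**2 + s2**2))  # ~2.1 sigma (all uncertainties)
```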
Thursday, March 14, 2013
CMS Diphoton Higgs Data Finally In
More Higgs Boson Mass Data
The combined CMS Higgs boson mass estimate now including both ZZ and diphoton channels is 125.8 +/- 0.4 +/- 0.4 GeV. This measurement is indistinguishable from the 125.99 value of (W+W+Z)/2, but is increasingly a bit of a stretch for (top plus W)/2.
The crude average of the two ATLAS measurements averaged with the combined CMS measurement gives a central Higgs mass value of about 125.7 GeV.
As an aside, the vacuum expectation value of the Higgs field is 1/sqrt(sqrt(2)*GF) ≃ 246 GeV, where GF = 1.16639(2)*10^-5 GeV^-2 is the Fermi coupling constant. More precisely, it is 246.2279579 GeV. This value was determined from a measurement of the lifetime of the muon. The Fermi constant, in turn, has a theoretical value equal to the square of the weak force coupling constant divided by the square of the W boson mass, times the square root of two divided by eight. The Higgs boson mass less one half of the Higgs field vev is, to the same precision as the Higgs boson mass measurement, about 2.5 GeV.
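Spelling that aside out as formulas (standard tree-level electroweak relations, not anything specific to the papers discussed here):

```latex
% Higgs field vacuum expectation value from the Fermi constant, and the
% tree-level relation between G_F, the weak coupling g and the W boson mass.
v = \left(\sqrt{2}\,G_F\right)^{-1/2} \approx 246.22\ \mathrm{GeV},
\qquad \frac{G_F}{\sqrt{2}} = \frac{g^{2}}{8\,M_W^{2}}
```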
New CMS Cross-Section Data Shows No Diphoton Excess
In the diphoton channel at that mass, both the tagged and untagged analyses are consistent with the Standard Model. The untagged result is consistent with the Standard Model expectation at about one sigma, and the tagged result is consistent at a bit more than 2 sigma, with the overall deviation from the Standard Model in the diphoton channel declining.
As Strassler explains (my main source is a couple of Moriond talk slides): "To repeat: with more data, CMS does not confirm the excess that they saw in July, and does not confirm the excess seen currently by ATLAS. "
The overall fit with the Standard Model in all decay channels is 0.88 +/- 0.21, where 1 is a perfect match with the Standard Model expectation; thus, the measured Higgs cross-sections, overall, fit the SM Higgs cross-sections at considerably less than the one sigma level. Overall measured ZZ production strength is an almost perfect fit to the Standard Model expectation and overall measured diphoton production strength is within one sigma of the Standard Model expectation with the full dataset. The relative WW vs. ZZ production rates were a near perfect match for the Standard Model expectation.
All of the other LHC Higgs data to date was summarized here.
The combined CERN experiment has announced that this is really a Higgs boson and not merely a Higgish boson.
All But One BSM Higgs Boson Model Tested Is Ruled Out By CMS Data
The data fit the SM Higgs boson and disfavor all alternative models tested thus far at the 2 sigma confidence level or more, except 0h+ (i.e. the light, neutral, even parity SUSY Higgs boson). There was no detailed discussion of how a SUSY 0h+ could be distinguished from the Standard Model Higgs boson. This suggests that the other four SUSY Higgs bosons, if SUSY is correct and similar to the MSSM, must all have similar masses and those masses may be far in excess of the LHC detection limits.
So, even if SUSY is out there at extremely high energy scales, at LHC energies, the Standard Model may turn out to be completely consistent with all experimental evidence.
Failure to find any BSM experimental data at all at the LHC, which is where we are headed so far, would tremendously constrain any alternatives to it. Even though it doesn't, strictly speaking, rule out all string theories and supersymmetry theories, it effectively makes all leading BSM theories far less interesting and shifts the focus from Beyond The Standard Model to Within The Standard Model (WSM) efforts to find deep theories that imply the gum and duct tape version we have to explain it all.
An Ansatz Re The Basis Of Koide's Formula
Why should an e-u-d Koide triplet be real? Does a massless up quark make sense?
This post is really a response to Mitchell's post, written in response to my Koide triple post of yesterday, which he links in his comment to that post. I wanted to simply make a comment at his blog, but I would have had to break my discussion into multiple comments there due to Blogger limitations, so I decided to just do a response post instead.
This lengthy post lays out at considerable length, with a fair amount of detail and some examples, my personal conjectures about the kind of theoretical ideas that I think could be at work in giving rise to the observed phenomenological relationships of the Standard Model.
I aspire to have done this in a way that illustrates that I genuinely do have some sense of the kind of things that could be going on at a deeper level in the Standard Model that would make sense. I also aspire to have done so in a manner that someone with more skill than I have could conceivably translate into a pretty respectable Within The Standard Model (WSM) extension that builds on its well established existing approaches to resolve a number of unsolved questions for theoretical physics that lurk there.
As a WSM rather than a BSM this analysis proposes ways to secure more theoretical coherence and elegance in the SM, but does not predict much, if anything, in the way of "new physics." The problems it solves in the SM are "why" questions, not "what" questions.
If these mechanisms were better understood along these lines, however, it would take the wind out of the sails of many BSM models that are mostly motivated by "why" questions and only provide new physics at something close to GUT scales that can't be practically tested or applied anyway. Acceptance of this kind of WSM theory might also reprioritize the kinds of theoretical and experimental work in physics that would seem most fruitful to pursue first, given limited resources.
The gist of the concerns Mitchell raised and the details of the underlying theoretical ideas are below the jump.
Wednesday, March 13, 2013
Is There An Electron, Up, Down Koide Triple?
Koide's Formula and Related Extensions
Koide's formula in its original form asserts that:
(electron mass + muon mass + tau mass)/(sqrt(electron mass) + sqrt(muon mass) + sqrt(tau mass))^2 = 2/3.
This is true to the highest levels of precision determined to date, which for the charged leptons is very great.
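A quick numerical check (a sketch using approximate PDG-style charged lepton masses; exact inputs vary slightly by edition):

```python
# Verify Koide's relation for the charged leptons, masses in MeV.
from math import sqrt

m_e, m_mu, m_tau = 0.5109989, 105.6583745, 1776.86  # MeV

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(Q)  # ~0.66666, i.e. 2/3 to roughly one part in 10^5
```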
A Koide triple is any set of three particle masses that satisfies that relationship.
The hypothesis that there are Koide triples among the quarks, which is not inconsistent with the data at the current level of precision (which isn't very great), is that the following are Koide triples:
top, bottom, charm
bottom, charm, strange
charm, strange, down
A related observation is that the combined mass of the bottom, charm, strange triple is almost precisely three times the mass of the tau, muon, electron triple (a notion that corresponds to the fact that in weak force decays three times as many quarks, one for each color, are produced as leptons).
Koide's Formula, the Up Quark Mass and a Possible Up, Down, Electron Triple.
Implications of zero mass or neutrino scale mass for up quarks.
The final conceivable triples following that pattern are charm, strange, up and strange, up, down. Koide's formula predicts a near zero value for the up quark mass from a c, s, u triple. But if that value is carried through to the down quark in the s, u, d triple, it produces a value within the measured range of the down quark mass.
Using central values of t=172.9 GeV (a hair low with the latest data) and b=4.19 GeV, the chain works out as follows (a numerical sketch of this calculation follows the list):
Koide(t,b,c) implies c=1.356 GeV (PDG value 1.180-1.340 GeV)
Koide(b,c,s) implies s= 92 MeV (PDG value 80-130 MeV)
Koide(c,s,u) implies u= 36 KeV (PDG value 1,700 to 3,100 KeV)
Koide(s,u,d) implies d= 5.3 MeV (PDG value 4.1-5.7 MeV)
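A minimal sketch of how that chain can be generated numerically: treating the Koide relation as a quadratic in the square root of the unknown mass and taking the smaller root (which may come out negative, i.e. a signed square root) reproduces the values in the list above. The helper function here is my own illustration, not anything from Koide's papers.

```python
# Given two masses of a Koide triple, the relation
#   (a + b + c) / (sqrt(a) + sqrt(b) + sqrt(c))**2 = 2/3
# is a quadratic in sqrt(c); take a root and square it to get the third mass.
from math import sqrt

def koide_third_mass(a, b, lower=True):
    """Third mass of a Koide triple given the other two (signed-sqrt convention)."""
    sr = sqrt(a) + sqrt(b)                 # sum of the two known square roots
    root = sqrt(6 * sr * sr - 3 * (a + b))
    x = (2 * sr - root) if lower else (2 * sr + root)
    return x * x

t, b = 172.9, 4.19                 # GeV, central values used in the text
c = koide_third_mass(t, b)         # ~1.356 GeV
s = koide_third_mass(b, c)         # ~0.092 GeV
u = koide_third_mass(c, s)         # ~3.6e-5 GeV, i.e. ~36 keV
d = koide_third_mass(s, u)         # ~5.3e-3 GeV, i.e. ~5.3 MeV
print(c, s, u, d)
```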
You can also form a Koide triple of an electron, up and down if you use an electron mass of about 0.511 MeV, an up quark mass of zero, and a down quark mass of 6.7 MeV.
And, if you use a value of zero rather than 36 KeV for the up quark, and use the 6.7 MeV value for the down quark predicted by the electron, up, down triple, the formula predicts a strange quark mass of 92 MeV.
This strange quark mass derived from the electron, up, down triple and the assumption that the up quark has a zero mass is consistent with the experimentally measured mass value of the strange quark, is consistent with a "top quark down" calculation of the strange quark mass, and is consistent with an estimate based upon the mass relationship of the bottom, charm, strange triple to the charged lepton mass triple.
Even a modest mass of 36 KeV for the up quark makes a significant difference in the estimated value of the down quark mass via an electron, up and down Koide triple or a strange, up and down Koide triple. But, an up quark mass on the order of magnitude of 1 eV or less does not throw off the Koide triple by more than can be easily made up with tiny tweaks to calibration points elsewhere.
This is important because there are a variety of theoretical reasons why an up quark with a non-zero but negligible rest mass, even if it was just 1 eV, would involve a far more modest tweak to the Standard Model than a truly zero mass up quark.
The Koide formula's prediction does not alter the experimentally estimated combined up and down quark mass.
The 6.7 MeV estimate for the down quark mass from applying Koide's formula naively is also not far from the Particle Data Group (PDG) mid-range value for the up quark and mid-range value for the down quark mass combined, which is 7.3 MeV. The sum of the lower extremes of the PDG estimates for the up and down quark masses is 5.8 MeV and the sum of the upper extremes of the PDG estimates for the up and down quark masses is 8.8 MeV. Twice the PDG estimate of the mean up and down quark masses is 6.0 MeV to 9.6 MeV, a range within which the 6.7 MeV Koide's formula value fits comfortably.
Thus, the Koide formula predicted value for the sum of the up and down quark masses when the up quark is assumed to have a mass of zero is well within the PDG value. Koide's formula simply allocates all of the combined mass to the down quark rather than assigning a mass to the up quark of 35% to 60% of the down quark mass. It also does nothing to alter the longstanding assumption based largely on the fact that the proton is lighter than the neutron, that the down quark is heavier than the up quark.
Reconsidering the experimental estimate of the up quark mass.
Keep in mind that up quarks are always confined and can't be measured in isolation the way that top quarks can be, and that almost all of the mass in hadrons (two quark mesons and three quark baryons) is derived from the strong force binding energy carried by gluons and not from the quarks themselves. This is particularly true in the case of hadrons that have only up and down quarks like the proton and the neutron for which the measured hadron masses that contribute to the estimates are most precise.
Since up quarks are always confined, any estimate of the up quark mass is necessarily model dependent. Yet, computations of quantities like the proton or neutron mass from first principles using QCD alone have a precision of only about 1%, making them far less precise than the experimentally measured masses of hadrons.
Also, the experimental uncertainty in the mass of every quark other than the up quark equals or exceeds the low end of the PDG's experimental value for the up quark mass. So, in hadrons containing quarks other than up quarks, the up quark mass contribution to the total mass is generally smaller than the total uncertainty in the fundamental quark mass contribution to the hadron's total mass.
The strengths of the strong force, weak force, and electromagnetic force are so great at the scale of a hadron, relative to the masses of the quarks involved in all but the most exotic hadrons with heavy quarks in them, that an up quark's color charge, weak isospin and electromagnetic charge all have more relevance to its behavior when confined in a hadron than its fundamental mass (except insofar as its mass influences its weak force decays).
Obviously, if the Koide's formula prediction conflicts with the experimental data then there is simply something wrong with the formula. But the consistency of the predictions from an electron, up, down triple with those of the series of quark triples, and with the predictions of quark masses from the masses of the charged lepton triple, all argue for revisiting the model dependent assumptions that went into making the PDG estimate of the up quark's mass (which in any case has extremes that vary by a factor of two).
Figuring out how the up quark mass was estimated and what practical implications the up quark mass has in the Standard Model is clearly near the top of my to do list.
Implications of a zero mass for the up quark.
If the up quark mass were assumed to be zero, as a non-measured Standard Model constant rather than an experimentally measured one, and the other quark masses were estimated based upon this model dependent assumption, how would the estimated quark masses differ, and what experiments, if any, that were the basis for the PDG estimate would be contradicted?
Some of the issues of how the up quark mass is determined and what this implies in practice when doing QCD are discussed in this 2004 paper and in other papers from 2010 and 2011 by the same author, Michael Creutz, who together with the authors of this 2002 paper is interested in the possibility that a massless up quark could explain the strong CP problem. This 2003 paper (possibly identical) also investigates the possibility of a massless up quark and makes a mass calculation for the up quark using lattice QCD. This paper from 2001 disfavors that solution in a model limited to two quarks (the 2002-2003 analysis is a three quark flavor analysis).
This 1997 paper gets into the guts of mass renormalization for quarks. A 2009 model dependent estimate of the up and down quark masses shows how these quantities are derived in QCD. A 2011 paper uses the up-down mass difference and applies it to neutrino scattering. This 2011 paper discusses relevant source data in the context of a BSM Higgs mass generation idea.
(I've omitted papers by Koide himself in this review.) Thinking similar to Koide's on mass matrices is found in a 2013 paper and in this 2012 paper and this 2012 paper. A BSM model from 2011 explores similar ideas. A 1999 paper considers implications for quintessence theories.
Current light quark mass ratio estimates don't differ materially from those devised by Weinberg and discussed in this 1986 paper, whose abstract stated:

We investigate the current-mass ratios of the light quarks by fitting the squares of meson masses to second order in chiral-symmetry breaking, determining corrections to Weinberg's first-order values: mu/md=0.56, ms/md=20.1. We find that to this order, ms/md is a known function of mu/md. The values of the quark-mass ratios can be constrained by limiting the size of second-order corrections to the squares of meson masses. We find that for specific values of presently unmeasured phenomenological parameters one can have a massless u quark. In that case 30% of the squares of meson masses arise from operators second order in chiral-symmetry breaking.

A 1979 estimate is also not that different in its early estimation of light quark masses, nor are this 1996 paper and a 1994 paper. The 1994 paper's abstract stated that: "the claim that mu = 0 leads to a coherent picture for the low energy structure of QCD is examined in detail. It is pointed out that this picture leads to violent flavour asymmetries in the matrix elements of the scalar and pseudoscalar operators, which are in conflict with the hypothesis that the light quark masses may be treated as perturbations."

This 1978 paper's abstract states:

We consider, within the framework of current algebra, the possibility that the up-quark mass vanishes (as an alternative to the axion). We argue that the contrary current-algebra value, mu/md=1/1.8, is unreliable. A critical analysis leads to the conclusion that mu=0 is not unreasonable and furthermore leads to a surprisingly good prediction for the δ-meson mass.

A massless up quark has thus been seriously considered as a viable option since 1978, and was still being examined as a possible solution to the strong CP problem into the 1990s, although the 1994 analysis quoted above argued against it.
A zero or nearly zero mass for the up quark might help explain why proton decay is so surprisingly rare.
A zero mass for the up quark, together with the extended Koide's formula that motivates it, would imply that the masses of all six quarks and all three charged leptons can be calculated from the mass of the electron with high school algebra, via the extended Koide's formula (including the mass relationship of the charged lepton triple to one of the quark triples and the electron, up, down triple) and the assumption that the up quark has zero mass, to accuracies greater than those available for any of the experimentally measured quark masses (even the top quark, whose mass is currently known to 0.6% accuracy).
This would reduce the number of experimentally measured physical constants related to fermion mass in the Standard Model + Extended Koide Model from fifteen to four (the electron and the three neutrino masses).
If the supposition that the Higgs boson mass is equal to half of the sum of the masses of the W+, W- and Z bosons is correct (it is currently accurate to within all current bounds of experimental precision and is closer to the experimentally measured mark than any of the prediscovery mass predictions for the Higgs boson mass), then the number of experimentally measured physical constants related to boson mass in the Standard Model would fall from three to two, one of which (the Weinberg angle, which relates the W and Z boson masses) isn't even a mass value itself.
Thus, we could be on the verge of going from having eighteen measured Standard Model mass constants to having just six, and having much more accurate theoretical values than experimental values for many of those constants.
This would also strongly motivate a Koide derived formula for neutrino masses that, if devised and confirmed by experimental evidence, would cut the number of experimentally measured mass constants in the Standard Model from six to not more than five (one of which is an angle rather than a mass), and possibly to as few as three if a way were devised to derive the neutrino masses from first principles using the masses of the other fermions and Standard Model bosons (and perhaps the PMNS and/or CKM matrix elements and/or the Standard Model coupling constants).
Extensions To A Standard Model With Four Generations
Extending the Koide's formula allows one to make useful, constrained and testable predictions regarding a fourth generation of Standard Model particles.
Fourth generation Standard Model particles with the masses a naive extension of Koide's formula would imply are experimentally forbidden, because the lepton sector is inconsistent with experimental data. This is a conclusion that has already been reached, for the most part, by the fundamental physics community on other grounds.
Fourth Generation Koide Quarks
If one extends the formula based upon recent data on the mass of the bottom and top quarks, presumes that there is a b', t, b triple, and uses masses of 173.4 GeV for the top quark and 4.19 GeV for the bottom quark, then the predicted b' mass would be 3,563 GeV and the predicted t' mass would be about 83.75 TeV (i.e. 83,750 GeV).
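The same quadratic trick sketched earlier, but taking the larger root so that the triple is extended upward in mass, reproduces these fourth generation numbers (again, an illustrative calculation of my own, not anything from the literature):

```python
# Extend the Koide chain upward by taking the larger root of the quadratic.
from math import sqrt

def koide_third_mass(a, b, lower=True):
    """Third mass of a Koide triple given the other two (signed-sqrt convention)."""
    sr = sqrt(a) + sqrt(b)
    root = sqrt(6 * sr * sr - 3 * (a + b))
    x = (2 * sr - root) if lower else (2 * sr + root)
    return x * x

t, b = 173.4, 4.19                                   # GeV, as in the text
b_prime = koide_third_mass(b, t, lower=False)        # ~3,563 GeV
t_prime = koide_third_mass(t, b_prime, lower=False)  # ~83,750 GeV, i.e. ~83.75 TeV
print(round(b_prime), round(t_prime))
```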
Since they would be produced as t'-anti-t' and b'-anti-b' pairs, it would take about 167.5 TeV of energy to produce a t' pair and 7.1 TeV of energy to produce a b' pair. Producing a t' would be far beyond the capabilities of the LHC. But, it could conceivably produce a few b' quark events of the Koide's formula predicted mass. These would be unmistakable unless the extreme speeds of the decay products prevented them from decaying (as a result of special relativity effects) until they reached a point beyond the most remote LHC detectors. This probably wouldn't happen for a b' decay, which is within the design parameters of the LHC, but might happen in the case of a fluke t' decay, which is far outside of its design parameters.
The up to the minute direct exclusion range at the LHC for the b' and t' is that there can be no b' with a mass of less than 670 GeV and no t' with a mass of less than 656 GeV (per ATLAS) and the comparable exclusions from CMS are similar (well under 1 TeV).
Koide t' and b' quark decays
A simple fourth generation b' quark or t' quark of that mass, that otherwise fits the Standard Model, would decay so rapidly that it would not hadronize (i.e. not form composite QCD particles via strong force gluon interactions). Instead, the t' would decay almost exclusively to the b' and the b' would decay almost exclusively to the t, with both interactions happening almost instantaneously.
A t' decay to a b' would produce a highly energetic W+ boson that would carry much of the energy of 80 TeV of rest mass being converted into kinetic energy for the W+ and b' produced in the decay, immediately followed by a highly energetic W- boson produced in the b' to t decay in which about 3,390 GeV of kinetic energy was created from rest mass, followed by the usual immediate t quark to b quark decay with an emission of a W+ converting about 169.2 GeV of rest mass into kinetic energy for the W+ and b quark. There would be an exactly parallel set of reactions for the decay chain of the anti-t' particle.
This highly energetic emission and subsequent decays of the t' quark to a b quark would produce 3 W+ bosons and 3 W- bosons at three discrete and equal energy levels would all take place in about 10^-23 seconds. This is because the lifetime of a t quark is 0.5 * 10^-24 seconds, the lifetime of a W boson is 0.3 * 10^-24 seconds, and the lifetime of a fourth generation Standard Model t' or b' quark would be less than that of the top quark (probably much, much less). The bottom quark and a large share of the heavy decay products of the highly energetic W boson decays (such as b', c and b quarks, and tau prime and tau charged leptons and their antiparticles which have lifetimes of 10^-12 to 10^-13 seconds, with b and c quarks hadronizing into exotic and short lived hadrons before decaying further) would in turn decay by about the time that they had traversed a distance roughly equal to the distance from the center of a gold atom to its outmost orbiting electrons within about 10^-12 seconds. Strange quarks decay in about 10^-10 seconds and muons decay in about 10^-6 seconds.
In the absence of special relativity, this would take place within a sphere of a diameter of less than 10^-16 meters (i.e. about 1-2% of the diameter of a nucleus of a gold atom, a number derived from the decay time of 10^-23 seconds for the first three decays times the speed of light), the strange quark decays would start to happen about a foot from the original site of the decay, and the muon decays would peak about 300 meters away. But, since particle decay takes place in the reference frame of the particle, which is moving at speeds near the speed of light, the decays would take place over a far more extended area because time would pass more slowly for the fast moving t' decay products. The extreme kinetic energies of the particles would cause their decays to happen at much greater distances from the initial t' production and decay site than ordinary LHC decays - indeed they might make it past the detectors entirely.
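To put rough numbers on the time dilation point (purely illustrative values of my own choosing, not from any collider study):

```python
# Lab-frame decay length L = gamma * beta * c * tau for an ultra-relativistic particle.
from math import sqrt

C = 3.0e8  # speed of light, m/s

def decay_length_m(energy_gev, mass_gev, tau_s):
    """Approximate lab-frame decay length for a particle of given total energy."""
    gamma = energy_gev / mass_gev
    beta = sqrt(max(0.0, 1.0 - 1.0 / gamma ** 2))
    return gamma * beta * C * tau_s

# A b hadron (mass ~5 GeV, lifetime ~1.5e-12 s) carrying 1 TeV travels ~0.1 m on
# average before decaying; the same hadron carrying 100 TeV would travel ~10 m,
# i.e. potentially past the inner detectors.
print(decay_length_m(1000.0, 5.0, 1.5e-12))
print(decay_length_m(100000.0, 5.0, 1.5e-12))
```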
Also, while a 167.5 TeV event does involve a lot of energy in a concentrated place, a single event of that size involves only about 2.7*10^-5 joules of energy, less than the kinetic energy of a single grape on the verge of hitting the ground after falling from a vine at waist height, so if it made it past the detectors due to special relativistic extensions of decay times it would be virtually invisible to observers in the area around the impact site and beyond the detectors.
So, even if a fluke fluctuation at the LHC led to a collision more than ten times as energetic as its design limitations, with a spectacular decay chain, it might be missed entirely or almost entirely, except as a completely unprecedented amount of missing energy that might be attributed to an equipment failure rather than a real physics event because it was so far beyond the designed detection range of the scientific equipment in the facility.
Fourth generation Koide leptons
The extension for charged leptons (a muon, tau, tau prime triple), however, would imply a 43.7 GeV tau prime, which has been excluded at the 95% confidence level for masses of less than 100.8 GeV, and with far greater confidence at 43.7 GeV (where it would be produced at a significant and easy to measure frequency in Z boson decays).
A simple Koide's rule formula for neutrinos using the muon neutrino mass (of 7.5 * 10^-5 eV + 0.08 eV +/- 0.09 eV) and the tau neutrino mass (of 2.4 * 10^-3 eV + 0.08 eV +/- 0.09 eV) (with absolute masses derived from accurately measured mass differences between types and a 0.51 eV limit on the sum of the electron neutrino, muon neutrino and tau neutrino masses if there are only three kinds of neutrinos - less if there are more generations of neutrinos) would yield a tau prime neutrino mass of far less than 43.7 GeV. A naive extension of Koide's formula with an electron neutrino of near zero mass would lead to a fourth generation neutrino of about 0.05 eV, or of up to 11.6 eV in the nearly degenerate case where all three known neutrino species have almost precisely the same mass. But this would contradict the cosmological data constraint that limits the sum of the masses of all of the species of light neutrinos combined to 0.51 eV (which is about 1/1,000,000th of the rest mass of an electron). So, instead, 0.51 eV would be the realistic upper limit for a Standard Model weakly interacting fourth generation neutrino.
Given the cosmological constraint on the sum of neutrino masses, the possibility that the naive Koide's formula needs a sign modification or something like it for neutrinos (which it probably does) is irrelevant.
Yet, any simple, fourth generation, weakly interacting tau prime neutrinos of any rest mass less than 45 GeV can be excluded on the basis of Z boson decays, so this scenario is definitively excluded if Koide's formula is even remotely an accurate way of estimating the mass of a hypothetical fourth generation Standard Model neutrino.
These theoretical considerations make it highly unlikely that there is any fourth generation of Standard Model fermions at all. The Standard Model adds fermions an entire generation at a time, and evading these exclusions would require fourth generation charged fermion masses far in excess of the values predicted by the Koide formula extension.
Koide's formula in its original form asserts that:
(sqrt(electron mass)+sqrt(muon mass)+sqrt(tau mass))^2/(electron mass+muon mass+tau mass)=2/3.
This is true to the highest levels of precision determined to date, which for the charged leptons is very great.
A Koide triple is any three sets of particle masses that satisfy that relationship.
The hypothesis that there are Koide triples among the quarks, which is not inconsistent with the data to current level of precision (which isn't very great) is that the following are Koide triples:
top, bottom, charm
bottom, charm, strange
charm, strange, down
A related observation is that the combined mass of the bottom, charm, strange triple is almost precisely three times the mass of the tau, muon, electron triple (a notion that corresponds to the fact that in weak force decays three times as many quarks, one for each color, are produced as leptons).
Koide's Formula, the Up Quark Mass and a Possibile Up, Down, Electron Triple.
Implications of zero mass or neutrino scale mass for up quarks.
The final conceivable triple following that patterns are charm, strange, up, and strange, down, up. Koide's formula predicts a near zero value for the up quark mass from a c, s, u triple. But, if that value is carried through to the down quark in the s, u, d triple, it produces a value within the measured range of the down quark mass.
Using central values of t=172.9 GeV (a hair low with the latest data) and b=4.19 GeV. Then,
Koide(t,b,c) implies c=1.356 GeV (PDG value 1.180-1.340 GeV)
Koide(b,c,s) implies s= 92 MeV (PDG value 80-130 MeV)
Koide(c,s,u) implies u= 36 KeV (PDG value 1,700 to 3,100 KeV)
Koide(s,u,d) implies d= 5.3 MeV (PDG value 4.1-5.7 MeV)
You can also form a Koide triple of an electron, up and down if you use an electron mass of about 0.511 MeV, an up quark mass of zero, and a down quark mass of 6.7 MeV.
And, if you use a value of zero rather than 36 KeV for the up quark, and use the 6.7 MeV value for the down quark predicted by the electron, up, down triple, the formula predicts a strange quark mass of 92 MeV.
This strange quark mass derived from the electron, up, down triple and the assmption that the up quark has a zero mass is consistent with the experimentally measured mass value of the strange quark, is consistent with a "top quark down" calculation of the strange quark mass, and is consistent with an estimate based upon a mass for the bottom, charm, strange triple that is the charged lepton mass triple.
Even a modest mass of 36 KeV for the up quark makes a significant different in the estimated value of the down quark mass via an electron, up and down Koide triple or a strange, up and down Koide triple. But, an up quark mass on the order of magnitude of 1 eV or less does not throw off the Koide triple by more than can be easily made up with tiny tweaks to calibration points elsewhere.
This is important because there are a variety of theoretical reasons why an up quark with a non-zero but negligible rest mass, even if it was just 1 eV, would involve a far more modest tweak to the Standard Model than a truly zero mass up quark.
The Koide's formula's prediction does not alter the experimentally estimated combined up and down quark mass.
The 6.7 MeV estimate for the down quark mass from applying Koide's formula naively is also not far from the Particle Data Group (PDG) mid-range value for the up quark and mid-range value for the down quark mass combined, which is 7.3 MeV. The sum of the lower extremes of the PDG estimates for the up and down quark masses is 5.8 MeV and the sum of the upper extremes of the PDG estimates for the up and down quark masses is 8.8 MeV. Twice the PDG estimate of the mean up and down quark masses is 6.0 MeV to 9.6 MeV, a range within which the 6.7 MeV Koide's formula value fits comfortably.
Thus, the Koide formula predicted value for the sum of the up and down quark masses when the up quark is assumed to have a mass of zero is well within the PDG value. Koide's formula simply allocates all of the combined mass to the down quark rather than assigning a mass to the up quark of 35% to 60% of the down quark mass. It also does nothing to alter the longstanding assumption based largely on the fact that the proton is lighter than the neutron, that the down quark is heavier than the up quark.
Reconsidering the experimental estimate of the up quark mass.
Keep in mind that up quarks are always confined and can't be measured in isolation the way that top quarks can be, and that almost all of the mass in hadrons (two quark mesons and three quark baryons) is derived from the strong force binding energy carried by gluons and not from the quarks themselves. This is particularly true in the case of hadrons that have only up and down quarks like the proton and the neutron for which the measured hadron masses that contribute to the estimates are most precise.
Since up quarks are always confined, any estimate of the up quark mass is necessarily model dependent. Yet, computations of quantities like the proton or neutron mass from first principles using QCD alone have a precision of only about 1%, making them far less precise than the experimentally measured masses of hadrons.
Also, the experimental uncertainty in the mass of all quarks except the up quark equals or exceeds the low end experimental value per the PDG of the up quark mass. So, in hadrons with some quarks other than up quarks in them, the up quark number has an impact on the total mass which is generally lower than the total uncertainty in the fundamental quark mass contribution to the hadron's total mass.
The strength of the strong force, weak force, electromagnetic forces are so great at the scale of a hadron relative to the masses of the quarks involved in all but the most exotic hadrons with heavy quarks in them, that an up quark's color charge, weak isospin and electromagnetic charge all have more relevance to its behavior when confined in a hadron than its fundamental mass (except insofar is its mass influences its weak force decays).
Obviously, if the Koide's formula prediction conflicts with the experimental data then there is simply something wrong with the formula. But, the existence of consistent predictions from an electron, up, down triple with those of series of quark triples, and with the predictions of quark masses from the masses of the lepton triple all argue for revisiting the model dependent assumptions that went into making the PDG estimate of the up quark's mass (which is in any case has extremes that vary by a factor of two anyway).
Figuring out how the up quark mass was estimated and what practical implications the up quark mass has in the Standard Model is clearly near the top of my to do list.
Implications of a zero mass for the up quark.
If the up quark mass were assumed to be zero as a non-measured Standard Model constant, rather than being experimentally measured, and the other quark masses were estimated based upon this model dependent assumption, how would the estimated quark masses differ, and which experiments, if any, among those that were the basis for the PDG estimate would be contradicted?
Some of the issues of how the up quark mass is determined and what this implies in practice when doing QCD are discussed in this 2004 paper and in papers from 2010 and 2011 by the same author, Michael Creutz, who, together with the authors of this 2002 paper, is interested in the possibility that a massless up quark could explain the strong CP problem. This 2003 paper (possibly identical) also investigates the possibility of a massless up quark and makes a mass calculation for the up quark using lattice QCD. This paper from 2001 disfavors that solution in a model limited to two quarks (the 2002-2003 analysis is a three quark flavor analysis).
This 1997 paper gets into the guts of mass renormalization for quarks. A 2009 model dependent estimate of the up and down quark masses shows how these quantities are derived in QCD. A 2011 paper uses the up-down mass difference and applies it to neutrino scattering. This 2011 paper discusses relevant source data in the context of a BSM Higgs mass generation idea.
(I've omitted papers by Koide himself from this review.) Thinking similar to Koide's on mass matrices is found in a 2013 paper, in this 2012 paper and in this 2012 paper. A BSM model from 2011 explores similar ideas. A 1999 paper considers implications for quintessence theories.
Current light quark mass ratio estimates don't differ materially from those devised by Weinberg and discussed in this 1986 paper whose abstract stated:
We investigate the current-mass ratios of the light quarks by fitting the squares of meson masses to second order in chiral-symmetry breaking, determining corrections to Weinberg's first-order values: mu/md=0.56, ms/md=20.1. We find that to this order, ms/md is a known function of mu/md. The values of the quark-mass ratios can be constrained by limiting the size of second-order corrections to the squares of meson masses. We find that for specific values of presently unmeasured phenomenological parameters one can have a massless u quark. In that case 30% of the squares of meson masses arise from operators second order in chiral-symmetry breaking.
A 1979 estimate of the light quark masses is also not that different, nor are the estimates in this 1996 paper or a 1994 paper. The 1994 paper's abstract stated that: "the claim that mu = 0 leads to a coherent picture for the low energy structure of QCD is examined in detail. It is pointed out that this picture leads to violent flavour asymmetries in the matrix elements of the scalar and pseudoscalar operators, which are in conflict with the hypothesis that the light quark masses may be treated as perturbations."
This 1978 paper's abstract states:
We consider, within the framework of current algebra, the possibility that the up-quark mass vanishes (as an alternative to the axion). We argue that the contrary current-algebra value, mu/md=1/1.8, is unreliable. A critical analysis leads to the conclusion that mu=0 is not unreasonable and furthermore leads to a surprisingly good prediction for the δ-meson mass.
So a massless up quark has been seriously considered a viable option since 1978. A 1994 paper that considered it as a possible solution to the strong CP problem found that:
We conclude that at the level of precision (order of magnitude) of nonperturbative QCD calculations available to us at present, low-energy phenomenology is completely compatible with a vanishing value of the high-energy up quark mass.
Only a nonperturbative calculation in QCD can prove or disprove the phenomenological viability of mu = 0. Therefore, in view of the recent progress in numerical methods in lattice gauge theory, we would like to encourage a detailed analysis of the possibility of a massless up quark by these methods.
A 2000 paper considered ways to test the massless up quark hypothesis using lattice QCD methods, as does this 2002 paper, which finds that lattice calculations disfavor a massless up quark but that experiments don't resolve the issue apart from a first principles analysis. A 2001 paper notes that useful theoretical QCD predictions can be made with entirely massless quarks. A 2007 paper quantifies the impact of quark mass on QCD predictions from massless models.
A zero or non-zero mass for the up quark might help explain why proton decay is so surprisingly rare.
A zero mass for the up quark, together with the extended Koide's formula that motivates it, would imply that the masses of the six quarks and all three charged leptons can be calculated from the mass of the electron via the extended Koide's formula (including the mass relationship of the charged lepton triple to one of the quark triples and the electron, up, down triple) and the assumption that the up quark has zero mass, using high school algebra, to accuracies greater than those available for any of the experimentally measured quark masses (even the top quark, whose mass is currently known to 0.6% accuracy).
This would reduce the number of experimentally measured physical constants related to fermion mass in the Standard Model + Extended Koide Model from fifteen to four (the electron and the three neutrino masses).
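For the curious, the "high school algebra" step amounts to solving a quadratic. Here is a minimal sketch in Python (not code from anywhere else in this post; the helper name koide_third_mass is mine): given two masses in a triple, the standard Koide relation Q = (m1 + m2 + m3)/(sqrt(m1) + sqrt(m2) + sqrt(m3))^2 = 2/3 fixes the third, illustrated here with the familiar charged lepton triple.

```python
import math

def koide_third_mass(m1, m2):
    """Solve Koide's relation (m1+m2+m3)/(sqrt(m1)+sqrt(m2)+sqrt(m3))**2 = 2/3
    for the third mass given the other two (any consistent mass units).
    The relation is a quadratic in sqrt(m3); this returns the larger root."""
    a = math.sqrt(m1) + math.sqrt(m2)  # sum of the two known square roots
    b = m1 + m2                        # sum of the two known masses
    # Rearranged: x**2 - 4*a*x + (3*b - 2*a**2) = 0, with x = sqrt(m3)
    x = 2 * a + math.sqrt(6 * a ** 2 - 3 * b)
    return x ** 2

# Classic check: the electron and muon masses (in MeV) yield a tau mass of
# about 1776.97 MeV, close to the measured value of roughly 1776.9 MeV.
print(koide_third_mass(0.510999, 105.6584))
```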
If the supposition that the Higgs boson mass is equal to half of the sum of the masses of the W+, W- and Z bosons holds (a supposition which is currently accurate to within all current bounds of experimental precision and is closer to the experimentally measured mark than any of the pre-discovery mass predictions for the Higgs boson mass), then the number of experimentally measured physical constants related to boson mass in the Standard Model would fall from three to two, one of which (the Weinberg angle that relates the W and Z boson masses) isn't even a mass value itself.
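As a quick check using the W and Z masses quoted later in this post, the supposition gives (2 x 80.385 + 91.188)/2 = 125.98 GeV, within about 0.3 GeV of the roughly 125.7 GeV measured Higgs boson mass.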
Thus, we could be on the verge of going from having eighteen measured Standard Model mass constants to having just six, and having much more accurate theoretical values than experimental values for many of those constants.
This would also strongly motivate a Koide-derived formula for neutrino masses which, if devised and confirmed by experimental evidence, would cut the number of experimentally measured mass constants in the Standard Model from six to not more than five (one of which is an angle rather than a mass), and possibly to as few as three, if a way were devised to derive the neutrino masses from first principles using the masses of the other fermions and Standard Model bosons (and perhaps the PMNS and/or CKM matrix elements and/or the Standard Model coupling constants).
Extensions To A Standard Model With Four Generations
Extending the Koide's formula allows one to make useful, constrained and testable predictions regarding a fourth generation of Standard Model particles.
Fourth generation Standard Model particles with the masses a naive extension of Koide's formula would imply are experimentally forbidden, because the predicted lepton sector is inconsistent with experimental data. This is a conclusion that the fundamental physics community has, for the most part, already reached on other grounds.
Fourth Generation Koide Quarks
If one extends the formula based upon recent data on the masses of the bottom and top quarks and presumes that there is a b, t, b' triple, and uses masses of 173.4 GeV for the top quark and 4.19 GeV for the bottom quark, then the predicted b' mass would be 3,563 GeV, and (continuing with a t, b', t' triple) the predicted t' mass would be about 83.75 TeV (i.e. 83,750 GeV).
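Here is a quick sketch of that arithmetic, reusing the koide_third_mass helper from the sketch earlier in this post (with the caveat that the specific triples assumed, and the helper itself, are my illustration rather than something taken from the extended Koide literature):

```python
# Assumes the koide_third_mass helper defined earlier in this post is in scope.
# Naive extended-Koide fourth generation quark masses (GeV), assuming a
# (b, t, b') triple and then a (t, b', t') triple, with inputs from the text.
m_t, m_b = 173.4, 4.19
m_b_prime = koide_third_mass(m_b, m_t)        # ~3,563 GeV
m_t_prime = koide_third_mass(m_t, m_b_prime)  # ~83,750 GeV, i.e. about 83.75 TeV
print(m_b_prime, m_t_prime)
```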
Since they would be produced as t'-anti-t' and b'-anti-b' pairs, it would take about 167.5 TeV of energy to produce a t' pair and about 7.1 TeV of energy to produce a b' pair. Producing a t' would be far beyond the capabilities of the LHC. But, it could conceivably produce a few b' quark events of the Koide's formula predicted mass. These would be unmistakable unless the extreme speeds of the decay products prevented them from decaying (as a result of special relativity effects) until they reached a point beyond the most remote LHC detectors. This probably wouldn't happen for a b' decay, which is within the design parameters of the LHC, but might happen in the case of a fluke t' decay, which is far outside of its design parameters.
The up-to-the-minute direct exclusion limits at the LHC for the b' and t' are that there can be no b' with a mass of less than 670 GeV and no t' with a mass of less than 656 GeV (per ATLAS); the comparable exclusions from CMS are similar (well under 1 TeV).
Koide t' and b' quark decays
A simple fourth generation b' quark or t' quark of that mass, that otherwise fits the Standard Model, would decay so rapidly that it would not hadronize (i.e. not form composite QCD particles via strong force gluon interactions). Instead, the t' would decay almost exclusively to the b' and the b' would decay almost exclusively to the t, with both interactions happening almost instantaneously.
A t' decay to a b' would produce a highly energetic W+ boson that would carry much of the energy of 80 TeV of rest mass being converted into kinetic energy for the W+ and b' produced in the decay, immediately followed by a highly energetic W- boson produced in the b' to t decay in which about 3,390 GeV of kinetic energy was created from rest mass, followed by the usual immediate t quark to b quark decay with an emission of a W+ converting about 169.2 GeV of rest mass into kinetic energy for the W+ and b quark. There would be an exactly parallel set of reactions for the decay chain of the anti-t' particle.
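For what it's worth, those figures follow a simple bookkeeping in which each step releases roughly the parent quark's mass minus the daughter quark's mass (with the W bosons' own rest mass eventually converted to kinetic energy as well when they decay): 83,750 - 3,563 = 80,187 GeV (about 80 TeV), 3,563 - 173.4 = 3,389.6 GeV (about 3,390 GeV), and 173.4 - 4.2 = 169.2 GeV.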
These highly energetic emissions in the decay chain from the t' quark down to a b quark, together with the mirror-image anti-t' chain, would produce 3 W+ bosons and 3 W- bosons at three discrete energy levels, and would all take place in about 10^-23 seconds. This is because the lifetime of a t quark is 0.5 * 10^-24 seconds, the lifetime of a W boson is 0.3 * 10^-24 seconds, and the lifetime of a fourth generation Standard Model t' or b' quark would be less than that of the top quark (probably much, much less). The bottom quark and a large share of the heavy decay products of the highly energetic W boson decays (such as c and b quarks, and tau prime and tau charged leptons and their antiparticles, which have lifetimes of 10^-12 to 10^-13 seconds, with b and c quarks hadronizing into exotic and short lived hadrons before decaying further) would in turn decay within about 10^-12 seconds, by about the time that they had traversed a distance roughly equal to the distance from the center of a gold atom to its outermost orbiting electrons. Strange quarks decay in about 10^-10 seconds and muons decay in about 10^-6 seconds.
In the absence of special relativity, this would take place within a sphere with a diameter of order 10^-16 meters (i.e. about 1-2% of the diameter of the nucleus of a gold atom, a number derived from the individual decay times of roughly 10^-24 seconds multiplied by the speed of light), the strange quark decays would start to happen about an inch from the original site of the decay, and the muon decays would peak about 300 meters away. But, since particle decay takes place in the reference frame of the particle, which is moving at speeds near the speed of light, the decays would take place over a far more extended area, because time would pass more slowly for the fast moving t' decay products. The extreme kinetic energies of the particles would cause their decays to happen at much greater distances from the initial t' production and decay site than in ordinary LHC decays - indeed they might make it past the detectors entirely.
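To put rough numbers on the time dilation: a particle's lab-frame lifetime is stretched by the Lorentz factor gamma = E/m, so (to pick an illustrative figure that is not part of the original analysis) a b hadron with a mass of roughly 5 GeV and a proper decay length of about half a millimeter that carried 40 TeV of energy from such a cascade would have a gamma of roughly 8,000 and a typical decay length of several meters, comparable to the scale of the detectors themselves.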
Also, while a 167.5 TeV event does involve a lot of energy in a concentrated place, a single event of that size involves only about 2.7*10^-5 joules of energy (167.5 TeV times roughly 1.6*10^-7 joules per TeV), less than the kinetic energy of a single grape on the verge of hitting the ground after falling from a vine at waist height, so if it made it past the detectors due to special relativistic extensions of decay times it would be virtually invisible to observers in the area around the impact site and beyond the detectors.
So, even if a fluke fluctuation allowed the LHC to produce a collision more than ten times as energetic as its design limit, with a spectacular decay chain, the event might be missed entirely or almost entirely, except as a completely unprecedented amount of missing energy that might be attributed to an equipment failure rather than a real physics event, because it was so far beyond the designed detection range of the scientific equipment in the facility.
Fourth generation Koide leptons
The extension for charged leptons (a muon, tau, tau prime triple), however, would imply a 43.7 GeV tau prime. A fourth generation charged lepton has been excluded at the 95% confidence level for masses of less than 100.8 GeV, and with far greater confidence at 43.7 GeV (where it would be produced at a significant and easy to measure frequency in Z boson decays).
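(For reference, applying the same Koide relation sketched earlier to the measured muon and tau masses of about 105.7 MeV and 1,776.9 MeV does indeed return roughly 43.7 GeV for the third member of the triple.)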
A simple Koide's rule formula for neutrinos using the muon neutrino mass (of 7.5 * 10^-5 eV + 0.08 eV +/- 0.09 eV) and the tau neutrino mass (of 2.4 * 10^-3 eV + 0.08 eV +/- 0.09 eV) (with absolute masses derived from accurately measured mass differences between types, and a 0.51 eV limit on the sum of the electron neutrino, muon neutrino and tau neutrino masses if there are only three kinds of neutrinos - less if there are more generations of neutrinos), would yield a tau prime neutrino mass of far less than 43.7 GeV. A naive extension of Koide's formula with an electron neutrino of near zero mass would lead to a fourth generation neutrino of about 0.05 eV, or of up to 11.6 eV in the nearly degenerate case where all three known neutrino species had almost precisely the same mass. But the latter would contradict the cosmological constraint that limits the sum of the masses of all species of light neutrinos combined to 0.51 eV (which is about 1/1,000,000th the rest mass of an electron). So, instead, 0.51 eV would be the realistic upper limit for the mass of a weakly interacting fourth generation Standard Model neutrino.
Given the cosmological constraint on the sum of neutrino masses, the possibility that the naive Koide's formula needs a sign modification or something like it for neutrinos (which it probably does) is irrelevant.
Yet, any simple, fourth generation, weakly interacting tau prime neutrinos of any rest mass less than 45 GeV can be excluded on the basis of Z boson decays, so this scenario is definitively excluded if Koide's formula is even remotely an accurate way of estimating the mass of a hypothetical fourth generation Standard Model neutrino.
These considerations make it highly unlikely that there is any fourth generation of Standard Model fermions at all. The Standard Model adds fermions an entire generation at a time, and evading the experimental limits would require a fourth generation charged lepton with a mass far in excess of the value predicted by the Koide formula extension.
Aspen Higgs Conference In Progress
A Higgs boson conference is underway in Aspen, Colorado.
Professor Strassler's presentation almost feels as if he is channeling agent Fox Mulder from the X-Files. He just knows that something must be out there (new physics) and will leave no rock unturned looking for it.
Other talks review the state of the experimental data, which is mostly old news after the Moriond conference announcements. The news everyone is waiting for is still the CMS data on diphoton decay rates, which is rumored to be announced later this week and to show a reduced excess.
Neutrino mass hierarchy and CP phase years away
It turns out to be very difficult, for a variety of technical reasons, to determine the hierarchy of the neutrino masses, although it is not impossible to figure it out experimentally. As a result we don't have a definitive answer yet, but we should soon. "Multi-megaton scale under ice (or water) detecting atmospheric neutrinos with low energy threshold may establish the mass hierarchy with high confidence level in few years."
Enhancements of experiments at existing facilities would permit us to determine "the angle delta_CP, and resolve the mass hierarchy at 3-sigma for 35% of the possible delta_CP values." With adequate funding, by "the end of the decade, this should push the error on delta_CP to 15-28 degrees, and this is not systematics limited."
Top Quark Physics At Moriond
Today is top quark day at the Moriond conference.
The latest LHC measurement of the top quark mass is 173.3 +/- 0.5 +/- 1.3 GeV. The final Tevatron measurement is 173.2 +/- 0.6 +/- 0.8 GeV, which is slightly more accurate. These are precisions of 0.8% and 0.6% respectively. The fact that the two independent measurements confirm each other to within 0.1 GeV, when their stated uncertainties are much larger, is also encouraging. The top quark has a lifetime of about 0.5 * 10^-24 seconds, which is generally too short for it to hadronize - it almost always decays to a bottom quark. It has a Yukawa coupling to the Higgs boson of approximately one. (The W boson mass world average is about 80.385 +/- 0.015 GeV). A Higgs boson mass of 125.7 GeV (a preference with error bars far greater than the direct experimental uncertainties) favors a W mass at the low end of the range of uncertainty, closer to 80.37 GeV, and a top mass on the high end of the range of uncertainty, closer to 173.4 GeV. (As an aside, the Z boson mass is about 91.188 GeV with an uncertainty of perhaps +/- 0.033 GeV).
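As a rough illustration of those precision figures, here is a back-of-the-envelope combination in Python (a naive inverse-variance average of the two measurements that ignores the correlated systematics a real combination would have to handle):

```python
import math

def combine(value_err_pairs):
    """Naive inverse-variance weighted average of independent measurements,
    each given as (value, total_uncertainty)."""
    weights = [1.0 / err ** 2 for _, err in value_err_pairs]
    mean = sum(w * v for (v, _), w in zip(value_err_pairs, weights)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

# LHC: 173.3 +/- 0.5 (stat) +/- 1.3 (syst); Tevatron: 173.2 +/- 0.6 +/- 0.8 (GeV)
lhc = (173.3, math.hypot(0.5, 1.3))  # total ~1.4 GeV, i.e. ~0.8% precision
tev = (173.2, math.hypot(0.6, 0.8))  # total ~1.0 GeV, i.e. ~0.6% precision
print(combine([lhc, tev]))           # ~173.2 GeV with ~0.8 GeV uncertainty
```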
The measured top quark mass less the measured antitop quark mass is -0.272 +/- 0.196 (stat) +/- 0.122 (syst) GeV, which is about 1.2 sigma from zero, which would generally be considered experimentally consistent with zero (a strong theoretical expectation due to CPT symmetry).
The rate of top quark decays to bottom quarks is consistent with the Standard Model theoretical expectation to a high degree of precision, strongly disfavoring fourth generation quarks, charged Higgs bosons, or other new physics in the mass vicinity of the top quark. The measured value of the bottom quark decay fraction, where 1 is the Standard Model expectation, is 1.023 +0.036-0.034, with a 95% probability that it is at least 0.945. If one assumes three generations and CKM unitarity and does a global fit, the measurement drops to 1.011 +0.018-0.017 and is greater than 0.972 with a 95% probability.
The observed CP violation is within about 0.5 sigma of the Standard Model prediction. All of the top quark results are consistent with each other and with the Standard Model predictions.
Final analysis of the Tevatron results also found no new physics in its top quark searches.