The new data shrink the estimated proportion of dark energy relative to matter (i.e., the best-fit value for the cosmological constant is a bit smaller).
Taken together with other data, the results strongly favor a cosmology with just three generations of neutrinos, with the sum of the three mass eigenvalues between 0.06 eV and 0.23 eV and the best fit at the bottom of that range. A sterile neutrino of the kind suggested by the anomalies at two Earth-bound neutrino experiments (LSND and MiniBooNE) is not a good fit; if there were one, it would have to have less than 0.5 eV of mass, which is considerably smaller than the roughly 1.3 eV estimate from those anomalies and other data to date. In a nutshell, the Standard Model triumphs once again. See posts on the state of these measurements pre-Planck here and here, based on the 9-year WMAP data.
It is also important to note that the cold dark matter term in the lambda CDM model doesn't say much about the nature of the dark matter component at all; it does not specify any particular dark matter model.
Scientific results include robust support for the standard, six parameter lambda CDM model of cosmology and improved measurements for the parameters that define this model, including a highly significant deviation from scale invariance of the primordial power spectrum. The values for some of these parameters and others derived from them are significantly different from those previously determined. Several large scale anomalies in the CMB temperature distribution detected earlier by WMAP are confirmed with higher confidence. Planck sets new limits on the number and mass of neutrinos, and has measured gravitational lensing of CMB anisotropies at 25 sigma. Planck finds no evidence for non-Gaussian statistics of the CMB anisotropies. There is some tension between Planck and WMAP results; this is evident in the power spectrum and results for some of the cosmology parameters. In general, Planck results agree well with results from the measurements of baryon acoustic oscillations.

The cosmological parameters paper is probably the most interesting in terms of providing concrete results. With regard to neutrinos it finds that "Using BAO and CMB data, we find Neff = 3.30 +/- 0.27 for the effective number of relativistic degrees of freedom, and an upper limit of 0.23 eV for the sum of neutrino masses. Our results are in excellent agreement with big bang nucleosynthesis and the standard value of Neff = 3.046. . . . Since the sum of neutrino masses must be greater than approximately 0.06 eV in the normal hierarchy scenario and 0.1 eV in the degenerate hierarchy (Gonzalez-Garcia et al. 2012), the allowed neutrino mass window is already quite tight and could be closed further by current or forthcoming observations (Jimenez et al. 2010; Lesgourgues et al. 2013)." The best fit value for the sum of neutrino masses in the Planck data is 0.06 eV, but the data are not terribly precise.
We find no evidence for extra relativistic species, beyond the three species of (almost) massless neutrinos and photons. The main effect of massive neutrinos is a suppression of clustering on scales larger than the horizon size at the non-relativistic transition. . . . Using Planck data in combination with polarization measured by WMAP and high-ℓ anisotropies from ACT and SPT allows for a constraint on the sum of the neutrino species masses of < 0.66 eV (95% CL) based on the [Planck+WP+highL] model. Curiously, this constraint is weakened by the addition of the lensing likelihood, to a sum of the neutrino species masses of < 0.85 eV (95% CL), reflecting mild tensions between the measured lensing and temperature power spectra, with the former preferring larger neutrino masses than the latter. Possible origins of this tension are explored further in Planck Collaboration XVI (2013). . . . The signal-to-noise on the lensing measurement will improve with the full mission data, including polarization, and it will be interesting to see how this story develops.
– using a likelihood approach that combines Planck CMB and lensing data, CMB data from ACT and SPT at high ℓs, and WMAP polarized CMB data at low ℓs, we have estimated the values of a “vanilla” 6-parameter lambda CDM model with the highest accuracy ever. These estimates are highly robust, as demonstrated by the use of multiple methods based both on likelihood and on component-separated maps.
– The parameters of the Planck best-fit 6-parameter lambda CDM model are significantly different from those previously estimated. In particular, with respect to pre-Planck values, we find a weaker cosmological constant (by 2%), more baryons (by 3%), and more cold dark matter (by 5%). The spectral index of primordial fluctuations is firmly established to be below unity, even when extending the CDM model to more parameters.
– we find no significant improvements to the best-fit model when extending the set of parameters beyond 6, implying no need for new physics to explain the Planck measurements.
– The Planck best-fit model is in excellent agreement with the most current BAO data. However, it requires a Hubble constant that is significantly lower (about 67 km s^-1 Mpc^-1) than expected from traditional measurement techniques, raising the possibility of systematic effects in the latter.
– An exploration of parameter space beyond the basic set leads to: (a) firmly establishing the effective number of relativistic species (neutrinos) at 3; (b) constraining the flatness of space-time to a level of 0.1%; (c) setting significantly improved constraints on the total mass of neutrinos, the abundance of primordial Helium, and the running of the spectral index of the power spectrum.
– we find no evidence at the current level of analysis for tensor modes, nor for a dynamical form of dark energy, nor for time variations of the fine structure constant. . . .
– we find important support for single-field slow-roll inflation via our constraints on running of the spectral index, curvature and fNL.
– The Planck data squeezes the region of the allowed standard inflationary models, preferring a concave potential: power law inflation, the simplest hybrid inflationary models, and simple monomial models with n > 2 do not provide a good fit to the data.
– we find no evidence for statistical deviations from isotropy at ℓ > 50, to very high precision.
– we do find evidence for deviations from isotropy at low ℓs. In particular, we find a coherent deficit of power with respect to our best-fit lambda CDM model at ℓs between 20 and 30.
– We confirm the existence of the so-called WMAP anomalies.
The Planck team's analysis of the possibility of a sterile neutrino finds that it is not a good fit, and imposes a mass limit of about 0.5 eV on a sterile neutrino species, which is considerably less than the roughly 1.3 eV mass suggested by the anomaly data.
UPDATE: I posted the following as a comment at the Not Even Wrong Blog without links.
The result I read in paper sixteen was Neff = 3.30 +/- 0.27 vs. Neff = 3.046 for the three Standard Model neutrinos, so their result is a little less than one sigma from the Standard Model value. A four-neutrino model would have an Neff of a bit more than 4.05, which is about three sigma from the measured value; that is an exclusion at better than the 99% level and a confirmation of the Standard Model.
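A minimal sketch of that arithmetic, using only the central values and uncertainty quoted above and treating the quoted error bar as Gaussian:

```python
# Rough significance arithmetic for Neff, treating the quoted +/- 0.27
# uncertainty as a symmetric Gaussian error bar.
neff_measured, neff_sigma = 3.30, 0.27
neff_standard_model = 3.046   # three Standard Model neutrino species
neff_four_species = 4.05      # roughly one extra fully thermalized light species

print((neff_measured - neff_standard_model) / neff_sigma)  # ~0.94 sigma from the SM value
print((neff_four_species - neff_measured) / neff_sigma)    # ~2.8 sigma, i.e. roughly a 3 sigma exclusion
```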
Planck, combining data from multiple sources, puts a cap on the sum of the three neutrino masses in a three Standard Model neutrino scenario of 0.23 eV (at 95% CL), with a best fit value of 0.06 eV. The floor from non-astronomy experiments is 0.06 eV in a normal neutrino mass hierarchy (based on the difference between mass one and mass two, and between mass two and mass three, which are both known to about two significant digits) and 0.1 eV in an inverted neutrino mass hierarchy. In a normal neutrino mass hierarchy, this puts the mass of the electron neutrino at between 0 and 0.06 eV, with the low end preferred (I personally expect that the electron neutrino mass is significantly less than the mass difference between the first and second neutrino mass states of about 0.009 eV).
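Where those floors come from can be seen with a little arithmetic on the measured mass-squared splittings. A minimal sketch, using the rounded splittings quoted in the update further down and setting the lightest mass eigenvalue to zero:

```python
from math import sqrt

# Minimum possible neutrino mass sums implied by the oscillation splittings.
# The rounded mass-squared differences below (in eV^2) are the ones quoted
# later in this post.
dm21_sq = 7.5e-5   # "solar" splitting, m2^2 - m1^2
dm31_sq = 2.5e-3   # "atmospheric" splitting, roughly m3^2 - m1^2

# Normal hierarchy floor: set the lightest mass m1 to zero.
m2 = sqrt(dm21_sq)                 # ~0.009 eV
m3 = sqrt(dm31_sq)                 # ~0.050 eV
print(0.0 + m2 + m3)               # ~0.059 eV, i.e. the ~0.06 eV floor

# Inverted hierarchy floor: set the lightest mass m3 to zero.
m1_inv = sqrt(dm31_sq)             # ~0.050 eV
m2_inv = sqrt(dm31_sq + dm21_sq)   # ~0.051 eV
print(m1_inv + m2_inv + 0.0)       # ~0.10 eV floor
```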
Note that a particle with a mass in the hundreds or thousands of eV would not count towards Neff because it is not light enough to be relativistic 380,000 years after the Big Bang. So this really only rules out a light sterile neutrino, rather than a heavy one. The LSND and MiniBooNE anomalies have hinted at a possible fourth-generation sterile-ish neutrino of about 1.3 eV +/- about 30%, so the Planck people did a study of the sum-of-mass limits if there were a (disfavored) four rather than three relativistic species, and came up with a cap on the sterile neutrino mass in that scenario of about 0.5 eV +/- 0.1 eV, which is about 2.5 sigma away from the LSND/MiniBooNE anomaly estimates considering the combined uncertainties.
LEP ruled out a fourth species of fertile neutrino with a mass under 45 GeV, and I wouldn't be going out on a limb to say, without actually doing the calculations, that a fertile neutrino of 45 GeV to 63 GeV, if it existed, would have wildly thrown off all of the observed Higgs boson decay cross-sections (since a decay to a 45 GeV to 63 GeV neutrino-antineutrino pair from a 125.7 GeV Higgs boson would have been a strongly favored decay path if it existed) and is in fact therefore excluded by the latest round of LHC data.
The LEP data already excluded fertile neutrinos in the 6 GeV to 20 GeV mass range, where different direct dark matter detection experiments have produced contradictory results.
But a particle that we would normally call a sterile neutrino for other purposes, in the Warm Dark Matter mass range of keV or the Cold Dark Matter mass range of GeV to hundreds of GeV, or anything in between (including any of the possible direct dark matter detection signals or anything that would generate the Fermi line at 130 GeV), would not be a relativistic particle within the meaning of Neff, which only counts particles that would move at relativistic speed given their masses at the relevant time.
ADDITIONAL UPDATE: The mass difference between neutrino mass one and neutrino mass two is about 0.009 eV (usually reported squared, at about 7.5 * 10^-5 eV^2), and the difference between mass two and mass three is about 0.05 eV (usually reported squared, at about 2.5 * 10^-3 eV^2), for a combined 0.059 eV, which is where the 0.06 eV floor comes from. If the neutrino mass hierarchy is broadly similar to that of the quarks and the charged leptons (it is impossible to fit the values already known to a perfect Koide triple), one would expect an electron neutrino mass on the order of 0.001 eV (i.e. 1 meV) or less.
Planck is the beginning and to a great extent the end of cosmic background radiation physics.
Also, the precision of the Planck data is so much better than anything that has come before it, including the previous state-of-the-art 9-year WMAP data released earlier this year, that you can basically ignore any pre-Planck data on the cosmic background radiation in any respect that the Planck data address. If you use the Particle Data Group approach of computing global averages with inverse-variance weights (weights inversely proportional to the square of each measurement's uncertainty), the relative weights are perhaps 9-1 or more.
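For concreteness, here is a minimal sketch of that weighting scheme. The numbers below are hypothetical placeholders, not quoted Planck or WMAP values; they simply illustrate how a measurement with roughly a third of the uncertainty ends up with roughly nine times the weight:

```python
# PDG-style combination: an inverse-variance weighted average of measurements.
def weighted_average(values, sigmas):
    weights = [1.0 / s ** 2 for s in sigmas]   # weight = 1 / sigma^2
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    combined_sigma = (1.0 / total) ** 0.5
    relative_weights = [w / total for w in weights]
    return mean, combined_sigma, relative_weights

# Hypothetical measurements of the same parameter: an older one with +/- 0.03
# and a newer one with +/- 0.01 (three times more precise).
mean, sigma, rel = weighted_average([0.963, 0.960], [0.03, 0.01])
print(mean, sigma, rel)   # relative weights come out to 0.1 and 0.9
```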
Realistically, Planck and successor cosmic background radiation experiments may be the only way to experimentally probe this truly high-energy physics regime of the early universe for decades and possibly ever. There are good theoretical reasons why we can't directly observe anything earlier (e.g. star formation happened after the cosmic background radiation arose, so there was nothing to make coherent light-emitting objects). And almost nothing in the current universe, or in any experiment we could create, has higher energies than the pre-cosmic background radiation universe we are probing with this data.
Planck is measuring the entire universe-wide cosmic background radiation, a data set of one. We can't measure some other universe's cosmic background radiation outside of computer simulations, and there is no reason that the cosmic background radiation observable from our solar system, or anywhere nearby that we can send a space probe, should change noticeably in my lifetime or the lifetimes of my children or grandchildren. Future experiments can be more precise, but we understand electromagnetism almost perfectly, we know all of the properties of the cosmic background radiation that it is even theoretically possible to measure, and with Planck we have measured almost all of them already (or are on the verge of doing so in the next few years). Details can be refined, but the big picture won't change. Really:
In the early 1990s, the COBE satellite gave us the first precision, all-sky map of the cosmic microwave background, down to a resolution of about 7 degrees. About a decade ago, WMAP managed to get that down to about half-a-degree resolution. But Planck? Planck is so sensitive that the limits to what it can see aren’t set by instruments, but by the fundamental astrophysics of the Universe itself! In other words, it will be impossible to ever take better pictures of this stage of the Universe than Planck has already taken.

Inflation and Cosmology Findings
I'll have to leave for a future post an in-depth analysis of the constraints that the Planck findings place on cosmology apart from dark energy proportions, dark matter amounts, and neutrino generations and masses, but I'll discuss a few briefly in this post. There are several really interesting things going on there.
* First, the new Planck data provide much more meaningful experimental constraints on theories of "inflation" shortly after the Big Bang, which, after dark matter, is probably the second biggest set of experimental data screaming out for new physics.
Because inflation takes place in the extremely high-energy, extremely early universe (when it was smaller than one meter and only a tiny fraction of a second old), it is hard to make inferences about it from experiments like the LHC or from observational astronomy, which probe energy densities many orders of magnitude below those present in the proposed inflationary era. So "new physics" in this area, outside the range of our experience or likely future experience, is far less consequential than dark matter, which affects the world we see today. But "new physics" is still a big deal and may be important to the structure of a "Theory of Everything" or a quantum gravity theory (e.g. string theory vacua), at the very least by ruling out theories that have high energy behavior inconsistent with the experimental boundaries on inflation scenarios.
A lot of inflation theories that have been viable candidates, receiving serious discussion almost ever since the need for inflation in a cosmology theory was recognized in the 1970s (around the same time that the Standard Model was formulated), have been ruled out by the latest round of Planck data. Planck strongly disfavors power law inflation, the simplest hybrid inflationary models, simple monomial models with n > 2, single fast-roll inflation scenarios, multiple-stage inflation scenarios, inflation scenarios with convex potentials, dynamical dark energy, and time variations of the fine structure constant. Any theory that would produce non-Gaussian statistics of the CMB anisotropies, a non-flat universe, large tensor modes, or statistically discernible deviations from isotropy at ℓ > 50 is ruled out.
If your theory was phenomenologically distinct from "single slow-roll inflation scenarios with a concave potential" in any non-subtle way, you were wrong, thanks for playing.
I will need to read more to fully understand these implications myself, but more inflation theories have died today than on any previous day in history and than will on any day to come in the future (since there are fewer inflation theories left than the number of inflation theories killed today). A book-length catalog (300 pages) of the pre-March 21 ranks of inflation theories is available at arxiv.

What is inflation?
Dark energy is broadly similar to inflation, and is thought to be causing the expansion of the present-day universe to accelerate. However, the energy scale of dark energy is much lower, 10^−12 GeV, roughly 27 orders of magnitude less than the scale of inflation.

Basically, inflation involves a scalar field called the inflaton that is dissipated in the inflation process.
According to inflation theory, the inflaton field provided the mechanism to drive a period of rapid expansion from 10^−35 to 10^−34 seconds after the initial expansion that formed the universe.
The inflaton field's lowest energy state may or may not be a zero energy state. This depends on the chosen potential energy density of the field. Prior to the expansion period, the inflaton field was at a higher-energy state. Random quantum fluctuations triggered a phase transition whereby the inflaton field released its potential energy as matter and radiation as it settled to its lowest-energy state. This action generated a repulsive force that drove the portion of the universe that is observable to us today to expand from approximately 10^−50 metres in radius at 10^−35 seconds to almost 1 metre in radius at 10^−34 seconds.
Inflaton conforms to the convention for field names, and joins such terms as photon and gluon. The process is "inflation"; the particle is the "inflaton".
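Taking the figures in that description at face value, the implied expansion is enormous; a minimal back-of-the-envelope sketch (the 10^−50 m and roughly 1 m radii are simply the numbers quoted above):

```python
from math import log

# Expansion implied by the figures quoted above: from ~1e-50 m in radius
# at 1e-35 s to roughly 1 m in radius at 1e-34 s.
r_initial = 1e-50   # metres
r_final = 1.0       # metres

expansion_factor = r_final / r_initial
e_folds = log(expansion_factor)   # ln(r_final / r_initial)
print(expansion_factor)           # ~1e50
print(e_folds)                    # ~115 e-folds of expansion
```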
* Second, the lack of scale invariance in the power law of the cosmic background radiation has been confirmed, parameterized by a spectral index of about 0.96, with 1.00 corresponding to a pure scale-invariant power law. The lambda CDM model has a parameter to describe this deviation, but no mechanism to make it happen. This is a prediction of many inflation models.
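Concretely, the parameter is the spectral index n_s of the primordial power spectrum, P(k) ∝ (k/k_0)^(n_s − 1), where n_s = 1 would be exactly scale invariant. A minimal sketch of what a tilt of about 0.96 does across a range of scales (the pivot scale and the factors of 50 and 20 are illustrative choices, not Planck numbers):

```python
# Tilt of the primordial power spectrum: P(k) is proportional to (k / k0)**(n_s - 1).
# n_s = 1 would be exactly scale invariant; Planck finds n_s of about 0.96.
n_s = 0.96
k0 = 0.05   # illustrative pivot scale in 1/Mpc; the ratios below don't depend on it

def relative_power(k, k_ref=k0, tilt=n_s):
    """Primordial power at wavenumber k relative to the reference scale k_ref."""
    return (k / k_ref) ** (tilt - 1.0)

# With n_s < 1, power is mildly suppressed on small scales (large k) and
# mildly enhanced on large scales (small k) relative to the pivot.
print(relative_power(50 * k0))   # ~0.86 at a scale 50 times smaller
print(relative_power(k0 / 20))   # ~1.13 at a scale 20 times larger
```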
* Third, something weird seems to be going on between ℓ = 20 and 30. This is the only material respect in which the Planck data deviate from the lambda CDM model. Intuitively, it seems very plausible that the source of the ℓ = 20 to 30 deviation and the source of the lack of scale invariance could be the same. Some small second-order effect not captured by the six parameter lambda CDM model appears to be involved here.
For example, both the lack of scale invariance and the weirdness from ℓ = 20 to 30 are plausible consequences of where a dark matter particle sits on the spectrum from hot dark matter to cold dark matter.
Roughly speaking, in simple single dark matter particle models, the mass of the particle (or of the dominant particle, if there are multiple kinds but one has a predominant impact on phenomenology, in the way the first generation fermions that form protons, neutrons and atoms do in the Standard Model) governs the scales at which deviations in large scale structure arise. Hot dark matter produces almost no large scale structure. Warm dark matter gives rise to roughly the amount of large scale structure we observe. Cold dark matter gives rise to more dwarf galaxies and large scale structure that is more finely grained than we observe.
All of this, of course, is model dependent, and the generalizations are based on a simple, almost completely non-interacting dark sector with just one kind of particle and no significant new forces beyond those known to us already. A single instance of inflation alone is enough to get the observed near-scale invariance in a lambda CDM model, but doesn't explain the ℓ = 20 to 30 anomaly, which could have an entirely different source (or simply be random variation that is improbable but has no deeper cause, or experimental error).
Some perspective on this anomaly from this blog:
Planck sees the same large scale anomalies as WMAP, thus confirming that they are real rather than artifacts of some systematic error or foreground contamination (I believe Planck even account for possible contamination from our own solar system, which WMAP didn't do). These anomalies include not enough power on large angular scales (ℓ≤30 ), an asymmetry between the power in two hemispheres, a colder-than-expected large cold spot, and so on.
The problem with these anomalies is that they lie in the grey zone between being not particularly unusual and being definitely something to worry about. Roughly speaking, they're unlikely at around a 1% level. This means that how seriously you take them depends a lot on your personal prejudices (priors). One school of thought – let's call it the "North American school" – tends to downplay the importance of anomalies and question the robustness of the statistical methods by which they were analysed. The other – shall we say "European" – school tends instead to play them up a bit: to highlight the differences with theory and to stress the importance of further investigation. Neither approach is wrong, because as I said this is a grey area. But the Planck team, for what it's worth, seem to be in the "European" camp.
* Fourth, the fact that space-time is "flat" to a precision of 0.1% is remarkable given that we conceive of general relativity as a warping of space-time. Overwhelmingly, this warping of space-time due to gravity is local rather than global.
What drives the conclusions about inflation?
The preference for a simple model is driven by several factors:
(1) The data are a good fit to a simple power law with a not-quite-scale-invariant exponent of about 0.96 rather than 1.0 (a five sigma difference from a 1.0 value) that shows no statistically significant tendency to change with scale (i.e. the best fit value for the running of the spectral index is about 1.5 sigma from zero, at -0.0134 +/- 0.0090; see the sketch after this list).
(2) The tensor contribution has its best fit at or nearly at zero. The absence of any indication of a tensor mode in the inflaton, as opposed to a mere scalar inflaton, seems to be another important factor driving the exclusion of other models. "In a model admitting tensor fluctuations, the 95% CL bound on the tensor-to-scalar ratio is r_0.002 < 0.12 (< 0.11) using Planck+WP (plus high-ℓ). This bound on r implies an upper limit for the inflation energy scale of 1.9*10^16 GeV . . . at 95% CL."

(3) The best fit inflation scenarios are likewise almost maximally concave (i.e. the potential drops more in the early part of a decline in inflaton potential than later on).

The Planck report concludes by noting that: "The simplest inflationary models have passed an exacting test with the Planck data. The full mission data including Planck’s polarization measurements will help answer further fundamental questions, including the possibilities for nonsmooth power spectra, the energy scale of inflation, and extensions to more complex models."
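A minimal sketch of the significance arithmetic behind point (1) above, using only the numbers quoted in this post and treating the error bars as Gaussian:

```python
# Running of the spectral index: how far is the best fit from zero running?
running, running_sigma = -0.0134, 0.0090
print(abs(running) / running_sigma)   # ~1.5 sigma, i.e. consistent with no running

# The "five sigma from 1.0" statement about the spectral index itself implies
# an uncertainty on n_s of roughly (1.0 - 0.96) / 5.
n_s = 0.96
implied_n_s_sigma = (1.0 - n_s) / 5.0
print(implied_n_s_sigma)              # ~0.008
```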
Evidence for a GUT?
The coincidence of the Planck upper limits on inflation energy scale with the completely independently derived grand unification scale based upon the running of the Standard Model (or SUSY) coupling constants is impressive. Even if SUSY is not the way the coupling constants converge, the notion of a grand unification at inflation energies by some means (perhaps by considering quantum gravity theories) is aesthetically very tempting.
Mostly Off Topic Other Items Of Interest:
More On Why We Don't Need SUSY
Woit has an interesting post on a talk by LHC physicist Joe Lykken on why the "hierarchy problem" that SUSY seeks to solve isn't actually a problem with anything but how theoretical physicists are thinking about the issue.
Dark Matter and MOND
* Somewhat off topic: in January of this year, an interesting new MOND paper by MOND inventor Milgrom and two co-authors was published, setting forth a MOND cosmology (arguing that much of the dark matter effect is due to a modification of the law of gravity rather than to dark matter particles).
* The dominant unresolved question in physics remains the need to understand dark matter phenomena. As I've said before, and as Planck confirms once again, a simple cosmological constant completely explains dark energy within the context of the same theory of General Relativity that we've had for a century now - dark energy, rather than being mysterious, is a solved problem.
General relativity does not explain dark matter phenomena, which are operationally defined as deviations from the predictions of general relativity, observed by astronomers, that don't relate to "inflation" in cosmology. The Standard Model provides no dark matter candidates, and the LHC is foreclosing more of them. The lambda CDM model separately accounts for mass from baryons, neutrinos, radiation and the effective mass-energy of the cosmological constant and has dark matter left over, but this six parameter fit to cosmic background radiation data collected by WMAP and Planck, for example, does very little to specify the nature of the dark matter component. Direct dark matter searches that have shown any dark matter signals contradict each other and are contradicted by searches that have come up empty in roughly the 10 GeV to 100 GeV range for all but the very lowest cross-sections of interaction (well below those of neutrinos).
Simulations have shown that WIMPs or other simple Cold Dark Matter scenarios produce more dwarf galaxies than we observe, and none of the Cold Dark Matter models can rival MOND in closely approximating almost all galaxy-level dark matter effects in a predictive manner with just a single experimentally measured constant. The cuspy dark matter halos predicted by Cold Dark Matter models are likewise contrary to what we observe: the inferred halo distributions of dark matter look more like rugby balls, with their long axis passing through a galaxy's central black hole and poking up out of the plane of the galaxy's rotation.
'If your theory was phenomenologically distinct from "single slow-roll inflation scenarios with a concave potential" in any non-subtle way, you were wrong, thanks for playing.'
We (I mean interested parties) should try to locate specific Lagrangians which implement this type of model, so we know what sort of fields and interactions it implies at the particle physics level. For example, does it imply that even a minimal SM-like model must have a second scalar to act as the inflaton; or can you obtain the above type of inflation from the Higgs?
In principle, yes. In practice, there are easier unsolved problems to address where the data is richer, like dark matter and quantum gravity, that have a better shot of putting us in a position to say anything sensible about an inflaton.
Gravitoweak unification may be particularly promising in offering inflaton candidates.
A systematic survey of the constraints the new Planck results place on light BSM particles is found at http://arxiv.org/abs/1303.5379 which looks at limits for spin-0, spin-1/2, spin-1, spin-3/2 and spin-2 particles. It also notes that the polarization data from Planck should reduce the uncertainty in Neff from 3.3 +/- 0.27 to something on the order of +/- 0.02, providing powerful limits on additional species of light particles relative to the current Planck data.
The Planck polarization data will be released in early 2014.
http://arstechnica.com/science/2013/03/first-planck-results-the-universe-is-still-weird-and-interesting/
The Neff value was later revised upward to about 3.8