Saturday, February 13, 2016

More Constraints On 750 GeV Resonance Models

The 750 GeV resonance observed at ATLAS and CMS and announced last December has spawned more than 750 papers.  One of the more impressive new papers uses the impact that a hypothetical new scalar or pseudoscalar boson and its vector-like fermion intermediate states would have on the running of the strong force coupling constant as measured to date, along with limits from direct LHC searches for decaying heavy particles, to very tightly constrain the parameter space of any realistic model.

Bottom line: if the 750 GeV resonance is real, there need to be more new particles awaiting us at masses of less than 1,000 GeV, which is right around the corner.  Otherwise, this resonance is almost surely a statistical fluke.

Dark matter theorists come close to reproducing the baryonic Tully-Fisher relation with a reverse engineered simulation

Dark matter theorists recently made their most successful effort yet to reproduce the baryonic Tully-Fisher relation, which relates the rate at which a galaxy rotates to the amount of ordinary matter in it, within a fairly limited range of galaxy masses that excludes very small and very large galaxies.

Furthermore, they have made a prediction using their model that can be used to test its accuracy on new data for low mass galaxies. Preliminary data, however, tend to show that this prediction does not match observation. The paper's abstract states:
The scaling of disk galaxy rotation velocity with baryonic mass (the "Baryonic Tully-Fisher" relation, BTF) has long confounded galaxy formation models. It is steeper than the M ~ V^3 scaling relating halo virial masses and circular velocities and its zero point implies that galaxies comprise a very small fraction of available baryons. 
Such low galaxy formation efficiencies may in principle be explained by winds driven by evolving stars, but the tightness of the BTF relation argues against the substantial scatter expected from such vigorous feedback mechanism. 
We use the APOSTLE/EAGLE simulations to show that the BTF relation is well reproduced in LCDM simulations that match the size and number of galaxies as a function of stellar mass. In such models, galaxy rotation velocities are proportional to halo virial velocity and the steep velocity-mass dependence results from the decline in galaxy formation efficiency with decreasing halo mass needed to reconcile the CDM halo mass function with the galaxy luminosity function. Despite the strong feedback, the scatter in the simulated BTF is smaller than observed, even when considering all simulated galaxies and not just rotationally-supported ones. 
The simulations predict that the BTF should become increasingly steep at the faint end, although the velocity scatter at fixed mass should remain small. Observed galaxies with rotation speeds below ~40 km/s seem to deviate from this prediction. We discuss observational biases and modeling uncertainties that may help to explain this disagreement in the context of LCDM models of dwarf galaxy formation.
L.V. Sales, et al., "The low-mass end of the baryonic Tully-Fisher relation" (February 5, 2016) (emphasis and paragraph breaks added).
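To make the scalings in the abstract concrete, here is a minimal sketch in Python (my own illustrative numbers, not the paper's fits): the observed BTF relation is close to M_b proportional to V^4, which is steeper than the M ~ V^3 scaling of the halos themselves, and the normalization is roughly 50 solar masses per (km/s)^4 in the BTF literature.

# A minimal sketch (not the paper's code) of the empirical baryonic
# Tully-Fisher scaling, roughly M_b = A * V^4, compared with the naive
# M ~ V^3 halo scaling.  The normalization A ~ 50 M_sun*(s/km)^4 is an
# illustrative round number from the BTF literature, not from Sales et al.

def btf_baryonic_mass(v_flat_kms, A=50.0):
    """Baryonic mass (solar masses) implied by a flat rotation velocity
    (km/s) under the empirical M_b = A * V^4 relation."""
    return A * v_flat_kms ** 4

for v in (40.0, 80.0, 160.0, 220.0):
    print(f"V = {v:5.0f} km/s  ->  M_b ~ {btf_baryonic_mass(v):.2e} M_sun")

# Because the observed relation (V^4) is steeper than the halo scaling (V^3),
# matching both requires galaxy formation efficiency to fall with halo mass,
# which is exactly the feature the simulations have to build in.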

It isn't entirely clear from the paper what Sales, et al. did in the APOSTLE/EAGLE simulation that resolved the problems that had confounded previous galaxy formation models for the past several decades. Clearly, previous studies were doing something wrong. As this paper explains (citations omitted):
[T]he literature is littered with failed attempts to reproduce the Tully-Fisher relation in a cold dark matter-dominated universe. Direct galaxy formation simulations, for example, have for many years consistently produced galaxies so massive and compact that their rotation curves were steeply declining and, generally, a poor match to observation. Even semi-analytic models, where galaxy masses and sizes can be adjusted to match observation, have had difficulty reproducing the Tully-Fisher relation, typically predicting velocities at given mass that are significantly higher than observed unless somewhat arbitrary adjustments are made to the response of the dark halo.
There is some explanation of what they have done differently, but it isn't very specific:
The situation, however, has now started to change, notably as a result of improved recipes for the subgrid treatment of star formation and its associated feedback in direct simulations. As a result, recent simulations have shown that rotationally-supported disks with realistic surface density profiles and relatively flat rotation curves can actually form in cold dark matter halos when feedback is strong enough to effectively regulate ongoing star formation by limiting excessive gas accretion and removing low-angular momentum gas.

These results are encouraging but the number of individual systems simulated so far is small, and it is unclear whether the same codes would produce a realistic galaxy stellar mass function or reproduce the scatter of the Tully-Fisher relation when applied to a cosmologically significant volume. The role of the dark halo response to the assembly of the galaxy has remained particularly contentious, with some authors arguing that substantial modification to the innermost structure of the dark halo, in the form of a constant density core or cusp expansion, is needed to explain the disk galaxy scaling relations, while other authors find no compelling need for such adjustment.

The recent completion of ambitious simulation programmes such as the EAGLE project, which follow the formation of thousands of galaxies in cosmological boxes ≈ 100 Mpc on a side, allow for a reassessment of the situation. The subgrid physics modules of the EAGLE code have been calibrated to match the observed galaxy stellar mass function and the sizes of galaxies at z = 0, but no attempt has been made to match the BTF relation, which is therefore a true corollary of the model. The same is true of other relations, such as color bimodality, morphological diversity, or the stellar-mass Tully-Fisher relation of bright galaxies, which are successfully reproduced in the model. Combining EAGLE with multiple realizations of smaller volumes chosen to resemble the surroundings of the Local Group of Galaxies (the APOSTLE project), we are able to study the resulting BTF relation over four decades in galaxy mass. In particular, we are able to examine the simulation predictions for some of the faintest dwarfs, where recent data have highlighted potential deviations from a power-law BTF and/or increased scatter in the relation.
In other words, if you calibrate the code to produce the right sized galaxies using lambdaCDM, and cherry pick the simulated volumes so that their halos make them look like the galaxies near the Milky Way, you get results that match the Tully-Fisher relation and also have other properties that match observation.

But, it isn't clear which variables are being calibrated in what respects, and it isn't clear whether realizations of the simulation that don't look like the Local Group of Galaxies are being produced and then discarded.

The fact that simply selecting the right size and general pattern of the dark matter halos is sufficient to reproduce the baryonic Tully-Fisher and other relations is not a trivial finding.  But, it still doesn't answer the question of how to reproduce these dark matter halo sizes and patterns without putting them into the model by hand through the calibration and selection process.

We learn a little more later on, but it is still very vague:
We refer the reader to the main EAGLE papers for further details, but list here the main code features, for completeness. In brief, the code includes the “Anarchy” version of SPH, which includes the pressure-entropy variant proposed by Hopkins (2013); metal-dependent radiative cooling/heating, reionization of Hydrogen and Helium (at redshift z = 11.5 and z = 3.5, respectively), star formation with a metallicity dependent density threshold, stellar evolution and metal production, stellar feedback via stochastic thermal energy injection, and the growth of, and feedback from, supermassive black holes. The free parameters of the subgrid treatment of these mechanisms in the EAGLE code have been adjusted so as to provide a good match to the galaxy stellar mass function, the typical sizes of disk galaxies, and the stellar mass-black hole mass relation, all at z ≈ 0.
But, we aren't told what choices were made for which free parameters to accomplish this, which would seem to be a vital issue in determining the validity of the model and understanding what is going on to achieve this result. All that we are told is that the simulation uses:
1504^3 dark matter particles each of mass 9.7 × 10^6 M⊙; the same number of gas particles each of initial mass 1.8 × 10^6 M⊙; and a Plummer-equivalent gravitational softening length of 700 proper pc (switching to comoving for redshifts higher than z = 2.8). The cosmology adopted is that of Planck Collaboration et al. (2014), with Ω_M = 0.307, Ω_Λ = 0.693, Ω_b = 0.04825, h = 0.6777 and σ_8 = 0.8288.

The second set of simulations is the APOSTLE suite of zoom-in simulations, which evolve 12 volumes tailored to match the spatial distribution and kinematics of galaxies in the Local Group. Each volume was chosen to contain a pair of halos with individual virial mass in the range 5 × 10^11 - 2.5 × 10^12 M⊙. The pairs are separated by a distance comparable to that between the Milky Way (MW) and Andromeda (M31) galaxies (800 ± 200 kpc) and approach with radial velocity consistent with that of the MW-M31 pair (0-250 km/s).

The APOSTLE volumes were selected from the DOVE N-body simulation, which evolved a cosmological volume of 100 Mpc on a side in the WMAP-7 cosmology. The APOSTLE runs were performed at three different numerical resolutions; low (AP-LR), medium (AP-MR) and high (AP-HR), differing by successive factors of ≈ 10 in particle mass and ≈ 2 in gravitational force resolution. All 12 volumes have been run at medium and low resolutions, but only two high-res simulation volumes have been completed.

We use the SUBFIND algorithm to identify “galaxies”; i.e., self-bound structures in a catalog of friends-of-friends (FoF) halos built with a linking length of 0.2 times the mean interparticle separation. We retain for analysis only the central galaxy of each FoF halo, and remove from the analysis any system contaminated by lower resolution particles in the APOSTLE runs. Baryonic galaxy masses (stellar plus gas) are computed within a fiducial “galaxy radius”, defined as r_gal = 0.15 r_200. We have verified that this is a large enough radius to include the great majority of the star-forming cold gas and stars bound to each central galaxy.
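The friends-of-friends step that defines "galaxies" in this pipeline is conceptually simple: particles closer together than the linking length are joined into groups, and each connected group becomes a halo. The following is a minimal Python sketch of that idea only (it is not the actual SUBFIND/FoF code, and the particle positions are made up purely for illustration):

# A minimal sketch of the friends-of-friends (FoF) idea used to define halos
# (this is NOT the actual SUBFIND/FoF pipeline; the particle positions below
# are made up purely for illustration: a uniform background plus five clumps).
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
box = 10.0
background = rng.uniform(0.0, box, size=(800, 3))
centers = rng.uniform(2.0, 8.0, size=(5, 3))
clumps = np.concatenate([c + 0.05 * rng.standard_normal((200, 3)) for c in centers])
pos = np.concatenate([background, clumps])
n = len(pos)

mean_separation = box / n ** (1.0 / 3.0)
linking_length = 0.2 * mean_separation          # the b = 0.2 convention

# Link every pair of particles closer than the linking length, then take
# connected components of the resulting graph: each component is a FoF group.
pairs = cKDTree(pos).query_pairs(r=linking_length, output_type="ndarray")
row = np.concatenate([pairs[:, 0], pairs[:, 1]])
col = np.concatenate([pairs[:, 1], pairs[:, 0]])
graph = csr_matrix((np.ones(len(row)), (row, col)), shape=(n, n))
n_groups, labels = connected_components(graph, directed=False)

sizes = np.bincount(labels)
print("groups found:", n_groups)
print("largest group sizes:", np.sort(sizes)[-5:])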
The masses assumed for the dark matter and gas particles are absurdly large (comparable to intermediate-sized black holes), to the point where dark matter or gas particles of this size are observationally ruled out, because the simulation isn't computationally capable of handling a realistic number of particles (at least a trillion times more) with a far lower mass each.
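The arithmetic behind those huge particle masses makes the computational limitation concrete: the matter mass implied by the adopted cosmology in a (100 Mpc)^3 box, divided among 1504^3 simulation particles, forces each "particle" to weigh millions of solar masses. A rough check (my own arithmetic with standard constants, not taken from the paper):

# Rough check (my own arithmetic, not the paper's) of where those particle
# masses come from: the matter mass implied by the adopted cosmology in a
# (100 Mpc)^3 box, divided among 1504^3 simulation particles.
import math

G = 6.674e-11              # m^3 kg^-1 s^-2
Mpc = 3.086e22             # meters
M_sun = 1.989e30           # kg

h = 0.6777
H0 = h * 100.0 * 1.0e3 / Mpc                      # Hubble constant in 1/s
rho_crit = 3.0 * H0 ** 2 / (8.0 * math.pi * G)    # critical density, kg/m^3

Omega_M, Omega_b = 0.307, 0.04825
box_volume = (100.0 * Mpc) ** 3
n_particles = 1504 ** 3

m_dm = (Omega_M - Omega_b) * rho_crit * box_volume / n_particles
m_gas = Omega_b * rho_crit * box_volume / n_particles
print(f"dark matter particle mass ~ {m_dm / M_sun:.1e} M_sun")   # ~1e7 M_sun
print(f"gas particle mass         ~ {m_gas / M_sun:.1e} M_sun")  # ~2e6 M_sun
# Reproducing this box with particles of physically realistic mass would
# require vastly more particles than any computer can handle.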

The two particle masses assumed, the "softening" factor, and the minimum virial mass required for convergence, M^conv_200 (a total of four free parameters at each of four levels of resolution, and sixteen free parameters for the simulation as a whole, in addition to parameters inherent in the programs themselves), are arbitrarily set in a results-driven manner for each resolution of the EAGLE and APOSTLE simulations.  In other words, the results that the simulations produce are reverse engineered from a moderately realistic set of model rules for parts of the process, rather than predicted from first principles in any realistic cold dark matter model.

Again, this isn't to say that the paper hasn't made a breakthrough by coming up with the first simulation that can match key aspects of reality merely by tweaking the parameters of the dark matter halos, at least within its domain of applicability.  And, if subsequent steps allow investigators to devise a dark matter model that works and isn't contradicted by observational evidence, that's great. But, this paper standing alone certainly isn't an unqualified success either.

Modified gravity theories compared.

More than thirty years ago, non-relativistic MOND did pretty much exactly the same thing with a one-line equation that has only one free parameter and a larger range of applicability, one that extends from solar system sized and smaller systems to the largest single galaxies that we observe.
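For reference, in its deep, low-acceleration limit that one-line equation reduces to something you can evaluate on the back of an envelope: the flat rotation velocity satisfies V^4 = G * M_b * a_0, with the acceleration scale a_0 ≈ 1.2 × 10^-10 m/s^2 as the single free parameter (and this is exactly the baryonic Tully-Fisher scaling discussed above). A quick illustrative sketch, not tied to any particular published fit:

# Deep-MOND limit sketch (illustrative, not from any particular fit):
# the predicted flat rotation velocity from baryonic mass alone,
# V^4 = G * M_b * a0, with a0 ~ 1.2e-10 m/s^2 the single free parameter.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
a0 = 1.2e-10         # m/s^2, the MOND acceleration scale

def mond_flat_velocity(m_baryonic_msun):
    """Flat rotation velocity (km/s) in the deep-MOND regime."""
    v4 = G * m_baryonic_msun * M_sun * a0
    return v4 ** 0.25 / 1e3

for m in (1e8, 1e9, 1e10, 6e10):   # dwarf through Milky-Way-like baryonic masses
    print(f"M_b = {m:.0e} M_sun  ->  V_flat ~ {mond_flat_velocity(m):6.1f} km/s")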

Three decades later, it still works basically as advertised, so long as one uses baryonic rather than luminous matter and makes an adjustment when one galaxy is in the gravitational field of another galaxy.  It gets the results wrong in galactic clusters (which EAGLE and APOSTLE can't handle either), but because it underestimates the dark matter effects there, the discrepancy can be cured with cluster-specific dark matter (although realistically, the real problem is that the model needs another parameter or two and some adjustments to its formula to address extremely large systems).

Also, non-relativistic MOND made numerous predictions that were consistent with subsequent data collection.  But, this is a test that the EAGLE/APOSTLE approach already seems poised to fail the instant that it is proposed, just like previous dark matter models, which have repeatedly proved to be dismal failures at predicting previously unobserved phenomena in advance.

This isn't to say that non-relativistic MOND is right. It isn't. Like the EAGLE/APOSTLE simulation, it's a reverse engineered toy model that fails miserably in areas it wasn't designed to address (e.g., predicting the movement of particles outside the plane of a spiral galaxy's disk, or strong field behavior).  And, even the relativistic version of MOND called TeVeS, which resolves many of the most glaring problems with non-relativistic MOND in the strong field limit, probably isn't right either, and still fails at the galactic cluster scale and above.

But, other modified gravity theories, such as those proposed by Moffat, are relativistic and do work at all scales, even if they look a little clunky and have another parameter or two in addition to MOND's single parameter, which is coincidentally similar to a plausible simple combination of relevant physical constants.

Even more remarkably, Mr. Deur's revision of how massless graviton self-interaction works relative to the predictions of classical general relativity (derived by analogy to QCD, the theory of the strong force transmitted by self-interacting gluons) looks like a promising way to achieve all of the objectives of modified gravity theories with essentially no free parameters other than the gravitational coupling constant.  It solves the problems of dark matter, and all or most of the problem of dark energy, at all scales, without creating strong field pathologies, and while making new predictions that are consistent with observation in the case of elliptical galaxies.  It does all of this in one fell swoop that is theoretically well motivated and doesn't require us to invent any new particles, forces, dimensions of space-time, or discrete space-time elements.  Sooner or later, my strong intuition is that this solution will turn out to be the right one, even though it may take a generation or so for this to happen.

If Deur is right, the main problems of quantum gravity may turn out to have arisen mainly because we were trying to design a theory that was equivalent to general relativity, when general relativity itself was actually wrong in a subtle way that is relevant mostly in the very weak field limit, which ensures that any quantum gravity theory trying to replicate it will be pathological.

Friday, February 12, 2016

Strong Field Predictions Of General Relativity Confirmed

Background

Black holes and the existence of gravity waves were two of the most notable predictions of the theory of general relativity devised by Albert Einstein almost exactly a century ago (although the implications of that theory took much longer to work out, with most of the main conclusions that we have reached so far in place by the 1970s).

Black holes are concentrations of matter that are so strongly bound by gravity that not even light can escape them.* They can range in mass from about 3 times the mass of the Sun to 10,000,000,000 times the mass of the Sun in the supermassive black holes at the center of the largest galaxies (in principle, there is no upper limit on the mass of a black hole, but no larger black holes have ever been inferred to exist).**  In Newtonian gravity, photons aren't affected by gravity, and even if they were, gravity could never get strong enough to prevent them from escaping a massive object, because Newtonian gravity involves linear rather than non-linear field strengths.
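In general relativity, the surface from within which light cannot escape is set by the Schwarzschild radius, r_s = 2GM/c^2 for a non-rotating black hole. A quick back-of-the-envelope calculation (my own numbers) for the mass range just mentioned:

# Schwarzschild radius r_s = 2 G M / c^2 for a non-rotating black hole
# (a back-of-the-envelope illustration of the mass range discussed above).
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg

def schwarzschild_radius_km(mass_solar):
    return 2 * G * mass_solar * M_sun / c ** 2 / 1e3

for m in (3, 30, 1e10):
    print(f"M = {m:>14,.0f} M_sun  ->  r_s ~ {schwarzschild_radius_km(m):.2e} km")
# ~9 km at 3 M_sun, ~90 km at 30 M_sun, and ~3e10 km (roughly 200 times the
# Earth-Sun distance) at 10^10 M_sun.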

Indirect experimental evidence (such as gravitational lensing) long ago indicated that black holes exist and allowed their masses to be measured.

In Newtonian gravity, gravity's effects are transmitted instantly at all distances.  In general relativity, gravity's effects are transmitted via gravitational waves in space-time that propagate at the speed of light "c".

What did LIGO See?

The LIGO gravity wave experiment formally announced yesterday that it had detected the merger of two roughly equal mass black holes, with a combined mass of 65 times the mass of our Sun, about 1.3 billion light years away from Earth, which converted roughly 5% of their combined mass into gravity waves (of course, there was immense kinetic energy in addition to rest mass present in the binary black hole system).  The resulting combined black hole was a Kerr black hole, which means that it has angular momentum (a Schwarzschild black hole is a special case of a Kerr black hole with zero angular momentum).
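That 5% figure corresponds to roughly three solar masses converted into gravitational wave energy via E = mc^2. A quick order-of-magnitude check (my own arithmetic, not a figure from the LIGO papers):

# Order-of-magnitude check (my arithmetic): ~5% of 65 solar masses,
# roughly 3 solar masses, radiated as gravitational waves via E = m c^2.
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

m_radiated = 0.05 * 65 * M_sun          # roughly 3 solar masses
E = m_radiated * c ** 2
print(f"mass radiated   ~ {m_radiated / M_sun:.1f} M_sun")
print(f"energy radiated ~ {E:.2e} J")   # ~6e47 J, emitted in a fraction of a second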

The black holes were each about 100 miles in diameter before merging, and less than two minutes before their merger this binary black hole system was spiraling around at almost the speed of light, with the two black holes separated by about 600 miles (a disk of space about the size of Alaska).

The power of the gravitational waves emitted by the extraordinary event that LIGO observed was greater than the combined power of the light emitted by every star in the universe at that moment.  By comparison, the gravitational waves emitted by the entire solar system have a power of about 200 watts (two ordinary light bulbs). The final ping of gravity waves when the black holes finally merged had a frequency roughly the same as the sound wave of a middle D note on a piano.

For a brief moment after the merger, the black hole would have been a bit "bumpy", but the combined force of gravity would swiftly smooth it out into an equilibrium, smoothly curved shape.

The statistical significance of the detection event was 5.1 sigma (i.e. 5.1 standard deviations in excess of the null hypothesis that no gravitational wave event was detected) which rates as a scientific discovery. It is the first direct observation of gravity waves (which had previously been inferred from the behavior of binary star systems observed with telescopes) and the most direct observation to date that has been made of black holes.

Gravity Wave Detectors

The LIGO experiment detects gravity waves by looking at the interference pattern generated by two laser beams, each traveling about 4 kilometers out and back at right angles to each other, at two locations, one in Washington State and the other in Louisiana, with all manner of background noise screened out.

The LIGO experiment is sensitive to distortions of space-time far smaller than the diameter of a proton, something made possible only by the immense precision with which we understand and can measure electromagnetic phenomena using the Standard Model.  What LIGO detected was a change in the lengths of the detectors' arms of about that magnitude, in a pattern that identified the strength and direction of the source generating the gravity waves.  The gravity wave event that was detected was not accompanied by a surge in cosmic neutrinos (which are associated with supernovas and star collisions/mergers, but not with black hole mergers).
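The scale of the measurement is easiest to appreciate as a dimensionless strain, h = ΔL/L. A strain of order 10^-21 over a 4 kilometer arm works out to a length change of a few thousandths of a proton's diameter (rough round numbers here, not LIGO's published analysis):

# Rough illustration of the measurement scale (approximate round numbers,
# not LIGO's published analysis): strain h = delta_L / L.
L_arm = 4.0e3            # arm length in meters
h = 1.0e-21              # order of magnitude of the peak strain of the event

delta_L = h * L_arm
proton_diameter = 1.7e-15      # meters, approximate
print(f"arm length change ~ {delta_L:.1e} m")
print(f"                  ~ 1/{proton_diameter / delta_L:.0f} of a proton diameter")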

About half a dozen other gravity wave detection experiments are set to come on line over the next few years.  Some are space-based, one more is land-based, and one uses continuous observations of pulsars in the Milky Way to create, in effect, a galaxy-sized gravity wave telescope.

The experiments are largely complementary to each other rather than being competitors.  Each experiment is sensitive to a different range of gravity wave frequencies, with LIGO measuring only the highest frequency gravity waves.  For example, the LISA gravity wave experiment (in space) would not have been able to detect this event, because this event's gravity waves fall outside the range that its instrumentation, which is tuned to less dramatic gravity waves, can see.

Scientists had doubted that LIGO would be the first to detect gravity waves because it takes such an extraordinary event for it to receive a signal that it can confirm is a gravity wave with the 5 sigma significance needed to constitute a discovery.  These events were predicted to be rare and LIGO simply got lucky in having such an event occur at the right time.  Lower frequency gravitational waves are suspected to be more frequent because less dramatic events can create them.  But, LIGO has also detected several more potential gravitational wave events during its several year existence, although those detections were less statistically significant.

Significance

Twelve papers were generated by LIGO based upon the experiment.  The most notable for my purposes was the one examining the extent to which the observed gravitational waves matched the predictions of General Relativity in strong gravitational fields.

Strong gravitational fields have been the subject of a great deal of speculation about ways that general relativity could be tweaked while still remaining consistent with existing observations, in part because the experimental evidence did not constrain deviations from general relativity in the strong field regime very strictly.

A couple of recent papers, one 95 pages long and one four pages long, have examined how observations from LIGO could test alternatives to General Relativity, which make different predictions about the kinds of strong field gravity waves that would be generated by events like this one.

The ultra-precision LIGO results are consistent with General Relativity up to the limits of its margin of error with no real tension between theory and experiment, and accordingly, greatly constrain the parameter space of alternative theories of gravity that can still be consistent with experimental observations.  For example, these experiments place an experimental bound on how heavy gravitons can be in a "massive graviton" theory.

Similarly, direct experimental observations of gravitational waves, by providing a direct observation of the mechanism by which gravity is transmitted, powerfully disfavor alternative theories of gravity in which gravitational effects are non-local or transmitted instantaneously.  These limits may become even more powerful when gravity wave detectors capable of seeing the weaker gravity waves generated by events involving stars come on line, allowing gravitational wave measurements to be correlated with evidence from telescopes that see photons, from cosmic ray detectors, and from neutrino detectors observing the same event.

Understanding strong gravitational fields is relevant to understanding gravitational singularities like the Big Bang, cosmological inflation, black holes, galaxy formation, and the way that galactic clusters work.

It may end up being important to understanding quantum gravity theories, which generally predict that many singularities in general relativity (a non-quantum "classical" theory of physics), that is, circumstances in which infinities show up as results in equations, actually just produce very large numbers that are not infinite when quantum effects are considered. Some quantum gravity theories also reproduce general relativity in the classical limit in medium-sized gravitational fields, while deviating from general relativity somewhat in the extreme strong field and extreme weak field limits. So, quantum gravity theories that differ from general relativity in the strong field limit can be constrained.

Extensive measurements of general relativity at work in the strong field limit may also provide insights to quantum gravity researchers who are looking for some additional, experimentally supported axiom to address the problem of the non-renormalizability of naive quantum gravity theories.  For example, if gravity wave observations in the strong field limit become precise enough to determine whether the gravitational constant G runs with energy scale in the way that the Standard Model coupling constants do, this could provide an axiom that could be used to formulate workable quantum gravity theories.  But, the LIGO observation, while ultra-precise, isn't sufficiently precise to place strong bounds on that possibility.

What it does not tell us.

On the other hand, not all ill-understood phenomena in which gravity is important can be better understood by looking at the strong field behavior of general relativity that governs black holes and the Big Bang.

Phenomena like dark matter and dark energy are relevant only in the context of extremely weak gravitational fields.  An improved understanding of strong gravitational fields only constrains possible resolutions of the dark matter and dark energy phenomena to the extent that a solution to these weak field issues has a side effect that would have a phenomenological impact in the strong field regime as well.

Also, it is important to note that what LIGO has seen is completely different from the B-mode signature of primordial gravitational waves (tensor modes) which the BICEP-2 experiment reported that it had seen signs of in the cosmic background radiation (a claim which later proved to be unsupported by the available data).

Searches like those at BICEP-2 are looking for patterns in the overall distribution of matter and energy in the universe that are associated with particular cosmological inflation scenarios in the early moments after the Big Bang, rather than the gravity waves produced by a single, more recent event in one particular part of the universe that experiments like LIGO and LISA are designed to detect.

Impact Of Future Gravity Wave Observations

Gravity wave telescopes provide a new way to conduct astronomy, supplementing telescopes that look at electromagnetic waves, from frequencies on the infrared side as low as the cosmic background radiation and radio waves to frequencies on the ultraviolet side a bit beyond the blue of visible light. Cosmic ray telescopes and neutrino telescopes detect tiny bits of matter, like electrons, neutrinos, or individual interstellar gas or dust atoms and molecules, that are hurled across the universe at high speeds from distant stars (the term "cosmic rays" is a misnomer, since cosmic rays generally don't involve mere photons).

Over the next few decades, as more events are observed at a wider range of frequencies and the new gravitational wave detection experiments come on line, these constraints on the strong field behavior of Nature relative to General Relativity will become much tighter.

Footnotes Regarding Black Holes

* It is not uncommon to say that black holes are the most dense objects in the universe, and that this is why light cannot escape them.  Density means mass divided by volume.  And, for the most conventional definition of the volume of a black hole, i.e. the volume within its "event horizon" from which light cannot escape, this is not true except for the smallest of black holes. All but the smallest black holes are not the most dense objects in the universe, and necessarily, the reason that light cannot escape a black hole is not that it is the most dense object in the universe.

For example, photons routinely escape from neutron stars and from atomic nuclei, which are more dense than all but the smallest black holes.  Yet, we routinely directly observe the light from neutron stars with telescopes, and the photons exchanged with atomic nuclei are what keep the electrons moving in a cloud around those nuclei from flying away.

Neutron stars, which have masses just under the threshold for gravitational collapse into a black hole, can pack about 3 times the mass of the Sun into a density roughly the same as that of an atomic nucleus before they collapse to form black holes.  The most dense objects in the universe are black holes just over this threshold.

But, the volume of a black hole, as measured by its event horizon, grows more rapidly than its mass, because the event horizon radius grows in proportion to the mass.  As a result, black holes that are significantly heavier than the neutron star-black hole threshold, such as the roughly 30 solar mass black holes seen by LIGO, are significantly less dense, in mass per event horizon contained volume, than neutron stars or atomic nuclei.  The density of the supermassive black holes at the centers of large galaxies, measured in mass per event horizon contained volume, can be on the order of the density of liquid water or ordinary Earth rocks, or even lower for the very largest ones.
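Putting rough numbers on that (a back-of-the-envelope calculation of my own, not from the LIGO papers): since the event horizon radius r_s = 2GM/c^2 grows in proportion to mass, the mean density inside the horizon falls off as 1/M^2.

# Mean density inside the event horizon, rho = M / (4/3 * pi * r_s^3),
# falls as 1/M^2 because r_s = 2GM/c^2 grows linearly with M.
# (Back-of-the-envelope numbers, not from the LIGO papers.)
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg

def horizon_density(mass_solar):
    m = mass_solar * M_sun
    r_s = 2 * G * m / c ** 2
    return m / (4.0 / 3.0 * math.pi * r_s ** 3)      # kg/m^3

for m in (3, 30, 1e8):
    print(f"M = {m:>11,.0f} M_sun  ->  mean density ~ {horizon_density(m):.1e} kg/m^3")
# ~2e18 kg/m^3 at 3 M_sun (above nuclear density), ~2e16 at 30 M_sun,
# and ~2e3 kg/m^3 (roughly rock) at 10^8 M_sun.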

The internal distribution of matter within a black hole is unknown and may be unknowable.  In general relativity, all of the observable properties of a Kerr black hole in equilibrium, such as the one created by the merger that LIGO observed once it settled down shortly after the merger, can be determined from its mass and its angular momentum.  (The fact that this is possible poses a problem, requiring considerable theoretical attention, for quantum gravity theories in which the law that information cannot be created or destroyed is an axiom.)

The observational reality that there is a well known maximum density of matter approximately equal to the density of an atomic nucleus, neutron star or small stellar mass black hole is generally considered to be a mere empirical fact that emerges from other physical laws.  But, one can imagine a Copernican revolution arising from a theory in which a maximum density of mass-energy per volume (appropriately defined) is a law of nature.  The black hole-neutron star transition point also provides a physical calibration point which is ultimately a function of an equilibrium between a function of the gravitational constant G and a function of the strong force coupling constant of the Standard Model.

** It is theoretically possible for a black hole of less than 3 solar masses to exist, either because it was created by means other than self-generated gravitational collapse, or because it was once larger and evaporated via Hawking radiation (what escapes from black holes is not quite nothing, but merely almost nothing, with a little bit of Hawking radiation leaking out).  Generally speaking, cosmic background radiation adds more mass to a stellar or larger black hole than Hawking radiation takes away.  But, in principle, at some point in time when this wasn't the case, a stellar black hole could evaporate to less than 3 solar masses while retaining its black hole status.

In real life, no one has ever observed a black hole of this kind, which would be called a "primordial black hole."  Most primordial black holes created around the time of the Big Bang probably would have evaporated via Hawking radiation by now, but primordial black holes of 10^14 kg or more would not have evaporated, and primordial black holes of 10^23 kg or less can't be excluded by gravitational lensing observations.

Thus, primordial black holes, if they did exist, would have masses comparable to asteroids and have been proposed as dark matter candidates (although few dark matter theorists view them as a very serious dark matter candidate for a variety of reasons).

Primordial black holes in that mass range would have radii from about 145 femtometers (about the width of ten uranium nuclei packed side by side) to about 0.145 millimeters (the thickness of a strand of hair or one coat of paint).