Saturday, February 13, 2016

Dark matter theorists come close to reproducing the baryonic Tully-Fisher relation with reverse-engineered simulation

Dark matter theorists have recently made their most successful effort yet to reproduce the baryonic Tully-Fisher relation, which relates the rate at which a galaxy rotates to the amount of ordinary matter in it, within a fairly limited range of galaxy masses that excludes the very smallest and very largest galaxies.
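
For context (standard background, not from the paper): the empirical relation is usually stated as M_b ∝ v^4, where M_b is a galaxy's total baryonic (stellar plus gas) mass and v is its asymptotic flat rotation velocity, while the naive scaling for cold dark matter halos discussed in the abstract below is M ~ V^3, a visibly different slope.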

Furthermore, they have made a prediction with their model that can be used to test its accuracy against new data for low mass galaxies. Preliminary data, however, tend to show that this prediction does not match observation. The paper's abstract states:
The scaling of disk galaxy rotation velocity with baryonic mass (the "Baryonic Tully-Fisher" relation, BTF) has long confounded galaxy formation models. It is steeper than the M ~ V^3 scaling relating halo virial masses and circular velocities and its zero point implies that galaxies comprise a very small fraction of available baryons. 
Such low galaxy formation efficiencies may in principle be explained by winds driven by evolving stars, but the tightness of the BTF relation argues against the substantial scatter expected from such vigorous feedback mechanism. 
We use the APOSTLE/EAGLE simulations to show that the BTF relation is well reproduced in LCDM simulations that match the size and number of galaxies as a function of stellar mass. In such models, galaxy rotation velocities are proportional to halo virial velocity and the steep velocity-mass dependence results from the decline in galaxy formation efficiency with decreasing halo mass needed to reconcile the CDM halo mass function with the galaxy luminosity function. Despite the strong feedback, the scatter in the simulated BTF is smaller than observed, even when considering all simulated galaxies and not just rotationally-supported ones. 
The simulations predict that the BTF should become increasingly steep at the faint end, although the velocity scatter at fixed mass should remain small. Observed galaxies with rotation speeds below ~40 km/s seem to deviate from this prediction. We discuss observational biases and modeling uncertainties that may help to explain this disagreement in the context of LCDM models of dwarf galaxy formation.
L.V. Sales, et al., "The low-mass end of the baryonic Tully-Fisher relation" (February 5, 2016) (emphasis and paragraph breaks added).
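
A one-line aside on where the halo M ~ V^3 scaling in the abstract comes from (textbook material, not from the paper): the virial mass M_200 is defined as the mass within the radius r_200 enclosing 200 times the critical density, and V_200 as the circular velocity at that radius, so M_200 = (4/3)π r_200^3 × 200 ρ_crit and V_200^2 = G M_200 / r_200. Eliminating r_200 gives M_200 ∝ V_200^3, with a prefactor fixed by G and the Hubble rate. The steeper observed slope of roughly 4 is what the mass-dependent galaxy formation efficiency in the simulations has to supply.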

It isn't entirely clear from the paper what Sales, et al. did in the APOSTLE/EAGLE simulations that resolved the problems that had confounded previous galaxy formation models for the past several decades. Clearly, previous studies were doing something wrong. As this paper explains (citations omitted):
[T]he literature is littered with failed attempts to reproduce the Tully-Fisher relation in a cold dark matter-dominated universe. Direct galaxy formation simulations, for example, have for many years consistently produced galaxies so massive and compact that their rotation curves were steeply declining and, generally, a poor match to observation. Even semi-analytic models, where galaxy masses and sizes can be adjusted to match observation, have had difficulty reproducing the Tully-Fisher relation, typically predicting velocities at given mass that are significantly higher than observed unless somewhat arbitrary adjustments are made to the response of the dark halo.
There is some explanation of what they have done differently, but it isn't very specific:
The situation, however, has now started to change, notably as a result of improved recipes for the subgrid treatment of star formation and its associated feedback in direct simulations. As a result, recent simulations have shown that rotationally-supported disks with realistic surface density profiles and relatively flat rotation curves can actually form in cold dark matter halos when feedback is strong enough to effectively regulate ongoing star formation by limiting excessive gas accretion and removing low-angular momentum gas.

These results are encouraging but the number of individual systems simulated so far is small, and it is unclear whether the same codes would produce a realistic galaxy stellar mass function or reproduce the scatter of the Tully-Fisher relation when applied to a cosmologically significant volume. The role of the dark halo response to the assembly of the galaxy has remained particularly contentious, with some authors arguing that substantial modification to the innermost structure of the dark halo, in the form of a constant density core or cusp expansion, is needed to explain the disk galaxy scaling relations, while other authors find no compelling need for such adjustment.

The recent completion of ambitious simulation programmes such as the EAGLE project, which follow the formation of thousands of galaxies in cosmological boxes ≈ 100 Mpc on a side, allow for a reassessment of the situation. The subgrid physics modules of the EAGLE code have been calibrated to match the observed galaxy stellar mass function and the sizes of galaxies at z = 0, but no attempt has been made to match the BTF relation, which is therefore a true corollary of the model. The same is true of other relations, such as color bimodality, morphological diversity, or the stellar-mass Tully-Fisher relation of bright galaxies, which are successfully reproduced in the model. Combining EAGLE with multiple realizations of smaller volumes chosen to resemble the surroundings of the Local Group of Galaxies (the APOSTLE project), we are able to study the resulting BTF relation over four decades in galaxy mass. In particular, we are able to examine the simulation predictions for some of the faintest dwarfs, where recent data have highlighted potential deviations from a power-law BTF and/or increased scatter in the relation.
In other words, if you calibrate the code to produce the right sized galaxies in a ΛCDM universe and cherry-pick the simulated volumes so that their halos look like the neighborhood of the Milky Way, you get results that match the Tully-Fisher relation and also have other properties that match observation.

But, it isn't clear which variables are being calibrated, or in what respects, and it isn't clear whether realizations of the simulation that don't look like the Local Group of Galaxies are being produced and then discarded.

The fact that simply selecting the right size and general pattern of dark matter halos is sufficient to reproduce the baryonic Tully-Fisher and other relations is not a trivial finding. But, it still doesn't answer the question of how to reproduce these dark matter halo sizes and patterns without putting them into the model by hand through the calibration and selection process.

We learn a little more later on, but it is still very vague:
We refer the reader to the main EAGLE papers for further details, but list here the main code features, for completeness. In brief, the code includes the “Anarchy” version of SPH, which includes the pressure-entropy variant proposed by Hopkins (2013); metal-dependent radiative cooling/heating, reionization of Hydrogen and Helium (at redshift z = 11.5 and z = 3.5, respectively), star formation with a metallicity dependent density threshold, stellar evolution and metal production, stellar feedback via stochastic thermal energy injection, and the growth of, and feedback from, supermassive black holes. The free parameters of the subgrid treatment of these mechanisms in the EAGLE code have been adjusted so as to provide a good match to the galaxy stellar mass function, the typical sizes of disk galaxies, and the stellar mass-black hole mass relation, all at z ≈ 0.
But, we aren't told which choices were made for which free parameters to accomplish this, which would seem to be a vital issue in evaluating the validity of the model and understanding what is going on to achieve this result. All that we are told is that:
1504^3 dark matter particles each of mass 9.7 × 10^6 M⊙; the same number of gas particles each of initial mass 1.8 × 10^6 M⊙; and a Plummer-equivalent gravitational softening length of 700 proper pc (switching to comoving for redshifts higher than z = 2.8). The cosmology adopted is that of Planck Collaboration et al. (2014), with ΩM = 0.307, ΩΛ = 0.693, Ωb = 0.04825, h = 0.6777 and σ8 = 0.8288.

The second set of simulations is the APOSTLE suite of zoom-in simulations, which evolve 12 volumes tailored to match the spatial distribution and kinematics of galaxies in the Local Group. Each volume was chosen to contain a pair of halos with individual virial mass in the range 5 × 10^11 to 2.5 × 10^12 M⊙. The pairs are separated by a distance comparable to that between the Milky Way (MW) and Andromeda (M31) galaxies (800 ± 200 kpc) and approach with radial velocity consistent with that of the MW-M31 pair (0-250 km/s).

The APOSTLE volumes were selected from the DOVE N-body simulation, which evolved a cosmological volume of 100 Mpc on a side in the WMAP-7 cosmology. The APOSTLE runs were performed at three different numerical resolutions; low (AP-LR), medium (AP-MR) and high (AP-HR), differing by successive factors of ≈ 10 in particle mass and ≈ 2 in gravitational force resolution. All 12 volumes have been run at medium and low resolutions, but only two high-res simulation volumes have been completed.

We use the SUBFIND algorithm to identify “galaxies”; i.e., self-bound structures in a catalog of friends-of-friends (FoF) halos built with a linking length of 0.2 times the mean interparticle separation. We retain for analysis only the central galaxy of each FoF halo, and remove from the analysis any system contaminated by lower resolution particles in the APOSTLE runs. Baryonic galaxy masses (stellar plus gas) are computed within a fiducial “galaxy radius”, defined as r_gal = 0.15 r_200. We have verified that this is a large enough radius to include the great majority of the star-forming cold gas and stars bound to each central galaxy.
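
To make the quoted pipeline concrete, here is a minimal, illustrative Python sketch of the friends-of-friends step. This is a naive O(N^2) toy, not the SUBFIND/FoF code the authors actually used, and the function and variable names are mine:

import numpy as np

def friends_of_friends(positions, box_size, n_per_side):
    """Group particles closer than b times the mean interparticle
    separation; groups are the connected components of that relation."""
    b = 0.2  # linking-length fraction quoted in the paper
    link = b * (box_size / n_per_side)  # mean separation for n_per_side^3 particles
    n = len(positions)
    group = -np.ones(n, dtype=int)  # -1 means "not yet assigned"
    next_group = 0
    for i in range(n):
        if group[i] >= 0:
            continue
        group[i] = next_group
        stack = [i]
        while stack:  # breadth-first search over the friend relation
            j = stack.pop()
            d = np.linalg.norm(positions - positions[j], axis=1)
            friends = np.where((d < link) & (group < 0))[0]
            group[friends] = next_group
            stack.extend(friends.tolist())
        next_group += 1
    return group  # group label for each particle

A production group finder uses spatial trees and periodic boundary conditions, but the point survives the simplification: "galaxies" in these simulations are defined algorithmically by a linking length before any comparison with observation is made.
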
The masses of the dark matter and gas particles in the simulation are absurdly large (each dark matter "particle" is comparable in mass to the supermassive black hole at the center of the Milky Way), to the point where actual dark matter or gas particles of this mass are observationally ruled out. This is because the simulation isn't computationally capable of handling a realistic number of particles (at least a trillion times more, each with a far lower mass).
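
Two quick back-of-the-envelope checks of this point, using only numbers quoted above (my arithmetic; the 100 GeV "WIMP" is a purely illustrative dark matter candidate, not anything in the paper):

# Sanity check: the quoted particle mass follows from the box and the cosmology.
OmegaM, OmegaB, h = 0.307, 0.04825, 0.6777
rho_crit = 2.775e11 * h**2            # critical density in Msun per Mpc^3
m_dm_total = (OmegaM - OmegaB) * rho_crit * 100.0**3  # dark matter in the (100 Mpc)^3 box
print(m_dm_total / 1504**3)           # ~9.7e6 Msun per particle, as quoted

# Scale comparison: one simulation particle versus a 100 GeV WIMP.
msun_in_gev = 1.116e57                # one solar mass in GeV/c^2
print(9.7e6 * msun_in_gev / 100.0)    # ~1e62 WIMPs per simulation "particle"

So each simulated "particle" stands in for something like 10^62 actual dark matter particles, if dark matter is anything like a weak-scale particle.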

The two particle masses assumed, the "softening" factor, and the minimum virial mass required for convergence, M^conv_200 (a total of four free parameters at each of four levels of resolution, i.e. sixteen free parameters for the simulation as a whole, in addition to parameters inherent in the programs themselves), are set in a results-driven manner for each resolution of the EAGLE and APOSTLE simulations. In other words, the results that the simulations produce are reverse-engineered from a moderately realistic set of model rules for parts of the process, rather than predicted from first principles by any realistic cold dark matter model.

Again, this isn't to say that the paper hasn't made a breakthrough by coming up with the first simulation that can match key aspects of reality merely by tweaking the parameters of the dark matter halos, at least within its domain of applicability. And, if subsequent work allows investigators to devise a dark matter model that works and isn't contradicted by observational evidence, that's great. But, this paper, standing alone, certainly isn't an unqualified success either.

Modified gravity theories compared.

More than thirty years ago, non-relativistic MOND did pretty much exactly the same thing with a one-line equation that has only one free parameter and a larger domain of applicability, extending from solar-system-sized and smaller systems up to the largest single galaxies that we observe.
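
For reference (standard MOND background, not from the paper under discussion), Milgrom's modification replaces the Newtonian acceleration a_N = GM/r^2 with μ(a/a_0) a = a_N, where μ(x) → 1 for x ≫ 1 and μ(x) → x for x ≪ 1. In the deep low-acceleration regime this gives a = √(a_N a_0), and setting a = v^2/r for a circular orbit yields v^4 = G M_b a_0: a flat rotation curve whose velocity scales with baryonic mass exactly as the BTF relation requires, with a_0 ≈ 1.2 × 10^-10 m/s^2 as the single fitted parameter.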

Three decades later, it still works basically as advertised, so long as one uses baryonic rather than luminous matter and makes an adjustment when one galaxy is in the gravitational field of another. While it gets the results wrong in galactic clusters (which EAGLE and APOSTLE can't handle either), it underestimates the apparent dark matter effects there, so the discrepancy can be patched with cluster-specific dark matter (although realistically, the real problem is that the model needs another parameter or two, and some adjustment to its formula, to address extremely large systems).
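
The "adjustment" mentioned above is MOND's external field effect. In a rough one-dimensional sketch, the internal dynamics of a system embedded in an external field g_ext respond to the total acceleration, a ≈ a_N / μ((a + g_ext)/a_0), so a dwarf galaxy sitting in a strong enough external field (g_ext ≳ a_0) behaves nearly Newtonian even where its internal accelerations fall well below a_0.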

Also, non-relativistic MOND made numerous predictions that were consistent with subsequent data collection. This is a test that the EAGLE/APOSTLE approach already seems poised to fail at the very moment its first prediction is proposed, just like previous dark matter models, which have repeatedly proved to be dismal failures at predicting new, unobserved phenomena in advance.

This isn't to say that non-relativistic MOND is right. It isn't. Like the EAGLE/APOSTLE simulations, it is a reverse-engineered toy model that fails in areas it wasn't designed to address, such as strong gravitational fields and the motion of particles outside the plane of a spiral galaxy's disk. And even TeVeS, the relativistic generalization of MOND that resolves many of the most glaring problems with non-relativistic MOND in the strong field limit, probably isn't right either, and still fails at the galactic cluster scale and above.

But, other modified gravity theories, such as those proposed by Moffat, are relativistic and do work at all scales, even if they look a little clunky and have another parameter or two in addition to MOND's single parameter a_0, which is coincidentally similar to a plausible simple combination of relevant physical constants.
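
The coincidence alluded to is standard background arithmetic: with H_0 ≈ 70 km/s/Mpc ≈ 2.3 × 10^-18 s^-1, one gets c H_0 / 2π ≈ (3.0 × 10^8 m/s)(2.3 × 10^-18 s^-1) / 2π ≈ 1.1 × 10^-10 m/s^2, which matches a_0 ≈ 1.2 × 10^-10 m/s^2 to within observational precision, for no known reason.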

Even more remarkably, Alexandre Deur's revision of how massless graviton self-interaction works relative to the predictions of classical general relativity (derived by analogy to QCD, the theory of the strong force transmitted by self-interacting gluons) looks like a promising way to achieve all of the objectives of modified gravity theories with essentially no free parameter other than the gravitational coupling constant. It addresses the problems of dark matter, and all or most of the problem of dark energy, at all scales, without creating strong field pathologies, while making new predictions that are consistent with observation in the case of elliptical galaxies. It does all of this in one fell swoop, in a way that is theoretically well motivated and doesn't require us to invent any new particles, forces, dimensions of space-time, or discrete space-time elements. My strong intuition is that, sooner or later, this solution will turn out to be the right one, even though it may take a generation or so for that to happen.
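
Schematically (my gloss on the QCD analogy, not a quotation from Deur's papers): expanding the Einstein-Hilbert Lagrangian in the gravitational field φ produces an infinite series of self-interaction terms, L ~ [∂φ ∂φ] + √(16πG) [φ ∂φ ∂φ] + 16πG [φ^2 ∂φ ∂φ] + …, and the claim is that, as with gluons in QCD, these self-interaction terms become non-negligible for massive, non-spherically-symmetric systems, concentrating the field along preferred directions and mimicking dark matter with no new parameter beyond G.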

If Deur is right, the main problems of quantum gravity may turn out to have arisen mainly because we were trying to design a theory equivalent to general relativity, when general relativity itself was actually wrong in a subtle way, relevant mostly in the very weak field limit, which ensures that any quantum gravity theory trying to replicate it will be pathological.

1 comment:

Tienzen said...

"... when general relativity itself was actual wrong in a subtle way that is relevant mostly only in the very weak field limit that insures that any quantum gravity theory trying to replicate it will be pathological."

Amen!
The recent direct observation of gravitational waves by no means establishes that GR is a correct description of Nature.
In the G-string model, this universe has two spheres [(real/matter) and (ghost/nothingness)]. Although actual space/time is quantized, the spacetime of the real sphere can be approximated as a continuous fabric. The gravitational interaction is expressed in both spheres: instantaneously via the ghost sphere, and as gravitational waves in the real sphere. The gravitational wave is thus only one attribute (not the whole) of the gravitational interaction; see https://tienzengong.wordpress.com/2016/02/11/ligo-story-exciting-yes-and-no/