Background
The sum of the squares of the fundamental boson masses, plus the sum of the squares of the fundamental fermion masses, equals the square of the Higgs vacuum expectation value to a precision of 0.012% (easily within the experimental measurement error of the source masses), i.e. to four significant digits, if:
* one assumes that twice the mass of the Higgs boson equals twice the W boson mass plus the Z boson mass (91.1876 GeV), i.e. that 2H = 2W + Z, and
* one also uses global fit values for the W boson mass (80.376 GeV/c^2) and the top quark mass (173.2 GeV/c^2) rather than individual best estimates of these masses that ignore global fit constraints, and
* one calculates the Higgs vacuum expectation value of 242.29 GeV that these best fit values imply (together with the canonical value of the electromagnetic force coupling constant and its beta function governing its running).
If this relationship is correct, the Higgs boson mass is 125.97 GeV (well within experimental bounds and accurate to five significant digits). The four significant digit agreement between the Higgs vev squared and the sum of the fundamental particle masses squared is as accurate as the least accurately known of the inputs that have a significant impact on the total (the top quark mass).
In my view, this is quite compelling evidence that both the 2H=2W+Z and Higgs vev squared equals sum of fundamental particle rest mass squared relationships are true and fundamental rather than mere coincidences (which isn't to say that I have all of the details of the mechanism that illustrates how these fundamental relationships arise). We are moving into an era where these relationships can be evaluated with precision.
The Not Quite Boson-Fermion Mass Symmetry
If these relationships are true, then the sum of the square of the three boson masses (W, Z, and Higgs) is about 2.1% greater than the sum of the square of the masses of the twelve fermions (six quarks, three charged leptons and three neutrinos, but dominated by the top quark mass which accounts for 99.94% of the total).
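These relationships are easy to check numerically. A quick sketch, using approximate circa-2013 mass values in GeV (the neutrino masses are negligible at this precision and are omitted; these inputs are not the exact global fit values discussed above, so the exact percentages come out slightly differently):

```python
# Check the mass relationships with approximate mass values (GeV).
m_W, m_Z = 80.376, 91.1876

# 2H = 2W + Z  =>  H = W + Z/2
m_H = m_W + m_Z / 2
print(f"Implied Higgs mass: {m_H:.4f} GeV")  # ~125.97 GeV

fermions = {
    "top": 173.2, "bottom": 4.18, "charm": 1.275, "strange": 0.095,
    "up": 0.0023, "down": 0.0048,
    "tau": 1.77682, "muon": 0.10566, "electron": 0.000511,
}
boson_sq = m_W**2 + m_Z**2 + m_H**2
fermion_sq = sum(m**2 for m in fermions.values())

print(f"Top quark share of fermion total: {fermions['top']**2 / fermion_sq:.2%}")
print(f"Boson/fermion sum-of-squares ratio: {boson_sq / fermion_sq:.4f}")  # ~1.02
print(f"sqrt(boson_sq + fermion_sq): {(boson_sq + fermion_sq) ** 0.5:.2f} GeV")
```

With these inputs the boson side exceeds the fermion side by about 2.1%, and the square root of the grand total lands in the mid-240s of GeV, in the neighborhood of the Higgs vev.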
If the first two relationships, which almost precisely match the best available estimates of the Standard Model parameters, are true, then it cannot also be true that the sum of the squares of the boson masses and the sum of the squares of the fermion masses are equal. Or at any rate, it can't be true for rest masses and coupling constant strengths at the energy scales ordinarily used by physicists.
This is a frustrating thing. If the two were equal, we would have this profound boson-fermion mass symmetry - hinting that the boson-fermion symmetries of SUSY might be present in the plain old Standard Model, although more subtly. If the two were wildly different, we wouldn't even think to look at the relationship of the two quantities. But, they are instead close, but not quite close enough to possibly be the same.
It might be that there is a profound boson-fermion mass symmetry, but that it only manifests under the right conditions or from the right perspective. For example, there might be some energy scale, to which the boson masses, fermion masses, and coupling constants run, at which 2H=2W+Z, the sum of squares relationship to the Higgs vev, and the boson-fermion mass symmetry all hold simultaneously, and below which the boson-fermion mass symmetry is broken. Perhaps above this threshold, there is even a subtle shift in the running of the electroweak coupling constants that causes the three Standard Model coupling constants to converge at a single point.
One could also imagine this approximate symmetry corresponding to one or more other approximate symmetries in physics, which perhaps together produce an exact symmetry.
For example, perhaps the magnitude of boson-fermion mass asymmetry in the Standard Model exactly corresponds to the aggregate amount of CP violation found in the Standard Model when measured appropriately, or to some appropriate function of the Weinberg angle that governs electroweak unification, or to the amount by which quark-lepton complementarity does not quite perfectly hold in the CKM and PMNS matrices.
Or, perhaps boson-fermion mass asymmetry is the key to the puzzle of matter-antimatter asymmetry in the universe.
Or, perhaps the "missing fermion rest mass" (about 25 GeV if all from a single particle) corresponds to the sum of the squares of the rest masses of fundamental particles other than those in the Standard Model which do not participate in electroweak interactions and hence aren't part of equations involving the Higgs field - perhaps something in the "dark matter" sector, or a see-saw sterile neutrino, or some significant measure of the dynamically created mass of gluons that are not "at rest", which would make a negative rather than a positive contribution to the boson side of the equation.
Or, perhaps this imbalance relates to the cosmological constant in some way (another constant that is almost, but not quite, zero).
Thursday, September 26, 2013
Wednesday, September 25, 2013
Open Thread: The Oops Files
One of the next posts that I plan on doing is a compilation of the most notable cases of scientific or anthropological results that were later discredited. Comments suggesting examples of this to include in the post are welcome.
I am primarily interested in good faith errors by legitimate professionals, although examples of notable hoaxes and notable cases of faked data are also welcome.
Monday, September 23, 2013
Göbekli Tepe marked dog star's appearance in Anatolian sky
The circular stone enclosures known as the temple at Göbekli Tepe in southeastern Turkey remain the oldest of their kind, dating back to around the 10th millennium B.C.
But Göbekli Tepe may also be the world's oldest science building.
Giulio Magli of the Polytechnic University of Milan hypothesizes it may have been built due to the “birth” of a “new” star; the brightest star and fourth brightest object of the sky, what we call Sirius (Greek for "glowing"). . . .
Precession at the latitude of Göbekli Tepe would have sent Sirius under the viewing horizon of those in ancient Turkey around 15,000 BC, where it remained unseen again until around 9,300 B.C. To those residents it was a new star appearing for the first time. . . .
"The extrapolated mean azimuths of the structures (taken as the mid-lines between the two central monoliths) are estimated as follows":
Structure D 172°
Structure C 165°
Structure B 159°
Those azimuths match the rising azimuths of Sirius:
Structure D 172° 9,100 BC
Structure C 165° 8,750 BC
Structure B 159° 8,300 BC
The case, described above, that this pre-Neolithic structure was built to track a newly appeared star is quite convincing.
Ireland's First Neolithic Revolution Failed
Around 3700 BCE, Ireland had a full-fledged sedentary farming village society. In the three hundred years that followed, apparently due to a worsening climate, this society collapsed. The people of Ireland returned to hunting and foraging as a method of food production, where the island remained for about 1,200 years. A return to sedentary farming didn't begin until around 2200 BCE. There may have been similar developments in Britain. Instances of a return to hunting and foraging after a brief period of farming, before an ultimate return to farming, driven by climate, are also known in Scandinavia thousands of years later.
Among other things this means that for much of Northern and Western Europe, an era of hunting and foraging was much more recent at the advent of the historical record than an estimate based upon the earliest Neolithic archaeological traces would suggest. It also suggests that the relationship between megalithic cultural remnants and food production method may be more nuanced than earlier analysis had suggested.
Monday, September 16, 2013
CMS Measures Fundamental Constants
The CMS experiment at the Large Hadron Collider has measured the strong force coupling constant at Z boson mass-energy to be 0.1151 +/- 0.0033. This compares to and is consistent with a four and a half times more precise previous measurement of 0.1184 +/- 0.0007.
It also measured the top quark pole mass at 176.7 +3.8/-3.4 GeV/c^2, which compares to and is consistent with a previous, more precise Tevatron measurement of 173.18 +/- 0.94 GeV/c^2. The new result, again, is several times less precise than the current world standard estimate.
While the results are not terribly precise, because they are obtained using a different methodology than most previous measurements of these quantities, they make the average measurements more robust and less subject to systematic errors that could be shared by all of the other experiments.
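The way a less precise but methodologically independent result enters a world average can be sketched with a simple inverse-variance weighted combination (a real world average would account for correlated systematics; this is just illustrative):

```python
# Inverse-variance weighted average of independent measurements.
def weighted_average(measurements):
    """measurements: list of (value, uncertainty) tuples."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return value, sigma

# Strong coupling at the Z mass: the prior world value and the new CMS result.
alpha_s, err = weighted_average([(0.1184, 0.0007), (0.1151, 0.0033)])
print(f"alpha_s(M_Z) = {alpha_s:.4f} +/- {err:.4f}")
```

Because the CMS result carries less than a twentieth of the weight of the more precise value, the combined central value barely moves, but the combined uncertainty still shrinks slightly, which is exactly the "more robust average" point made above.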
CMS has also made the most precise measurement ever of the relative momentum of up quarks and down quarks within the proton, something that could ultimately be used to more accurately estimate their masses, two of the least accurately known constants in the Standard Model. These measurements now have more precision than the theoretically calculated prediction.
In January of this year, I summarized how precisely the various Standard Model constants have been measured. In March of this year, I summarized how global electroweak precision fits fine tune some of these measurements by trying to reconcile individual measurements with their known relationships to each other.
Tuesday, September 10, 2013
Absolute Dates In Ancient Egyptian Prehistory Refined
A new paper has made progress in firmly establishing absolute chronological dates for various phases of ancient Egyptian prehistory.
More Analysis of Neanderthal Introgression
"The timing and history of Neandertal gene flow into modern humans." S. Sankararaman et al.
Previous analyses of modern human variation in conjunction with the Neandertal genome have revealed that Neandertals contributed 1-4% of the genes of non-Africans with the time of last gene flow dated to 37,000-86,000 years before present. Nevertheless, many aspects of the joint demographic history of modern humans and Neandertals are unclear. We present multiple analyses that reveal details of the early history of modern humans since their dispersal out of Africa.
1. We analyze the difference between two allele frequency spectra in non-Africans: the spectrum conditioned on Neandertals carrying a derived allele while Denisovans carry the ancestral allele and the spectrum conditioned on Denisovans carrying a derived allele while Neandertals carry the ancestral allele. This difference spectrum allows us to study the drift since Neandertal gene flow under a simple model of neutral evolution in a panmictic population even when other details of the history before gene flow are unknown. Applying this procedure to the genotypes called in the 1000 Genomes Project data, we estimate the drift since admixture in Europeans of about 0.065 and about 0.105 in East Asians. These estimates are quite close to those in the European and East Asian populations since they diverged, implying that the Neandertal gene flow occurred close to the time of split of the ancestral populations. [Ed. This is probably in the time frame of ca. 50,000 to 86,000 years ago within the range of estimated admixture times.]
2. Assuming only one Neandertal gene flow event in the common ancestry of Europeans and East Asians, we estimate the drift since gene flow in the common ancestral population. We show that an upper bound on this shared drift is 0.018. Because this is far less than the drift associated with the out-of-Africa bottleneck of all non-African populations, this shows that the Neandertal gene flow occurred after the out-of-Africa bottleneck. [Ed. Note that if the effective population size of Out of Africa modern humans fell before it recovered, the bottleneck could have happened well after the initial Out of Africa migration.]
3. We use the genetic drift shared between Europeans and East Asians, in conjunction with the observation of large regions deficient in Neandertal ancestry obtained from a map of Neandertal ancestry in Eurasians, to estimate the number of generations and effective population size in the period immediately after gene flow. These analyses suggest that only a few dozen Neandertals may have contributed to the majority of Neandertal ancestry in non-Africans today. [Ed. I'd love to get a look at this data.]
Via a Dienekes Anthropology Blog post on the 2013 ASHG conference abstracts (emphasis his; bracketed comments mine).
* Notably, this study does not discriminate between a single admixture event in a single genetic population followed promptly by a population split into Western and Eastern components, and two similar parallel admixture events, one with proto-West Eurasians and one with proto-East Eurasians, that takes place after the populations split.
Point one does, however, confirm inferences I have made previously from the small amount of overlap found in the particular Neanderthal genes found in West Eurasians and East Eurasians respectively, which you would not expect if Neanderthal admixed genes had reached fixation in a single unstructured population very long before the West Eurasian-East Eurasian split of "Out of Africa" modern humans into two separate populations with very little gene exchange.
* Point two is expected from the absence of Neanderthal genes in Africans except to the extent of gene flow from back migration from the Out of Africa population, and from the presence of Neanderthal genes in all non-Africans.
While understated in the abstract, the bigger revelation of point two is that Neanderthal gene flow probably took place quite a while after the Out of Africa event, rather than at the outset when one might naively have expected the Out of Africa population to be at its smallest prior to its expansion into "virgin territory." Other studies have suggested based on statistical analysis of modern population genetic data that the Out of Africa population contracted before it expanded.
Neanderthal admixture may have happened not long after the effective Out of Africa population size hit bottom, or perhaps more accurately, Neanderthal admixture prior to then while the effective population size was falling was likely to be lost to the gene pool through drift, while Neanderthal admixture during the immediately following population expansion was likely to be preserved in the expanding population.
* Point three, suggesting that there were only a few dozen instances of Neanderthal admixture that account for almost all Neanderthal genes in modern humans, is the most fascinating when it comes to building a narrative and understanding how this happened.
Assuming an effective population size around that time for Out of Africa modern humans of 3,000 to 20,000 and a time span of admixture that may have been somewhere in the range of 900 to 24,000 years (about 30 to 800 generations), we are talking about an entire modern human Out of Africa population the size of a small city in which there were one or two half-Neanderthal children born each generation or so (of course, more concentrated and more sparse scenarios are possible to some extent).
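The arithmetic behind that picture can be sketched in a few lines. The specific figures here (36 contributors, 30-year generations) are illustrative assumptions of mine, not results from the paper:

```python
# Back-of-envelope: surviving hybrid births per generation, given
# "a few dozen" Neanderthal contributors spread over the admixture span.
contributors = 36          # assumed value for "a few dozen"
years_per_generation = 30  # assumed generation length

for span_years in (900, 24000):
    generations = span_years / years_per_generation
    per_gen = contributors / generations
    print(f"{span_years:>6} yr span: ~{generations:.0f} generations, "
          f"~{per_gen:.2f} surviving hybrid births per generation")
```

At the short end of the span this works out to roughly one hybrid child per generation; at the long end, to one every couple of dozen generations.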
Also, based on reasoning from the absence of Neanderthal Y-DNA and mtDNA, I have previously concluded that Neanderthal hybrids who were born into modern human communities probably almost always had modern human mothers and Neanderthal fathers, and that live births were overwhelmingly of half-Neanderthal girls rather than boys due to issues of hybrid compatibility often expressed as Haldane's rule. An abstract of a recent paper by Reich in the same post corroborates the notion that hybrid incompatibility was an issue from direct genetic evidence:
We built a map of Neandertal ancestry in modern humans, using data from all non-Africans in the 1000 Genomes Project. We show that the average Neandertal ancestry on chromosome X of present-day non-Africans is about a fifth of the genome average. It is known that hybrid incompatibility loci concentrate on chromosome X. Thus, this observation is consistent with a model of hybrid incompatibility in which Neandertal variants that introgressed into modern humans were rapidly selected away due to epistatic interactions with the modern human genetic background.
Source Paper: "Insights into population history from a high coverage Neandertal genome."
D. Reich for the Neandertal Genome Consortium.
In my view, this evidence, taken as a whole, supports a scenario in which modern human-Neanderthal couplings were episodic events, perhaps one night stands, perhaps seasonal affairs, perhaps rapes, rather than sustained, marriage-like relationships in which a Neanderthal individual was integrated permanently into a modern human community, or vice versa. There may have been a small, undetectable number of exceptions that proved the rule, of course.
* I also continue to think that the Neanderthal introgression legacy in modern humans probably shows only half the picture. Hybrid offspring of modern human men and Neanderthal women were probably also born, but if hybrid children were matrilocal, they would have disappeared when their Neanderthal tribes, in general, went extinct.
Are Archaic Ancestry Percentages A Function Of Phenotypic Invisibility?
One point it would be interesting to work out would be how long it would take before Neanderthal introgression reached fixation in the community and hybrid Neanderthals ceased to be a distinct and recognizable sub-community within Out of Africa modern humans. In similar models that I have run (aimed at understanding the future of race relations), it takes surprisingly few generations (five to ten or so) for a population to become almost completely admixed in the absence of endogamy norms, but in a community where some people are strongly endogamous and a minority are not, unadmixed people very swiftly come to consist only of people who have a strong endogamy norm.
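The dilution dynamic in those models can be sketched with a toy recurrence (this is my own minimal model, not the simulations referred to above, and the 10% starting hybrid fraction is an arbitrary assumption): under random mating, an individual is unadmixed only if both parents are, so the unadmixed fraction u obeys u -> u^2 each generation.

```python
# Toy random-mating model: fraction of the population with zero
# hybrid ancestry, generation by generation.
u = 0.90  # assumed initial unadmixed fraction (10% hybrids)
for generation in range(1, 11):
    u = u * u  # both parents must be unadmixed
    print(f"generation {generation:2d}: {u:9.4%} unadmixed")
```

In this toy version the unadmixed fraction falls below 1% within six generations, broadly consistent with the five-to-ten-generation timescale from the models described above; adding a strongly endogamous subgroup is what preserves an unadmixed remnant.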
Ultimately, no unadmixed populations persisted in either West Eurasia or East Eurasia, presumably because after a few generations of admixture people with a small percentage of Neanderthal admixture became phenotypically indistinguishable from people with none. But, at first, it is hard to imagine that discernibly hybrid Neanderthal girls would have been on a completely level playing field in finding mates as girls without any Neanderthal admixture.
One would expect the Neanderthal ancestry proportion at which an individual becomes phenotypically indistinguishable from a non-admixed individual to be similar to the corresponding threshold in, for instance, a black-white mixed race individual, adjusted for the fact that Neanderthals and modern humans would have been more phenotypically distinct from each other than any two modern humans.
Experience tends to show that a black-white mixed race individual can "pass for white" in the vicinity of 1/16th to 1/8th African ancestry, about 6.25%-12.5%. The current Neanderthal admixture level in modern humans is just a little bit below that threshold (which would have been a bit lower for Neanderthal-modern human mixed species individuals due to the greater differences between the two to start with). Indeed, one might wonder if this "pass for modern human" threshold played an important part in determining the ultimate level of admixture that took place on the theory that only people who could pass for modern human would have been fully integrated into the modern human community and contribute genetically to the population in the long run.
Ötzi the Iceman and the estimated peak level of admixture in Denisovan introgressed populations both seem to be close to 8% archaic admixture, again suggesting the phenotypic indistinguishability fraction as a cultural threshold with important long term effects.
Tuesday, September 3, 2013
A Defense of Bohmian Pilot Wave Theory
Lubos has posted a guest post making a vigorous defense of the Bohmian Pilot Wave Theory interpretation of quantum mechanics in response to his recent post arguing that the theory was fundamentally flawed. The other main interpretations of quantum mechanics are the Copenhagen interpretation (which is the mainstream version) and the many worlds interpretation (aka inconsistent histories).
Lubos argued that the Bohmian interpretation does not produce the same results as other interpretations. The guest post takes this criticism to task. The main benefit of the Bohmian interpretation is that it dispenses with the need for a meta-theory about the "collapse of the wave function", handling it instead with an additional equation. Bohmian interpretations also try to give quantum mechanics a less "magical" PR spin via resort to hidden variables, in particular the "pilot wave".
But, Bohmian quantum mechanics, like the other kinds, involves subtle points that are easy to get wrong, which is what the defense argues that Lubos has done - in effect arguing against a flawed straw man version of the theory when better versions exist.
In another forum, the same author discusses why Lorentz invariance might plausibly break down at small scales without breaking down at long distance scales.
Planck 2013 CIB Anisotropy Data Released
The original release of the Planck 2013 CIB (cosmic infrared background radiation) data in March of this year excluded critical data on anisotropies that took longer to analyze. Some of that data was released today in an analysis that tightens the boundaries on models of star formation, which includes some quite model dependent assumptions about dark matter that are not present in the overall six parameter cosmological model. But, the much-awaited increase in the precision with which various cosmological inflation models can be discriminated from each other is not yet available.
These releases are a big deal because Planck's data are not just precise, but are about as precise as it is theoretically possible to measure a singular astronomical phenomenon. Once the full set of Planck data is released, it may be a thousand years before deep space probes make better data available.
A Tidal Wave Of New Ancient DNA and Population Genetic Data
Dienekes' Anthropology blog recounts abstracts of conference papers providing an immense number of new ancient DNA data sets from Europe and North Africa, some from ISABS 2013 and some from EAA 2013. None of the papers, taken individually, is really game changing. But, collectively, the vastly expanded ancient DNA data set is solidifying and refining conjectures made based upon early ancient DNA results into well established results.
Among the most notable results are:
* the extent to which North Africa has been predominantly one corner of the West Eurasian genetic macro-population (as opposed to sub-Saharan African) as far back as 23,000 years ago, with sub-Saharan African contributions attributable to relatively recent low level population exchanges across the Sahara;
* the increasingly clear reality that there have been several important demic migration waves in Europe between the advent of the Neolithic revolution and the Bronze Age collapse; and
* another solid archaeological example of megalithic architecture by people with forager economies. The case that large scale social organization and religion preceded rather than followed the Neolithic revolution is strengthening.
Physics Conjectures Galore!
The plenary opening and closing addresses at academic conferences are often hotbeds of cutting edge conjecture by the gray beards of the discipline. Chris Quigg's paper "DIS and Beyond" (August 30, 2013) does not disappoint on that score. He notes:
* Optimistic prospects for new discoveries within QCD.
* An interesting conjectural relationship between the mass of the top quark and the mass of the proton (hat tip to Mitchell's theory blog).
* The notion that a good place to look for a GUT or supersymmetry would be in the bending of the running of the coupling constants of the Standard Model that might be observable at the LHC (and which if not observed would undermine motivation for SUSY).
* A discussion of alternative means by which electroweak symmetry breaking can take place.
* The remarkable success of the Standard Model against other contenders, which is particularly evident in the absence of flavor changing neutral currents that are predicted in many beyond the Standard Model theories but are too rare to detect in the Standard Model. He closes his talk by stating:
I regard the persistent absence of flavor-changing neutral currents as a strong hint that a new symmetry or new dynamical principle may be implicated, or that new physics is more distant than hierarchy-problem considerations had indicated.
In sum, the short paper is definitely worth a read.
Friday, August 30, 2013
The Most Notable "New Physics" Proposals That Probably Aren't True
Some of the "new physics" theories that receive the most academic attention are, in my view as an informed layman looking at the field from a "forest" rather than a "trees" perspective, very implausible. Here are some of the most notable of them.
1. Cold dark matter and WIMPS.
The dark matter paradigm is alive and kicking, but the "cold dark matter" paradigm, which assumes that dark matter is made up of exotic particles with masses from 8 GeV to hundreds of GeV, particularly "WIMPs" (weakly interacting massive particles) in that mass range, increasingly appears to be at odds with data from astronomy.
2. SUSY, Supergravity, and string theory.
Most mainstream versions of string theory aka M-theory imply supersymmetry aka SUSY as a low energy effective field theory. But, decades of research later, the positive evidence for SUSY is still not there and the motivation for the theory is increasingly weak. Supergravity theories, which extend SUSY by incorporating gravity, are similarly problematic.
3. SM4.
Experiments at the LHC have pretty much ruled out sensible extensions of the Standard Model with four rather than three generations of fermions.
4. Technicolor.
This theory pretty much died when the Higgs boson was discovered. It had been a leading approach to explaining Standard Model data without a Higgs boson.
5. Anthropic Principle Theories and the Multiverse.
Cosmology theories that resort to explaining the current universe with the anthropic principle aren't really science in the ordinary sense.
Wednesday, August 28, 2013
MOND works and is predictive (and why we should care)
Modified Newtonian Dynamics (MOND) is a simple empirical relationship that has been predictive (most recently here) in explaining gravitational dynamics without dark matter at galactic scale, although it understates "dark matter" effects at galactic cluster scales. It predicts not just the velocity dispersion of objects in galaxies but subtle effects like the impact of proximity to a host galaxy on dwarf galaxy behavior. There are good reasons to doubt that its mechanism is correct and to suspect that a more conventional dark matter theory is the right mechanism for causing these effects.
But, the great predictive success of a very simple, one parameter MOND theory, over very large data sets and involving new kinds of data not used to generate the theory long in advance, implies that it must be possible to derive the MOND rule at the galaxy scale from any correct dark matter theory. Likewise, if a simple one parameter formula can explain all of that data, any dark matter theory must itself be very simple. The simple theory is obviously flawed in some respects (e.g., in its original version it is not relativistic). But, it can be generalized without losing its essential features (e.g., in the TeVeS formulation, which is fully relativistic).
It is also possible that MOND, "dim matter," and some kind of "cluster dark matter" that is abundant in galactic clusters but almost absent everywhere else could all be at work together.
Another attractive feature of MOND is that the particles that particle physics was supposed to provide as dark matter candidates have not been detected. But, if MOND is correct, we don't need them.
There are a variety of ways to work MOND effects into modifications of general relativity. Some flow from the observation that the MOND constant has a strong coincidence with the size of the universe, suggesting that MOND may arise from the suppression of gravity waves with wavelengths larger than the size of the universe itself.
UPDATE August 30, 2013:
The observation that MOND works and is predictive is more than the observation of a mere coincidence, or even, as I noted before, a strong indication that any dark matter mechanism, if there is one, is very likely very simple because the MOND theory itself is (although it is possible that the complex bits are simply small contributions to the overall result, in the same way that the general relativity corrections to Newtonian gravity, while very deep and complex, are usually negligible).
But, the fact that MOND works and is predictive implies something else about the correct theory that produces this phenomenological relationship. While correlation does not imply causation, correlation does imply that some cause, direction unknown and possibly indirect, produces that correlation. Robust and predictive correlations happen for a reason, even if that reason is not a direct causal relationship between the two data sets.
What is MOND?
The MOND hypothesis is that there is a functional relationship between the gravitational fields that would be generated by the luminous matter in a galaxy and the "dark matter" effects in that galaxy, which are observable only in the parts of the luminous matter's gravitational field that are weak, defined as having gravitational acceleration below the MOND acceleration constant a0. MOND argues that gravity gets weaker according to the conventional 1/r^2 law (where r is the distance between the two objects attracted to each other by gravity) in fields stronger than a0, and according to a "new physics" 1/r relationship in fields weaker than a0. An ad hoc interpolation function is used to estimate the force of gravity around the transitional field strength, and the data don't provide a meaningful way to distinguish between the alternative transition formulas.
Because GMm/r^2 << G'Mm/r in the weak field regime, where G' is the constant that produces the MOND gravity prediction at accelerations far below a0, the simplest interpolation is simply to assume that MOND gravity equals Newtonian gravity plus G'Mm/r gravity, where the second term is too small to discern experimentally in gravitational fields that are strong relative to a0 (approximately 1.2*10^-10 m s^-2), and the first term is too small to discern relative to the second in gravitational fields that are weak relative to a0.
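To make the interpolation concrete, here is a minimal numerical sketch in Python. It assumes the widely used "simple" interpolation function mu(x) = x/(1+x) (the discussion above is agnostic between transition formulas, so this is just one illustrative choice) and an illustrative galaxy mass of 10^11 solar masses:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10           # MOND acceleration constant, m s^-2

def newtonian_g(M, r):
    """Newtonian gravitational acceleration of a point mass M at radius r."""
    return G * M / r**2

def mond_g(M, r):
    """MOND acceleration from g * mu(g/a0) = gN, solved in closed form
    for the simple interpolation function mu(x) = x/(1+x)."""
    gN = newtonian_g(M, r)
    return 0.5 * gN * (1 + math.sqrt(1 + 4 * A0 / gN))

# Illustrative galaxy: ~10^11 solar masses of luminous matter.
M_gal = 1e11 * 1.989e30
for r_kpc in (10, 30, 100):
    r = r_kpc * 3.086e19                        # kpc to meters
    v = math.sqrt(mond_g(M_gal, r) * r) / 1e3   # circular speed, km/s
    print(f"r = {r_kpc:3d} kpc  v = {v:6.1f} km/s")
```

In the deep-MOND regime the predicted acceleration approaches sqrt(gN * a0), so the circular speed approaches (G*M*a0)^(1/4), independent of radius: the flat rotation curve MOND is known for.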
What does this imply?
One of the most profound implications of the fact that MOND works and is predictive is that there is a direct and reasonably precise functional relationship between the input into MOND's black box formula, the amount and distribution of luminous matter in a galaxy, and the output, which is the "dark matter" effects that are observed empirically in that galaxy.
This means that in any dark matter theory that accurately replicates reality, the distribution of dark matter particles in the dark matter halo of a galaxy must be functionally related to the amount and distribution of luminous matter in that galaxy.
There are several ways that this could be possible. To illustrate this point, here are three broad kinds of scenarios that could cause this to be true. I marginally favor the first, although I don't rule out the second. The third, I consider to be very unlikely, but include it for completeness.
First, it could be that galaxies differ from each other in a very simple, more or less one dimensional way as a result of the way that galaxies evolve. Galaxies of a particular mass may always have one of a small number of characteristic luminous matter distributions, and any factor that impacts how a galaxy of a particular size evolves impacts the distribution of dark matter in that galaxy in a way that corresponds to the distribution of its luminous matter. Thus, the MOND relationship between a galaxy's luminous matter distribution and its dark matter halo distribution arises because the evolution of both kinds of matter distributions is a process that is almost entirely gravity dominated and is shared by all of the matter, luminous and dark, in a given galaxy. In this process, Newtonian gravitational effects predominate over additional general relativistic effects, and this very simple gravitational law produces very simple and characteristic distributions of matter that can be summed up in the empirical MOND relationship that is observed. Deriving the MOND relationship from this process may take some pretty clever analytical modeling of the evolution of galaxies that exhibits a shrewd understanding of how this process can be drastically simplified without significant loss of accuracy.
In particular, there is a fair amount of evidence to suggest that inferred dark matter halo shapes are strongly related to the shape of a galaxy's inner bulge, but are fairly indifferent to the distribution of matter at the fringe of a galaxy. The shape of a galaxy's inner bulge, in turn, is largely a function of the nature of the galaxy's central black hole. If the distribution of the luminous matter in a galaxy and the distribution of the dark matter in a galaxy are both largely a function of the nature of its central black hole, then it would follow that luminous matter distributions and dark matter distributions in a galaxy should be functionally related to each other. Moreover, if a central black hole of a given mass is pretty much like every other central black hole of the same mass, then the distribution of both luminous matter and dark matter in galaxies should be a function of a single number - the mass of the galaxy's central black hole.
One version of this kind of scenario is one in which apparent "dark matter" effects are actually driven by ordinary "dim matter" emitted by the central black hole mostly in the "upward" and "downward" directions of the axis of rotation of that central black hole and the galaxy that arises around it. A 1/r relationship between force and distance is precisely the relationship one would expect in a simple Newtonian gravitational scenario in which there is a long, narrow, axial distribution of dim matter extending in both directions from the central black hole of a galaxy. If the axial distribution of ordinary "dim matter" is long enough and coherent enough that it generates its own 1/r gravitational field to a distance at least as great as the most distant star for which the galaxy's gravitational influence can be observed by an astronomer, then this would generate apparent dark matter effects that approximately follow the phenomenological MOND law.
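The claim that a long, narrow axial mass distribution produces a 1/r field can be checked directly: the Newtonian field of an infinite uniform line mass of linear density lam is 2*G*lam/r. A small numerical sketch confirms the scaling (the line length and density used here are purely illustrative, not fitted to any galaxy):

```python
G = 6.674e-11  # m^3 kg^-1 s^-2

def radial_g_from_line(lam, half_length, r, n=20000):
    """Radial (in-plane) Newtonian acceleration at cylindrical radius r
    from a uniform line mass of linear density lam (kg/m) running from
    -half_length to +half_length along the rotation axis, by midpoint
    integration over the line."""
    total = 0.0
    dz = 2 * half_length / n
    for i in range(n):
        z = -half_length + (i + 0.5) * dz
        d2 = r * r + z * z
        # radial component of the pull from the mass element lam*dz
        total += G * lam * dz * r / d2**1.5
    return total

lam = 1e20               # kg per meter of axis (illustrative)
L = 100 * 3.086e19       # half-length of 100 kpc (illustrative)
for r_kpc in (1, 5, 20):
    r = r_kpc * 3.086e19
    g = radial_g_from_line(lam, L, r)
    print(f"r = {r_kpc:2d} kpc  numeric g = {g:.3e}  2*G*lam/r = {2*G*lam/r:.3e}")
```

As long as r is small compared to the length of the line, the numerically integrated field matches the 2*G*lam/r infinite-line result, i.e. it falls off as 1/r rather than 1/r^2.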
The combined distribution of luminous and non-luminous matter in a galaxy in this scenario would look something like the image above, but with thinner and longer extensions up and down along its axis, containing matter and energy in rapid motion away from the galaxy.
It should be fairly elementary, moreover, for anyone with a year or two of calculus-based physics under their belt to use the MOND constant a0 to calculate the characteristic ratio of axial dim matter to galactic ordinary matter in such a scenario (I could do it this weekend if I could find an hour or two). With a few additional data points about the most distant stars that have been observed to experience MOND-like effects in the vicinity of a galaxy, one could also fairly easily establish a minimum length for this axial dim matter and the amount of mass per unit length of it that would be anticipated in a typical galaxy, although any bound on the width of this axial mass distribution would be fairly weak. Since there are at least two different processes observed by astronomers by which black holes are known to emit matter and energy in a more or less axial direction, since much of that matter is "dim," and since the speed of the emitted matter and energy and the minimum age of a galaxy can be determined to within reasonable bounds, the extent to which known processes could account for axial dim matter giving rise to MOND-like effects wouldn't be too hard to estimate, and the amount of axial "dim matter" that would need a source in some other, unknown form of black hole emissions could also be estimated fairly accurately. It wouldn't be surprising if the total amount of axial dim matter in the universe resolved much of the "missing baryon" problem - that the number of baryons predicted by widely held baryogenesis hypotheses is substantially larger than the number present in all observed luminous matter - without giving rise to any of the notable cosmological effects that have been attributed to this missing baryonic matter.
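Here is a back-of-the-envelope version of that calculation. Equating the deep-MOND field sqrt(G*M*a0)/r with the 2*G*lam/r field of an infinite axial line mass gives lam = sqrt(M*a0/G)/2. The galaxy mass and axial extent below are illustrative assumptions, not measured values:

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
A0 = 1.2e-10        # MOND acceleration constant, m s^-2
M_SUN = 1.989e30    # kg
KPC = 3.086e19      # m

# Match the deep-MOND acceleration sqrt(G*M*a0)/r to the infinite-line
# field 2*G*lam/r, giving the required linear mass density of the axis.
M_gal = 1e11 * M_SUN                  # assumed luminous mass of the galaxy
lam = math.sqrt(M_gal * A0 / G) / 2   # kg per meter of axis (~3e20 for these inputs)

length = 200 * KPC                    # assumed total axial extent (100 kpc each way)
M_axial = lam * length
print(f"lam     = {lam:.2e} kg/m")
print(f"M_axial = {M_axial / M_SUN:.2e} solar masses")
print(f"ratio   = {M_axial / M_gal:.1f} x the luminous mass")
```

For these inputs the implied axial mass comes out several times the luminous mass, which is at least the right order of magnitude for the inferred "dark matter" fractions in galaxies, though how plausible that much black-hole-emitted dim matter is remains an open question.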
Of course, given what my weekend looks like - violin supply store, Costco, bank deposits, working on my cases over the weekend, writing course packs, buying groceries, BBQing, getting someone to a tennis lesson, weeding, mulching, fertilizing, laundry, etc. - the efforts of anyone else interested in doing so and posting the results in the commons would be welcome. Scientific discoveries can't be patented, and I would love to know the answer and have no deep need to be the one who finds it.
This black-hole-emitted matter unaccounted for by known processes could be created by the extreme conditions that exist only in the large central black holes at the centers of galaxies (which would explain why we can't produce this kind of matter in particle accelerators), or it could simply be ordinary matter that does not coalesce into stars or other large objects when emitted from a black hole in this way, because it is emitted in a diffuse spray of fast moving particles whose speed and common direction prevent them from gaining the critical mass necessary to combine into compact objects that astronomers can observe.
Perhaps astronomers looking for this very particular and well defined kind of dim matter signature could find a way to measure it in some natural experiment arrangement of stellar objects somewhere among the billions and billions of stars in the universe that we can observe on all manner of wavelengths.
Any such process would, by definition, produce neither non-baryonic dark matter nor ordinary dim matter that ends up in the galaxy's disk of rotation. So, direct dark matter detection experiments conducted in the vicinity of Earth, which is in the plane of the Milky Way's disk, are doomed to fail if this hypothesis is correct, because in this model there is no dark or dim matter in that part of the galaxy.
This would also explain why estimated cosmological amounts of dark matter are on the same order of magnitude as estimated cosmological amounts of ordinary matter, another great unsolved question in physics.
In any case, if the missing matter, whether in the form of "novel" dark matter particles of an unknown type, or merely "dim matter" has a distribution that is driven by the same central black hole gravitational effects that drive the distribution of luminous matter in galaxies, the key to reconciling MOND theories and dark matter theories would be at hand.
It is not clear, however, that such a theory would adequately fill the role that dark matter plays in the highly predictive six parameter lambda-CDM model of cosmology, or would be consistent with the bottom up galaxy formation models that have been highly successful in 2 keV warm dark matter scenarios, which help address problems with the cold dark matter model like "cuspy halos" and the "missing satellites" problem. This hypothesis could create as many new problems as it solves for the dark matter paradigm.
The warm dark matter literature is surprisingly devoid of simple diagrams like the one above, which illustrates the inferred shape of the Milky Way's dark matter halo in one recent study. But, this illustration is closer to the conventionally expected warm dark matter halo shape. The dark matter paradigm favors blob-like rather than cylindrical structures for dark matter halos, because it is hard to make nearly collisionless particles with significant kinetic energy that interact primarily through gravity form more compact structures. The differential effects of the central mass of the galaxy prevent dark matter halos from behaving like an ideal gas, but only modestly.
The non-spherical shape of the halo, however, is critical to generating the apparent 1/r gravitational field strengths that are observed.
(It is also worth noting that the roughly 2 keV sterile neutrino-like warm dark matter particles that seem to be the best fit to the empirical data within the dark matter paradigm are virtually undetectable in existing direct dark matter detection experiments which are designed to observe weakly interacting dark matter particles with masses in the GeV or heavier mass range.)
A result like this that involves only ordinary "dim matter" however, would be a huge blow to physicists longing for "new physics." It would explain the biggest unsolved problem in physics when it comes to the fundamental laws of physics and their observable effects using only a deeper understanding of processes that occur entirely according to already well understood "old physics." The biggest empirical arrow pointing towards undiscovered types of stable fundamental particles would turn out to have been a mere mirage.
Without the "crutch" of some sort of theory to explain dynamically the evolution of dark or dim matter halo shapes in galaxies parallel to luminous matter distributions in those galaxies, no dark matter theory can be considered complete.
Second, the MOND law could, for whatever reason, actually constitute the true law of gravity when suitably formulated in a general relativistic form (something that has actually been done successfully in several different proof of concept efforts). As noted above, this would call for some kind of quantum gravity theory, or perhaps something related to the impact of a bounded universe of finite size on the way that gravity waves behave.
This would be exciting news for quantum gravity researchers and bad news for particle physics theorists. A 1/r relationship would quite plausibly derive from some process that reduces the effective dimensionality of space-time from three spatial dimensions to two. Perhaps, for example, it arises from quantum entanglement of distant points between which a particle has traveled, or because gravity models have underestimated the gravitational effect of the angular momentum of a spinning galaxy, due either to some subtle flaw in the normal formulation of general relativity or to the way that formulation is inaccurately applied in models of complex, massively many-bodied systems like galaxies.
Of course, this would also be bad news for direct dark matter detection experiments, because in this scenario there is no dark matter to detect anywhere except possibly in galactic clusters - all of which are a very, very long way from planet Earth, making direct detection of cluster dark matter virtually impossible. Making sense of anomalous gravitational effects that might be due to dark or dim matter in galactic clusters is hard, because the structure and non-luminous ordinary matter content of galactic clusters is far less well understood, and far more complex, than that of ordinary individual spiral, elliptical, and dwarf galaxies.
This mechanism for a MOND theory also directly and transparently explains why it doesn't work as well in galactic clusters, whether or not "cluster dark matter" exists. Since the MOND relationship, in any variation of this hypothesis, flows from parallel evolution processes that are more or less the same for any one galactic central black hole of a given size, it makes sense that these relationships might not hold for a system with many galactic central black holes in close proximity to each other and at different typical stages of the galaxy formation process. Galactic clusters may be profoundly more complex, to such an extent that no simple model like MOND can explain them.
Third, there could be a non-gravitational interaction between luminous matter and dark matter that causes dark matter halos to be distributed in a particular way relative to luminous matter. For example, the flux of photons out of a galaxy is roughly proportional to the Newtonian component of the gravitational field of the luminous matter in that galaxy at any given distance from the galaxy. So, if dark matter had very weak electromagnetic interactions with the outgoing flux of photons, this could produce a dark matter distribution that tracks the distribution of luminous matter in a system, while still behaving as collisionless, non-self-interacting particles. Of course, since photon flux, like graviton flux, has a 1/r^2 relationship to distance from the luminous matter source, this doesn't easily explain a 1/r MOND effect. Also, the photon flux generated by a star is not all that tightly related to the mass of the star generating it, so far as I know (more accurately, I have no idea one way or the other how tight the relationship between photon flux and stellar mass is). Perhaps, at long distances, the geometry of a galaxy impacts this flux somehow in a manner different than for graviton flux.
This kind of explanation would be a field day for particle physicists, because no known fundamental particle has this kind of interaction. I don't see it as a likely option, but one should consider all possibilities for unexplained phenomena for the sake of completeness.
Dark matter of this variety ought to be highly amenable to detection by a model driven direct dark matter detection experiment, although existing direct dark matter detection experiments, which involve a very different paradigm and model, might be useless at detecting it.
But, the great predictive success of a very simple, one parameter MOND theory over very large data sets and involving new kinds of data not used to generate the theory long in advance, implies that it must be possible to derive the MOND rule at the proper galaxy level scale from any correct dark matter theory. Likewise, if a simple one parameter formula can explain all of that data, any dark matter theory must itself be very simple. The simply theory is obviously flawed in some respects (e.g. in the original version it is not relativistic). But, it can be generalized without losing its essential features (e.g. in the TeVeS formulation that is fully relativistic).
It is also possible that MOND, "dim matter" and some kind of "cluster dark matter" that is abundant in galactic clusters, but almost absent everywhere else could be at work.
Another attractive feature of MOND is that the dark matter particles that particle physics was supposed to provide as dark matter candidates have not been detected. But, if MOND is correct, we don't need them.
There are a variety of ways to work MOND effects into modifications of general relativity. Some flow from the observation that the MOND constant has a strong coincidence with the size of the universe, suggesting that MOND may arise from the suppression of gravity waves with amplitudes larger than the size of the universe itself.
UPDATE August 30, 2013:
The observation that MOND works and predictive is more than an observation of a mere coincidence or even as I noted before an strong indication that any dark matter mechanism, if there is a dark matter mechanism is very likely very simple because the MOND theory itself is (although it is possible that the complex bits are simply small contributions to the overall result in the same way that the general relativity corrections to Newtonian gravity, while very deep and complex are usually negligible).
But, the fact that MOND works and predictive implies something else about the correct theory that produces this phenomenological relationship. While correlation does not imply causation, correlation does imply that some cause, direction unknown and possibly indirect, causes that correlation. Robust and predictive correlations happen for a reason even if that reason is not a direct causal relationship between the two data sets.
What is MOND?
The MOND hypothesis is that there is a functional relationship between the gravitational fields that would be generated by luminous matter in a galaxy and the "dark matter" effects in that galaxy that are observable only in the parts of the luminous matter gravitational field that are weak which is defined as having gravitational acceleration below the MOND acceleration constant a0. MOND argues that gravity gets weaker according to a conventional 1/r2 law (where r is the distance between the two objects which are attracted to each other by gravity) in fields stronger than a0 and according to a "new physics" 1/r relationship in fields weaker than a0. An ad hoc interpolation function is used to estimate the force of gravity around the transitional field strength and data don't allow meaningful ways to distinguish between the alternative transition formulas.
Because GMm/r2 << G'Mm/r where G' is the constant that produces the MOND gravity prediction in the limit as r >> r at a0, the simplest interpolation is simply to assume that MOND gravity equals Newtonian gravity + G'Mm/r gravity where the second term is too small to discern experimentally in gravitational fields that are strong relative relative to a0= approximately 1.2*10^-10 ms^-2, and the first term is too small to discern relative to the second experimentally in gravitational fields that are weak relative to a0.
What does this imply?
One of the most profound implications of the fact that MOND works and is predictive is that there is a direct and reasonable precise functional relationship between the input into MOND's black box formula, the amount and distribution of luminous matter in a galaxy, and the output, which is the "dark matter" effects that are observed empirically in a galaxy.
This means that in any dark matter theory that accurately replicates reality, the distribution of dark matter particles in the dark matter halo of a galaxy must be functional related to the amount and distribution of luminous matter in that galaxy.
There are several ways that this could be possible. To illustrate this point, here are three broad kinds of scenarios that could cause this to be true. I marginally favor the first, although I don't rule out the second. The third, I consider to be very unlikely, but include it for completeness.
First, it could be that galaxies differ from each other in a very simple, more or less one dimensional way as a result of the way that galaxies evolve. Galaxies of a particular mass may always have a particular or one of a couple of particular luminous matter distributions and any factor that impacts how a galaxy of a particular size evolves impacts the distribution of dark matter in that galaxy in a way that corresponds to the distribution of luminous matter in that galaxy. Thus, the MOND relationship between the galaxy's luminous matter distribution and its dark matter halo distribution arises because the evolution of both kinds of matter distributions is a process that in each case is almost entirely gravity dominated and is shared by all of the matter luminous and dark in a given galaxy. In this process, Newtonian gravitational effects predominate over additional general relativistic gravitational effects, and this very simple gravitational law produces very simple and characteristic distributions of matter than can be summed up in the empirical MOND relationship that is observed. Deriving the MOND relationship from this process may take some pretty clever analytical modeling of the evolution of galaxies that exhibits shrewd understanding of how this process can be drastically simplified without loss of significant accuracy.
In particular, there is a fair amount of evidence to suggest that inferred dark matter halo shapes are strongly related to the shape of a galaxy's inner bulge, but are fairly indifferent to the distribution of matter at the fringe of a galaxy. The shape of a galaxy's inner bulge, in turn is largely a function of the nature of a galaxy's central black hole. If the distribution of the luminous matter in a galaxy and the distribution of the dark matter in a galaxy are largely a function of the nature of the central black hole of the galaxy, then it would follow that luminous matter distributions in a galaxy and dark matter distributions in a galaxy should be functionally related to each other. Moreover, is a central black hole in a galaxy of a given mass is pretty much like every other central black hole in a galaxy that has the same mass, then the distribution of both luminous matter and dark matter in galaxies should both be a function of a single number - the mass of the central black hole of the galaxy.
One version of this kind of scenario is one in which apparent "dark matter" effects are actually driven by ordinary "dim matter" emitted by the central black hole mostly in the "upward" and "downward" directions of the axis of rotation of that central black hole and the galaxy that arises around it. A 1/r relationship between force and distance is precisely the relationship one would expect in a simple Newtonian gravitational scenario in which there is a long, narrow, axial distribution of dim matter in both directions from the central black hole of a galaxy. If the axial distribution of ordinary "dim matter" is long enough and coherent enough that is generates its own 1/r gravitational field to a distance at least as great as the most distant star for which the galaxy's gravitational influence can be observed by an astronomer, then this would generate apparent dark matter effects that approximately follow the phenomenological MOND law.
The combined distribution of luminous and non-luminous matter in a galaxy in the scenario discussed above would look something like the image above, but with thinner and longer extensions up and down out of its axis containing matter and energy that is in rapid motion away from the galaxy.
It should be fairly elementary, moreover, for anyone with a year or two of calculus based physics under their belt to use the MOND constant a0 to calculate the characteristic ratio of axial dim matter to galactic ordinary matter in such a scenario (I could do it this weekend if I could find the time in an hour or two). With a few additional data points about the most distant stars that have been observed to experience MOND-like effects in the vicinity of a galaxy one could also fairly easily establish a minimum length of this axial dim matter and the amount of mass per linear distance of axial dim matter that would be anticipated in a typical galaxy, although any bound on the width of this axial mass distribution would be fairly weak. Since there are at least two different processes observed by astronomers by which black holes are known to emit matter and energy in a more or less axial direction and much of that matter is "dim" and the speed of the emitted matter and emitted energy and the minimum age of a galaxy can be determined to within reasonable bounds, the extent to which known processes could account for axial dim matter giving rise to MOND-like effects wouldn't be too hard to estimate, and the amount of axial "dim matter" that would necessarily have a source in some other unknown form of black hole emissions could also be estimated fairly accurately. It wouldn't be surprising if the sum of the total amount of axial dim matter in the universe resolved much of the "missing baryon" problem - that the number of baryons in the universe according to widely held baryongenesis hypotheses is smaller than the number that are present in all observed luminous matter by a substantial amount without giving rise to any notable cosmological effects that have been attributed to this missing baryonic matter.
Of course, given what my weekend looks like - violin supply store, CostCo, bank deposits, working on my cases the weekend, writing course packs, buying groceries, BBQing, getting someone to a tennis lesson, weeding, mulching, fertilizing, laundry, etc., the efforts of anyone else interested in doing so and posting the results in the commons would be welcome. Scientific discoveries can't be patented and I would love to know the answer and have no deep need to be the one who finds it.
This black hole emitted matter unaccounted for by known processes could be created by the extreme conditions that exist only the large central black holes in the center of galaxies (which would explain why we can't produce this kind of matter in particle accelerators), or it could simply be ordinary matter that does coalesce into stars or other large objects when emitted from a black hole in this way because it is emitted in a diffuse spray of fast moving particles whose speed and common direction prevent them from gaining the critical mass necessary to combine into compact objects that astronomers can observe.
Perhaps astronomers looking for this very particular and well defined kind of dim matter signature could find a way to measure it in some natural experiment arrangement of stellar objects somewhere among the billions and billions of stars in the universe that we can observe on all manner of wavelengths.
Any such process would, by definition, produce neither non-baryonic dark matter, nor ordinary dim matter that ends up in the galaxy's disk of rotation. So, direct dark matter detection experiments conducted in the vicinity of Earth which is in the plane of the Milky Way's disk are doomed to fail if this hypothesis is correct, because in this model, there is no dark or dim matter in that part of the galaxy.
This would also explain why estimated cosmological amounts of dark matter are on the same order of magnitude as estimated cosmological amounts of ordinary matter, another great unsolved question in physics.
In any case, if the missing matter, whether in the form of "novel" dark matter particles of an unknown type, or merely "dim matter" has a distribution that is driven by the same central black hole gravitational effects that drive the distribution of luminous matter in galaxies, the key to reconciling MOND theories and dark matter theories would be at hand.
It is not clear, however, that such a theory would adequately fill the role that dark matter plays in the highly predictive six parameter lamdaCDM model of cosmology, or would be consistent with bottom up galaxy formation models that have been highly successful in 2 keV warm dark matter scenarios that help address problems with the cold dark matter model like "cuspy halos" and the "missing satellites" problem. This hypothesis could create as many new problems as it solves for the dark matter paradigm.
The warm dark matter literature is surprisingly devoid of simple diagrams like the one above illustrating the inferred shape of the Milky Way's dark matter halo in one recent study. But, this illustration is closer to the conventionally expected warm dark matter halo shape. The dark matter paradigm favors blob-like rather than cylindrical structures for dark matter halos, because it is hard to make nearly collisionless particles with significant kinetic energy that interact primarily only through gravity form more compact structures. The differential effects of the central mass of the galaxy prevent the dark matter halos from behaving like an ideal gas, but only modestly.
The non-spherical shape of the halo, however, is critical to generating the apparent 1/r gravitational field strengths that are observed.
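The geometric claim here is a textbook result worth making concrete: by Gauss's law, an idealized infinitely long line (or cylinder) of mass produces a gravitational field that falls off as 1/r rather than 1/r^2, and a 1/r field in turn implies flat rotation curves (v^2 = g*r = constant). A minimal numeric sketch of this standard result (the units and densities below are arbitrary assumptions of mine, not fitted to any galaxy):

```python
# Toy calculation (mine, not from the post): the field of a long line of
# mass falls off as 1/r, which is why elongated halo shapes can mimic
# the MOND-like field profile and produce flat rotation curves.
G = 1.0    # gravitational constant, arbitrary units (assumption)
lam = 1.0  # linear mass density, arbitrary units (assumption)

def radial_field(r, half_length=1000.0, n=20001):
    """Radial gravitational field at distance r from a long line of
    point masses approximating an infinite cylinder's axis."""
    dz = 2.0 * half_length / (n - 1)
    total = 0.0
    for i in range(n):
        z = -half_length + i * dz
        # radial component of the pull from the mass element at height z
        total += G * (lam * dz) * r / (r * r + z * z) ** 1.5
    return total

# g(r) * r should be roughly constant (the analytic value is 2*G*lam = 2),
# i.e. the field goes as 1/r and the implied rotation speed is flat.
for r in (1.0, 2.0, 4.0, 8.0):
    print(r, radial_field(r) * r)
```

For an infinite line the analytic answer is g(r) = 2Gλ/r; the finite sum reproduces g·r ≈ 2 at all radii, whereas a compact (spherical) mass would instead give g·r falling off as 1/r.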
(It is also worth noting that the roughly 2 keV sterile neutrino-like warm dark matter particles that seem to be the best fit to the empirical data within the dark matter paradigm are virtually undetectable in existing direct dark matter detection experiments which are designed to observe weakly interacting dark matter particles with masses in the GeV or heavier mass range.)
A result like this that involves only ordinary "dim matter", however, would be a huge blow to physicists longing for "new physics." It would explain the biggest unsolved problem regarding the fundamental laws of physics and their observable effects using only a deeper understanding of processes that occur entirely according to already well understood "old physics." The biggest empirical arrow pointing towards undiscovered types of stable fundamental particles would turn out to have been a mere mirage.
Without the "crutch" of some sort of theory to explain dynamically the evolution of dark or dim matter halo shapes in galaxies parallel to luminous matter distributions in those galaxies, no dark matter theory can be considered complete.
Second, the MOND law could, for whatever reason, actually constitute the true law of gravity when suitably formulated in a general relativistic form (something that has actually been done successfully in several different varieties of proof of concept efforts). As noted above, this would call for some kind of quantum gravity theory or perhaps something related to the impact of a bounded universe of finite size on the way that gravity waves behave.
This would be exciting news for quantum gravity researchers and bad news for particle physics theorists. A 1/r relationship would quite plausibly derive from some process that reduces the effective dimensionality of space-time from three spatial dimensions to two: perhaps, for example, quantum entanglement of distant points between which a particle has traveled, or a subtle flaw in the normal formulation of general relativity (or in the way that formulation is applied to complex many-body systems like galaxies) that has caused gravity models to underestimate the gravitational effect of the angular momentum of a spinning galaxy.
Of course, in particular, this would also be bad news for direct dark matter detection experiments, because in this scenario there is no dark matter to detect anywhere except possibly in galactic clusters - all of which are a very, very long way from planet Earth, making direct detection of cluster dark matter virtually impossible. Making sense of anomalous gravitational effects that might be due to dark or dim matter in galactic clusters is hard, because the structure and non-luminous ordinary matter content of galactic clusters is far less well understood, and far more complex, than the structure and non-luminous ordinary matter content of individual spiral, elliptical and dwarf galaxies.
This mechanism for a MOND theory also directly and transparently explains why it doesn't work as well in galactic clusters, whether or not "cluster dark matter" exists. Since the MOND relationship, in any variation of this hypothesis, flows from parallel evolution processes that are more or less the same for any single galactic central black hole of a given size, it makes sense that these relationships might not hold for a system with many galactic central black holes in close proximity to each other and at different typical ages in the galaxy formation process. Galactic clusters may be so profoundly complex that no simple model like MOND can explain them.
Third, there could be a non-gravitational interaction between luminous matter and dark matter that causes dark matter halos to be distributed in a particular way relative to luminous matter. For example, the flux of photons out of a galaxy is roughly proportional to the Newtonian component of the gravitational field of the luminous matter in that galaxy at any given distance from the galaxy. So, if dark matter had very weak electromagnetic interactions with the outgoing flux of photons, this could produce a dark matter distribution that tracks the distribution of luminous matter in a system, while the dark matter still retains its character as collisionless, non-self-interacting particles. Of course, since photon flux, like graviton flux, has a 1/r^2 relationship to distance from the luminous matter source, this doesn't easily explain a 1/r MOND effect. Also, the photon flux generated by a star is not all that tightly related to the mass of the star generating the flux, so far as I know (more accurately, I have no idea one way or the other how tight the relationship is between photon flux and stellar mass). Perhaps, at long distances, the geometry of a galaxy impacts this flux somehow in a manner different than for graviton flux.
This kind of explanation would be a field day for particle physicists, because no known fundamental particle has this kind of interaction. I don't see it as a likely option, but one should consider all possibilities for unexplained phenomena for the sake of completeness.
Dark matter of this variety ought to be highly amenable to detection by a model driven direct dark matter detection experiment, although existing direct dark matter detection experiments, which involve a very different paradigm and model, might be useless at detecting it.
Homo Erectus Out Of Africa Wave Happened All At Once
Old Homo Erectus Dates In China Confirmed
Newly refined age estimates for the oldest hominin sites in China establish that Homo Erectus spread from its previous core range within Africa at about the same time to Java, Indonesia (1.9 million years ago), to Northern China (1.7 million years ago), to the Southern Caucasus mountains, and to a wider geographic range within Africa, all around 1.7-1.9 million years ago. The evidence for the oldest H. Erectus anywhere in Africa is a bit older.
Age estimates at this scale are accurate to about +/- 100,000 years, and the thinness of the data in this time frame also suggests a certain amount of statistical variation, because the sites discovered so far are effectively a random sample of all sites of this type, discovered and undiscovered.
These factors combined, informed by data points from the modern human expansion and the expansions of other species on how long dispersal took, make the precise differences between the ages of the various early non-core H. Erectus sites small enough to be insignificant, and suggest a single wave of H. Erectus expansion both Out of Africa and within Africa.
Implications
This new data largely refutes the alternative hypothesis that H. Erectus expanded Out of Africa in a staged migration that reached some parts of Eurasia much later than others.
Of course, expansion "all at once" is a relative thing.
It could mean a true single wave of expansion (and honestly, that is what I believe is the most likely scenario), but several successive waves of expansion 10,000-20,000 years apart, of the kind that may have happened in the modern human "Out of Africa" expansion, would be indistinguishable from a single wave in the Homo Erectus case.
The new data simply shows that expansion from an original source territory to the entire ultimate Homo Erectus range probably took place over a period shorter than 200,000 years, contrary to earlier theories, based upon incomplete or less accurate data from China, which had suggested a pause of 400,000 years or more before Homo Erectus spread from SE Asia and the Southern Caucasus mountains to China.
Open Questions
These days, however, the really hot issues in the prehistory of H. Erectus relate to the tail end of the story, rather than the beginning.
When did H. Erectus go extinct and why? Was H. Erectus the source of the Denisovan genome or the H. floresiensis species, and if not, as the Denisovan genome seems to suggest, what hominin species were each of these associated with, how did they end up where they did, and when? In particular: Does the Denisovan species have a relationship to early archaic hominin evidence in China? Did the Denisovan species replace or coexist with H. Erectus, and if so, when and where? (The distribution of lithic tools in Asia suggests that there might have been a limited replacement or co-existence of H. Erectus and Denisovans in Zomia, Malaysia and Indonesia, a path that connects most of the dots between the Denisovan cave in Southern Siberia and the island of Flores, but the more modern lithic tools are recent enough to be attributable to Homo Sapiens as well). During which time frames, if any, did H. Erectus co-exist with modern humans? Why isn't there a discernible trace of H. Erectus in most modern humans in Asia? How much did H. Erectus evolve after the species left Africa? Did H. Erectus evolve into other hominin species outside of Africa, and if so, which ones? What impact, if any, did the Toba explosion have on H. Erectus?
The picture is quite unclear for the events of the time frame from sometime after 150,000 years ago (i.e. after the period where there are no potentially more modern hominin remains) to 50,000 years ago (i.e. the period when modern humans were the undisputed dominant hominin species of Asia, barring some currently unknown relict populations of archaic hominins and H. floresiensis). Any sub-periods for Asian hominin populations from ca. 1,900,000 years ago to 150,000 years ago are also quite fuzzy. This was a period more than 1.5 million years long that was quite static relative to Neanderthals or modern humans over similarly long time frames, and even relative to other non-African H. Erectus populations.
Background Context
The Basic Story.
The H. Erectus who left Africa about 1.9 million years ago have not been important ancestral genetic contributors to modern humans. It is likely, however, that Neanderthals and modern humans are among the hominin species derived from H. Erectus.
All modern humans are predominantly descended, genetically, from common modern human African ancestors. Those modern humans evolved ca. 250,000 years ago, or so.
All non-African modern humans are predominantly descended from one or more groups of modern humans who left Africa much later. There is academic debate over when the first sustained modern human presence Out of Africa arose, with the earliest estimates being around 130,000 years ago and the youngest being around 50,000-60,000 years ago, with earlier Levantine modern human remains attributed to an "Out of Africa wave that failed" by supporters of the younger dates. As in the case of H. Erectus, there is evidence that the "Out of Africa" migration by modern humans may have coincided with a range expansion of modern humans within Africa (roughly speaking, around the time that Paleo-African populations like the Khoisan and Pygmy populations broke off from other African populations).
New archaeological data and increasingly refined understandings of population genetic data increasingly favors Out of Africa dates that are closer to the older date in this range, although the appearance of a younger age in some respects requires a fairly complex narrative of human expansion beyond Africa to fit the data precisely. For example, a population bottleneck for the Out of Africa population, or a second wave of expansion after a first less numerous one, could make non-Africans look genetically younger, on average, than the time when their earliest ancestors actually left Africa.
The Tweaks To The Story Associated With Archaic Admixture
* Neanderthal Admixture.
All modern humans who are descended from "Out of Africa" modern humans have significant traces of genetic admixture from Neanderthals (estimates have ranged from 2% to 10%, with some individual and group variation). Apart from this Neanderthal admixture, which occurred sometime after leaving Africa and before reaching Southeast Asia, modern humans are not directly descended from Neanderthals, who were the dominant hominin species in Europe before modern humans arrived there over a period from ca. 50,000 years ago to ca. 42,000 years ago. Neanderthals were replaced in Europe over thousands of years of co-existence with modern humans (less in any one place) and were extinct or moribund as a species by 29,000 years ago. The details of the timing, scale, and structure of this admixture process are the subject of ongoing research (e.g. described here).
* Denisovan Admixture.
There are also genetic signs of "Denisovan" admixture in aboriginal Australians, and in indigenous Papuans, Melanesians and Polynesians, in addition to their Neanderthal admixture which they share with other non-African modern humans. These populations have the highest known percentage of archaic admixture in their genomes of any modern human populations, but are still overwhelmingly genetic descendants of early modern human "Out of Africa" migrants.
No single archaeologically classified species of archaic hominin has been definitively identified as the source of the non-Neanderthal archaic admixture seen in the small number of modern humans with genetic traces of Denisovan admixture. The Denisovan genome is based on bones found in Siberia, at a place far removed from where existing modern humans showing signs of this archaic admixture are found, that are too fragmentary to permit an archaic hominin species identification (although the Denisovans were not modern human and not pure-blooded Neanderthals). It is also likely that there are traces within the Denisovan genome of archaic admixture between the Denisovans and a still earlier archaic hominin species.
* Archaic admixture in Africa.
Genetic signs of other kinds of archaic hominin admixture with modern humans are present in a couple of groups of Africans (one a pygmy group, and another found more widely adjacent to pygmy or former pygmy territories in tropical West Africa). These populations still have less archaic admixture than almost all non-Africans. No particular archaeologically classified species of archaic hominin has been definitively identified with the African non-Denisovan, non-Neanderthal admixture seen in the small number of modern humans that show genetic traces of it. The genetic evidence points to relatively recent admixture with relict populations of archaic hominin species that had previously not been known to survive that late in Africa.
The traces of admixture with archaic hominins in Africa are at much lower levels than for Neanderthal and Denisovan admixture in the relevant populations, although the African case presents the strongest evidence yet for a single Y-DNA haplogroup that introgressed from an archaic hominin into modern humans.
* Mostly, archaic admixture DNA is selectively neutral.
Immune system related HLA complex genes appear to have been the main part of the archaic admixture package outside Africa that conferred selective benefit and has left a non-trace level mark in the region's genomes. The vast majority of archaic admixture in modern human genomes shows statistical frequencies and patterns consistent with the selective neutrality of those genes. Any archaic admixture sourced genes that were selective fitness reducing would have vanished from the modern human genome long before the time from which our oldest available ancient DNA samples were left behind.
Regional Evolution Compared
Notably, contrary to the original "regional evolution" hypothesis, the main phenotype distinctions (i.e. visible differences) between regional populations of modern humans described as "race" do not have archaic admixture with different archaic hominins as an important source. Serial founder effects and selective adaptation effects not related to archaic admixture are the source for most of these differences.
In short, while some of the processes associated with a regional evolution hypothesis did take place, they did not have the impact, or involve the kind of narrative, that the original proponents of the hypothesis suggested.
Thursday, August 22, 2013
More Neutrino Data From Daya Bay
Daya Bay's first results were announced in March 2012 and established the unexpectedly large value of the mixing angle theta one-three (θ13), the last of three long-sought neutrino mixing angles. The new results from Daya Bay put the precise number for that mixing angle at sin²(2θ13) = 0.090 ± 0.009. From this press release:
From the KamLAND experiment in Japan, they already know that the difference, or "split," between two of the three mass states is small. They believe, based on the MINOS experiment at Fermilab, that the third state is at least five times smaller or five times larger. Daya Bay scientists have now measured the magnitude of that mass splitting, |Δm²ee|, to be (2.59 ± 0.20) × 10⁻³ eV². The result establishes that the electron neutrino has all three mass states and is consistent with that from muon neutrinos measured by MINOS. Precision measurement of the energy dependence should further the goal of establishing a "hierarchy," or ranking, of the three mass states for each neutrino flavor.
MINOS, and the Super-K and T2K experiments in Japan, have previously determined the complementary effective mass splitting (Δm²μμ) using muon neutrinos. Precise measurement of these two effective mass splittings would allow calculations of the two mass-squared differences (Δm²32 and Δm²31) among the three mass states. KamLAND and solar neutrino experiments have previously measured the mass-squared difference Δm²21 by observing the disappearance of electron antineutrinos from reactors about 100 miles from the detector and the disappearance of neutrinos from the sun.
Neither of the two numbers is far from previous estimates. As of March 2012, the estimates were sin²(2θ13) = 0.092 ± 0.017 and |Δm²31| ≈ |Δm²32| ≡ Δm²atm = (2.43 ± 0.13) × 10⁻³ eV².
The precision of the theta13 number is about twice as great as in the estimate from a year and a half ago, and the central value is slightly lower than previously estimated, but the two results are consistent with each other at the one sigma level. The mass splitting estimate is consistent with prior data and similar in percentage precision.
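That consistency claim is easy to check with back-of-the-envelope arithmetic (my own, not from the press release): compare the two sin²(2θ13) values in units of their combined uncertainty, and recover the angle itself.

```python
import math

# March 2012 vs. August 2013 Daya Bay values of sin^2(2*theta13)
old_val, old_err = 0.092, 0.017   # March 2012
new_val, new_err = 0.090, 0.009   # August 2013

# Tension between the two measurements, in combined standard deviations
tension = abs(old_val - new_val) / math.hypot(old_err, new_err)
print(f"tension: {tension:.2f} sigma")   # about 0.1 sigma: very consistent

# The mixing angle itself, recovered from sin^2(2*theta13) = 0.090
theta13_deg = 0.5 * math.degrees(math.asin(math.sqrt(new_val)))
print(f"theta13: {theta13_deg:.1f} degrees")   # about 8.7 degrees
```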
Wednesday, August 21, 2013
The Twelve Most Important Experimental Data Points In Physics
What are the most important questions we need experiments to answer in physics right now?
Here is my list:
1. Discover or reduce the experimental bound on the maximum rate of neutrinoless double beta decay (I suspect it will not be found and this rules out many BSM theories).
2. Continue to place experimental bounds on proton decay rates (I suspect that it does not happen and this rules out many BSM theories).
3. Determine the CP violating phase of the PMNS matrix that governs neutrino oscillations (anybody's guess, but probably not zero).
4. Determine the absolute masses of the three neutrino mass eigenstates and whether the neutrinos have a "normal", "inverted" or "degenerate" hierarchy of masses (probably "normal" and very small).
5. Refine the precision with which we know the mass of the top quark (relevant to making relationships between Standard Model experimentally measured masses convincing).
6. Refine the precision with which we know the properties of the Higgs boson, particularly its mass (relevant to making relationships between experimentally measured masses convincing) and any possible other deviation from the Standard Model prediction (something that will probably not be found).
7. Complete the second phase of the analysis of the Planck satellite data (relevant to distinguishing between quintessence and the cosmological constant, with the latter more likely supported, and to ruling out possible cosmological inflation theories with the simplest theories currently favored).
8. Continue the search for glueballs, tetraquarks and pentaquarks - all of which are theoretically possible in QCD but not yet definitively observed (everything that is not forbidden in physics is mandatory, so the absence or presence of these phenomena has great importance by adding new QCD rules).
9. Tighten experimental boundaries on the masses of the five lightest quarks (this would allow for the proof or disproof of extensions of Koide's rule for quarks - the precision of these measurements is currently very low).
10. Conduct more astronomy observations that constrain the possible parameters of dark matter or any alternative theory that explains phenomena attributed to dark matter (dark matter is the single most glaring missing piece of modern physics), including measurements of ordinary "dim matter".
11. Experiments to reconcile discrepancies between muonic hydrogen and ordinary hydrogen's properties (probably due to imprecision in ordinary hydrogen measurements).
12. Improve the precision of QCD calculations that form backgrounds for other experimental measurements giving all other measurements at the LHC and elsewhere more statistical power.
The LHC, the current show horse of physics, is pertinent only to numbers 5, 6, 8 and 9. Progress on 5, 6 and 9 is likely to be very incremental after the next couple of years.
Experimental searches that I deem less worthy, because I think they are less likely to be fruitful, include:
1. Searches for supersymmetric particles or additional Higgs bosons (SUSY is increasingly ill motivated).
2. Searches for additional compact dimensions.
3. Searches for W' and Z' particles.
4. Searches for fourth generation Standard Model particles (basically ruled out already).
5. Direct dark matter detection experiments (the cross-section of interaction is too small to be likely to find anything with conceivable near term experiments as other data favors something akin to sterile neutrino 2 keV warm dark matter).
Alternative suggestions in the comments (with justifications) are welcome.
Top Quarks Are Short Lived
The top quark is the heaviest fundamental particle and is also heavier than any possible hadron made of two or three, or even theoretically four or five, confined quarks. Therefore, it is also the shortest-lived particle that exists (this explains why top quarks do not become confined in hadrons - top quarks decay into other kinds of particles before the strong force has time to form a hadron involving a top quark).
Because top quarks are exceptionally heavy - 173.3 GeV, give or take less than a GeV - they have a large amount of energy to impart to their decay products, and this has several consequences; one of these is their quite ephemeral nature. Theoretical calculations allow us to predict that for such an object the lifetime depends on the inverse of the third power of the mass, yielding a very short existence for top quarks - less than a trillionth of a trillionth of a second!
Even imagining such a short time interval is a challenge. Light quanta cannot even cross a proton in 10^-24 seconds. How to picture it? Let's say that if you could travel from here to the center of the Andromeda galaxy in one second (forgetting the limits of special relativity for a moment), a top quark created when you started that quite fast journey would decay before you had moved one millimeter! A recent paper directly measures this lifetime (via its decay width) more accurately than any prior study.
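That arithmetic can be checked in a few lines of Python via the uncertainty-principle relation tau = (h bar)/Gamma. This is a back-of-the-envelope sketch using an approximate ~1.4 GeV total top width for illustration, not a figure taken from the paper:

```python
# Top quark lifetime from its total decay width: tau = hbar / Gamma.
hbar_GeV_s = 6.582119569e-25  # reduced Planck constant in GeV*s
gamma_top = 1.4               # total top decay width in GeV (approximate)
c = 2.99792458e8              # speed of light in m/s

tau = hbar_GeV_s / gamma_top  # lifetime in seconds
print(f"tau ~ {tau:.1e} s")   # ~4.7e-25 s: less than a trillionth of a trillionth of a second

# Distance light travels in that time, versus a proton's ~0.8e-15 m radius:
print(f"c*tau ~ {c * tau:.1e} m")  # ~1.4e-16 m: the top decays before hadronization can occur
```

The second number is the point of the "no top hadrons" argument: the top travels a small fraction of a proton radius before decaying, so the strong force never has time to bind it.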
Nima Arkani-Hamed On The Hierarchy Problem
Matt Strassler's blog reports on day one of the SEARCH workshop in Stony Brook, New York, and in particular on presentations by Raman Sundrum and Nima Arkani-Hamed on the "hierarchy problem" associated with the discovery of a seemingly Standard Model Higgs boson at 125 GeV but no other new particles or phenomena.
I've never been as impressed with the hierarchy problem as many (most?) theoretical physicists seem to be, but I think that Nima Arkani-Hamed has hit the nail on the head in describing in general terms what is going on: "The solution to the hierarchy problem involves a completely novel mechanism." We have not, however, figured out what that mechanism is yet. Arkani-Hamed focuses on two approaches, neither of which have worked so far:
One is based on trying to apply notions related to self-organized criticality, but he was never able to make much progress.
Another is based on an idea of Ed Witten's that perhaps our world is best understood as one that:
* has two dimensions of space (not the obvious three);
* is supersymmetric (which seems impossible, but in three dimensions supersymmetry and gravity together imply that particles and their superpartner particles need not have equal masses); and
* has extremely strong forces.
I am unimpressed with the second approach (Arkani-Hamed hit a pretty fundamental dead end in trying to square this with the Standard Model reality too), and don't know enough about the first to comment (the link above is suggestive of the idea but doesn't apply it to quantum physics). But I do think that the bottom line, that there is a mechanism or perspective that makes the seeming unnaturalness associated with the hierarchy problem seem natural, is correct.
In my view, the hierarchy problem is an issue of defective framing, category error, or failure to appreciate a key relationship between parts of the Standard Model, rather than anything that should surprise a physicist who knew the whole story.
Some Generic Quantum Gravity Predictions
Jean-Philippe Bruneton has made some interesting model-independent predictions in a pre-print regarding the phenomenological laws of quantum gravity. He argues that:
(1) There exists a (theoretically) maximal energy density and pressure.
(2) There exists a mass-dependent (theoretically) maximal acceleration given by mc^3/(h bar) if m < m_p and by c^4/(Gm) if m > m_p. This is of the order of Milgrom's acceleration a_0 for ultra-light particles (m approximately H_0) that could be associated to the Dark Energy fluid. This suggests models in which modified gravity in galaxies is driven by the Dark Energy field, via the maximal acceleration principle. It follows trivially from the existence of a maximal acceleration that there also exists a mass dependent maximal force and power.
(3) Any system must have a size greater than the Planck length, in the sense that there exists a minimal area (but without implying for quanta a minimal Planckian wavelength in large enough boxes).
(4) Physical systems must obey the Holographic Principle. Holographic bounds can only be saturated by systems with m > m_p; systems lying on the "Compton line" l approximately equal to 1/m are fundamental objects without substructures.
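Conjecture (2) is easy to sandbox numerically. The sketch below is my own illustration, not a calculation from the pre-print; it uses SI constants and an approximate Hubble rate, and the comparison to Milgrom's a_0 is only order-of-magnitude:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
H0 = 2.2e-18             # Hubble rate, s^-1 (~68 km/s/Mpc, approximate)

m_p = math.sqrt(hbar * c / G)   # Planck mass, ~2.18e-8 kg

def a_max(m):
    """Conjectured mass-dependent maximal acceleration, in m/s^2."""
    return m * c**3 / hbar if m < m_p else c**4 / (G * m)

# The two branches agree at the Planck mass, so a_max is continuous there.
assert abs(m_p * c**3 / hbar - c**4 / (G * m_p)) < 1e-6 * (m_p * c**3 / hbar)

# For an ultra-light particle with m ~ (hbar * H0) / c^2, the bound reduces
# to c*H0, within an order of magnitude of Milgrom's a_0 ~ 1.2e-10 m/s^2.
m_light = hbar * H0 / c**2
print(a_max(m_light))  # ~6.6e-10 m/s^2
```

The continuity check at m_p is what makes the two-branch formula hang together: below the Planck mass the bound grows with mass, above it the bound falls, and the peak sits at the Planck acceleration.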
Bruneton's conjectures are driven by observations about the relationships of the Planck length, mass, and time, which are derived from the speed of light c, the gravitational constant G, and the reduced Planck's constant h bar; the Schwarzschild solution for the event horizon of a Black Hole in General Relativity, reformulated in a generalized and manifestly covariant way; observations about the Kerr-Newman family of black holes; an alternate derivation of the Heisenberg uncertainty principle; the notion of a Compton length; and a few other established relationships.
Bruneton presents his conclusions as heuristic conjectures for any quantum gravity theory that displays a minimum set of commonly hypothesized features, rather than as rigorously proven scientific laws.
I have omitted some of his more technical observations and consolidated others.
Bruneton acknowledges that these observations may fail in the case of certain theoretically possible exotic "hairy" black holes (while implying that they probably don't exist for some non-obvious reason). He equivocates on the question of whether Lorentz symmetry violations near the Planck scale are possible, reasoning that an absence of a minimal Planckian wavelength could rescue Lorentz symmetry from quantum gravity effects.
I find his suggestion that there is a maximal energy density and pressure particularly notable because of the remarkable coincidence between the maximum density observed by astronomers in Black Holes and neutron stars on one hand, and the maximum observed density of an atomic nucleus on the other.
His suggestion that the Planck scale may denote the line between systems that are "fundamental objects without substructures" and "physical systems" is also shrewd.
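That dividing line can be illustrated numerically: below the Planck mass a system's Compton length exceeds its Schwarzschild radius (the quantum regime), and above it the reverse holds (the gravitational regime). A minimal sketch with SI constants, my own illustration rather than the paper's:

```python
import math

hbar, c, G = 1.054571817e-34, 2.99792458e8, 6.67430e-11

def compton_length(m):
    """Quantum 'size' of a fundamental object of mass m (kg), in meters."""
    return hbar / (m * c)

def schwarzschild_radius(m):
    """Gravitational size of a classical system of mass m (kg), in meters."""
    return 2 * G * m / c**2

m_p = math.sqrt(hbar * c / G)  # Planck mass, ~2.18e-8 kg

# The two length scales cross near the Planck mass (up to the conventional
# factor of 2 in the Schwarzschild radius), marking the proposed boundary
# between fundamental objects and ordinary physical systems.
for m in (1e-30, m_p, 1e30):  # roughly electron-scale, Planck-scale, star-scale
    print(m, compton_length(m), schwarzschild_radius(m))
```

At the Planck mass itself, compton_length(m_p) is exactly the Planck length, which is why the Planck scale is the natural place for such a boundary to sit.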
Thursday, August 15, 2013
Climate Link To Bronze Age Collapse Substantiated
A new study (open access) has convincingly linked the dramatic events in the Eastern Mediterranean around 1200 BCE known as the "Bronze Age Collapse" - which included the end of the Hittite Empire, the arrival of the Philistines in the Southern Levant, and the fall of many cities at the hands of the "Sea People" - to a severe and sudden three-hundred-year-long drought.
While other historians have cast doubt on the connection, the paper, through a rigorous and consistent analysis of radiocarbon dates, explains that:
[T]he [Late Bronze Age] crisis, the Sea People raids, and the onset of the drought period are the same event. Because climatic proxies from Cyprus and coastal Syria are numerically correlated, as the LBA crisis shows an identical calibration range in the island and the mainland, and because this narrative was confirmed by written evidence (correspondences, cuneiform tablets), we can say that the LBA crisis was a complex but single event where political struggle, socioeconomic decline, climatically-induced food-shortage, famines and flows of migrants definitely intermingled. . . . the LBA crisis coincided with the onset of a ca. 300-year drought event 3200 years ago. This climate shift caused crop failures, dearth [sic] and famine, which precipitated or hastened socio-economic crises and forced regional human migrations at the end of the LBA in the Eastern Mediterranean and southwest Asia. The integration of environmental and archaeological data along the Cypriot and Syrian coasts offers a first comprehensive insight into how and why things may have happened during this chaotic period. The 3.2 ka BP event underlines the agro-productive sensitivity of ancient Mediterranean societies to climate and demystifies the crisis at the Late Bronze Age-Iron Age transition.

The study's fairly narrow geographic range doesn't permit a determination of the extent to which this climate change extended beyond the Eastern Mediterranean, although we do know that droughts typically affect very large geographic areas all at once. They are the natural disaster opposites of a tornado, which can ravage one house while leaving the one next door virtually untouched.