Saturday, January 30, 2016
Temporary Hiatus
In light of the acute state of two terminally ill people in my life (not me, I have only my usual physical infirmities), I'm going to have to set aside blogging for a while. I'm certain that I'll be back eventually, but don't know when.
Tuesday, January 26, 2016
Megalithic Grave Sites In Spain Have Non-Modern mtDNA Mix
Ancient DNA continues to pour in, most recently from a large megalithic burial site in Spain. The mtDNA mix in the gene pool there differs from the modern European gene pool mostly in its low frequency of mtDNA haplogroup H, which is the most common mtDNA type in Europe today.
Ancient DNA has made it increasingly clear that most of the Y-DNA R1a and R1b haplotypes found in modern Europe arrived sometime between 5,000 and 4,000 years ago. But the evidence that the frequency of mtDNA haplogroup H increased in connection with the same demographic transition has been less definitive, although this appears to be the best explanation.
The new megalithic mtDNA from Spain isn't absolutely decisive on that point. The sample size is small enough that an atypical mtDNA mix could be due to random chance (although as you look to the complete corpus of megalithic mtDNA in Europe this becomes an increasingly difficult case to make), and the vicinity where this sample was found has below average frequencies of mtDNA H even today.
But, the mtDNA H haplotype present in two samples pre-3000 BCE (H3) differed from the one found in a 2500 BCE sample from the same area (H1), again suggesting some sort of demographic transition in that time frame, which coincides with the expansion of select Y-DNA R1b haplogroups in Western Europe.
The consistency of the trend towards more mtDNA H almost everywhere in Europe, at around the same time (framed in ever greater resolution by finds such as this one) as the demographic events that made select haplogroups of Y-DNA R1a and R1b dominant in much of Europe, certainly suggests that the trend is real and has a common cause.
It is not the same story, because mtDNA H was present at measurable frequencies in Mesolithic Southern Europe and in early Neolithic Europe, albeit at levels lower than today, while Y-DNA R1a and R1b were found in Europe in those time frames only in trace frequencies relative to the overall available ancient Y-DNA pool. So, for example, the data are consistent with a narrative in which an expansion of these Y-DNA haplogroups involved descendants of migrating men from the steppe married to indigenous women, from the Mesolithic era or earlier, whose mtDNA H type increased in frequency due to founder effects.
The Spanish site is also notable for having a particularly substantial proportion of mtDNA U5, U4 and V, all of which are conventionally associated with Mesolithic European hunter-gatherers rather than Neolithic European migrants. More generally, Iberia tends to show more Paleolithic continuity than many other parts of Europe, perhaps because of its proximity to the Franco-Cantabrian refugia from which much of Western Europe was repopulated following the Last Glacial Maximum.
The complete disappearance of mtDNA H3, U5, U4 and V from the 2500 BCE sample in the same area is quite notable, as those mtDNA haplogroups made up 48% of the earlier samples, suggesting something more than random sampling chance.
Gravity Is Not Weak
The dinosaurs didn't worry about extrasolar impacts because gravity is weak and wasn't worth worrying about.
Einstein’s theory of general relativity still stands apart from the other known forces by its refusal to be quantized. Progress in finding a theory of quantum gravity has stalled because of the complete lack of data – a challenging situation that physicists have never encountered before.
- Sabine Hossenfelder at Backreaction (emphasis added).
The main problem in measuring quantum gravitational effects is the weakness of gravity. Estimates show that testing its quantum effects would require detectors the size of planet Jupiter or particle accelerators the size of the Milky-way. Thus, experiments to guide theory development are unfeasible. Or so we’ve been told.
But gravity is not a weak force – its strength depends on the masses between which it acts. (Indeed, that is the very reason gravity is so difficult to quantize.) Saying that gravity is weak makes sense only when referring to a specific mass, like that of the proton for example. We can then compare the strength of gravity to the strength of the other interactions, demonstrating its relative weakness – a puzzling fact known as the “strong hierarchy problem.” But that the strength of gravity depends on the particles’ masses also means that quantum gravitational effects are not generally weak: their magnitude too depends on the gravitating masses.
To be more precise one should thus say that quantum gravity is hard to detect because if an object is massive enough to have large gravitational effects then its quantum properties are negligible and don’t cause quantum behavior of space-time. General relativity however acts in two ways: Matter affects space-time and space-time affects matter. And so the reverse is also true: If the dynamical background of general relativity for some reason has an intrinsic quantum uncertainty, then this will affect the matter moving in this space-time – in a potentially observable way.
The unqualified statement that "gravity is weak" has been a pet peeve of mine since I first heard it, probably sometime around the time that I was in middle school.
This claim assumes that the only relevant comparison is the action of gravity and the fundamental forces of the Standard Model on fundamental particles, or simple composite particles, at atomic distances, where it is true that gravity is weaker than the other forces.
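To make the mass-dependence concrete, here is a back-of-the-envelope comparison of my own (a sketch, not anything from Hossenfelder's post): Newtonian gravity versus the Coulomb force, first for two protons at an atomic distance and then for two Earth-mass bodies. The masses and separations are chosen purely for illustration.

# Gravity between two protons vs. between two planet-sized masses, compared in
# each case to the electrostatic repulsion of two protons at the same separation.
# Illustrative numbers only.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9        # Coulomb constant, N m^2 C^-2
m_p = 1.673e-27      # proton mass, kg
q_p = 1.602e-19      # proton charge, C

def gravity(m1, m2, r):
    """Newtonian gravitational force in newtons."""
    return G * m1 * m2 / r**2

def coulomb(q1, q2, r):
    """Electrostatic force in newtons."""
    return k_e * q1 * q2 / r**2

r = 1e-10  # roughly an atomic distance, in meters

# Two protons: gravity is ~36 orders of magnitude weaker than electromagnetism.
print(gravity(m_p, m_p, r) / coulomb(q_p, q_p, r))   # ~8e-37

# Two Earth-mass bodies (5.97e24 kg) versus two protons at the same separation:
# now gravity utterly dominates.
print(gravity(5.97e24, 5.97e24, 6.37e6) / coulomb(q_p, q_p, 6.37e6))  # ~1e67

The ratio swings by more than a hundred orders of magnitude depending on which masses you plug in, which is the whole point: "gravity is weak" is a statement about particular masses, not about the force itself.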
Indeed, even in the limit as distance goes to zero, the electromagnetic force, the weak force and gravity all get stronger, so at very short distances gravity remains weak compared to the electroweak force.
But, the strong force, once it crosses the distance at which it peaks, actually gets weaker. An important unsolved question in physics is whether the strong force is zero in the limit of zero distance (i.e. is trivial), or is non-zero. If the strong force is trivial in the limit of zero distance (probably the more likely possibility), then, for example, there is both a long distance regime and a short distance regime in which gravity is stronger than the strong force between two quarks.
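For what it's worth, the weakening of the strong coupling at shorter distances (higher energies) can be illustrated with the standard one-loop running formula. This is my own sketch, which ignores higher-loop corrections and flavor thresholds; it is not meant to settle the short-distance question discussed above.

import math

# One-loop running of the strong coupling (asymptotic freedom): alpha_s shrinks
# logarithmically as the probed energy rises, i.e. as the probed distance shrinks.

alpha_s_MZ = 0.118   # measured strong coupling at the Z boson mass
M_Z = 91.19          # GeV
n_f = 5              # active quark flavors (a simplification; n_f = 6 above the top mass)

def alpha_s(Q):
    """One-loop QCD running coupling at energy scale Q (GeV)."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return alpha_s_MZ / (1 + b0 * alpha_s_MZ * math.log(Q**2 / M_Z**2))

for Q in (10, 91.19, 1000, 10000):
    print(f"alpha_s({Q} GeV) ~ {alpha_s(Q):.3f}")
# The coupling falls from ~0.17 at 10 GeV to ~0.07 at 10 TeV.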
The weak force has a limited range because its massive carrier bosons have a short mean lifetime, which means that even if a carrier boson is traveling at speeds approaching the speed of light, it will almost always decay before traveling more than an atomic scale distance. Thus, the weak force rapidly approaches zero at even microscopic distances, while gravity and the electromagnetic force do not fall off nearly so rapidly.
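The scale involved can be estimated with the usual back-of-the-envelope trick of taking the range of a force carried by a massive boson to be roughly its reduced Compton wavelength, ħ/(mc). The numbers below are my own illustration, not a rigorous calculation.

# Rough range of the weak force from the W boson mass, estimated as the
# reduced Compton wavelength, range ~ hbar / (m * c).

hbar_c_MeV_fm = 197.327   # hbar*c in MeV*femtometers
m_W_MeV = 80_379          # W boson mass in MeV/c^2

range_fm = hbar_c_MeV_fm / m_W_MeV    # ~2.5e-3 femtometers
range_m = range_fm * 1e-15            # ~2.5e-18 meters

print(f"Weak force range ~ {range_m:.1e} m")
# About 0.3% of a proton's radius, and tens of millions of times smaller
# than an atom -- hence the "atomic scale distance" ceiling noted above.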
(And, in modified gravity theories, such as MOND and its more sophisticated and relativistic cousins, many of which assume that gravity has an additional Yukawa potential term, the electromagnetic force falls off with the square of the distance while gravity does not abate quite so quickly, so in the gravitational weak field regime gravity grows stronger relative to an equivalent electromagnetic force at very large distances.)
In sum, the standard assumptions are very context specific, without appropriate justification. When substantial masses and macroscopic distances are involved, gravity is frequently dominant, while the other forces are frequently negligible. The assumption that gravity is weak, for example, is often unwarranted in the context of astronomy, which is an equally valid frame of reference.
The Rainbow gravity theory discussed in the same post is not ready for prime time and doesn't warrant a discussion here, but the distinction that Hossenfelder makes in her post does merit repeating, because the echo chamber has propounded an inaccurate statement one time too many and deserves some counterpoint.
South Asian Population History
A more sophisticated autosomal genetic model of South Asian populations than Reich's ANI (ancestral North Indian) and ASI (ancestral South Indian) model from 2009, using richer data is now available.
Basu et al., Genomic reconstruction of the history of extant populations of India reveals five distinct ancestral components and a complex structure, PNAS, Published online before print January 25, 2016, doi: 10.1073/pnas.1513197113 (Open access).
India, occupying the center stage of Paleolithic and Neolithic migrations, has been underrepresented in genome-wide studies of variation. Systematic analysis of genome-wide data, using multiple robust statistical methods, on (i) 367 unrelated individuals drawn from 18 mainland and 2 island (Andaman and Nicobar Islands) populations selected to represent geographic, linguistic, and ethnic diversities, and (ii) individuals from populations represented in the Human Genome Diversity Panel (HGDP), reveal four major ancestries in mainland India. This contrasts with an earlier inference of two ancestries based on limited population sampling. A distinct ancestry of the populations of Andaman archipelago was identified and found to be coancestral to Oceanic populations. Analysis of ancestral haplotype blocks revealed that extant mainland populations (i) admixed widely irrespective of ancestry, although admixtures between populations was not always symmetric, and (ii) this practice was rapidly replaced by endogamy about 70 generations ago, among upper castes and Indo-European speakers predominantly. This estimated time coincides with the historical period of formulation and adoption of sociocultural norms restricting intermarriage in large social strata. A similar replacement observed among tribal populations was temporally less uniform.
The samples cover 13 tribal populations (three Dravidian, four Munda, one Dravidian and Munda, two Ongan, one Indo-European and two Tibeto-Burman), 1 lower-middle caste population (Dravidian), and 6 upper caste populations (four Indo-European, one Dravidian and one Tibeto-Burman). Thus, it may be under-representative of the vast middle of Indian society (lower caste Indo-European and Dravidian populations).
Further inspection reveals that the two new mainland genetic components, one for Austro-Asiatic Munda speaking populations and one for Tibeto-Burman populations, are completely expected and unsurprising, although the new component for Andaman and Nicobarese Islanders, previously equated with ASI, is an innovation, albeit a modest one (the supplemental materials show that the Andamanese are indeed very distinct from any mainland population and cluster with Papuans instead). As the paper explains:
Contemporary populations of India are linguistically, geographically, and socially stratified, and are largely endogamous with variable degrees of porosity. We analyzed high quality genotype data, generated using a DNA microarray (Methods) at 803,570 autosomal SNPs on 367 individuals drawn from 20 ethnic populations of India (Table 1 and SI Appendix, Fig. S1), to provide evidence that the ancestry of the hunter gatherers of A&N is distinct from mainland Indian populations, but is coancestral to contemporary Pacific Islanders (PI). Our analysis reveals that the genomic structure of mainland Indian populations is best explained by contributions from four ancestral components. In addition to the ANI and ASI, we identified two ancestral components in mainland India that are major for the AA-speaking tribals and the TB speakers, which we respectively denote as AAA (for “Ancestral Austro-Asiatic”) and ATB (for “Ancestral Tibeto-Burman”). Extant populations have experienced extensive multicomponent admixtures. Our results indicate that the census sizes of AA and TB speakers in contemporary India are gross underestimates of the extent of the AAA and the ATB components in extant populations.
All of the South Asian mainland populations studied have ANI, ASI and AAA admixture, and while ATB admixture was much more geographically defined, some populations that are not linguistically TB had more than trace amounts of ATB admixture.
The paper's insights about when the caste system came into existence in its modern endogamous form are also notable. The leading schools of thought in conventional wisdom had been that caste either arose immediately at the time of the arrival of the Indo-Aryans, ca. 4000 years ago, or emerged only under British colonial rule ca. 500 years ago. There is now convincing evidence to suggest that both of these theories are wrong (link added editorially).
We have inferred that the practice of endogamy was established almost simultaneously, possibly by decree of the rulers, in upper-caste populations of all geographical regions, about 70 generations before present, probably during the reign (319–550 CE) of the ardent Hindu Gupta rulers. The time of establishment of endogamy among tribal populations was less uniform.
Since the genetic evidence points to strict caste endogamy as arising during the historic era, historians may be able, with this hint, to more precisely pin down what occurred from written documentation from the people who experienced that event. They may have the dates wrong, however. The authors assume a 22.5 year generation, when the standard assumption of generation length in this field is usually about 29 years, which would point to caste solidification roughly four and a half centuries earlier, although the historical case tends to favor the more recent Gupta era date.
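The arithmetic behind that caveat is simple enough to sketch (my own calculation, assuming the sampled individuals are roughly present-day, around 2016 CE, which is an assumption rather than anything stated in the paper):

# Rough date implied by "about 70 generations ago" under two different
# generation-length assumptions.

generations = 70
sampling_year = 2016   # assumed sampling date

for gen_years in (22.5, 29):
    years_before_present = generations * gen_years
    calendar_year = sampling_year - years_before_present
    era = "CE" if calendar_year > 0 else "BCE"
    print(f"{gen_years}-year generations: ~{years_before_present:.0f} years ago "
          f"(~{abs(calendar_year):.0f} {era})")

# 22.5-year generations put the transition ~1,575 years ago (~440 CE, Gupta era);
# 29-year generations put it ~2,030 years ago (around the turn of the era),
# roughly 450 years earlier.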
The paper does not extensively examine the affinities and structure of the ANI component and its arrival in South Asia, for example, to attempt to discern if there was more than one wave of ANI introgression or to see which West Eurasian populations show the greatest affinity to it.
UPDATE: Razib offers some methodological criticism, arguing that the investigators misunderstand what ADMIXTURE does in a way that understates the extent to which populations are admixed, and faulting them for not incorporating 1000 Genomes data and for not doing f4 and D statistic analyses. (Ironically, his title, "South Asians are not descended from four populations," is probably not true and not supported by his analysis.)
Monday, January 25, 2016
Oops! Part of Major Ethiopia Ancient DNA Claim Was Actually An IT Glitch.
Last October, a published paper reported that there was significant Neanderthal DNA in most Africans due to Bronze Age or more recent back migration from Eurasia.
It turns out that this remarkable result was actually an IT glitch in how the ancient genome was compared to other genomes. There was significant Eurasian backmigration to Africa, with the result that Mota, who preceded this migration, has far less Neanderthal DNA than modern East Africans. But it turns out that this backmigration only reached East Africa, not the rest of the African continent.
Specifically:
The results presented in the Report “Ancient Ethiopian genome reveals extensive Eurasian admixture throughout the African continent“ were affected by a bioinformatics error. A script necessary to convert the input produced by samtools v0.1.19 to be compatible with PLINK was not run when merging the ancient genome, Mota, with the contemporary populations SNP panel, leading to homozygote positions to the human reference genome being dropped as missing data (the analysis of admixture with Neanderthals and Denisovans was not affected). When those positions were included, 255,922 SNP out of 256,540 from the contemporary reference panel could be called in Mota.
The conclusion of a large migration into East Africa from Western Eurasia, and more precisely from a source genetically close to the early Neolithic farmers, is not affected. However, the geographic extent of the genetic impact of this migration was overestimated: the Western Eurasian backflow mostly affected East Africa and only a few Sub-Saharan populations; the Yoruba and Mbuti do not show higher levels of Western Eurasian ancestry compared to Mota.
We thank Pontus Skoglund and David Reich for letting us know about this problem.
Razib has some appropriate comments:
First, scientists are humans and mistakes happen. So respect that the authors owned up to it. On the other hand, the conclusion never smelled right to many people. I was confused by it. I asked Iosif Lazaridis at ASHG. He was confused by it. I asked Pontus Skoglund. He was confused by it. . . .
Unfortunately the result from the bioinformatics error was emphasized on the abstract, and in the press. In The New York Times[.]
A rule of thumb in science is when you get a shocking and astonishing result, check to make sure you didn’t make some error along the sequence of analysis. That clearly did not happen here. The blame has to be distributed. Authors work with mentors and collaborators, and peer reviewers check to make sure things make sense. The idea of massive admixture across the whole of Africa just did not make sense.
If something like this happened to me I’d probably literally throw up. This is horrible. But then again, this paper made it into Science, and Nature wrote articles like this: First ancient African genome reveals vast Eurasian migration. The error has to be corrected.
Also, despite my own tag of "bad scientists" on this post, as Razib notes, these scientists weren't actually "bad" in the usual sense of faking data or engaging in any other form of academic dishonesty. Once or twice in my career I've made, entirely in good faith, similar vomit worthy, but subtle and non-malicious mistakes. It is no fun, but it happens because humans make mistakes.
They were guilty of the far less culpable error of making a serious but subtle mistake, not catching it, and then believing the erroneous conclusions that flowed from the mistake even though the conclusions were paradigm changing. Extraordinary claims require extraordinary evidence and this wasn't it.
This wasn't quite as bad as the superluminal neutrino claim by the OPERA experiment, which turned out to be due to a bit of faulty hardware in their receiver and was a far more extraordinary claim. But it's close.
Still, it is good news and a sign of a healthy scientific establishment that these embarrassing and very public mistakes were corrected promptly upon discovery, in a forthright way and with humility.
LUX Dark Matter Detection Experiment Gets An Upgrade
No direct dark matter detection experiment can ever rule out any kind of dark matter that has no non-gravitational interactions with ordinary matter, because, by definition, it can't be directly detected. But, the exclusions of the parameter space of any other kind of dark matter are increasingly strict.
The News From LUX
LUX Rules Out WIMP Dark Matter In The Narrow Sense
In the narrow sense, WIMP dark matter is matter that lacks color charge and electromagnetic charge, but does interact via the weak force, and is not a neutrino, generally with masses of 1 GeV to 1000 GeV.
LUX is the world's most powerful direct dark matter detection experiment and has ruled out the existence of dark matter over a very wide range of plausible dark matter masses down to very tiny cross-sections of interaction with ordinary matter. The entire range of exclusions by all other experiments from 4 GeV to about 1000 GeV, except CDMSLite 2015 (which has a bit of an edge in the single digit GeV mass range for a dark matter particle), is simultaneously confirmed by LUX down to fainter cross-sections of interaction. Tweaks to the analysis of the 2013 data announced in December of 2015 imply that:
[W]e have improved the WIMP sensitivity of the 2013 LUX search data, excluding new parameter space. The lowered analysis thresholds and signal model energy cut-off, added exposure, and improved resolution of light and charge over the first LUX result yield a 23% reduction in cross-section limit at high WIMP masses. Reach is significantly extended at low mass where the cut-off has most effect on the predicted event rate: the minimum kinematically-accessible mass is reduced from 5.2 to 3.3 GeV/c².
LUX is getting an upgrade in early 2016 to improve its performance, which means that the exclusion range will be getting even better in a few years.
The LUX exclusions, for the most part, are strong enough to exclude interactions on the same order of magnitude as those of neutrinos (the cross-section of interaction of a neutrino with a nucleon, which a WIMP with a weak force charge comparable to that of all other known weakly interacting particles would naively be expected to share, is on the order of 4×10^-39 to 8×10^-39 cm^2 per GeV of energy). This means that any interactions of dark matter with ordinary matter via the weak force would have to involve dark matter particles with a weak force charge that is a tiny fraction of that of all other weakly interacting particles (something with no precedent and no good theoretical motivation).
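To get a feel for how tiny cross-sections in that ballpark are, here is a rough, illustrative calculation of my own (not anything from the LUX collaboration) of the mean free path of a roughly 1 GeV neutrino in water, using a mid-range value of the cross-section quoted above:

# Mean free path ~ 1 / (n * sigma), where n is the nucleon number density.

sigma = 6e-39            # cm^2, mid-range value for a ~1 GeV neutrino per nucleon
avogadro = 6.022e23
density_water = 1.0      # g/cm^3
nucleons_per_gram = avogadro   # ~6e23 nucleons per gram (about 1 g/mol per nucleon)

n = density_water * nucleons_per_gram     # nucleons per cm^3
mean_free_path_cm = 1.0 / (n * sigma)     # ~2.8e14 cm

au_cm = 1.496e13
print(f"Mean free path ~ {mean_free_path_cm:.1e} cm "
      f"(~{mean_free_path_cm / au_cm:.0f} AU of water)")   # ~19 AU

Cross-sections of that size mean a particle would typically pass through something like 19 astronomical units of water before interacting once, which is why even weak-force-strength interactions produce only a trickle of events in a detector.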
The LUX exclusion will soon be even stronger, requiring even tinier fractions of the weak force charge in areas where there is effectively already an exclusion, and expanding the mass range over which particles with ordinary weak force charges are excluded.
Different approaches have to be used to attempt to directly detect dark matter particles significantly lighter than 4 GeV (e.g. the keV mass range favored for "warm dark matter" or the MeV range favored for "dark photons" in self-interacting dark matter models), because the neutrino background gets too strong to make out a signal. CDMSLite 2015, for example, which uses a methodology similar to the LHC, still gets down only to about 1 GeV (with a considerably weaker cross-section of interaction excluded).
Implications
The LHC and previous particle accelerator experiments rule out most lighter WIMPs.
The CMS experiment has provided exclusions stricter than LUX's at low masses.
Particle accelerator experiments, likewise, strongly disfavor the existence of fundamental particles that have even very slight interactions with any form of Standard Model particle in the 1 eV to hundreds of GeV mass range, overlapping with the direct dark matter detection experiments' exclusion range and providing the strictest limitations at the low end of the mass range.
Astronomy Data Largely Rules Out Simple Non-Self-Interacting Cold Dark Matter
Astronomy data, in general, disfavors cold dark matter (particles on the order of 10 GeV or more), because it would give rise to far more small scale structure in the universe than is observed. This is because, assuming "thermal" dark matter (i.e. dark matter created in the very early universe with a mean lifetime on the order of the age of the universe), the mean velocity of dark matter particles is a function of dark matter particle mass. Mean dark matter particle velocity, in turn, influences the scale at which ordinary matter would be forced by dark matter to have a highly structured distribution at small distance scales (e.g. groups of galaxies or smaller).
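The mass-velocity connection can be illustrated with a very crude, non-relativistic toy calculation of my own (a real thermal relic calculation would track decoupling and the redshifting of momenta, which this does not):

import math

# For particles sharing a common temperature, the typical thermal speed scales
# as 1/sqrt(mass), so lighter particles free-stream farther and erase more
# small-scale structure. Toy numbers only.

k_B = 1.381e-23          # J/K
GeV_to_kg = 1.783e-27    # mass of 1 GeV/c^2 in kg
T = 2.7                  # K, an arbitrary reference temperature for comparison

def v_rms(mass_GeV):
    """RMS speed (m/s) of a non-relativistic thermal particle at temperature T."""
    m = mass_GeV * GeV_to_kg
    return math.sqrt(3 * k_B * T / m)

for mass_GeV, label in [(1e-6, "keV-scale (warm)"), (10, "10 GeV (cold)"), (1000, "1000 GeV (very cold)")]:
    print(f"{label:>22}: v_rms ~ {v_rms(mass_GeV):.2e} m/s")
# At the same temperature, the keV-scale particle moves ~3,000 times faster
# than the 10 GeV one and ~30,000 times faster than the 1000 GeV one.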
For example, colder dark matter would produce more subhalos in large galaxies and more satellite galaxies around larger galaxies. This limitation is even stronger with dark matter predominantly made of particles in the 1000 GeV or more mass range (roughly the mass of four uranium atoms or more).
This robust exclusion applies even if dark matter interacts only via gravity, but could be avoided if dark matter is self-interacting, or is not a thermal relic (i.e. rather than having a tens of billions of year long mean lifetime or more, it is created and destroyed at rates that are basically equal).
It is unlikely that dark matter that interacts with ordinary matter via the weak force exists with particles of more than 1000 GeV, although experimental designs can't rule it out for dark matter that has significant self-interactions (which prevent it from acting the way ordinary "cold dark matter" particle theories predict).
It had been hoped that including ordinary matter-dark matter interactions in cold dark matter simulations would solve this problem, but, for the most part, efforts to do this have been insufficient to solve it, or to solve other related cold dark matter problems like the cusp-core problem (the dark matter halo shapes produced in simulations differ materially from the halo shapes inferred from the dynamics observed in galaxies).
This exclusion shouldn't be unduly exaggerated, however. Dark matter does a very good job of explaining observations at cosmological scales and at explaining phenomena at scales larger than galaxies, while not contradicting solar system scale observations. And, while a simple singlet fermion cold dark matter particle with no non-gravitational interactions is not right on the mark for galaxy scale and smaller structures, it is a pretty decent first order approximation. So, the notion that a dark matter self-interaction of some type could fix this real problem isn't far fetched.
Higgs Portal Dark Matter
A particle that interacts with the Higgs boson, in addition to gravity, is sometimes called Higgs portal dark matter, because Higgs boson decays in particle accelerators would reveal it, with the Higgs providing the only Standard Model connection to the dark sector.
Experiments at the LHC have not yet pinned down the properties of the Higgs boson precisely enough to confirm that it could not possibly have interactions with "dark" particles that have no interactions via the weak, strong, or electromagnetic forces, so they can't rule out "Higgs portal dark matter." But, LHC data in run-2 will greatly narrow this window of the dark matter parameter space, and even tighter boundaries will exist by the time that the LHC has finished its work.
The 750 GeV Anomaly Cannot Itself Be Dark Matter
The 750 GeV anomaly at the LHC announced last December, even if it is real, is not itself a good dark matter candidate, because a dark matter candidate needs to have a mean lifetime on the order of tens of billions of years or more, and needs to be much, much lighter (although dark matter self-interaction models can weaken the particle mass constraints). But, a 750 GeV anomaly, if real, might imply a whole new menagerie of particles which could include a dark matter candidate.
Sterile Neutrinos In The Narrow Sense Are Increasingly Disfavored
Neutrino oscillation data, as it becomes more precise, also increasingly disfavors a "sterile neutrino" sensu stricto that oscillates with other neutrinos despite not having strong, electromagnetic or weak force interactions, although it wouldn't rule out, for example, a particle called a sterile neutrino in the weak sense, which has no strong, electromagnetic or weak force interactions but does have mass and interacts via gravity.
Annihilation Searches
Another indirect, but nearly direct, way of detecting dark matter is to see the signature of dark matter-antidark matter annihilation events, if they exist. The Fermi experiment, for example, is of this type. But, these experiments face the fundamental problem that the background is ill understood. They can exclude annihilating dark matter in areas where no signals are seen down to small annihilation cross-sections, but cannot really confirm that potential annihilation signals have a dark matter source.
Also, the notion that dark matter particles which lack electromagnetic charge would produce highly energetic photons in their annihilation is itself problematic.
The Simplicity Constraint
It also bears noting that very simple models of dark matter, with dark matter dominated by one kind of fermion and possibly interacting via one kind of boson, tend to be better fits to the data, almost across the board, than more complex models of the dark sector. This doesn't mean that the dark sector is really that simple if it exists (e.g. no one could guess from astronomy data alone, that there were second and third generation fermions, that there were W or Z bosons, that there were eight different kinds of gluons, or that protons and neutrons were composite particles), but it does mean that the dominant particle content of a dark sector must be very simple.
We can also infer this from the fact that modified gravity models can accurately predict the reality that we observe over many orders of magnitude of scale, with one degree of freedom at the galactic scale and only about three degrees of freedom at all scales. The minimum number of degrees of freedom in a modified gravity model is an effective cap on the number of particles that contribute to the dominant particle content of the dark matter sector.
Bottom Line
Warm dark matter (ca. keV scale matter) with a dominant dark matter singlet fermion, and self-interacting cold or warm dark matter models with a dominant dark matter singlet fermion and a dominant dark matter boson, which in either case do not interact at all with ordinary matter except via gravity, remain the best fits to the data.
Thus, it seems very likely that direct dark matter detection experiments are doomed not to see any dark matter signals, and that LUX will merely extend the excluded region of the parameter space for dark matter particles.
The exclusion of so much of the parameter space of plausible dark matter candidates is one of the important reasons that gravity modification theories to explain dark matter are more plausible now than they used to be in the 1980s when dark matter theories were formulated and became dominant.
In contrast, in the 1980s, SUSY provided theoretically well motivated dark matter candidates with the right properties in multiple respects to fit what was then known about dark matter from cosmology models and much more crude predictions about dark matter haloes, when small scale structure problems with the cold dark matter paradigm weren't known, and when LUX hadn't excluded so much of the WIMP (in the narrow sense) parameter space. The remaining dark matter parameter space is no longer a good fit to the SUSY particles that had been hypothesized to be dark matter candidates.
For example, even if warm dark matter with keV mass particles is the right solution, there are really no SUSY particles that can serve in this capacity.
Indeed, ultimately, LUX is more of a blow to SUSY, by ruling out light dark matter candidates that would have a signal at the tested dark matter particle masses if SUSY was correct, than it is to the dark matter particle hypothesis in general.
Friday, January 22, 2016
xkcd on Planet Nine
An accurate, but absurd assessment demonstrating how well we understand the solar system despite the fact that there may be a few new discoveries to be made about it.
From xkcd (Randall Munroe).
UPDATE January 25, 2016 (based upon input from the comments):
Objects much larger than those at the top right corner of the box ("Planets Ruled Out Because We Could See Them During the Day") are ruled out because they would be stars. The size of the solar system (before objects would fall into another star's orbit) is about 107,000 AU, which means that the boundary on the right hand side of the chart actually extends a bit beyond the size of the solar system. Objects smaller than dwarf planets don't have enough gravity to pull themselves into an even roughly spherical shape. Very large planets have been ruled out by the WISE survey everywhere in the solar system.
The hypothetical Planet Nine has correctly been placed about 700 AU out, at a size that assumes a Neptune- or Uranus-like diameter; it would be somewhat smaller if it were rocky instead.
Wednesday, January 20, 2016
Strong Indirect Evidence For Planet Nine
Michael Brown, the world's leading expert in solar system planetary astronomy and co-conspirator in the demotion of Pluto from planet status, has with a collaborator identified strong indirect evidence of a ninth true planet in the solar system.
Meet Planet Nine
Researchers have found evidence of a giant planet tracing a bizarre, highly elongated orbit in the outer solar system. The object, which the researchers have nicknamed Planet Nine, has a mass about 10 times that of Earth and orbits about 20 times farther from the sun on average than does Neptune (which orbits the sun at an average distance of 2.8 billion miles). In fact, it would take this new planet between 10,000 and 20,000 years to make just one full orbit around the sun.
By comparison, Jupiter is about 317 times the mass of Earth, Saturn is about 95 times the mass of Earth, Neptune is about 17 times the mass of Earth, and Uranus, the next heaviest planet, is about 15 times the mass of Earth. The Sun, in contrast, has a mass about 333,000 times the mass of the Earth. So, Planet Nine should have a mass on the same order of magnitude as Neptune and Uranus, but should be much smaller than Saturn and Jupiter.
The researchers, Konstantin Batygin and Mike Brown, discovered the planet's existence through mathematical modeling and computer simulations but have not yet observed the object directly.
If it is a moderate sized gas giant, we would expect it to have a mean radius of about 15,000-16,000 miles or so and a surface gravity that might be quite comparable to Earth's. But, if it is a rocky planet (like the core of Jupiter or Saturn beneath their clouds and oceans), it could have roughly half that radius and a surface gravity closer to four times the gravitational pull at the surface of the Earth.
It would have a surface temperature of less than 70 degrees Kelvin (colder than liquid nitrogen), unless it is generating its own heat through some kind of nuclear process, such as radioactive decay, that is insufficiently powerful to cause it to become a star (radioactive decay plays a similar role in keeping the interior of the Earth hot). This is a temperature low enough that many "conventional" superconductors would start to display their superconducting properties.
The authors suspect, for reasons set forth in the final section of their paper, that Planet Nine "represents a primordial giant planet core that was ejected during the nebular epoch of the solar system's evolution." In other words, it is probably about four billion years old and has been with us from the very early days of the solar system.
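Two of the quoted numbers are easy to sanity check with textbook formulas. The sketch below is my own arithmetic (the trial distances are illustrative, not values from the paper): Kepler's third law for the orbital period, and the blackbody equilibrium temperature of a distant sunlit object.

import math

# (1) Kepler's third law in solar units: P[years] = a[AU]**1.5.
# (2) Equilibrium temperature of a sunlit body, T ~ 279 K / sqrt(a[AU]),
#     ignoring albedo and any internal heat.

for a in (400, 600, 700):                  # plausible average distances, in AU
    period_years = a ** 1.5                # Kepler's third law
    t_equilibrium = 279 / math.sqrt(a)     # kelvins
    print(f"a = {a} AU: P ~ {period_years:,.0f} yr, T_eq ~ {t_equilibrium:.0f} K")

# a = 600 AU gives P ~ 14,700 years (inside the quoted 10,000-20,000 year range)
# and T_eq ~ 11 K, comfortably below the ~70 K ceiling mentioned above.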
A Once In A Lifetime Discovery
To give you an idea of how far out Planet Nine must be, the planet Neptune takes about 165 Earth years to orbit the sun. Planet Nine would take roughly a hundred times as long to do so.
"This would be a real ninth planet," says [Mike] Brown, the Richard and Barbara Rosenberg Professor of Planetary Astronomy. "There have only been two true planets discovered since ancient times, and this would be a third. It's a pretty substantial chunk of our solar system that's still out there to be found, which is pretty exciting."
The last such discovery was about 170 years ago.
Pluto used to be called the Ninth Planet, but is so small that it has been demoted to dwarf planet status. Dwarf planet sized objects (and some objects that are smaller) that are very far from the sun, such as Sedna, are all called Kuiper Belt Objects.
Brown has discovered more dwarf planets than anyone who has ever lived and now may land the discovery of the only remaining true planet left in the solar system.
Is It Real? Why Should It Exist?
The linked Science Direct article (really a Cal Tech press release) is quite rich in explaining how the discovery came about and what evidence supports their conclusion, yet very readable. I came away from it convinced that they are almost certainly correct, despite the fact that the planet has not yet been directly observed.
The discussion in the initial section of the paper demonstrates how deep the recent literature is on explaining the various phenomena that Batygin and Brown finally appear to have cracked with their Planet Nine hypothesis.
The physics of the solar system can be calculated very precisely because it involves just a single force (gravity) operating with extremely little friction, according to classical mechanics, in a weak gravity regime where general relativity makes only slight corrections to the very simple Newtonian GMm/r^2 force rule.
The general relativistic perturbations can, in principle, be calculated with great precision. But the effect is very small, because Planet Nine, unlike Mercury (whose perihelion is tweaked slightly from the Newtonian expectation by general relativity due to its proximity to the strong gravitational field of the Sun), is very far from the Sun, and the direction of the general relativistic correction can be estimated much more easily than the full exact general relativity calculation with multiple bodies can be carried out. So, one can add to the Newtonian prediction a one directional error bar, which is quite small, for systematic differences between general relativity and Newtonian gravity. Indeed, the correction is probably dwarfed by other experimental uncertainties in the astronomical observations of the solar system objects used as inputs in the model of the solar system used to make the prediction.
So, basically, one is left making predictions using a computer model constructed using only high school physics and calculus and a wealth of available data points on all of the known masses in the solar system, and that model is still phenomenally accurate to the limits of the precision of state of the art telescope measurements of solar system objects. (The initial analysis of the multi-body gravitational dynamics that flow from Newtonian gravity is done not with this "dumb" N-body simulation method, but with a more advanced mathematical physics tool known as a Hamiltonian, an expression that adds up the potential and kinetic energies of all of the objects in a system, which must stay constant due to the conservation of energy; the Hamiltonian formulation has been known in its current form for solar system dynamics since at least 1950. But it all flows from applying high school physics and calculus to this complex multi-body situation.)
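For the curious, a toy version of the "dumb" N-body approach looks something like the following. This is my own minimal sketch with idealized circular starting conditions for the Sun and Neptune; it is emphatically not the Batygin and Brown model or their Hamiltonian analysis.

import math

# A toy Newtonian N-body integrator: nothing beyond F = G*M*m/r^2 and a simple
# leapfrog time-stepper, in two dimensions for brevity.

G = 4 * math.pi ** 2      # gravitational constant in AU^3 / (solar mass * yr^2)

class Body:
    def __init__(self, mass, pos, vel):
        self.mass = mass       # solar masses
        self.pos = list(pos)   # AU
        self.vel = list(vel)   # AU per year

def accelerations(bodies):
    """Newtonian acceleration on each body from every other body."""
    accs = [[0.0, 0.0] for _ in bodies]
    for i, bi in enumerate(bodies):
        for j, bj in enumerate(bodies):
            if i == j:
                continue
            dx = bj.pos[0] - bi.pos[0]
            dy = bj.pos[1] - bi.pos[1]
            r = math.hypot(dx, dy)
            a = G * bj.mass / r ** 2
            accs[i][0] += a * dx / r
            accs[i][1] += a * dy / r
    return accs

def leapfrog(bodies, dt, steps):
    """Advance the system with a kick-drift-kick leapfrog scheme."""
    acc = accelerations(bodies)
    for _ in range(steps):
        for b, a in zip(bodies, acc):
            b.vel[0] += 0.5 * dt * a[0]
            b.vel[1] += 0.5 * dt * a[1]
            b.pos[0] += dt * b.vel[0]
            b.pos[1] += dt * b.vel[1]
        acc = accelerations(bodies)
        for b, a in zip(bodies, acc):
            b.vel[0] += 0.5 * dt * a[0]
            b.vel[1] += 0.5 * dt * a[1]

# The Sun plus Neptune on an idealized circular orbit at 30.1 AU.
sun = Body(1.0, (0.0, 0.0), (0.0, 0.0))
v_circ = math.sqrt(G * 1.0 / 30.1)                # circular-orbit speed, AU/yr
neptune = Body(5.15e-5, (30.1, 0.0), (0.0, v_circ))

leapfrog([sun, neptune], dt=0.05, steps=int(165 / 0.05))   # ~one Neptune orbit
print(neptune.pos)   # ends up close to the starting point (30.1, 0), as expected

Real work of this kind piles on many more bodies, three dimensions, far better initial conditions and much more careful integrators, but the underlying force law is exactly the one taught in high school.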
Convincingly, a lot of seemingly unrelated properties of Kuiper Belt Objects, including some that were not the basis of the original formulation of the model, are all consistent with the Planet Nine hypothesis.
For example, it turns out that the trick to making these models work with a ninth planet that influences the dynamics of a lot of Kuiper Belt objects is for the orbital periods of the ninth planet and of the affected Kuiper Belt objects to fall quite exactly into relatively small integer ratios of each other, such as 2:1, 3:1, 5:3, 7:4, 9:4, 11:4, 13:4, 23:6, 27:17, 29:17, and 33:19, because such resonances keep the objects from ever colliding even though their orbits overlap (tiny friction effects, due to things like tidal effects on the shape of the objects, would probably eventually lead to collisions, but over time periods too long relative to the age of the solar system for this to have actually happened). Remarkably, this turns out to be the case.
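The resonance bookkeeping can be illustrated in miniature: given a ratio of orbital periods, find the nearest small-integer ratio. The period ratios below are made up for illustration; the actual Kuiper Belt Object ratios come from the paper.

from fractions import Fraction

def nearest_resonance(period_ratio, max_denominator=20):
    """Closest p:q ratio with q <= max_denominator, plus the relative error."""
    frac = Fraction(period_ratio).limit_denominator(max_denominator)
    error = abs(period_ratio - float(frac)) / period_ratio
    return frac, error

for ratio in (1.502, 2.252, 1.588):        # made-up example period ratios
    frac, error = nearest_resonance(ratio)
    print(f"period ratio {ratio} ~ {frac.numerator}:{frac.denominator} "
          f"(off by {error:.2%})")
# e.g. 1.502 ~ 3:2, 2.252 ~ 9:4, 1.588 ~ 27:17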
Similar objects have been observed around other stars, but not, to date, around our own.
Why Hasn't It Been Directly Observed?
It appears that a big barrier to observation is that its location has been pinned down only to a particular, very long orbital path around the Sun (which is also quite wide due to margins of error in the astronomy measurements and calculation uncertainties), rather than to a specific location. Also, its great distance from the Sun means that it is not illuminated strongly (and, like all planets, it does not make its own light), and due to its distance and its not super huge diameter (compared, for example, to large gas giants and stars), it should be only a tiny, almost point-like object in the night sky.
Depending on the albedo (i.e. reflectivity) of Planet Nine's surface, it may not even be visible via a telescope at all except when it obscures some other known object like a star by passing between it and the Earth. Gas giants and planets with atmospheres like Venus reflect 40-65% of the sunlight that hits them, and gas giants are also larger (because the density of gases and liquids is lower than that of a rocky core), but rocky objects without much in the way of atmospheres, like Mercury, the Moon and Mars, reflect only 10%-15% of the light that hits them and also have about half the radius of a gas giant of comparable mass. So, a rocky Planet Nine would reflect only about 6% or so of the light of a gas giant Planet Nine comparable to Uranus or Neptune, making it significantly harder to detect directly. Since a planet like this is basically unprecedented in the solar system, there is no really strong reason to favor a rocky Planet Nine hypothesis over a gas giant Planet Nine hypothesis, which would be very similar to Uranus and Neptune but significantly colder.
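The albedo and size effects compound with distance, because the reflected light reaching Earth falls off roughly as the fourth power of the distance for a far-away object. The comparison below is my own rough scaling exercise, with illustrative albedo and radius values, not figures from the paper.

# Sunlight received falls as 1/d^2 and the reflected light coming back to Earth
# falls roughly as another 1/d^2, so apparent brightness ~ albedo * R^2 / d^4.

def relative_brightness(albedo, radius, distance_au):
    """Reflected-light brightness in arbitrary units, ~ albedo * R^2 / d^4."""
    return albedo * radius ** 2 / distance_au ** 4

neptune = relative_brightness(albedo=0.4, radius=1.0, distance_au=30)
gas_giant_p9 = relative_brightness(albedo=0.4, radius=1.0, distance_au=600)  # Neptune-like
rocky_p9 = relative_brightness(albedo=0.1, radius=0.5, distance_au=600)      # smaller, darker

print(f"Neptune-like Planet Nine at 600 AU: {gas_giant_p9 / neptune:.1e} x Neptune")  # ~6.3e-6
print(f"Rocky Planet Nine at 600 AU:        {rocky_p9 / neptune:.1e} x Neptune")      # ~3.9e-7

Even a Neptune twin at 600 AU would appear hundreds of thousands of times fainter than Neptune does, and a small, dark, rocky version would be fainter still by the roughly 6% factor discussed above.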
Has Planet Nine Been Seen Already? Probably Not.
Maju notes at his blog a pre-print purporting to have possibly observed a new planet which may or may not be related. The Vlemmings, et al. paper that he notes states that "we find that, if it is gravitationally bound, Gna is currently located at 12−25 AU distance and has a size of ∼220−880 km. Alternatively it is a much larger, planet-sized, object, gravitationally unbound, and located within ∼4000 AU, or beyond (out to ∼0.3 pc) if it is strongly variable." The Liseau preprint to which he links also appears to be describing the same object using the same data.
Neptune is about 30 AU from the Sun, so according to the press release, Planet Nine's orbit should be about 600 AU from the Sun. This isn't a good fit for the object described by Vlemmings, although without a closer read of the preprints it is hard to see what assumptions were made in order to rule out the possibility definitively. If the Vlemmings paper is observing a new planet, at any rate, it is probably not observing Planet Nine.
The Paper
The paper open access paper and its abstract as as follows:
Post Script
Hat tip to Maju for altering me to the discovery in the comments on another post.
Michael Brown's website, or his blog, which is in the sidebar, make no mention of the discovery (perhaps due to a publication embargo, or perhaps because he simply no longer maintains either of them).
Meet Planet Nine
Researchers have found evidence of a giant planet tracing a bizarre, highly elongated orbit in the outer solar system. The object, which the researchers have nicknamed Planet Nine, has a mass about 10 times that of Earth and orbits about 20 times farther from the sun on average than does Neptune (which orbits the sun at an average distance of 2.8 billion miles). In fact, it would take this new planet between 10,000 and 20,000 years to make just one full orbit around the sun.
By comparison, Jupiter is about 317 times the mass of Earth, Saturn is about 95 times the mass of Earth, Neptune is 17 times the mass of Earth, and Uranus, the next heaviest planet, is about 15 times the mass of Earth. The Sun, in contrast, has a mass about 333,000 times that of the Earth. So, Planet Nine should have a mass on the same order of magnitude as Neptune and Uranus, but should be much less massive than Saturn and Jupiter.
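The quoted period and distance hang together under Kepler's third law, which follows from the Newtonian gravity discussed further below. A minimal sanity check, assuming (per the press release) an average distance about 20 times Neptune's roughly 30 AU:

```python
# Kepler's third law in solar units: P[years]^2 = a[AU]^3, so P = a**1.5.
def orbital_period_years(a_au):
    """Orbital period in years for an object orbiting the Sun with semi-major axis a_au (in AU)."""
    return a_au ** 1.5

neptune_au = 30.1
planet_nine_au = 20 * neptune_au  # ~600 AU, an assumption taken from the press release's "20 times farther"

print(round(orbital_period_years(neptune_au)))      # ~165 years, matching Neptune
print(round(orbital_period_years(planet_nine_au)))  # ~15,000 years, inside the quoted 10,000 to 20,000 year range
```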
The researchers, Konstantin Batygin and Mike Brown, discovered the planet's existence through mathematical modeling and computer simulations but have not yet observed the object directly.
If it is a moderate sized gas giant, one would expect it to have a mean radius of about 15,000-16,000 miles or so, and surface gravity that might be quite comparable to Earth's. But, if it is a rocky planet (like the core of Jupiter or Saturn beneath their clouds and oceans), it could have roughly half that radius and a surface gravity closer to four times the gravitational pull at the surface of the Earth.
It would have a surface temperature of less than 70 Kelvin (colder than liquid nitrogen), unless it is generating its own heat through some kind of nuclear process, such as radioactive decay, that is not powerful enough to cause it to become a star (something similar helps keep the Earth's interior molten). This is a temperature low enough that many "conventional" superconductors would start to display their superconducting properties.
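For what it's worth, the sub-70 Kelvin figure is easy to motivate with a back-of-the-envelope blackbody estimate. The sketch below assumes sunlight is the only heat source; the albedo and distances used are illustrative assumptions, not fitted values:

```python
# Equilibrium temperature of a sunlit body: T = 278.3 K * (1 - albedo)**0.25 / sqrt(distance in AU),
# where 278.3 K is the zero-albedo equilibrium temperature at 1 AU.
def equilibrium_temp_kelvin(distance_au, albedo=0.3):
    return 278.3 * (1.0 - albedo) ** 0.25 / distance_au ** 0.5

print(equilibrium_temp_kelvin(600))  # ~10 K at the ~600 AU average distance suggested by the press release
print(equilibrium_temp_kelvin(200))  # still only ~18 K at a much closer, perihelion-like distance
```

Either way, the result is far below 70 Kelvin unless the planet has a significant internal heat source.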
The authors suspect, for reasons set forth in the final section of their paper, that Planet Nine "represents a primordial giant planet core that was ejected during the nebular epoch of the solar system's evolution." In other words, it is probably about four billion years old and has been with us from the very early days of the solar system.
A Once In A Lifetime Discovery
To give you an idea of how far out Planet Nine must be, the planet Neptune takes about 165 Earth years to orbit the sun. Planet Nine would take roughly a hundred times as long to do so.
"This would be a real ninth planet," says [Mike] Brown, the Richard and Barbara Rosenberg Professor of Planetary Astronomy. "There have only been two true planets discovered since ancient times, and this would be a third. It's a pretty substantial chunk of our solar system that's still out there to be found, which is pretty exciting."The last such discovery was 150 years ago.
Pluto used to be called the Ninth Planet, but it is so small that it has been demoted to dwarf planet status. Dwarf planet sized objects (and some objects that are smaller) that are very far from the sun, such as Sedna, are all called Kuiper Belt Objects.
Brown has discovered more dwarf planets than anyone who has ever lived and now may land the discovery of the only true planet remaining to be found in the solar system.
Is It Real? Why Should It Exist?
The linked Science Direct article (really a Cal Tech press release) is quite rich in explaining how the discovery came about and what evidence supports their conclusion, yet very readable. I came away from it convinced that they are almost certainly correct despite not having directly observed it yet.
The discussion in the initial section of the paper demonstrates how deep the recent literature is on explaining the various phenomena that Batygin and Brown finally appear to have cracked with their Planet Nine hypothesis.
The physics of the solar system can be calculated very precisely because it involves just a single force (gravity) operating with extremely little friction, according to classical mechanics, in a weak gravity regime where general relativity makes only slight corrections to the very simple Newtonian GMm/r^2 force rule.
The general relativity perturbations can, in principle, be calculated with great precision. But the general relativity effect is very small for Planet Nine which, unlike Mercury (whose perihelion is tweaked slightly from the Newtonian expectation by general relativity because of its proximity to the Sun's strong gravitational field), spends its entire orbit in an extremely weak gravitational field. The direction and rough size of the general relativity correction can also be estimated far more easily than a full, exact general relativity calculation with multiple bodies. So, one can add to the Newtonian prediction a quite small, one directional error bar for the systematic difference between general relativity and Newtonian gravity. Indeed, the correction is probably dwarfed by other experimental uncertainties in the astronomical observations of solar system objects used as inputs in the model of the solar system used to make the prediction.
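To give a sense of scale, the leading general relativistic correction to a Keplerian orbit is the perihelion advance, which has a simple closed form. The sketch below compares Mercury with a hypothetical Planet Nine; the semi-major axis and eccentricity assumed for Planet Nine are illustrative only:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
C = 2.998e8            # speed of light, m/s
AU = 1.496e11          # astronomical unit, m

def gr_precession_arcsec_per_orbit(a_au, eccentricity):
    """General relativistic perihelion advance per orbit, in arcseconds."""
    a = a_au * AU
    dphi = 6 * math.pi * G * M_SUN / (C ** 2 * a * (1 - eccentricity ** 2))  # radians per orbit
    return math.degrees(dphi) * 3600

print(gr_precession_arcsec_per_orbit(0.387, 0.206))  # Mercury: ~0.10 arcsec per orbit (~43 arcsec per century)
print(gr_precession_arcsec_per_orbit(600, 0.6))      # hypothetical Planet Nine: ~1e-4 arcsec per ~15,000 year orbit
```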
So, basically, one is left making predictions using a computer model constructed from nothing more than high school physics and calculus plus a wealth of available data points on all of the known masses in the solar system, and that model is still phenomenally accurate to the limits of the precision of state of the art telescope measurements of solar system objects. (The initial analysis of the multi-body gravitational dynamics that flow from Newtonian gravity is actually done not with this "dumb" N-body simulation method, but with a more advanced mathematical physics tool known as a Hamiltonian: an equation that adds up the potential and kinetic energies of all of the objects in a system, which must stay constant due to the conservation of energy. This formulation has been used in essentially its current form for solar system dynamics since at least 1950. But it all flows from applying high school physics and calculus to this complex multi-body situation.)
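For readers who want to see just how little machinery is involved, here is a toy version of such a "dumb" N-body integration: Newton's inverse square force summed over every pair of bodies and stepped forward in time. It is only a sketch of the idea (real solar system models use many more bodies, far better integrators, and careful error handling), with units chosen so that Newton's constant is 4*pi^2:

```python
import math

G = 4 * math.pi ** 2   # Newton's constant in AU^3 / (solar mass * year^2)

def accelerations(positions, masses):
    """Newtonian acceleration on each body from every other body."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r = math.sqrt(sum(d * d for d in dx))
            for k in range(3):
                acc[i][k] += G * masses[j] * dx[k] / r ** 3
    return acc

def leapfrog_step(positions, velocities, masses, dt):
    """One kick-drift-kick leapfrog step, which conserves energy well over long runs."""
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        for k in range(3):
            velocities[i][k] += 0.5 * dt * acc[i][k]
            positions[i][k] += dt * velocities[i][k]
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        for k in range(3):
            velocities[i][k] += 0.5 * dt * acc[i][k]

# Sun plus a test planet on a circular 1 AU orbit (circular speed = 2*pi AU/year).
masses = [1.0, 3e-6]
positions = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
velocities = [[0.0, 0.0, 0.0], [0.0, 2 * math.pi, 0.0]]

for _ in range(365):
    leapfrog_step(positions, velocities, masses, 1.0 / 365.0)
print(positions[1])  # after one simulated year the planet ends up back near (1, 0, 0)
```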
Convincingly, a lot of seemingly unrelated properties of Kuiper Belt Objects, including some that were not part of the original formulation of the model, are all consistent with the Planet Nine hypothesis.
For example, it turns out that the trick to making these models work, with a ninth planet that influences the dynamics of a lot of Kuiper Belt objects whose orbits it crosses, is for the orbital period of the ninth planet and of each affected Kuiper Belt object to fall quite exactly into relatively small integer ratios of each other, such as 2:1, 3:1, 5:3, 7:4, 9:4, 11:4, 13:4, 23:6, 27:17, 29:17, and 33:19. These resonances keep the objects' positions synchronized so that they never actually meet even though their orbits overlap (tiny friction effects due to things like tidal distortion of the objects' shapes would probably lead to collisions eventually, but over time periods too long relative to the age of the solar system for this to have actually happened). Remarkably, the observed objects turn out to fall into just such ratios.
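A crude way to see what "falling into small integer ratios" means in practice is to take a candidate Planet Nine period and ask what small-integer fraction best matches its ratio to each object's period. The periods below are made-up illustrative numbers, not values from the paper:

```python
from fractions import Fraction

def nearest_small_ratio(period_object, period_planet, max_denominator=20):
    """Return the small-integer fraction closest to period_planet / period_object and its relative error."""
    true_ratio = period_planet / period_object
    approx = Fraction(true_ratio).limit_denominator(max_denominator)
    return approx, abs(float(approx) - true_ratio) / true_ratio

planet_nine_period = 15000.0  # years; an assumed, purely illustrative value
for kbo_period in (7500.0, 5000.0, 9000.0, 8571.0):
    ratio, err = nearest_small_ratio(kbo_period, planet_nine_period)
    print(kbo_period, ratio, round(err, 5))
# Prints ratios of 2, 3, 5/3 and 7/4 respectively, each with a tiny relative error.
```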
Similar objects have been observed around other stars, but not, to date, around our own.
Why Hasn't It Been Directly Observed?
It appears that a big barrier to observation is that its location has been pinned down only to a particular, very long orbital path around the Sun (a path that is also quite wide due to margins of error in the astronomical measurements and calculation uncertainties), rather than to a specific location along it. Also, its great distance from the Sun means that it is not strongly illuminated (and, like all planets, it does not make its own light), and due to its distance and its modest diameter (compared, for example, to large gas giants and stars) it should appear as only a tiny, almost point-like object in the night sky.
Depending on the albedo (i.e. reflectivity) of Planet Nine's surface, it may not even be visible via a telescope at all except when it obscures some other known object, like a star, by passing between it and the Earth. Gas giants and planets with atmospheres, like Venus, reflect 40-65% of the sunlight that hits them, and gas giants are also larger (because the density of gases and liquids is lower than that of a rocky core), but rocky objects without much in the way of atmospheres, like Mercury, the Moon and Mars, reflect only 10%-15% of the light that hits them and also have about half the radius of a gas giant of comparable mass. So, a rocky Planet Nine would reflect only about 6% or so of the light of a gas giant Planet Nine comparable to Uranus or Neptune (roughly one-quarter of the reflecting area times roughly one-quarter of the reflectivity), making it significantly harder to detect directly. Since a planet like this is basically unprecedented in the solar system, there is no really strong reason to favor a rocky Planet Nine hypothesis over a gas giant Planet Nine hypothesis which would be very similar to Uranus and Neptune but significantly colder.
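The roughly 6% figure is just the product of the albedo ratio and the ratio of reflecting cross-sections, using the rough albedos and the factor-of-two radius difference quoted above:

```python
# Reflected brightness scales with albedo times cross-sectional area (radius squared).
def relative_reflected_light(albedo, radius):
    return albedo * radius ** 2

gas_giant = relative_reflected_light(albedo=0.5, radius=1.0)   # Uranus/Neptune-like case
rocky = relative_reflected_light(albedo=0.12, radius=0.5)      # Mercury/Mars-like surface at half the radius

print(rocky / gas_giant)  # ~0.06, i.e. a rocky Planet Nine reflects only about 6% as much light
```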
Has Planet Nine Been Seen Already? Probably Not.
Maju notes at his blog a pre-print purporting to have possibly observed a new planet which may or may not be related. The Vlemmings, et al. paper that he notes states that "we find that, if it is gravitationally bound, Gna is currently located at 12−25 AU distance and has a size of ∼220−880 km. Alternatively it is a much larger, planet-sized, object, gravitationally unbound, and located within ∼4000 AU, or beyond (out to ∼0.3~pc) if it is strongly variable." The Liseau preprint to which he links also appears to be describing the same object using the same data.
Neptune is about 30 AU from the Sun, so according to the press release, Planet Nine's average distance from the Sun should be about 600 AU. This isn't a good fit for the object described by Vlemmings, although without a closer read of the preprints it is hard to see what assumptions were made in order to rule out the possibility definitively. If the Vlemmings paper is observing a new planet, at any rate, it is probably not observing Planet Nine.
The Paper
The paper is open access, and its abstract is as follows:
Recent analyses have shown that distant orbits within the scattered disk population of the Kuiper Belt exhibit an unexpected clustering in their respective arguments of perihelion. While several hypotheses have been put forward to explain this alignment, to date, a theoretical model that can successfully account for the observations remains elusive.
In this work we show that the orbits of distant Kuiper Belt objects (KBOs) cluster not only in argument of perihelion, but also in physical space. We demonstrate that the perihelion positions and orbital planes of the objects are tightly confined and that such a clustering has only a probability of 0.007% to be due to chance, thus requiring a dynamical origin.
We find that the observed orbital alignment can be maintained by a distant eccentric planet with mass greater than approximately 10 m⊕ whose orbit lies in approximately the same plane as those of the distant KBOs, but whose perihelion is 180° away from the perihelia of the minor bodies.
In addition to accounting for the observed orbital alignment, the existence of such a planet naturally explains the presence of high-perihelion Sedna-like objects, as well as the known collection of high semimajor axis objects with inclinations between 60° and 150° whose origin was previously unclear.
Continued analysis of both distant and highly inclined outer solar system objects provides the opportunity for testing our hypothesis as well as further constraining the orbital elements and mass of the distant planet.
Konstantin Batygin and Michael E. Brown, "Evidence for a Distant Giant Planet in the Solar System." Astronomical Journal (January 20, 2016); DOI: 10.3847/0004-6256/151/2/22.
Post Script
Hat tip to Maju for alerting me to the discovery in the comments on another post.
Neither Michael Brown's website nor his blog, which is in the sidebar, makes any mention of the discovery (perhaps due to a publication embargo, or perhaps because he simply no longer maintains either of them).
The Anglo-Saxon Conquest Had A Major Demic Impact
Two new studies, utilizing ancient DNA samples, show that the Anglo-Saxon conquest of England (which is historically attested starting ca. 400 CE) had a major impact on the gene pool of England.
One study estimates the Anglo-Saxon contribution at 38% in contemporary East England. Another finds that the Roman and pre-Roman era East English were genetically similar to the modern Welsh population of Britain. The Anglo-Saxon contribution was closer to the modern day Dutch and Danish people.
This is not an easy thing to distinguish, however, because the populations of all of these regions have broad general similarities to each other.
Razib also covers these publications but with an important additional observation (emphasis in original):
So genetics tells us that extreme positions of total replacement or (near) total continuity are both false. Rather, the genetic landscape of modern England is a synthesis, with structure contingent upon geography. But, it also shows us that substantial demographic change which produces a genetic synthesis can result in a total cultural shift. Though we may think of elements of culture as entirely modular, with human ability to mix and match components as one might see fit, the reality is that often cultural identities and markers are given and taken as package deals. But, it probably took the transplantation of a total German culture through a mass folk movement to give the Saxons enough insulation from the local British substrate to allow them to expand so aggressively and become genetically assimilative and culturally transformative.
Off topic, Razib also points out in another post an interesting effort to use folktales as a means of tracking Indo-European phylogenies, in much the same way that one would use lexical correspondences for the same purpose.
Monday, January 18, 2016
Two Notable Pre-Holocene Traces Of Hominins In Asia
* Maju notes the discovery of 118,000 year old flaked stone tools on the island of Sulawesi (formerly known as Celebes) in what is now called Indonesia (which would be advanced and unexpected for Homo erectus) on the Papuan side of the Wallace line. This is 58,000-78,000 years before any previous evidence of a hominin presence on the island.
The term "flaked stone tools" used in the Guardian newspaper of London story that he references can hide a multitude of sins and isn't specific enough to distinguish tools typically associated with one hominin species from another. Even tools from the Oldowan Industry, which was utilized by very early Homo Erectus and earlier Homo species could arguably be described as flaked.
Thus, while the tools were surely made by hominins (assuming for the time being that the characterization of the discovery as flaked tools and the dating by the investigators interviewed is accurate), the description in the story is insufficient to distinguish any of the hominin species that made it to Eurasia from each other based upon the technological distinctiveness alone, although a more precise description of these tools might make it possible to rule out some potential hominin species.
Wikipedia sums up the prehistory of the island prior to the report he references in the Guardian newspaper of London as follows:
I don't claim an equal level of certainty about the skeletal remains that should be associated with Denisovan DNA, but agree that an archaic hominin is a likely possibility.
This time period is largely a blank slate in Indonesia due to a lack of archaeological evidence.
While I agree that modern humans were in SW Asia by 120,000 years ago or so, I am not yet convinced by the thin available evidence that modern humans made it to Indonesia or East Asia that early on, although a few finds like this one, if the technology could be determined to be definitely modern human could convince me otherwise.
The need for maritime travel so very, very early on also casts doubt on a modern human explanation even if the East Asian traces of modern human-like evidence turns out to have been correct. There would be times earlier and later in history when low sea levels could have compensated for inferior seafaring by modern humans or other hominins 120,000 years ago or more ago, but the window from time from Out of Africa to this site is too small and lacks the sea levels to be explained this way.
* Dienekes' Anthropology Blog, meanwhile, notes evidence supporting a much larger range of Upper Paleolithic modern humans (in degrees of latitude and time frame) than was previously supported by archaeological evidence.
Specifically, this is about 1,080 miles further North and at least 10,000 years earlier than any previous trace of a hominin presence in northern Eurasia, tends to coincide with the appearance of modern humans in Europe (which came at least 10,000 years after the commonly accepted beginning on the Upper Paleolithic era), and may disturb the correspondence between the Siberian mass extinction event and a modern human presence in the area which had previously been assumed to tightly correspond to each other.
The abstract and paper are as follows:
He rightly notes that the case for this mammoth kill being made by modern humans rather than Neanderthals is circumstantial but strong.
The Neanderthal range, per Wikipedia.
Neanderthals didn't generally make it further North than ca. 50°N latitude, and was only found at even lower latitudes in Northern Asia.
The appearance of a Neanderthal mammoth kill more than 1584 miles north of Africa at the precise moment in time when modern humans start to appear in Europe after hundreds of thousands of years of Neanderthal presence in West Eurasia is just too much of a coincidence to be credible, while this is a quite natural, incremental possibility for a modern human mammoth kill.
The term "flaked stone tools" used in the Guardian newspaper of London story that he references can hide a multitude of sins and isn't specific enough to distinguish tools typically associated with one hominin species from another. Even tools from the Oldowan Industry, which was utilized by very early Homo Erectus and earlier Homo species could arguably be described as flaked.
Thus, while the tools were surely made by hominins (assuming for the time being that the characterization of the discovery as flaked tools and the dating by the investigators interviewed is accurate), the description in the story is insufficient to distinguish any of the hominin species that made it to Eurasia from each other based upon the technological distinctiveness alone, although a more precise description of these tools might make it possible to rule out some potential hominin species.
Wikipedia sums up the prehistory of the island, as it stood prior to the report he references in the Guardian newspaper of London, as follows:
Before October 2014, the settlement of South Sulawesi by modern humans had been dated to c. 30,000 BC on the basis of radiocarbon dates obtained from rock shelters in Maros. No earlier evidence of human occupation had at that point been found, but the island almost certainly formed part of the land bridge used for the settlement of Australia and New Guinea by at least 40,000 BCE. There is no evidence of Homo erectus having reached Sulawesi; crude stone tools first discovered in 1947 on the right bank of the Walennae River at Berru, Indonesia, which were thought to date to the Pleistocene on the basis of their association with vertebrate fossils, are now thought to date to perhaps 50,000 BC.
Following Peter Bellwood's model of a southward migration of Austronesian-speaking farmers, radiocarbon dates from caves in Maros suggest a date in the mid-second millennium BC for the arrival of a group from east Borneo speaking a Proto-South Sulawesi language (PSS). Initial settlement was probably around the mouth of the Sa'dan river, on the northwest coast of the peninsula, although the south coast has also been suggested. Subsequent migrations across the mountainous landscape resulted in the geographical isolation of PSS speakers and the evolution of their languages into the eight families of the South Sulawesi language group. If each group can be said to have a homeland, that of the Bugis – today the most numerous group – was around lakes Témpé and Sidénréng in the Walennaé depression. Here for some 2,000 years lived the linguistic group that would become the modern Bugis; the archaic name of this group (which is preserved in other local languages) was Ugiq. Despite the fact that today they are closely linked with the Makasar, the closest linguistic neighbours of the Bugis are the Toraja.
Pre-1200 Bugis society was most likely organised into chiefdoms. Some anthropologists have speculated these chiefdoms would have warred and, in times of peace, exchanged women with each other. Further they have speculated that personal security would have been negligible, and head-hunting an established cultural practice. The political economy would have been a mixture of hunting and gathering and swidden or shifting agriculture. Speculative planting of wet rice may have taken place along the margins of the lakes and rivers.
In Central Sulawesi there are over 400 granite megaliths, which various archaeological studies have dated to be from 3000 BC to AD 1300. They vary in size from a few centimetres to around 4.5 metres (15 ft). The original purpose of the megaliths is unknown. About 30 of the megaliths represent human forms. Other megaliths are in form of large pots (Kalamba) and stone plates (Tutu'na).
In October 2014 it was announced that cave paintings in Maros had been dated as being about 40,000 years old. Dr Maxime Aubert, of Griffith University in Queensland, Australia, said that the minimum age for the outline of a hand was 39,900 years old, which made it "the oldest hand stencil in the world" and added, "Next to it is a pig that has a minimum age of 35,400 years old, and this is one of the oldest figurative depictions in the world, if not the oldest one."
Maju ponders whether this could represent a very early modern human relic, and also considers that it might perhaps involve tools left behind by an archaic hominin, such as the species that left Denisovan DNA in Papuans and Aboriginal Australians, which he associates with H. heidelbergensis, or perhaps H. floresiensis, which may or may not be a dwarf form of H. heidelbergensis in his estimation.
I don't claim an equal level of certainty about the skeletal remains that should be associated with Denisovan DNA, but agree that an archaic hominin is a likely possibility.
This time period is largely a blank slate in Indonesia due to a lack of archaeological evidence.
While I agree that modern humans were in SW Asia by 120,000 years ago or so, I am not yet convinced by the thin available evidence that modern humans made it to Indonesia or East Asia that early on, although a few finds like this one, if the technology could be determined to be definitively modern human, could convince me otherwise.
The need for maritime travel so very, very early on also casts doubt on a modern human explanation, even if the East Asian traces of modern human-like evidence turn out to be correct. There were times, both earlier and later in prehistory, when low sea levels could have compensated for the inferior seafaring of modern humans or other hominins 120,000 or more years ago, but the window of time from Out of Africa to this site is too small, and sea levels were not low enough then, for the site to be explained this way.
* Dienekes' Anthropology Blog, meanwhile, notes evidence supporting a much larger range of Upper Paleolithic modern humans (in degrees of latitude and time frame) than was previously supported by archaeological evidence.
Specifically, this is about 1,080 miles further north and at least 10,000 years earlier than any previous trace of a hominin presence in northern Eurasia, it tends to coincide with the appearance of modern humans in Europe (which came at least 10,000 years after the commonly accepted beginning of the Upper Paleolithic era), and it may disturb the previously assumed tight correspondence between the Siberian mass extinction event and a modern human presence in the area.
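The mileage is just the latitude gap converted at roughly 69 miles per degree of latitude. A quick sketch using the latitudes quoted in the abstract below and the approximate Neanderthal range limit discussed further down:

```python
# A degree of latitude spans roughly 69 miles, so the jump from the previous northernmost
# find at about 57 degrees N to the new site near 72 degrees N is on the order of a thousand miles.
MILES_PER_DEGREE_LATITUDE = 69.05

def northward_distance_miles(lat_from_deg, lat_to_deg):
    return (lat_to_deg - lat_from_deg) * MILES_PER_DEGREE_LATITUDE

print(northward_distance_miles(57, 72))  # ~1,036 miles, roughly the post's "about 1,080 miles" figure
print(northward_distance_miles(50, 72))  # ~1,519 miles north of the usual Neanderthal range limit
```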
The abstract and paper are as follows:
Archaeological evidence for human dispersal through northern Eurasia before 40,000 years ago is rare. In west Siberia, the northernmost find of that age is located at 57°N. Elsewhere, the earliest presence of humans in the Arctic is commonly thought to be circa 35,000 to 30,000 years before the present. A mammoth kill site in the central Siberian Arctic, dated to 45,000 years before the present, expands the populated area to almost 72°N. The advancement of mammoth hunting probably allowed people to survive and spread widely across northernmost Arctic Siberia.
Vladimir V. Pitulko, Alexei N. Tikhonov et al., "Early human presence in the Arctic: Evidence from 45,000-year-old mammoth remains", 351/6270 Science 260-263 (January 15, 2016).
He rightly notes that the case for this mammoth kill being made by modern humans rather than Neanderthals is circumstantial but strong.
The Neanderthal range, per Wikipedia.
Neanderthals didn't generally make it further north than ca. 50°N latitude, and were found only at even lower latitudes in Northern Asia.
The appearance of a Neanderthal mammoth kill more than 1,500 miles north of the previously known limit of the Neanderthal range, at the precise moment in time when modern humans start to appear in Europe after hundreds of thousands of years of Neanderthal presence in West Eurasia, is just too much of a coincidence to be credible, while this is a quite natural, incremental possibility for a modern human mammoth kill.
Musings On The Practice Of Having Companions For Dead Leaders
Many ancient cultures of the Copper, Bronze and Iron Ages, from Egypt (during the First Dynasty, 3100 BCE to 2900 BCE) to China, at least, and perhaps even in Oceania and the New World (e.g. in the Mississippian culture), had a practice of burying deceased leaders with attendants (retainer sacrifice) and/or family members (particularly widows) who were either killed for the purpose or died after being buried alive, in a particularized form of human sacrifice.
(The Chinese Terracotta Army, ca. 210 BCE, appears to be a reform of this traditional practice that replaced actual afterlife attendants with symbolic representations of them. And, Egyptian burial practices underwent a similar reform after the First Dynasty, with statues of servants replacing actual servants.)
A new series by Kate Elliott called Court of Fives has a speculative fiction world with such a practice, fittingly, as she has substantial training and experience in anthropology and history.
The broad geographic and temporal range of the practice suggests that it is something that a wide variety of early organized religions shared. Broad outlines of early religious practice seem to be near universal, and as Robert Wright notes in his book, The Evolution of God, all available information suggests that these societies were pervasively religious ones.
Essentially, then, it seems that a very common path of religious development, whether or not it is actually truly universal, is for religion to imagine an afterlife, and further, to imagine that important figures who were served by many in life should be likewise served by many in the afterlife by having servants who die with them.
It isn't at all clear to what extent the people whose lives are sacrificed in the process, or those who carry out the rituals, sincerely believe that an afterlife with companions will follow, and to what extent the sacrificed are forced into it without any such belief while those who choose them are using the practice as little more than expedient political murder, regardless of what the larger society believes.
Likewise, the practical impact of this practice on societies and any evolutionary benefits conferred by it are unclear, although given that so many societies have practiced it, surely there must be some.
One possibility is that most ancient societies were close to a Malthusian limit, and that periodically culling people who consumed the community's resources eased that pressure somewhat, in an antiseptic manner, often at a time when expert leadership was being replaced by neophyte leadership whose errors might reduce the economic resources available to the community.
Another possibility is that what really mattered was who was chosen to join the leader in death, and that choosing people loyal to that leader removed from society people who might not share that loyalty towards the successor leader, thereby reducing dissent and assuring the stability of the new regime.
A third possibility is that often the old leader's trusted aides were chosen when he took office and by his death were themselves often feeble of mind and body, and that culling these aged aides opened the door to a new generation of younger, more capable leadership to come to the fore. This would mitigate the initial concern that removing top talent in the leader's court from society would be a catastrophic loss of social capital that would hurt the society (unless those who accompanied the leader into death were mere stand ins for the aides).
At any rate, it is an interesting and widespread practice that deserves more consideration.
With regard to Kate Elliott's book that inspired the thought, Court of Fives, whatever merits or lack thereof can be attributed to it as a literary work (the reviews range from extremely positive to extremely negative), it really does deserve to be admired for its world building. It draws on Mesopotamian (i.e. Sumerian) and Hurrian/Hittite sounding names, and has been compared by reviewers to ancient Egyptian, ancient Greek, ancient Roman and ancient Mayan cultures, with a smattering of phenomena that may be magic, may be technology, or may be indeterminate (like metal war machines called "Spiders" that sound suspiciously like Steampunk technology even though the general milieu has more in common with far more ancient cultures).
It is artful, of course, because while it draws on general concepts present in many of these societies, it is not just a remaking of any one of them, but rather a synthesis of historical cultures at the appropriate level of technology and civilization that doesn't precisely replicate any single one (although the Sumerian and Egyptian influences seem strongest to me).
Another thing that her world does a good job of exploring is what a society in which a superstrate conquering population rules over a substrate conquered population which speaks a different language looks like, from the wonderfully liminal position of a protagonist who is the child of a male soldier who belongs to the superstrate culture and a local woman (a very common situation in ancient history and the Holocene prehistoric past in almost every region on Earth).
Thursday, January 14, 2016
Goat Domestication Occurred About Where Expected
Part of the Fertile Crescent Neolithic package of domesticated plants and animals was the domesticated goat, which is derived from one or more species of wild goats.
A survey of available mtDNA data from goats reveals, consistent with prior archaeological evidence, that goats were probably domesticated around the time of the Fertile Crescent Neolithic revolution in the Zagros Mountains, at a location probably in the vicinity of what is now the southeastern portion of the modern country of Turkey (which obviously didn't exist at the time).
This is not far from the site of other important Fertile Crescent Neolithic domestication events (although some of the domesticated species probably had origins elsewhere in the Fertile Crescent, or even outside of and adjacent to its strict boundaries), which is pretty much where archaeologists, historical zoologists and anthropologists had long suspected that this happened.
The Testimony Of The Gut Bacteria
The stomach bacterium Helicobacter pylori is one of the most prevalent human pathogens. It has dispersed globally with its human host, resulting in a distinct phylogeographic pattern that can be used to reconstruct both recent and ancient human migrations.
The extant European population of H. pylori is known to be a hybrid between Asian and African bacteria, but there exist different hypotheses about when and where the hybridization took place, reflecting the complex demographic history of Europeans.
Here, we present a 5300-year-old H. pylori genome from a European Copper Age glacier mummy. The “Iceman” H. pylori is a nearly pure representative of the bacterial population of Asian origin that existed in Europe before hybridization, suggesting that the African population arrived in Europe within the past few thousand years.
Frank Maixner, Ben Krause-Kyora, Dmitrij Turaev, Alexander Herbig, et al., "The 5300-year-old Helicobacter pylori genome of the Iceman", 351-6269 Science 162-165 (January 8, 2016).
This is a counterintuitive result. There was a major population shift in Europe shortly after the Ice Man died ca. 3300 BCE. But, the migration that changed it all arrived from Asia, not Africa.
It would be great to see if the African version of H. pylori was present in steppe populations, or in Western Europeans, before the Copper Age/Early Bronze Age demographic shift in Europe that gave rise to its modern population genetics.
Evidence Of Class Stratification From Early Bell Beaker Period Burials
Bell Beaker blogger has an interesting post summarizing new evidence regarding the burial practices of Europeans in areas where the Bell Beaker culture appeared.
In a nutshell, immediately before the Beaker era, children were buried (as was everyone) in undistinguished collective graves.
In the early Beaker period, there was a sharp divide between non-Beaker child burials which omitted all children under six months of age, involved collective child graves, and had no grave goods, and contemporaneous child burials of Beaker people, in which all children were buried with adults who seemed to be their parents or relatives and grave goods were present. The Beaker children were better fed with lots of milk and meat, and people buried in Beaker graves had longer lives, on average, than those buried in non-Beaker graves.
Later in the Beaker period, all graves followed a Bell Beaker-like pattern, and individuals, children and adults alike, were each buried in separate graves.
This tends to show that there was a Beaker superstrate which was culturally distinct from the substrate people in the same time and place, and that life was better for the Beaker people, in no small part due to their ability to include meat and milk in their diets, which the autochthonous people lacked.
Significance Of 750 GeV Bump May Be Greatly Overstated
The 750 GeV Bump
The ATLAS experiment has reported an apparent resonance at 750 GeV with local significance of about 3.5 to 3.9 sigma, which the CMS experiment has also reported, but with a lower significance of about 2 sigma. (Previous coverage at this blog is found here and here and here and here).
A new paper discussed below, however, suggests that this may be grossly overestimated due to a subtle flaw in the way that ATLAS determined the margin of error in its estimation of the diphoton background to which the actual number of events seen experimentally were compared.
Previous Discussions Of The Statistical Significance Of The Bump
A great deal of the initial discussion about this find focused on the extent to which the look elsewhere effect reduced the significance of this discovery, and the extent to which the fact that a bump was found in the same place by both the ATLAS and CMS experiments tamed the look elsewhere effect.
Normally, the look elsewhere effect would greatly reduce the significance of an individual experiment's locally significant resonance. But, the likelihood that an experiment will have one highly significant bump in a great many trials simply due to random chance is much, much greater than the likelihood that two independent experiments will have a significant bump in the same place simply due to random chance.
But, the initial discussion largely took at face value the 3.9 sigma local significance of the ATLAS bump before adjustment for look elsewhere effects. This has now been seriously questioned.
Doubts About the Significance Of The ATLAS Bump. Is It Really Only 2 Sigma?
A new paper argues on quite technical physics driven grounds that ATLAS used the wrong methodology to calculate the local significance of the 750 GeV bump. Basically, they argue that the margin of error in the diphoton background was underestimated by almost a factor of two, because the absolute magnitude of the margin of error in lower energy events (which were oversampled in estimating the margin of error) is smaller than the margin of error in higher energy events (which were undersampled in estimating the margin of error).
Therefore, the new paper argues that the actual local significance of the 750 GeV bump at ATLAS was just 2 sigma. But, a 2 sigma bump, even when replicated in an independent experiment at the same significance, and even after only a slight discounting for the look elsewhere effect, is still quite likely to be a mere fluke.
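The intuition behind why an enlarged background uncertainty deflates the significance so much can be captured with a simple counting-experiment approximation. This is not the ATLAS or Davis et al. profile likelihood analysis, and every number below is made up purely to illustrate the effect:

```python
import math

# Rough significance of an excess s over an expected background b whose estimate
# carries an uncertainty sigma_b: Z = s / sqrt(b + sigma_b**2).
def rough_significance(excess, background, background_uncertainty):
    return excess / math.sqrt(background + background_uncertainty ** 2)

print(rough_significance(excess=12.0, background=6.0, background_uncertainty=1.0))  # roughly 4.5 sigma
print(rough_significance(excess=12.0, background=6.0, background_uncertainty=5.0))  # roughly 2.2 sigma
```

The same excess looks far less impressive once the background is acknowledged to be poorly constrained, which is the thrust of the Davis et al. argument.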
As the abstract of the new paper explains, theoretical error in calculating the margins of error in the background expectations is probably at fault in this case.
We investigate the robustness of the resonance like feature centred at around a 750 GeV invariant mass in the 13 TeV diphoton data, recently released by the ATLAS collaboration. We focus on the choice of empirical function used to model the continuum diphoton background in order to quantify the uncertainties in the analysis due to this choice. We extend the function chosen by the ATLAS collaboration to one with two components. By performing a profile likelihood analysis we find that the local significance of a resonance drops from 3.9σ using the ATLAS background function, and a freely-varying width, to only 2σ with our own function. We argue that the latter significance is more realistic, since the former was derived using a function which is fit almost entirely to the low-energy data, while underfitting in the region around the resonance.
Jonathan H. Davis, et al., "The Significance of the 750 GeV Fluctuation in the ATLAS Run 2 Diphoton Data" (January 13, 2016).
Background: Conventional Wisdom About Statistical Significance In High Energy Physics
Flukes with a local two sigma significance come and go all of the time in collider physics, and even bumps with a local three sigma significance tend to have a less than 50% chance of turning out to be real in the long run, although they start to warrant some serious attention. Four sigma results are quite promising, but are still not considered a sure thing. Only a result with five sigma significance after considering the look elsewhere effect is considered a "discovery" that is truly "real" in particle physics.
The gross discounting of the significance of experimental data that should, under the mathematical rules of probability, be extremely unlikely given a pure Standard Model background (two sigma should mean about a 95% chance that a result is not a fluke, and three sigma should mean about a 99.7% chance that it is not a fluke) flows from three main sources.
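Those percentages come straight from the two-sided Gaussian probability for a given number of standard deviations, which is a one-liner:

```python
import math

# Probability that a Gaussian fluctuation stays within n sigma of the expectation,
# i.e. the naive "chance this is not a fluke" for an n sigma result.
def within_n_sigma_probability(sigma):
    return math.erf(sigma / math.sqrt(2))

for n in (2, 3, 4, 5):
    print(n, round(within_n_sigma_probability(n), 7))
# 2 sigma -> ~0.954, 3 sigma -> ~0.9973, 5 sigma -> ~0.9999994
```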
It flows, first, from look elsewhere effects (the fact that unusual results will happen randomly at some point if you do a lot of trials, an effect that is hard to quantify because a rigorous definition of what constitutes a trial in complex experiments is notoriously elusive).
Second, it flows from underestimations of what are usually fairly modest theoretical calculation errors (which exist due to numerical approximations in the calculations and error in the measurement of fundamental constants used as inputs in them).
Third, it flows from systematic measurement errors. The statistical component of the margin of error, however, is almost always calculated with a precision that is empirically indistinguishable from perfect, because the mathematical formulas needed to calculate it are fairly straightforward and well understood, and it is purely a mathematical calculation that does not rely on the underlying physics.
Implications For Fundamental Physics
If the analysis by Davis and his colleagues is correct, then the 150+ papers devising beyond the Standard Model theories that can accommodate a new 750 GeV scalar or pseudoscalar boson are much ado about nothing, the fact that there have not been noticeable signals in four lepton or mixed lepton-photon channels (as would be expected in connection with a strong diphoton signal) requires no elaborate work around, and it is very likely that the 750 GeV bump seen by ATLAS and CMS will disappear as more data are collected in this year's portion of LHC Run-2 data collection.
Put another way, particle physics is at a great fork in the road right now. If the 750 GeV bump is real, then the LHC has just made the most profound discovery in physics since the development of the Standard Model in the early 1970s, which explained everything except neutrino oscillation observed over the next forty years and predicted the existence of the Higgs boson that was finally discovered in 2012.
A new 750 GeV boson would portend a whole new world of BSM physics at the electroweak scale, something that many people had hoped for in some form or another but that there had been no widespread consensus in the physics community predicting, and it would become imperative to immediately start building a more powerful collider that can measure the new phenomena that would be just around the corner.
In contrast, if the 750 GeV boson turns out to be just another fluke, the prospect that beyond the Standard Model physics does not exist all of the way up to the GUT scale or the Planck scale is greatly heightened, and it is likely that there is very little left for a new collider to show us about fundamental physics, because a collider capable of disclosing GUT or Planck scale physics is far beyond humanity's capacity to construct in the foreseeable future, given our current technological and economic constraints. A new collider would have to be roughly ten orders of magnitude more powerful than the current one to get a good look at that scale of new physics.
How Likely Is The 750 GeV Bump To Be Real?
Unfortunately, in light of the new paper by Davis and the lack of corroboration of the 750 GeV bump in other channels, the betting odds against the 750 GeV bump being real have gone, in my estimation, from perhaps 3-2 to somewhere in the range of 9-1 to 99-1.
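For concreteness, converting those betting odds into implied probabilities (my own back-of-the-envelope arithmetic, not a figure from the paper):

def probability_from_odds_against(against, in_favor):
    # Odds of "against : in_favor" against an outcome imply this probability.
    return in_favor / (against + in_favor)

for against, in_favor in ((3, 2), (9, 1), (99, 1)):
    p = probability_from_odds_against(against, in_favor)
    print(f"{against}-{in_favor} against -> about {p:.0%} chance the bump is real")
# Prints roughly 40%, 10%, and 1%, respectively.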
The previously emerging conventional wisdom that there is no phenomenology outside of neutrino physics that cannot be fully explained by the Standard Model, as described in the early 1970s, all the way up to the GUT scale (about 10^15 GeV, versus the 10^3 to 10^4 GeV scale probed at the LHC), is well on its way to becoming our reality once again. The discovery of a Higgs boson mass of about 125 GeV made this theoretically possible (at many of the other possible Higgs boson masses, the Standard Model would have produced nonsensical results at high energies, such as probabilities of events that do not add up to 100%), and the failure of experiments to discover convincing hints of any new physics suggests that this is not only possible, but probably true.
Monday, January 11, 2016
The Genetic Case Against An Out of India Theory Of Indo-European Origins
Davidski at Eurogenes makes a rather convincing case, using an ancient genome, that Y-DNA R1a, a major Indo-Aryan litmus test, originates in Europe rather than in South Asia.
Tuesday, January 5, 2016
Second Light Neutral Higgs Boson Excluded For Masses From 10 GeV to 100 GeV
Many beyond the Standard Model physics theories, including all supersymmetry (SUSY) theories, assume the existence of two Higgs doublets. In these theories there are two electrically neutral scalar Higgs bosons, one heavier and one lighter.* By convention, the symbol H is used for the heavier one and the symbol h for the lighter one.
One of these could be a Standard Model-like Higgs boson such as the one experimentally found to exist at a mass of 125 GeV. (Experiments have also increasingly ruled out, to a very high confidence level, the possibility that the 125 GeV boson is a scalar boson with a significant admixture of a pseudoscalar component.) But there is little theoretical guidance on the question of whether the other neutral scalar would be lighter or heavier.
A recent review of Fermilab data has ruled out the existence of a light second Higgs boson at masses from 10 GeV to 100 GeV, which is an important part of the parameter space for many such two Higgs doublet models, and strongly favors two Higgs doublet models in which the 125 GeV Higgs boson is the lighter, rather than the heavier, of the two neutral scalar Higgs bosons.
Previous bounds on neutral Higgs boson masses in two Higgs doublet models can be found here. This study significantly expands the exclusion range, in addition to making previous bounds more robust by replicating them with independent methods. Experimental limits on extra scalar and pseudoscalar Higgs bosons are essentially the same because their decay channels show up in the same kinds of experiments.
* The other three additional Higgs bosons predicted in such theories are a Higgs boson with positive electric charge (H+), a Higgs boson with negative electric charge (H-), and an electrically neutral pseudoscalar Higgs boson called A. Limits on charged Higgs boson masses can be found here, with new data from the LHC creating stricter exclusions, found, for example, in this paper.
More elaborate models with more than two Higgs doublets also include, for example, doubly charged Higgs bosons, which are excluded up to much higher masses than singly charged Higgs bosons (they have masses of not less than 322 GeV, if they exist at all).
There is no evidence for the discovery of any of these either, and there are significant recent exclusions on their existence, although it surprises me that the possibility that the 750 GeV bump is a heavy neutral scalar or pseudoscalar Higgs boson producing diphoton decays via a triangle diagram involving the two charged Higgs bosons hasn't received more attention yet. Perhaps other data of which I am not aware rule out this possibility, which is quite elegant compared to many of the other proposals out there these days to explain it.
Monday, January 4, 2016
A Quick Neuro-Linguistic Observation
The linguistic distinction between, on the one hand, non-sex based gender systems, or gender systems with more than the three genders of male, female, and neuter (a common feature, for example, of Niger-Congo, Papuan, and Australian Aboriginal languages), and, on the other, noun cases that are not called genders but are numerous in many other languages (for example, in Caucasian and Dravidian languages), is, in my opinion, a distinction without a difference. It obscures possible relationships between languages based merely on regional and sub-disciplinary conventions about how grammatical features are named.
It is also notable that both these non-sex based gender systems and these noun case systems tend to correspond to taxonomies of nouns that neuroscientists have found to map onto distinct modules for kinds of nouns in the human brain. Thus, while they may seem like arbitrary categories, they may be to a significant extent hardwired into our brains, which suggests that they were likely also part of many unattested Upper Paleolithic human languages.
The fact that these language features are most common in language families that have probably existed at very great time depths also supports this hypothesis.
Sunday, January 3, 2016
Coming Attractions in Physics
It is probably easier to sum up the physics discoveries that almost surely won't be made in 2016 than it is to sum up those that are likely. Still, let's consider what active research is underway and what it is likely to find, or not.
* Foremost in the minds of particle physicists is the 750 GeV bump in both ATLAS and CMS searches found at the end of last year. There is already a mountain of pre-prints hypothesizing beyond the Standard Model explanations for it.
Now that the dust has settled, there has been time to consider what it would take for this bump to be real, and no one has seen significant signs of it elsewhere, my personal estimate of the likelihood that it is real has fallen to about 35%.
For example, the extent to which the 125 GeV Higgs boson resembles the Standard Model Higgs boson in every respect places strict bounds on the properties that a spin-0 or spin-2 electrically neutral boson at 750 GeV can exhibit consistent with existing data (the linked pre-print is also an exceptionally lucid introduction to the physics of the Higgs boson generally, before it goes on to speculate about the 750 GeV bump). The singlet boson model discussed in the linked paper would suggest a boson with couplings similar to the Higgs boson but at least 80% weaker. (UPDATE January 4, 2016: Jester estimates that it is at least 90% weaker. "From the non-observation of anything interesting in run-1 one can conclude that there must be little Higgsiness in the 750 GeV particle, less than 10%.")
The lack of a corresponding clear signal in a four-lepton decay channel at 750 GeV is particularly notable. So is the lack of any channel with very large amounts of missing transverse energy at the LHC, which would be expected if there were decays to a dark sector or to stable particles such as the lightest supersymmetric particle.
These considerations are not insurmountable, but they force any beyond the Standard Model theory that can incorporate a 750 GeV particle to be particularly baroque. It is virtually impossible, consistent with the data to date, for the 750 GeV particle to be a new singlet particle, or for it to be an excitation of the Standard Model Higgs boson (which would have had to have four intermediate excitations before this one). This means that if the 750 GeV bump is real, there must be a whole suite of other new particles out there to be discovered along with it.
The analysis of the remaining data from 2015 and new data from 2016 should reveal if this bump grows in significance (as it should if it is real), or fades in significance (in which case it is almost surely a statistical fluke).
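The statistical reason to expect a real signal to grow is simple: in a counting experiment both the signal and the background scale with the integrated luminosity L, so the naive significance s/√b grows like √L. Here is a minimal sketch, with hypothetical rates chosen only to reproduce a roughly 3.9 sigma local excess in the ~3.2 fb^-1 of 2015 data; these are not measured numbers.

import math

def naive_significance(signal_rate, background_rate, lumi):
    # Expected s / sqrt(b) for a counting experiment at integrated luminosity lumi (in fb^-1).
    s = signal_rate * lumi
    b = background_rate * lumi
    return s / math.sqrt(b)

signal_rate, background_rate = 2.0, 0.84  # hypothetical events per fb^-1
for lumi in (3.2, 10.0, 30.0):
    z = naive_significance(signal_rate, background_rate, lumi)
    print(f"{lumi:5.1f} fb^-1: expected local significance ~ {z:.1f} sigma")
# A real signal should roughly double its significance with about four times
# the data; a fluke, by contrast, should regress toward zero.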
* Neutrino experiments will continue to refine the precision with which we know the parameters of neutrino oscillation and the neutrino masses, but none of them will have enough data to be definitive on any open question in 2016. We had a lot of answers at the end of 2015 and will have to wait more than another year before we have more definitive answers in neutrino physics.
* LIGO is rumored to have evidence of a direct detection of gravitational waves, but I am highly skeptical of the report due to past false alarms and due to the theoretical expectation that gravitational waves should be more subtle than what LIGO is capable of detecting.
* Astronomy is one area where there are myriad ongoing experiments and a great deal of simulation work that could produce breakthroughs on dark matter, dark energy, or inflation, with dark matter in particular being susceptible to insights from new astronomy data. Black hole physics generally could also see progress.