Monday, April 30, 2012
Higgs Boson Search Animated
You know that you want to see an animated, school house rock style video of the Higgs boson search from PhD Comics.
The Ewok Village of Life
A Gen X metaphor for an evolutionary concept mostly advanced by Gen X scientists.
Octopi Still Smart
Brains aren't just for vertebrates. One of the smartest creatures on Earth is the octopus. A recent learning experiment illustrates their intelligence: an octopus taken from the harbor that was initially unable to carry out a difficult task to reach food learned the task quickly and effectively after watching a trained octopus perform it.
LHC: The Killing Machine
The LHC is being advertised as a discovery tool but most of all it is a killing machine. The purpose of the LHC is to destroy... no, not the life on Earth... to destroy the profusion of theories that particle theorists have created during the last 40 years.
From here.
Jester, at his Resonaances blog, proclaims the death of most versions of Higgsless Technicolor theories and of a number of classes of theories that had proposed an "invisible Higgs boson." These are obvious conclusions, as the apparent discovery of a Higgs boson rules out all theories in which there is no Higgs boson (a primary motivation of Technicolor theories) and all theories in which Higgs bosons are undetectable.
The more interesting part of his post is that the Higgs boson searches have pretty definitively ruled out a fourth generation of Standard Model fermions.
The Standard Model contains 3 generations of quarks and leptons with identical quantum numbers and identical couplings except for the couplings to the Higgs field. A priori, there is no reason why there could not be yet another heavier copy, the so-called 4th generation. Yet there isn't.
In this case the death was also foretold by the long-standing tension with electroweak precision tests, but again the final blow came from the Higgs searches. The new quarks of the 4th generation would contribute to the gluon fusion amplitude of the Higgs production, leading to a dramatic increase of the Higgs production rate. At the same time, due to accidental cancellations, the amplitude of the Higgs decay into 2 photons would be largely suppressed compared to the Standard Model. Thus, the prediction of the 4th generation would be an increase of the Higgs event rate in the WW* channel, and a suppression in the LHC gamma-gamma and the Tevatron bb channels.... which is exactly opposite to the tendencies shown by the current Higgs data.
Fourth generation fermions that have properties different in some subtle respects from the ones we know and love in the Standard Model could still be possible, but they are far less naturally extrapolated from the table of fermions that we have detected.
Sunday, April 29, 2012
QCD Still Works
An excited state of a baryon (a three quark composite particle), in this case one made of a bottom quark, a strange quark and an up quark, which has long been predicted by quantum chromodynamics but never previously observed, has been detected at the Large Hadron Collider (LHC).
It has a mass of about 6 GeV compared to about 1 GeV for a proton. In a proton, the vast majority of the composite particle's mass is in the gluons that bind the up and down quarks of the proton together. In this excited Xi b baryon, about three quarters of the mass is in the quarks (mostly the bottom quark), and the mass attributable to the gluons in this excited state is about 50% more than in a proton. While it is heavy compared to a proton, it is much lighter than a W boson (roughly 80 GeV), a Z boson (roughly 90 GeV), a Higgs boson (roughly 125 GeV), or a top quark (about 173 GeV).
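A rough back of the envelope sketch of this mass budget, using approximate current quark masses; the input values, and the attribution of the remainder to the gluon field, are approximations that depend on the quark-mass convention used:
```python
# Rough mass budget for the excited Xi_b baryon versus the proton.
# Quark masses are approximate current-quark values in GeV; the split between
# quark mass and gluon field energy depends on the mass convention chosen.
xi_b_star_mass = 5.95   # GeV, excited Xi_b (b, s, u), approximate
proton_mass = 0.938     # GeV, proton (u, u, d)

quark_mass = {"b": 4.18, "s": 0.095, "u": 0.002, "d": 0.005}  # GeV, approximate

xi_b_quark_part = quark_mass["b"] + quark_mass["s"] + quark_mass["u"]
proton_quark_part = 2 * quark_mass["u"] + quark_mass["d"]

print(f"Quark share of Xi_b* mass:  {xi_b_quark_part / xi_b_star_mass:.0%}")   # ~72%, about three quarters
print(f"Quark share of proton mass: {proton_quark_part / proton_mass:.1%}")    # ~1%
print(f"Gluon field energy, Xi_b*:  {xi_b_star_mass - xi_b_quark_part:.2f} GeV")
print(f"Gluon field energy, proton: {proton_mass - proton_quark_part:.2f} GeV")
```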
But top quarks have never been observed to become part of composite particles called hadrons (i.e. mesons with two quarks, or baryons with three quarks) before decaying into other particles (99.8% of the time into bottom quarks, 0.17% of the time into strange quarks and 0.07% of the time into down quarks). So particles such as the Xi b that include bottom quarks are the heaviest composite particles, with the heaviest hadrons topping out at about a sixth of the mass of a Z boson, give or take.
It is really a bit odd. There are four fundamental particles that are heavier than any of the first order composite particles (obviously, protons and neutrons combine to form atomic nuclei that are much heavier than the hadrons that are their component parts, with the atomic nuclei of many isotopes of the heaviest periodic table elements, and many molecules, being heavier than a top quark).
Friday, April 27, 2012
Homo Erectus Caused Large Carnivore Extinction
John Hawks calls attention to a recent paper making a strong circumstantial case that the rise of Homo Erectus led to the extinction of all but six of the twenty-nine large carnivore species that were around before Homo Erectus arrived on the scene. The extant carnivore species were evaluated at 1.5 million and 3.5 million years ago. Homo Erectus arose between those dates and left Africa around 1.9 million years ago.
This is a mostly welcome finding, because while the megafauna extinctions associated with modern humans are well documented, one of the big open questions in pre-history and evolution is why they happened in some places but not others, and why they might not have happened for previous hominin species. This study suggests that they did happen for previous hominin species, and that places that experienced milder megafauna extinctions may have had that experience because Homo Erectus had already wiped out the most vulnerable species.
New European Ancient DNA Papers
Maju has unearthed two new ancient DNA papers (he also notes that a third paper, with ancient mtDNA from Biscay, is coming soon).
The most notable is an ancient mtDNA paper from the Basque Country which shows, for the first time in a formally published paper, solid evidence of mtDNA haplogroup H in Southwest Europe in two Magdalenian samples (i.e. from before the arrival of herding and farming). Previous studies had found pre-Neolithic samples to be dominated by mtDNA haplogroup U.
Also notable, from the perspective of his thoughts on the subject, is his willingness to consider the possibility that mtDNA haplogroup H spread not in the Upper Paleolithic era, but sometime in the early to middle Neolithic or Copper Age, a view that I generally agree with:
[T]he dominant lineage today among Basques and Western Europeans in general, haplogroup H, was already present in Magdalenian times in Cantabria, which is consistent with it being found in Epipaleolithic Portugal and Oranian Morocco and really puts to rest the hypothesis that promoted that it had arrived from West Asia with Neolithic colonists (something argued as a matter of fact but without any clear evidence). As of late, although I have yet to formalize it somehow, I've been thinking that a serious possibility is that mtDNA H might have spread partly with Dolmenic Megalithism. While there is some apparent Neolithic expansion of H in the Basque area (apparent in this paper and resulting in an almost modern genetic pool), in other parts of Europe this is less obvious, with H showing up but not reaching modern levels just with the Neolithic. In fact the loss of other Neolithic lineages like N1a strongly suggests that the populations of Central Europe were largely replaced after the Early Neolithic. Where from? Again from Southwest Europe, I suspect, but not in the context that was once believed of Magdalenian expansion, but maybe in the context of Megalithic expansion instead, with origin not in the Franco-Cantabrian region but in Portugal. This is just a draft hypothesis that I have mentioned before only in private discussions or at best in the comments section somewhere, and certainly it would need more research. My main argument is that before these results for Cantabria, the only pre-Neolithic location where the data clearly suggested very high levels of mtDNA H (near 75%) was Portugal (Chandler 2005), and Portugal played a major role in Neolithic and Chalcolithic Western Europe. As I said above, the ancient Portuguese apparently developed the Dolmenic Megalithic phenomenon (culture, religion...), and later they developed some of the earliest Western European civilizations, since the Third millennium BCE, especially Zambujal (also at Wikipedia), which were central elements of this long-lasting Megalithic culture and later of the Bell Beaker phenomenon as well. And we need very high levels of mtDNA H in a colonizer population to change the genetic landscape from c. 20% H into c. 45% H; we need something like a 70-80% H ancestral population unless replacement was total, which I think unlikely. What happened to that overwhelmingly H population of Portugal (assuming that the hypothesis is correct)? They were probably colonized in due time, possibly in the Bronze Age (the mysterious archaeological "horizons" that replace urban life in much of Southern Portugal in that period, with their strange crab-shaped elite tombs vaguely resembling Mycenaean circular walled ones) and/or in the period of Celtic invasions from inland Iberia later in the Iron Age (Hallstatt periphery).
The second paper looks at ancient autosomal DNA evidence from 5,000 years ago from Sweden which suggests that there were distinct Russian-like and Iberian-like populations present there at the time. The Russian-like population was a Pitted Ware hunter-gatherer population. The Neolithic farming population was the one that was similar to modern Iberians. This largely confirms prior research with other methods.
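The replacement arithmetic in the quoted passage can be illustrated with a minimal two-way mixing sketch; all of the frequencies below are illustrative round numbers rather than values taken from the papers:
```python
# How much population replacement (f) is needed to raise mtDNA haplogroup H
# from about 20% to about 45%, as a function of the H frequency in the
# incoming population, under simple two-way mixing?
def replacement_fraction(h_before, h_after, h_colonizers):
    """Colonizer share f of the resulting gene pool, assuming
    h_after = (1 - f) * h_before + f * h_colonizers."""
    return (h_after - h_before) / (h_colonizers - h_before)

for h_col in (0.45, 0.60, 0.75, 0.90):
    f = replacement_fraction(0.20, 0.45, h_col)
    print(f"Colonizers at {h_col:.0%} H -> required replacement {f:.0%}")

# A source population already at the modern ~45% H level requires total
# replacement, while a 70-80% H source needs only roughly 40-50% replacement,
# which is the quoted argument in a nutshell.
```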
Thursday, April 26, 2012
Genetic Diabetes Risk Greater For Africans Than Asians
The Global Distribution of Type Two Diabetes Risk Alleles (from Chen, et al. (2012))
Type Two Diabetes And Genotypes
Diabetes is a disease characterized by the inability of the body to use insulin to properly manage blood glucose (i.e. blood sugar) levels. It is primarily associated with pancreas function, although some new chemical pathways that play a part in the condition have been discovered. It can be managed with insulin shots and careful restriction of sugar in one's diet (or of foods that quickly metabolize to sugar). In worst case scenarios, when it is not managed well, it can lead to diabetic shock and comas, to poor circulation leading to loss of limb function or even of limbs themselves, and to kidney failure that must be treated with dialysis (i.e. having a machine mechanically treat your blood in the way that internal organs should, typically for many hours several times a week). Mismanaged diabetes is deadly.
There are two main kinds of diabetes. Type one diabetes is associated with poor insulin production, most often manifests in early childhood, and is essentially treatable but incurable, although scientists keep trying. Type two diabetes is the adult onset form that is strongly associated with obesity and other particular dietary imbalances. Diabetes is also sometimes a complication of pregnancy. A number of common genetic variants have been associated with diabetes risk.
A new open access paper at PLoS Genetics (Chen R, Corona E, Sikora M, Dudley JT, Morgan AA, et al. (2012) Type 2 Diabetes Risk Alleles Demonstrate Extreme Directional Differentiation among Human Populations, Compared to Other Diseases. PLoS Genet 8(4): e1002621. doi:10.1371/journal.pgen.1002621) shows that essentially every known risk allele for Type Two Diabetes is more common in Africans than in Asians. As the abstract explains:
Many disease-susceptible SNPs exhibit significant disparity in ancestral and derived allele frequencies across worldwide populations. While previous studies have examined population differentiation of alleles at specific SNPs, global ethnic patterns of ensembles of disease risk alleles across human diseases are unexamined.
To examine these patterns, we manually curated ethnic disease association data from 5,065 papers on human genetic studies representing 1,495 diseases, recording the precise risk alleles and their measured population frequencies and estimated effect sizes. We systematically compared the population frequencies of cross-ethnic risk alleles for each disease across 1,397 individuals from 11 HapMap populations, 1,064 individuals from 53 HGDP populations, and 49 individuals with whole-genome sequences from 10 populations.
Type 2 diabetes (T2D) demonstrated extreme directional differentiation of risk allele frequencies across human populations, compared with null distributions of European-frequency matched control genomic alleles and risk alleles for other diseases. Most T2D risk alleles share a consistent pattern of decreasing frequencies along human migration into East Asia. Furthermore, we show that these patterns contribute to disparities in predicted genetic risk across 1,397 HapMap individuals, T2D genetic risk being consistently higher for individuals in the African populations and lower in the Asian populations, irrespective of the ethnicity considered in the initial discovery of risk alleles.
We observed a similar pattern in the distribution of T2D Genetic Risk Scores, which are associated with an increased risk of developing diabetes in the Diabetes Prevention Program cohort, for the same individuals. This disparity may be attributable to the promotion of energy storage and usage appropriate to environments and inconsistent energy intake. Our results indicate that the differential frequencies of T2D risk alleles may contribute to the observed disparity in T2D incidence rates across ethnic populations.
A New Paradigm For Population Level Epidemiology
In most studies of disease risk, disease incidence is known, and a potential ancestry based risk, which could be due to either nature or nurture, is inferred from disease incidence data after controlling for known environmental risk factors (e.g. diet), but genotype is not directly measured. This paper is a direct study of known genotypes.
It is beyond reasonable dispute that Africans (and people of African descent) have greater frequencies of genes believed to present a risk for type two diabetes, while East Asians and people of East Asian descent have a lower frequency of these genes.
It is possible that there are unknown confounding genes, dietary practices, or cultural practices that prevent these genes from manifesting in a Type Two Diabetes phenotype, or that alternatively make low risk individuals more prone to developing Type Two Diabetes than the known Type Two Diabetes risk genes would suggest.
An analogous case involving an unknown confounding gene occurs for lactase persistence. A significant number of Africans who are lactase persistent (i.e. who continue to have no problem drinking cow's milk as adults), a trait which is closely associated with specific known genes in Europeans, lack the main European LP genes. This is because there are other African variants of the LP gene (some not specifically identified) that serve the same purpose. This scenario is plausible because African gene-disease associations are, in general, less well studied than European and Asian gene-disease associations.
An analogous case of a dietary practice confound is cardiovascular disease in Parisians. The residents of Paris, France have diets that are high in fats known to be important risk factors for cardiovascular disease. But they don't have an incidence of those diseases that reflects that level of fat consumption. The main reason for the disparity appears to be that Parisians also drink lots of red wine, and that some combination of the alcohol and the other contents of the wine counteracts the dangers of a high fat diet.
Wider Implications Related To Diabetes
But the null hypothesis would be that a significant share of the known racial variation in type two diabetes incidence flows from the rates at which type two diabetes risk alleles are present. And, since the impact of these risk alleles has been quantified in most cases, it should be possible to statistically distinguish the racial variation in type two diabetes incidence due to known genetic risks from the variation due to environmental factors and undiscovered hereditary factors: the reverse of the usual epidemiological paradigm.
A study conducted with the methodology used in this paper would likely find, for example, that a significant share of the type two diabetes incidence among African-Americans in the American South, which has previously been attributed with old paradigm epidemiological methods (conducted without genotype information) to poor choices in diet and exercise, may actually be due to differences in genotypes between African-Americans and whites.
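As a hedged sketch of what such a genotype-first analysis works with, here is the kind of weighted genetic risk score the paper's approach relies on. The SNP identifiers, odds ratios and genotypes below are hypothetical placeholders, not values from Chen et al. (2012):
```python
import math

# Per-individual genetic risk score: count the risk alleles carried at each
# T2D-associated SNP, weighted by the log of the reported odds ratio.
risk_snps = {              # SNP id: odds ratio per risk allele (illustrative)
    "rs_example_1": 1.35,
    "rs_example_2": 1.15,
    "rs_example_3": 1.10,
}

def genetic_risk_score(genotype):
    """genotype maps SNP id -> number of risk alleles carried (0, 1 or 2)."""
    return sum(genotype[snp] * math.log(odds_ratio)
               for snp, odds_ratio in risk_snps.items())

person_a = {"rs_example_1": 2, "rs_example_2": 1, "rs_example_3": 2}  # many risk alleles
person_b = {"rs_example_1": 0, "rs_example_2": 1, "rs_example_3": 0}  # few risk alleles

print(f"Genetic risk score, person A: {genetic_risk_score(person_a):.2f}")
print(f"Genetic risk score, person B: {genetic_risk_score(person_b):.2f}")

# Averaging such scores within each population and comparing them with observed
# incidence, after adjusting for measured lifestyle factors, is the kind of
# decomposition described above.
```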
Of course, while the proportion of the type two diabetes incidence rates attributable to heredity rather than diet and exercise may shift, the practical response is much the same.
In essence, a person with type two diabetes risk genes is someone who can't escape the disease consequences of suboptimal diet and exercise choices to the same extent as someone who lacks those genes. For example, a full blooded Korean American woman with a sedentary office worker lifestyle and a moderately high sugar, fat and calorie diet is probably quite a bit less likely to get Type Two Diabetes than an African American woman, with a typical level of African and non-African admixture, whose activity and diet are precisely the same. Proof, once again, that life is not in the least bit fair.
Put another way, on average, people of African descent need to pay more attention to lifestyle risk factors for type two diabetes and obesity in order to avoid ill health effects than the average person does.
Implications For Racial Disparity In Other Disease Risk Genotypes
On the other hand, with a few other minor exceptions (sickle cell anemia vulnerability, for example, which is more common in Africans because the same gene that causes the disease also provides resistance to malaria), a not very clearly stated implication of this study (given that it looked at genotype studies of 1,495 diseases and that Type Two Diabetes stood out as the most noteworthy of them) is that almost no other common diseases with known common SNP allele genetic risk factors have such a starkly ancestry linked pattern of genotypic risk. Type Two Diabetes appears to be something of a worst case scenario.
The default assumption when looking for geographic patterns in diseases which are not associated with diet and metabolism, and which are not infectious diseases of a geographically constricted range, should be that genotype is much more loosely linked to geography, race or deep ancestry.
Also, since the source populations of particular areas of Europe or Asia often have much smaller effective population sizes and have experienced serial founder effects, even very large populations in these areas are likely to have quite homogeneous patterns of disease vulnerability risk. So epidemiological models that focus on environmental factors may be more viable in these situations than they are for populations in or near Africa, and for multi-continental melting pot populations in the New World.
Implications For Modern Human Evolution
Which adaptations conferred the greatest fitness advantages?
If diet, metabolism and infectious disease risk are the primary distinctions in disease risk genotypes between Africans and non-Africans (and one can obviously add skin, eye and hair coloration and type to the list), this implies that infectious disease and food supply have been some of the most powerful factors in evolutionary selection on modern humans in the post-Out of Africa era, while many other hypothetically plausible targets of selection turn out to have been largely irrelevant to evolutionary fitness in this era.
When and where did these adaptations become common?
Of course, knowing that there is a distinction doesn't necessarily tell us when that distinction arose. Did it arise in the Middle Paleolithic, when modern humans left Africa; did it arise in the Upper Paleolithic, when modern humans settled in Europe, Australia, Papua New Guinea, Japan and the Americas for the first time; did it arise in the Neolithic, together with the shift from a hunting and gathering mode of food production to a farming and herding mode; or could it be an even more recent metal age development?
The maps, which show Papua New Guinea and the Americas to be largely congruent with the Asian pattern, suggest that these genotype differences arose at least as far back as the Upper Paleolithic, prior to the Neolithic revolution in Asia. Otherwise, American and Papuan populations, which were genetically isolated from Neolithic populations until a few hundred years ago, would look more like the African populations and less like the Asian ones.
Although it is harder to eyeball, there appear to be (and the charts in the body of the paper support the conclusion that there are) lower frequencies of type two diabetes risk genes in areas that have "Southern Route" population histories in Asia, and intermediate frequencies of type two diabetes risk genes in Europe and in areas with ties to Central Asia that at some point or another experienced a significant and lasting Indo-European presence.
This suggests a two step selection process: one common to all non-Africans, and possibly diluted at the African fringe by short distance migration, and a second one particular to the non-Africans who got far enough along the Southern route to make it past South Asia and were subject to heightened selective pressure.
Alternately, one could also imagine a scenario in which many of the common type two diabetes alleles emerged somewhere in Asia, where they now approach fixation. The presence of these alleles at all in other parts of the Old World could then be entirely due to back migration from Asia.
Indeed, the distribution of these genes disfavors an association with Denisovan admixture, which is much more narrowly distributed, although one could fit a Neanderthal source for these genes to their distribution quite easily so long as one assumed that they conferred more selective advantage in Asia than in Europe.
To the extent that these were disease resistance alleles that had a selective advantage for non-Africans other than Asians (even if the advantage was not as great for Europeans as for East Asians, for example), it is worth noting that the overall genetic contribution of the back migrating population could have been much smaller than its percentage contribution of the specific alleles at these specific locations in the genome, where the percentages would be amplified over tens of thousands of years by the fitness advantage that they conferred.
For example, even if 40% of Europeans have some type two diabetes risk reduction allele that has reached near fixation in China today, that could easily have emerged from a back migration that is a source for only 4% or less of the overall whole genome of Europeans.
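A back of the envelope selection calculation, using the standard deterministic approximation for genic selection and a purely illustrative selection coefficient, shows that this kind of amplification is plausible on the relevant time scales:
```python
import math

# Under genic selection with coefficient s, an allele's odds p / (1 - p) grow
# roughly like exp(s * generations). How long does it take to go from a 4%
# frequency (a small back-migration) to 40%?
def generations_needed(p_start, p_end, s):
    odds_growth = (p_end / (1 - p_end)) / (p_start / (1 - p_start))
    return math.log(odds_growth) / s

s = 0.001   # an assumed, weak 0.1% per-generation fitness advantage
gens = generations_needed(0.04, 0.40, s)
print(f"{gens:.0f} generations, roughly {gens * 25 / 1000:.0f} thousand years at 25 years per generation")

# Around 2,800 generations, on the order of 70,000 years; comfortably within
# the post-Out of Africa time frame discussed here.
```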
Such a back migration could easily have happened, for example, in a period that was some time after the Toba eruption (ca. 74,000 years ago), a plausible geoclimatological event that might have coincided with the arrival of modern humans in Southeast Asia from South Asia, but long before modern humans started to displace Neanderthals in Europe (ca. 42,000 years ago). Hence, the presence of these disease risk reducing alleles may date back to a period when the back migration was to a proto-European population in South Asia, Iran or the Middle East, rather than to modern humans who actually lived in Europe at the time.
What kind of events could have caused these genes to confer a fitness advantage?
Perhaps the genes approach fixation because at some point in ancient prehistory in Asia only holders of the genes could survive some genetic bottleneck in significant numbers, or because the fitness enhancement was greater given the foods available in Asia than in Europe and the Middle East.
For example, perhaps there are more plants with natural sweet sugars in Asia than Europe and the Middle East, so an ability to manage blood sugar levels became more important.
What would a bottleneck like that look like? I'd imagine one where the ability to be both obese and healthy in the long term would provide an advantage. Thus, you'd imagine perhaps a hunting and gathering population that experienced both "fat times" and "lean times," where the best adapted individuals got quite obese in the fat times without developing disadvantageous diabetes, and were then able to survive the lean times as a result of having great food reserves that their peers, who either got fat and died from diabetes or didn't get fat in the first place, lacked. These genes may have served a purpose analogous to the one served by a camel's ability to store water internally for long periods of time after gorging itself.
At a generalized level, type two diabetes risk reduction alleles might confer fitness primarily by providing a survival advantage in circumstances where food supplies are unreliable, something that may not have been nearly so much of a selective pressure for Africans, many of whom may have enjoyed (and still enjoy) a more stable tropical and subtropical climate with seasons that had less of an impact on food supplies for hunters and gatherers. Those inclined to put things in Biblical metaphor could describe Africa as an Eden and the Eurasians as exiles from Eden who faced greater struggles to meet their needs from nature, making it fitness enhancing to have these alleles.
And the Edenic nature of the food supply in Africa wouldn't have been coincidental. Modern humans, our hominin predecessors, and our primate predecessors evolved over millions of years to be ideally suited to African life. We may have been better able to find food all year round, year after year, in Africa, because only the ancestors who learned to hunt, gather and eat a wide enough range of foods to do so thrived and were rewarded by the evolutionary process. Had modern humans evolved in Southeast Asia instead of Africa, perhaps we would, like pandas, be able to digest bamboo. But since we evolved in Africa, rather than in Europe or Asia, even our omnivorous diet may have been able to secure nourishment from a smaller percentage of the biomass that could have been food there than it did in Africa. And the smaller the percentage of biomass one can digest, the more unstable one's food supply will be, and the more one needs to be able to store calories as fat in good times so one can survive later in hard times.
One More Piece of Evidence Added To The Clues That Reveal Pre-History
The selectively driven type two diabetes risk allele distributions (and despite my lumping together of these alleles in the discussion above, there are really at least six separate distributions to consider, which could have spread to world populations in two or more separate events) provide another tool, on top of uniparental genetic markers, autosomal genome data, and other genes that have known functions with geographically distinctive distributions (like lactase persistence genes and blood types), that allows us to make inferences (bound by the limitation that the inferences be mutually consistent) about human pre-history, which we are otherwise hard pressed to understand with the very sparse available archaeological evidence.
Monday, April 23, 2012
2H=Z+2W?
The mass of the W boson, the mass of the Z boson, and the Weinberg angle that relates the two are all known with increasing precision. The two masses are known to about one part in five thousand. The cosine of the Weinberg angle is equal to the ratio of their masses. The mass of what appears to be the Higgs boson is known far less accurately, to about one part in fifty or sixty. The Higgs field vacuum expectation value is known with a precision of at least one part per hundred, and perhaps even better.
As the error bars around these Standard Model constants, and others, become smaller, efforts to discern any apparent relationships between the constants become more interesting, because more precisely known relationships are less prone to be numerological flukes.
At one point, it looked like the mass of the Higgs boson would be half of the vacuum expectation value of the Higgs field, but this now looks a little low.
Another possibility that fits the current data, to within the current error bars for the Higgs boson mass, is that two times the Higgs boson mass equals the mass of the Z boson plus the mass of the W+ boson plus the mass of the W- boson (i.e. the three weak force bosons). Equivalently, since the mass of the photon is zero, 2H might equal the combined mass of the four electroweak bosons, or for that matter (since the strong force bosons also have zero rest mass), 2H might equal the combined mass of all of the other bosons.
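A quick numerical check of these candidate relationships, using approximate 2012-era values for the boson masses and the Higgs vacuum expectation value (the inputs are rounded, illustrative figures):
```python
# All masses in GeV, approximate values.
m_W = 80.4
m_Z = 91.2
higgs_vev = 246.2

print(f"(m_Z + 2 * m_W) / 2      = {(m_Z + 2 * m_W) / 2:.1f} GeV")  # ~126.0
print(f"Higgs v.e.v. / 2         = {higgs_vev / 2:.1f} GeV")        # ~123.1
print(f"cos(theta_W) = m_W / m_Z = {m_W / m_Z:.3f}")                # ~0.882, i.e. theta_W of roughly 28 degrees

# Both candidate formulas land within a couple of GeV of the ~125 GeV signal,
# which is comparable to the current experimental uncertainty on the Higgs
# mass; hence the caution below about numerological flukes.
```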
Of course, the Higgs boson mass uncertainty right now is sufficient that all manner of mutually inconsistent formulas can arrive at some mass within the current error bars for its value.
Also, since the Weinberg angle and, a fortiori, the masses of the W and/or Z bosons "run" with the energy scale of the interaction, much as the three coupling constants of the Standard Model forces do, small discrepancies between the measured figures for these constants and apparently simple relationships between them could flow from an incorrect implementation of the running of the Weinberg angle when determining a numerical value to insert into the formula.
For example, the most commonly used value for the Weinberg angle (about thirty degrees, a.k.a. pi/6 radians) is based on an energy scale equal to the Z boson mass. But perhaps, in the equation that is the title of this post, an energy scale equal to the sum of the Z boson mass and two times the W boson mass is more appropriate, and would tweak the relevant values a little bit.
Perhaps 2H=Z+2W and 2H=Higgs v.e.v. are both true if one makes the appropriate adjustments for the running of the Weinberg angle, which are not obvious. Perhaps the energy level at which 2H=Z+2W holds is half of the energy level at which 2H=Higgs v.e.v. holds.
If one wanted to really get numerological about it, one could even suggest that a pi/6 value for the Weinberg angle might be related in some deep way to the fact that it is a component of a formula with six bosons in it: two Higgs bosons and the four electroweak bosons. Probably not. But, who knows.
Even a tenfold increase in the accuracy with which we know what appears to be the Higgs boson mass greatly narrows the range of numerical combinations that can produce it, and if there is some sort of simple formula like 2H=Z+2W that relates the masses of the respective bosons, with or without adjustments for the running of the Weinberg angle, then this suggests that there may be some deeper and previously unknown relationship between the Standard Model constants.
This, in turn, could suggest theories such as the Higgs boson as a linear combination or otherwise composite state of the electro-weak bosons. And, any newly discovered relationship of these constants, whatever its character, would make the Standard Model more parsimonious, would reduce the number of degrees of freedom in the model, and would point the way towards a more fundamental theory from which the Standard Model is emergent.
If the Higgs boson mass runs with energy scale, one could also imagine this relationship tweaking the theoretical expectations about the high energy behavior of the Standard Model, for example, to cause the coupling constants of the three Standard Model forces to converge at a single point when extrapolated to a "GUT scale" energy, something that they don't quite do under existing Standard Model formulas for the running of the coupling constants. Or, this might resolve divergent terms in Standard Model calculations at high energy levels.
Thursday, April 19, 2012
Dark Matter Missing In Milky Way
In our Milky Way galaxy, in the vicinity of the solar system, the difference between the dark matter effects that are observed and conventional estimates of the amount of dark matter shows that all or most of the dark matter conventionally predicted to be present in the general region of the solar system by leading dark matter theories isn't there. The results are consistent with zero and are not consistent with values of more than about 20% of the predicted amount.
Theories predict that the average amount of dark matter in the Sun’s part of the galaxy should be in the range 0.4-1.0 kilograms of dark matter in a volume the size of the Earth. The new measurements find 0.00±0.07 kilograms of dark matter in a volume the size of the Earth. The new results also mean that attempts to detect dark matter on Earth by trying to spot the rare interactions between dark matter particles and “normal” matter are unlikely to be successful.
“Despite the new results, the Milky Way certainly rotates much faster than the visible matter alone can account for. So, if dark matter is not present where we expected it, a new solution for the missing mass problem must be found. Our results contradict the currently accepted models. The mystery of dark matter has just become even more mysterious. . . .” concludes Christian Moni Bidin.
Citation: C. Moni Bidin, G. Carraro, R. A. Méndez and R. Smith, “Kinematical and chemical vertical structure of the Galactic thick disk II. A lack of dark matter in the solar neighborhood”, The Astrophysical Journal (upcoming)
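To relate the quoted figures to the GeV per cubic centimeter units in which the local dark matter density is usually expressed, here is a rough conversion sketch (approximate constants, for illustration only):
```python
# Convert "kilograms of dark matter per Earth volume" into GeV per cubic centimeter.
kg_per_GeV = 1.78e-27       # mass of 1 GeV/c^2 in kilograms (approx.)
earth_volume_cm3 = 1.08e27  # volume of the Earth in cm^3 (approx.)

def density_gev_per_cm3(kg_per_earth_volume):
    return kg_per_earth_volume / kg_per_GeV / earth_volume_cm3

for kg in (0.4, 1.0, 0.07):
    print(f"{kg} kg per Earth volume ~ {density_gev_per_cm3(kg):.2f} GeV/cm^3")

# The predicted 0.4-1.0 kg per Earth volume works out to roughly 0.2-0.5 GeV/cm^3,
# in the same ballpark as the ~0.3 GeV/cm^3 local density usually assumed by
# direct detection experiments, while the measured 0.00 +/- 0.07 kg corresponds
# to 0.00 +/- 0.04 GeV/cm^3.
```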
Their abstract ends with this highly pertinent conclusion:
Only the presence of a highly prolate (flattening q >2) DM halo can be reconciled with the observations, but this is highly unlikely in CDM models. The results challenge the current understanding of the spatial distribution and nature of the Galactic DM. In particular, our results may indicate that any direct DM detection experiment is doomed to fail, if the local density of the target particles is negligible.
A "prolate DM halo" would be one that spews up and down from the black hole at the center of the galaxy while being thin in the radial direction of the disk (which would be favored in what I have dubbed a "black hole barf" scenario).
Now, this isn't by any means the first experimental data set that is inconsistent with Cold Dark Matter theory, although this experiment's methodology is particularly direct and compelling.
As I've noted previously at this blog in multiple posts, the accumulating evidence tends to favor a combination of significant undercounts of ordinary visible matter in previous studies, possible failures to consider general relativistic effects properly, unmodeled dynamics associated with black holes at galactic cores, and some quantity of "warm dark matter."
As I explained in a September 20, 2011 post:
"WDM refers to keV scale DM particles. This is not Hot DM (HDM). (HDM refers to eV scale DM particles, which are already ruled out). CDM refers to heavy DM particles (so called wimps of GeV scale or any scale larger than keV)." . . . For comparison sake, an electron has a mass of about 511 keV. So, a keV scale sterile neutrino would be not more than about 3% of the mass of an electron, and possibly close to 0.2% of the mass of an electron, but would have a mass on the order of 1,000 to 50,000 times that of an ordinary electron neutrino, or more.
The trouble is that any weak force or electromagnetic force interacting particle at this mass scale should have been discovered in particle accelerator experiments by now.
Weak force decays of heavier particles predict very specific branching fractions for neutrinos, and those data are a good fit to a universe which has only three generations of left handed neutrinos with masses of under 45 GeV. To a first order approximation, weak force decays are "democratic", i.e. every energetically permitted fundamental particle possibility in a W or Z boson decay that conserves certain other quantum quantities is equally likely to appear. Hence, an extra kind of electromagnetically neutral fundamental particle, regardless of its mass, would have a branching fraction equal to the branching fraction of ordinary neutrinos, and this is well within the range of the accuracy of current branching fraction determinations from experiment, which are consistent with the Standard Model.
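A sketch of the neutrino counting arithmetic behind this argument, using approximate LEP-era numbers (both inputs are approximations):
```python
# The Z boson's measured "invisible" decay width, divided by the Standard Model
# partial width for a single neutrino species, counts the light, weakly
# interacting neutral particles the Z can decay into.
invisible_width_mev = 499.0      # measured Z width to undetected particles (approx.)
width_per_neutrino_mev = 167.2   # Standard Model width for one neutrino species (approx.)

n_light_neutral_species = invisible_width_mev / width_per_neutrino_mev
print(f"Effective number of light neutral species: {n_light_neutral_species:.2f}")

# About 2.98, consistent with exactly three ordinary neutrinos; a fourth light
# neutral particle coupling to the Z with ordinary strength would push this
# toward four, which is why such a particle should already have shown up.
```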
All known fundamental particles that have a non-zero rest mass interact via the weak force, and no known fundamental particles that have a zero rest mass interact via the weak force. A "sterile neutrino" would be the sole exception to this rule if it existed.
Any electromagnetically interacting light fundamental particles other than the electron, muon, tau, the six quarks, and the W boson would have been even more obvious in the experiments. While electrically neutral neutrinos are inferred from "missing mass" in reactions, electrically charged particles are detected more or less directly.
Glueballs, well motivated but hypothetical composite particles made entirely of gluons, which would be electrically neutral, would be too heavy and probably wouldn't be sufficiently stable. All but a few of the hypothetically possible composite particles that include quarks have been observed, and from what we can infer about the strong force physics that binds them and the masses of these composite particles, there cannot be any composite particles with masses in the keV range or any mass range that is even remotely close. There are simply no indications in the data that there are missing strong force interacting particles either. And none of the hypothetical particles predicted by string theory or supersymmetry appear to be light enough.
On the other hand, all theories of gravity that predict a particle that mediates the gravitational force in the way that the known Standard Model bosons do for electromagnetism (photons), the strong force (gluons) and the weak force (W bosons and Z bosons) universally predict a zero mass, spin-2 graviton, which would not be a good fit for dark matter (in addition to the fact that such a graviton would hypothetically reproduce general relativity). And the tentatively discovered Higgs boson is both too ephemeral and too heavy to be a good fit.
This leaves as dark matter candidates either (1) one or more lighter than electron fundamental particles that don't interact with electromagnetism, the weak force or the strong force (a class of particles generically known as "sterile neutrinos"), or (2) some sort of composite particle phenomenon, presumably heavier than its component parts and hence made only out of neutrinos, such as a "neutrino condensate."
Any other fundamental particles that could have escaped detection in high energy physics experiments would have to be too heavy to fit the data - they would be, almost by definition, cold dark matter. Yet, it is becoming increasingly clear that cold dark matter theory is wrong.
In some sterile neutrino and composite dark matter scenarios, dark matter interacts with some previously unknown fundamental force that has no observable effect on ordinary matter but interacts with dark matter or dark matter components in some way.
Of course, the possibilities aren't really so constrained, because the estimates regarding the possible mass of a warm dark matter particle themselves assume certain properties of those particles, and if warm dark matter doesn't have those properties, the mass estimates could be wildly wrong.
In the current experimental and theoretical environment, my inclination is to expect that: (1) there is no undiscovered fundamental fermion that gives rise to the dark matter phenomena which we observe, (2) the amount of dark matter in the universe is greatly overestimated, and (3) if there are dark matter particles at all, they are probably some sort of keV scale composite particle comprised of neutrinos in some manner, either with a new fundamental force or arising from a new understanding of the fundamental forces. I also don't rule out entirely the possibility that some subtle modification to the equations of gravity could explain the observed dark matter effects, notwithstanding the seemingly damning evidence of the Bullet Cluster.
This latest report also suggests that direct detection of dark matter may be impossible anywhere remotely close to Earth because there is little, if any, dark matter anywhere in Earth's vicinity for many light years around.
Since dark matter phenomena increasingly look like one of the very few gaps in physics in which beyond the Standard Model physics with interesting, phenomenologically detectable effects is even possible, this is a pretty glum picture. No more fermions. Maybe one more class of undiscovered bosons. And that's it.
Monday, April 16, 2012
Japanese Conveys Information More Slowly Than Other Languages
The money table from a study of how quickly various languages can convey the same information is shown below.
"LANGUAGE      INFORMATION DENSITY (IDL)   SYLLABIC RATE (#syl/sec)   INFORMATION RATE
English       0.91 (± 0.04)               6.19 (± 0.16)              1.08 (± 0.08)
French        0.74 (± 0.04)               7.18 (± 0.12)              0.99 (± 0.09)
German        0.79 (± 0.03)               5.97 (± 0.19)              0.90 (± 0.07)
Italian       0.72 (± 0.04)               6.99 (± 0.23)              0.96 (± 0.10)
Japanese      0.49 (± 0.02)               7.84 (± 0.09)              0.74 (± 0.06)
Mandarin      0.94 (± 0.04)               5.18 (± 0.15)              0.94 (± 0.08)
Spanish       0.63 (± 0.02)               7.82 (± 0.16)              0.98 (± 0.07)
Vietnamese    1 (reference)               5.22 (± 0.08)              1 (reference)
TABLE 1. Cross-language comparison of information density, syllabic rate, and information rate (mean values and 95% confidence intervals). Vietnamese is used as the external reference."
Japanese speakers actually speak more quickly than speakers of any other language in the sample, but the increased speed (similar to Spanish) isn't enough to make up for the increased number of syllables necessary to convey the same information in Japanese relative to many other languages.
German conveyed information the next most slowly. There was no statistically significant difference, at even a one standard deviation level, between the other languages, and German was just 1.5 standard deviations from the mean, which is hardly exceptional in a study looking at eight languages. Japanese, in contrast, was four standard deviations below the norm in information transmission rate.
One important factor that distinguished Japanese from the other languages was its exceptionally low inventory of syllables (416, versus 1,191 in runner-up Mandarin before accounting for tones, and 7,931 in English, which has the most options). In languages other than Japanese, the rate at which people speak and the differences in the number of syllables needed to convey information, which flow largely from the number of syllables available in the language, are basically sufficient to compensate for each other. But the amount of information that can be conveyed in a given number of syllables is empirically linked to the number of syllables available in that language, and the lack of phonetic options in Japanese is so great that a fast pace of speech can't overcome it.
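A small sanity check computed directly from the table above illustrates the trade-off; reconstructing the information rate as the product of density and syllabic rate is only approximate, since the paper averages per speaker and per text:
```python
# (information density, syllables per second, reported information rate)
table = {
    "English":    (0.91, 6.19, 1.08),
    "French":     (0.74, 7.18, 0.99),
    "German":     (0.79, 5.97, 0.90),
    "Italian":    (0.72, 6.99, 0.96),
    "Japanese":   (0.49, 7.84, 0.74),
    "Mandarin":   (0.94, 5.18, 0.94),
    "Spanish":    (0.63, 7.82, 0.98),
    "Vietnamese": (1.00, 5.22, 1.00),
}

ref_density, ref_rate, _ = table["Vietnamese"]
for language, (density, syl_rate, info_rate) in table.items():
    predicted = density * syl_rate / (ref_density * ref_rate)  # approximate reconstruction
    extra_time = 1 / info_rate - 1                             # vs. the Vietnamese reference
    print(f"{language:10s} predicted ~{predicted:.2f}  reported {info_rate:.2f}"
          f"  extra speaking time: {extra_time:+.0%}")

# Japanese's very fast syllable rate only partly offsets its low per-syllable
# density, leaving it well behind the other seven languages in information rate.
```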
This is great news for anyone worried about being able to pronounce Japanese words, as there are fewer syllable sounds to learn. But it means that Japanese takes about 35% longer to convey the same information than its linguistic peers do in this simple test.
Perhaps this reality may also explain why the most characteristic of all Japanese poetry forms, the Haiku, is so terse. Efficiency in formulating thoughts matters more if it takes more time to convey them.
Now, it may be that there are other linguistic strategies that are used to address this that a simply loose translation of twenty identical short texts can't convey - for example, maybe conversational Japanese lets more go unspoken. And, some kinds of information in Japanese, for example, the Chinese style numbers use for mathematical calculations (Japanese has quite a few parallel number word systems for smaller numbers), are actually more phonetically compact in Japanese than in any of the Eurpoean languages. But, the outlier is quite interesting and deserves further examination.
"LANGUAGE INFORMATION DENSITY-- SYLLABIC RATE --INFORMATION RATE
IDL (#syl/sec)
English 0.91 (± 0.04) 6.19 (± 0.16) 1.08 (± 0.08)
French 0.74 (± 0.04) 7.18 (± 0.12) 0.99 (± 0.09)
German 0.79 (± 0.03) 5.97 (± 0.19) 0.90 (± 0.07)
Italian 0.72 (± 0.04) 6.99 (± 0.23) 0.96 (± 0.10)
Japanese 0.49 (± 0.02) 7.84 (± 0.09) 0.74 (± 0.06)
Mandarin 0.94 (± 0.04) 5.18 (± 0.15) 0.94 (± 0.08)
Spanish 0.63 (± 0.02) 7.82 (± 0.16) 0.98 (± 0.07)
Vietnamese 1 (reference) 5.22 (± 0.08) 1 (reference)
TABLE 1. Cross-language comparison of information density, syllabic rate, and information rate (mean values and 95% confidence intervals). Vietnamese is used as the external reference."
Japanese speakers actually speak more quickly than those of any other language, but the increased speed (similar to Spanish) isn't enough to make up for the increased number of syllables necessary to convey the same information in Japanese relative to many other languages.
German conveyed information the next most slowly. There was no statistically significant difference at even a one standard deviation level between the other languages, and German was just 1.5 standard deviations from the mean, which is hardly exceptional in a study looking at eight languages. Japanese, in contrast, was four standard deviations below the norm in information transmission rate.
One important factor that distinguished Japanese from the other languages was its exceptionally small inventory of syllables (416, versus 1,191 in runner-up Mandarin before accounting for tones, and 7,931 in English, which has the most options). In the languages other than Japanese, the rate at which people speak and the number of syllables needed to convey information (which flows largely from the number of syllables available in the language) are basically sufficient to compensate for each other. But the amount of information that can be conveyed in a given number of syllables is empirically linked to the number of syllables available in that language, and the lack of phonetic options in Japanese is so great that a fast pace of speech can't overcome it.
This is great news for anyone worried about being able to pronounce Japanese words, as there are fewer syllable sounds to learn. But it means that Japanese takes about 35% longer than its linguistic peers to convey the same information in this simple test.
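To make the arithmetic behind that 35% figure concrete, here is a minimal sketch (in Python, my choice rather than anything from the study) that recomputes the information rate column of Table 1 from the density and syllabic rate columns. The reconstruction is only approximate, since the published means were computed text by text rather than from the column averages.

# Rough reconstruction of Table 1's information rate column:
# information rate ~ (information density * syllables per second),
# normalized to Vietnamese, the external reference.
table = {
    "English":    (0.91, 6.19),
    "French":     (0.74, 7.18),
    "German":     (0.79, 5.97),
    "Italian":    (0.72, 6.99),
    "Japanese":   (0.49, 7.84),
    "Mandarin":   (0.94, 5.18),
    "Spanish":    (0.63, 7.82),
    "Vietnamese": (1.00, 5.22),  # external reference
}

viet_rate = table["Vietnamese"][0] * table["Vietnamese"][1]

for lang, (density, syl_per_sec) in table.items():
    info_rate = density * syl_per_sec / viet_rate
    print(f"{lang:10s} information rate ~ {info_rate:.2f}")

# Japanese comes out near 0.74, so conveying the same text takes
# roughly 1 / 0.74 = 1.35 times as long, i.e. about 35% longer.
print("extra time for Japanese ~", round(1 / 0.74 - 1, 2))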
Perhaps this reality also helps explain why the most characteristic of all Japanese poetry forms, the haiku, is so terse. Efficiency in formulating thoughts matters more when it takes more time to convey them.
Now, it may be that there are other linguistic strategies used to address this that a simple loose translation of twenty identical short texts can't capture - for example, maybe conversational Japanese lets more go unspoken. And some kinds of information, for example the Chinese-style numbers used for mathematical calculations (Japanese has quite a few parallel number word systems for smaller numbers), are actually more phonetically compact in Japanese than in any of the European languages. But the outlier is quite interesting and deserves further examination.
Tuesday, April 10, 2012
Autosomal Data Confirm Jomon Contribution
Uniparental DNA data from Japan have long suggested that the genetic contribution from the Jomon hunter-gatherer-fishing population on its islands, which was mostly isolated from other populations from 30,000 years ago until the arrival of the Yayoi rice farming population via Korea (ca. 2,300 years ago), was substantial.
A new open access study of Japanese autosomal genetics reaffirms that conclusion and suggests a Northeast Asian affinity of a Jomon component.
Our results showed that the genetic contributions of Jomon, the Paleolithic contingent in Japanese, are 54.3∼62.3% in Ryukyuans and 23.1∼39.5% in mainland Japanese, respectively.
Dienekes rightly notes, however, the possibility that some of the putatively Jomon genetic contributions may actually be a component of the Yayoi, who may have been an admixed population with Altaic (e.g. Turkish and Mongolian) and East Asian (e.g. Han Chinese) components.
Some background material in the paper is helpful in framing the issues, although it is less well fleshed out than a really compelling multidisciplinary case for a particular hypothesis would require, and a bit less open minded about the more complex possibilities that the evidence suggests than one might hope.
In the ‘continuity’ model, modern Japanese are considered as direct decedents of Jomon, the inhabitants of Japan in Paleolithic time, while their morphology showed secular changes. In the ‘admixture’ model, Jomon admixed with the Yayoi, more recent continental immigrants, which is consistent with the rapid changes in morphology and culture which took place synchronically about 2,500 years before present (BP). In the ‘replacement’ model, Paleolithic Jomon was completely replaced by the continental immigrants (Yayoi) after their arrival. To date, the ‘admixture’ model is seemingly better supported by the increasing lines of evidence of multiple genetic components found in modern Japanese.
The upper Paleolithic populations, i.e. Jomon, reached Japan 30,000 years ago from somewhere in Asia when the present Japanese Islands were connected to the continent. The separation of Japanese archipelago from the continent led to a long period (∼13,000 – 2,300 years B.P) of isolation and independent evolution of Jomon. The patterns of intraregional craniofacial diversity in Japan suggest little effect on the genetic structure of the Jomon from long-term gene flow stemming from an outside source during the isolation. The isolation was ended by large-scale influxes of immigrants, known as Yayoi, carrying rice farming technology and metal tools via the Korean Peninsula. The immigration began around 2,300 years B.P. and continued for the subsequent 1,000 years. Based on linguistic studies, it is suggested that the immigrants were likely from Northern China, but not a branch of proto-Korean.
Genetic studies on Y-chromosome and mitochondrial haplogroups disclosed more details about origins of modern Japanese. In Japanese, about 51.8% of paternal lineages belong to haplogroup O, and mostly the subgroups O3 and O2b, both of which were frequently observed in mainland populations of East Asia, such as Han Chinese and Korean. Another Y haplogroup, D2, making up 35% of the Japanese male lineages, could only be found in Japan. The haplogroups D1, D3, and D*, the closest relatives of D2, are scattered around very specific regions of Asia, such as the Andaman Islands, Indonesia, Southwest China, and Tibet. In addition, C1 is the other haplogroup unique to Japan. It was therefore speculated that haplogroups D2 and O may represent Jomon and Yayoi migrants, respectively. However, no mitochondrial haplotypes, except M7a, that shows significant difference in distribution between modern Japanese and mainlanders. Interestingly, a recent study of genome-wide SNPs showed that 7,003 Japanese individuals could be assigned to two differentiated clusters, Hondo and Ryukyu, further supporting the notion that modern Japanese may be descendent of the admixture of two different components.
The linguistic reference made really doesn't fairly represent the linguistic evidence about the origins of the Japanese language (probably the strongest contender is that Japanese is a descendant of a language of the Korean peninsula that died out in Korea when the peninsula consolidated politically into a kingdom that spoke a different Korean language).
Japanese and Korean linguistic affinities, to the extent that they are not true linguistic isolates, are more Altaic than Sino-Tibetan (i.e. Chinese), even though Japanese has heavy lexical borrowings from Chinese and even though Japan shows a substantial East Asian genetic contribution. This is suggestive of a Yayoi population with an Altaic pastoralist horse riding warrior superstrate and a large East Asian rice farmer foot soldier substrate.
The paper also fails to address just how divergent D2 is from the other Y-DNA D haplogroups or how "bushy" it is, suggesting a long period of independent divergence from the other Y-DNA D haplogroups, which overlap each other more completely. It is also a bit odd that mtDNA evidence, which is available for Japan, isn't discussed at all. Likewise, likely overall scenarios for the human settlement of Asia drawn from uniparental data and paleoclimate studies are not discussed.
It is disappointing to see a paper on Jomon genetic contributions that doesn't reference what little is known about Ainu genetics or even acknowledge its relevance. A fuller exploration of the settlement history of the Ryukyuans, which falls mostly within or immediately before the historically documented era, and of the historically documented settlement of Japan's main island, Hondo, on which Yayoi settlement was mostly restricted to the southern part of the island until about 1000 C.E., would also have been welcome (likewise, the absence of a sample from modern Hokkaidō limits the usefulness of the analysis).
The failure to consider those points matters because there are population models that deserve attention that don't receive analysis in this paper.
The Ainu are the closest living descendants of the Jomon and persisted longest in Northern Japan, where they were also most susceptible to subsequent influence from and admixture with Paleosiberians. Ainu genetic studies, limited as they are, suggest an original Ainu population that admixed gradually with Northeast Asian trading partners of Paleosiberian stock.
It also isn't obvious, however, that the Jomon were monolithic. This is a subject of considerable debate in the world of linguistics. Southern Japan and Northern Japan may have had distinct indigenous populations prior to the arrival of the Yayoi.
Monday, April 9, 2012
Thank you recommenders.
Google has a "plus one" button at the bottom of posts, which I note that readers have clicked in a few cases for posts at this blog. I'd just like to let whoever has clicked these know that your recommendations are appreciated. Thank you. I hope to continue to deliver content that readers think has value.
Sunday, April 8, 2012
Rapid Sea Level Rise Quantified And Dated
We know that following the last glacial maximum, there was a rapid rise in sea level. But when did it happen, and how fast did it take place? A new study published in the journal Nature (abstract reproduced here) tightens these estimates:
[T]he last deglaciation was punctuated by a dramatic period of sea-level rise, of about 20 metres, in less than 500 years. . . . Here we show that MWP-1A [meltwater pulse 1A] started no earlier than 14,650 years ago and ended before 14,310 years ago, making it coeval with the Bølling warming. Our results, based on corals drilled offshore from Tahiti . . . reveal that the increase in sea level at Tahiti was between 12 and 22 metres, with a most probable value between 14 and 18 metres, establishing a significant meltwater contribution from the Southern Hemisphere.
Thus, there was about 16 meters of sea level surge in the South Pacific Ocean in less than 340 years. Pinning down this surge in sea levels allows for more accurate timing of key events in modern human dispersals outside Africa, such as a firmer estimate of when the Bering land bridge between Asia and North America closed, and when Indonesia was broken into an island chain after being, up to the Wallace line, a mere peninsular extension of the mainland.
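For a rough sense of scale, here is a back-of-the-envelope calculation (mine, not from the paper) of the implied average rate of sea level rise, using the mid-range figures quoted in the abstract above.

# Meltwater pulse 1A, using the mid-range figures quoted above.
rise_m = 16.0        # most probable rise, metres (between 14 and 18)
start_bp = 14_650    # began no earlier than this (years before present)
end_bp = 14_310      # ended before this

duration_yr = start_bp - end_bp            # at most about 340 years
rate_cm_per_yr = rise_m * 100 / duration_yr

print(f"duration: <= {duration_yr} years")
print(f"average rate: >= {rate_cm_per_yr:.1f} cm per year")
# Roughly 4-5 cm per year on average, i.e. on the order of a metre per
# generation, fast enough to be noticed within a single human lifetime.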
Since we know just how rapidly this happened, we can also get some sense of how traumatic or non-traumatic those events would have seemed to people experiencing them (Tahiti, of course, where the data were collected, would have been uninhabited by any kind of hominin at the time).
We know that the sea level was lowest at around the Last Glacial Maximum (ca. 20,000 years ago), but it turns out that most of the post-LGM rise in sea levels was deferred for more than 5,000 years.
The sea level surge date should roughly correspond to Epipaleolithic events, i.e. those prior to the Neolithic Revolution but after the glaciers of the Last Glacial Maximum had retreated, such as the repopulation of Europe from Southern European refugia.
These data also allow for better pinpointing of when in time (we already knew where geographically and at what ocean depth to look) we should see mass abandonment of coastal settlements from the LGM era of low sea levels, whose remains it should be possible to locate with marine archaeology. Since these ruins would be pre-Neolithic, however, their traces could be quite subtle after tens of thousands of years beneath the waves.
Friday, April 6, 2012
Early Higgs data fits Standard Model except for electro-weak boson ratios
It is starting to become possible to compare early, probably Higgs boson, data to theoretical predictions (which are quite specific in the Standard Model and have margins of error of no more than 5%-10% at given Higgs boson masses for many branching ratios). Mostly the data fit the Standard Model, but the production of electro-weak bosons isn't quite as predicted. There are more diphoton decays (involving the electromagnetic force boson, the photon) and fewer weak force boson decays (involving the W and Z bosons) than expected.
[O]n average, the production rate is consistent with that predicted by the Standard Model (the green line). Furthermore, one can read off that the the null hypothesis (the red line) is disfavored at the more than 4 sigma level. Thus, black-market combinations confirm that Higgs is practically discovered . . . while the Standard Model is a good fit to the combined data (chi-squared of 16 per 15 degrees of freedom), there are a few glitches here and there. Namely, the inclusive rate in the diphoton channel is somewhat enhanced (in both ATLAS and CMS) while that in the WW* and ZZ* channel is somewhat suppressed (especially ZZ* in CMS and WW* in ATLAS). Moreover, the exclusive final states studied in the diphoton channel (with 2 forward jets in CMS, and with a large diphoton transverse momentum in ATLAS) show even more dramatic enhancements, by more than a factor of 3. It may well be a fluke that will go away with more data, or maybe the simulations underestimate the Higgs event rates in these channels. Or, something interesting is going on, for example the way the Higgs boson is produced in proton collisions not exactly the way predicted by the Standard Model....
From Resonaances. (Update 4/8/12: A scholarly version of essentially the same analysis, from Moriond 2012 data, is found here.)
One way to better fit the data would be to assume that fermions and bosons have different coupling constants, with the coupling of fermions to the Higgs boson either being less strong than for bosons, or of opposite sign (although a zero coupling with fermions does not fit the data).
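To illustrate why a flipped sign of the fermion coupling would show up most dramatically in the diphoton channel, here is a small numerical sketch. The loop amplitudes used (W boson loop about -8.3, top quark loop about +1.8 for a 125 GeV Higgs) are commonly quoted approximate values that I am supplying for illustration; they are not figures taken from the linked analysis, and the function below is my own toy parameterization.

# h -> gamma gamma proceeds through loops; for a 125 GeV Higgs the two
# dominant contributions are (approximately) the W loop and the top loop.
A_W = -8.3    # approximate W boson loop amplitude (Standard Model)
A_TOP = +1.8  # approximate top quark loop amplitude (Standard Model)

def diphoton_rate(c_boson=1.0, c_fermion=1.0):
    """Diphoton decay rate relative to the Standard Model, for rescaled
    Higgs couplings to bosons (c_boson) and to fermions (c_fermion).
    Decay width only; the production rate would change as well."""
    amplitude = c_boson * A_W + c_fermion * A_TOP
    sm_amplitude = A_W + A_TOP
    return (amplitude / sm_amplitude) ** 2

print(diphoton_rate(1.0, 1.0))    # 1.0 by construction (Standard Model)
print(diphoton_rate(1.0, -1.0))   # flipped fermion coupling: ~2.4x enhancement
print(diphoton_rate(1.0, 0.0))    # no fermion coupling: ~1.6x enhancement

Flipping the sign makes the top and W loops add rather than partially cancel, which is why the diphoton channel is the most sensitive place to look for this kind of deviation.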
Of course, these are early days. There are many ways one could model the differences between the data and the Standard Model prediction, and deviations from a Standard Model prediction could be due either to errors in the way the calculation is done or to experimental error (all of the results but the diphoton production rates are within the experimental error bars, and since diphotons are the most diagnostic of the decays used to identify the existence of the Higgs boson, the sample may be biased, i.e. it may be that but for a diphoton fluke we wouldn't have discovered convincing signs of a Higgs boson at all until many years later).
Still, a slight tweak in the characteristics of the Standard Model Higgs boson, particularly if it were something as elegant as an opposite sign coupling constant for fermions and bosons, would be some interesting beyond the Standard Model physics (of a kind which, interestingly, had few proponents in advance of the experimental indications for it in 2011-2012).
UPDATE 4/8/12: Why are diphoton decays such a big deal diagnostically for the Higgs boson?
Mostly because they are a strong indicator of a spin zero particle, which is what a Standard Model Higgs boson is (unlike every other form of fundamental particle yet discovered, although there are pseudo-scalar composite mesons with an overall spin of zero built from fundamental particle components that are not spin zero).
Also, perhaps coincidentally, the 125 GeV Higgs boson mass happens to be close to the mass at which the predicted branching ratio (i.e. percentage production rate in decays) for diphoton decays is near maximal over the range of possible Higgs boson masses.
A decaying fundamental fermion (spin 1/2) or spin-1 boson doesn't produce diphoton decays, because spin is preserved in decay processes, and a diphoton decay must have either a spin of zero or a spin of two, neither of which are consistent with fermion or spin-1 boson decays. These particles could decay to produce products including two photons, but then there would be something else produced as well. You can have a decay in which the experiment misses the extra particle, making two photons and something else look like merely two photons, but the more diphoton decays you see, and the better you can establish the rate at which the experiment will miss particles from other decays, the more strongly you can rule out that possibility and establish that you must be seeing a Higgs boson (or other spin zero or spin two particle) decay.
Once one establishes that the particle you have has (1) a neutral electric charge (something true of a Standard Model Higgs boson, but not of many beyond the Standard Model Higgs bosons, W bosons, hypothetical W' bosons, higher order quarks, or charged leptons like the electron, muon and tau), (2) a spin of zero or two (which rules out electrically neutral neutrinos, photons, Z bosons and gluons), (3) interactions via the weak force (which implicitly must be the case for anything that decays into something else), and (4) a mass of around 125 GeV (which rules out gravitons, which would have no mass, and a host of other hypothetical particles which would have to have zero, much smaller, or much greater masses), all of which have been accomplished, then all of the properties necessary to distinguish a Standard Model Higgs boson from all other Standard Model particles and a vast variety of beyond the Standard Model particles are in place.
Coupling constants are among the only moving parts that could even theoretically be different between the conventional Standard Model Higgs boson (which couples equally to fermions and bosons), and something that presents the way that the experiments seem to indicate. And, a difference in Higgs boson coupling constants from the default minimal Standard Model Higgs doesn't necessarily prevent it from solving the theoretical gaps that it was hypothesized to fill in the Standard Model.
This Standard Model extension, if present, would be more on a par with the discovery that neutrinos have mass than with the discovery of a new kind of fundamental particle or fundamental force (although describing the interactions mediated by the Higgs boson, basically inertia, as a fundamental force in its own right wouldn't be profoundly wrong minded, as the term "force" has come to mean in particle physics a "type of interaction mediated by a particular kind of boson").
A variety of alternatives to the Standard Model Higgs boson that might still be consistent with the data are explored in a March 30, 2012 preprint. A March 26, 2012 preprint suggests strategies for experimentally distinguishing CP odd and CP even Higgs bosons, something akin to the distinction between left handed and right handed fermions.
Given the deep connections between the weak force and the Higgs boson in the Standard Model (three of the four Goldstone bosons predicted by electro-weak theory are "eaten" by the three weak force bosons, the W+, W- and Z), a conjecture comes to mind. An unpredicted discrepancy between the Higgs boson's interactions with bosons and fermions, if real, could flow from the possibility that the Higgs boson couples only to left handed fermions (the way that the other weak force bosons do), cutting its effective strength with fermions in half.
A theory favored by the kind of anomalous Higgs couplings seen would be a light composite Higgs boson model. Higgs couplings with invisible dark sector particles are disfavored.
Descriptions Of mtDNA Now Made With Reference To Eve
Mitochondrial DNA is passed from mother to child, so it is a uniparental genetic marker that traces matrilines all of the way back to a hypothetical mitochondrial Eve, the most recent shared matrilineal ancestor of every human now living.
Most mtDNA, and indeed most DNA generally, is identical for all humans, so describing a person's mtDNA by reporting a person's entire genome including the parts shared by all humans would be an exceedingly inefficient approach.
Instead, what geneticists do is establish a common reference genome and report the way some other mtDNA sequence differs from the reference genome. Until now, the standard reference genome, called the Cambridge Reference Sequence (CRS for short), was the mtDNA of some random white European person who probably went to Harvard, at one extreme of a particular part of the mtDNA phylogeny. When the CRS was adopted as the international standard in 1981, of course, we knew a great deal less about the mtDNA sequences that existed in the world and how they were related to each other.
The new reference sequence is instead a hypothetical one, constructed by creating an mtDNA tree including all known human mtDNA haplogroups, and bit by bit working back towards their common ancestor. The last few steps involve mtDNA sequences that have never been seen in anyone, but can be logically reconstructed as necessary to bridge the differences between the most basal branches of the observed mtDNA tree. In other words, they have reconstructed the mtDNA genome of mitochondrial Eve, who would have lived something on the order of 177,000 years ago.
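The "working back towards the common ancestor" step can be illustrated with a toy parsimony calculation. The sketch below is my own illustration, not the actual pipeline used to build the RSRS: it infers the ancestral state at a single mtDNA position from the bases observed at the tips of a tiny invented tree, using the standard Fitch bottom-up rule (take the intersection of the children's candidate sets where possible, otherwise the union).

# Toy Fitch-parsimony reconstruction of the ancestral base at one mtDNA
# position. The tree shape and the observed bases are invented for illustration.
def fitch(node):
    """Return the set of most-parsimonious candidate bases at this node.
    A node is either a leaf (a one-letter string) or a (left, right) pair."""
    if isinstance(node, str):          # leaf: base observed in a sampled lineage
        return {node}
    left, right = fitch(node[0]), fitch(node[1])
    common = left & right
    return common if common else left | right

# ((A, A), (A, G)) -- three lineages carry 'A', one carries a derived 'G'.
tree = (("A", "A"), ("A", "G"))
print(fitch(tree))   # {'A'}: the reconstructed ancestral base is 'A'

Doing something of this kind position by position over the full phylogeny, with far more tips and internal nodes, is roughly what it means to reconstruct an ancestral reference sequence.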
Essentially everyone in the world alive today has between ten and fifty mutations that distinguish them from mitochondrial Eve, and the new method for scientifically describing mtDNA sequences in the most rudimentary form will be to list those ten to fifty mutations from the mtDNA Eve reference sequence, called the Reconstructed Sapiens Reference Sequence (RSRS for short). All of the mtDNA sequences that are not predominantly African or of relatively recent African origin (i.e. mtDNA haplogroups that have left Africa only within the last 10,000 years or so) have at least thirty mutations from the RSRS. The lineage that all non-African mtDNA types have in common (L3), which can function as a reference sequence for non-African mtDNA sequences, is estimated at about 67,000 years old based upon its thirty or so mutations from the RSRS.
Mutation rate dating of the emergence of mtDNA haplogroups is more art than science, but those estimates aren't wildly out of line with archaeologically based dates for an Out of Africa event (modern scholarship tends to point to somewhat earlier dates, but it is possible that some of the earliest Out of Africans don't have matriline ancestors in modern populations). The average mutation rate for mtDNA is on the order of one mutation per 3,500 years. This isn't the post where I'm going to obsess about the calibration or miscalibration or even the inherent limitations of mutation rate mtDNA dates, and the dates in this post are "for entertainment purposes only."
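With those caveats in mind, the basic arithmetic of a naive mutation-rate date is simple enough to show in a couple of lines; the rate below is the rough figure quoted above, and real published calibrations are considerably more involved than this straight multiplication.

# Naive molecular-clock dating: age ~ (number of mutations from the
# reference) x (average years per mutation). For entertainment purposes only.
YEARS_PER_MUTATION = 3_500   # rough average rate quoted above

for label, mutations in [("typical modern lineage (upper end)", 50),
                         ("typical non-African lineage", 30)]:
    print(label, "~", mutations * YEARS_PER_MUTATION, "years")
# 50 mutations x 3,500 years ~ 175,000 years, in the same ballpark as the
# ~177,000 year estimate for mitochondrial Eve. Note that published
# haplogroup ages (e.g. ~67,000 years for L3) come from more sophisticated
# calibrations than this simple product.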
While the exact dates at which particular branches of the mtDNA tree diverged from each other are debatable, the number of mutations from the RSRS in any given mtDNA sequence is not, and there is very little room for adjustment to the mtDNA phylogeny from which the RSRS was deduced. That neither of these facts is subject to much reasonable dispute is a remarkable accomplishment of science generally and genetics in particular.
UPDATE: A more detailed post at Ethio Helix makes clear that some mtDNA haplogroups have far more than fifty mutations.
Tuesday, April 3, 2012
Modern Humans Didn't Invent Fire
Controlled use of fire predates the evolution of modern humans by hundreds of thousands of years. I had thought this was relatively uncontroversial, but perhaps I was wrong. At any rate, new evidence strengthens the case that Homo erectus or Homo ergaster, rather than modern humans or someone else, were the first to make controlled use of fire.
Monday, April 2, 2012
Cosmological Constant Still Fits Data
The latest astronomy measurements from the BOSS experiment fit the equations of general relativity with a cosmological constant to a precision of +/- 1.7%, as far back as a point at which the universe was 63% of its current size, a new record in fitting this data. Thus, a single number in the equations of general relativity, which is now known with some accuracy, is all we need to explain "dark energy."
A Conjecture Regarding Asymptotic Freedom and Neutron Stars
Empirically, no one has ever observed anything with more mass per volume than a neutron star, which is just slightly more dense than atomic nuclei.
The conventional argument for an absence of superdense objects.
The reason that there is nothing with a greater volume that is more dense is fairly straightforward. Anything with a volume much greater than a neutron star and the same mass per volume would be a black hole. And, once something is a black hole, its event horizon which defines its volume, is purely a function of its mass. And, the more massive a black hole happens to be, the lower its mass per volume becomes.
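A quick numerical sketch of that point: the Schwarzschild radius grows linearly with mass, so the average density inside the event horizon falls off as the square of the mass. The sketch below uses standard textbook formulas and a rough typical neutron star density that I am supplying for comparison.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_density(mass_kg):
    """Average density inside the event horizon of a black hole of this mass."""
    r_s = 2 * G * mass_kg / C**2                 # Schwarzschild radius
    volume = (4.0 / 3.0) * math.pi * r_s**3
    return mass_kg / volume                      # falls off as 1 / mass^2

NEUTRON_STAR_DENSITY = 5e17   # rough typical figure, kg/m^3

for solar_masses in (3, 10, 1e6, 1e9):
    rho = schwarzschild_density(solar_masses * M_SUN)
    print(f"{solar_masses:>10} M_sun: {rho:.2e} kg/m^3 "
          f"({rho / NEUTRON_STAR_DENSITY:.1e} x neutron star)")
# A ~3 solar mass black hole is already only a few times denser than a
# neutron star, and supermassive black holes are far less dense than water.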
The reason that there is nothing with a lesser volume than a neutron star that has more mass per volume is less obvious. The conventional account goes something like this:
Non-black hole mass that existed after the Big Bang had cooled down a bit congeals over time, through gravity, into bigger and bigger lumps that sometimes form bigger and bigger stars which, if they are large enough, give rise to stellar black holes caused by the collapse of big stars.
But, until enough mass congeals in this way, gravity isn't strong enough to make the glob of matter collapse in upon itself, and so a new black hole doesn't form.
Maybe there were smaller black holes once upon a time, but Hawking radiation from these smaller black holes would have caused them to gradually lose mass over the more than thirteen billion years since the Big Bang, so now any remaining ones are tiny or have ceased to exist.
The ansatz supporting a maximum density conjecture.
In this post, I'd like to consider an alternative explanation. Perhaps it is not possible for matter that is stable for more than a moment to have density greater than a neutron star.
The maximum density conjecture explored in this post is one that I suggested in a previous post at this blog.
This post expands on the hypothesis by suggesting a mechanism, arising from the asymptotic freedom demonstrated by the strong force and the fact that most of the ordinary mass in the universe comes from the gluons in protons and neutrons, to explain how a theoretical maximum density might arise from ordinary Standard Model physics.
If this is true, it follows that there are not now, and never have been at any time in the last thirteen billion years or so, black holes with the same volume as a neutron star, or a smaller volume.
There are a few key observations that suggest this alternative explanation:
1. Dark energy appears to have a uniform density throughout the universe, which is infinitesimally small, or may not be a substance at all and may instead be simply a cosmological constant which is one part of the equations of general relativity. So, mass-energy attributable to dark energy can be ignored in this analysis.
2. Virtually all of the ordinary matter in the universe whose composition we understand comes from baryonic matter in the form of protons and neutrons. The proportion of the mass in the universe that comes from electrons orbiting atomic nuclei or in free space, and from neutrinos appears to be negligible by comparison. There is also no evidence whatsoever that there is any place in the universe outside of a black hole in which there exist stable baryons or mesons other than protons and neutrons that make up a meaningful percentage of matter.
There is also no evidence that dark matter plays an important role in giving rise to highly compact objects in space, or that the amount of ordinary matter relative to dark matter is partially due to an undercounting of ordinary matter. The role of dark matter is one of the weaker links in this analysis, however, and it limits its rigor and generality. If dark matter is a neutrino condensate, for example, that type of matter might or might not escape the maximum density limitations considered here. There would have to be a completely separate analysis to determine the impact of dark matter on this conjecture. The analysis that follows is really only a weak version of the conjecture, applicable to cases in which dark matter does not play a material role.
3. Within a proton or neutron, the vast majority of the composite particle's rest mass is not attributable to the three quarks that make up the particle. Instead, the bulk of the rest mass of these baryons is attributable to the energy in the strong force interactions of the three component quarks, which comes quantized in gluons, the strong force equivalent of photons. (While not obviously relevant to this particular result, it is also worth noting as an aside that most of the strong force field is localized at the center of the three quark system, rather than at the edges.)
4. The strength of the strong force between two quarks is a function of their distance. When two quarks are sufficiently close, the strength of the strong force between them declines and they become what is known as asymptotically free (see the running-coupling sketch after this list). Thus, if you reduce the average distance between a group of quarks, the amount of energy in the strong force field declines. A proton whose quarks are momentarily closer to each other, with a given combined kinetic energy, should weigh less than a proton whose quarks are at the usual average distance from each other with the same combined kinetic energy.
5. Thus, it would seem that the rest mass of a group of tightly spaced quarks bound to each other by gluons is lower than that of a less tightly spaced group of quarks bound to each other by gluons. Unlike the substances we are familiar with, if you could squeeze quarks close enough together they would get less dense, not more dense as ordinary matter does.
6. The quark spacing at which the mass per volume of the quark-gluon system is greatest is that found in a neutron star. If the same number of quarks in a neutron star were squished together more tightly and had the same average kinetic energy as they did within a neutron star, they would have less mass per volume.
7. But, if they had more kinetic energy per quark on average than in a neutron star, they would rapidly spread out until the asymptotic freedom that the strong force affords to closely spaced quarks faded away and the strong force got stronger.
8. A black hole smaller than a neutron star in volume must be much more dense than a neutron star, but because most of a neutron star's mass comes from the strong force fields between its quarks, and because momentary conversions of strong force sourced mass into kinetic energy can't reach this threshold in a large enough volume with such a large number of particles that the law of averages becomes overwhelming even for very short time frames, systems as large as a neutron star can't "tunnel" into a density high enough to create a black hole by converting strong force energy into kinetic energy in a manner that randomly happens to materially reduce the volume of the system by an amount sufficient to create a black hole.
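Item 4 above (the weakening of the strong force at short distances) can be illustrated with the standard one-loop running of the strong coupling constant. This is a textbook-level sketch, not a calculation from the post: the starting value alpha_s(M_Z) of roughly 0.118 and the one-loop formula are standard, and the quark-flavor count is held fixed at five for simplicity.

import math

ALPHA_S_MZ = 0.118   # strong coupling at the Z mass, roughly the measured value
M_Z = 91.2           # GeV
N_F = 5              # active quark flavors (held fixed here for simplicity)

def alpha_s(q_gev):
    """One-loop running strong coupling at momentum scale q (in GeV).
    Higher q corresponds to shorter distances between quarks."""
    b0 = (33 - 2 * N_F) / (12 * math.pi)
    return ALPHA_S_MZ / (1 + b0 * ALPHA_S_MZ * math.log(q_gev**2 / M_Z**2))

for q in (2, 10, 91.2, 1000):
    print(f"alpha_s({q:>6} GeV) ~ {alpha_s(q):.3f}")
# The coupling shrinks as the momentum scale grows (i.e. as quarks get
# closer together) -- that weakening is asymptotic freedom.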
Is the loophole for asymptotically free quark masses real?
Now, there is a conceivable loophole here, because if you get enough quarks close to each other (packing them ten to a hundred times more tightly than in an ordinary proton or neutron), the increase in mass from squishing the quarks together would exceed the decrease in mass from the virtual disappearance of the strong force field between these quarks.
But the problem with this loophole is that there is nothing in the strong force, the weak force, or gravity that should squish so many quarks together so tightly over any substantial volume for any meaningful length of time, and presumably at some point the repulsive effects of electromagnetic charges between like-charged quarks would kick in as well. And, at the scale of a single proton or neutron's volume, this concentrated bunch of asymptotically free quarks would have to be dense indeed to give rise to a black hole. Even if this could theoretically happen, the time frame for it to happen once on average may be much longer than the age of the universe.
Also, the fact that some quarks in such a system would be at full strength strong force distances from each other could be problematic. It isn't clear what happens with the strong force in such closely spaced quark circumstances involving large numbers of quarks.
A maximum density conjecture could resolve the theoretical inconsistency between quantum mechanics and general relativity by rendering the inconsistent possibilities unphysical.
In addition to explaining the absence of any observed small black holes, and removing any concern that something like the LHC could ever create a mini-black hole, this analysis could also have another useful theoretical benefit.
A maximum density conjecture cordons off many, if not all, of the situations in which there is conflict between general relativity's assumption that space-time is continuous and quantum mechanics' assumption that fundamental particles are point-like, as unphysical. The conflicts are deepest when there is a general relativity singularity, and with a maximum density conjecture this seems to be avoided in most if not all systems small enough to exhibit strongly quantum mechanical behavior.
Also, it may be that if one attributes reality, for the purposes of general relativity calculations, to the notion that point-like quantum mechanical particles are actually smeared over the entire volume of the probability distribution for their location at any given point in time, rather than being truly point-like, then the problem of anything that has mass and is truly a point giving rise to a singularity (i.e. a black hole) of infinite density in general relativity could be resolved (and indeed, this principle might even provide a theoretical limit on the maximum mass of any fundamental particle). I have to assume that someone has looked at this issue before, but I don't know what the analysis showed and have never seen any secondary source provide an answer one way or the other, even though the calculations would seem to be pretty elementary in most cases. The probability distribution needs to be roughly on the order of 10^-41 meters for a fundamental particle (also here) to escape this inconsistency, which is much smaller than the Planck length, so intuitively it seems like this smearing principle should suffice to resolve the infinite density of a point-like particle singularity.
In effect, this maximum density conjecture, if correct, might imply that it is not necessary to modify general relativity at all in order to make it theoretically consistent with quantum mechanics in every physically possible circumstance, allowing a sort of unification without unification of the two sides of fundamental physics. It has long been known that the quantum mechanical assumptions that are theoretically at odds with general relativity generally aren't a problem in the real world situations where we apply GR, and vice versa, but we've never had any good way to draw a boundary between the two, which would give theorists a way to explore the very limits of the physically possible circumstances where they can overlap. A maximum density conjecture might resolve that point.
Implications of a maximum density conjecture for inflation in cosmology
A maximum density conjecture also suggests that it may not be sound to extrapolate the Big Bang back any further than the point at which the universe reached the volume at which it had maximum density. Indeed, the aggregate mass of the universe would give rise to a black hole singularity at a volume at which it would have a mass per volume much, much less than a neutron star. The notion of inflation in Big Bang cosmology could simply be a mathematical artifact of extrapolating back to a point in time of more than the maximum density, and hence assuming an unphysical state for the infant universe. Arguably, it really doesn't make any sense to extrapolate further back than the first point at which the universe's mass per volume would be insufficient to give rise to a black hole, which would be much later in time than the point at which a maximum density threshold would be reached. Unless inflation happens after this point in time, inflation itself may be unphysical.
This wouldn't resolve the question of why there was something rather than nothing at the moment of the Big Bang, but the current formulation of GR without a maximum density doesn't answer that question either. The only real difference would be in the initial conditions of the two models.
Implications for causality
A theoretical and practical maximum density would also act to preserve causality and to even strengthen it beyond the limitations that would apply in the absence of this limitation by putting a cap upon the maximum extent to which gravity slows down time outside a black hole, with the cap being particularly strict outside the vicinity of large maximum density objects like neutron stars. This effect is tiny in the vicinity of atomic nuclei compared to similar density black holes because they have a much smaller volume.
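To put a rough number on how strict that cap would be in practice, here is a small sketch of the gravitational time dilation factor at the surface of a typical neutron star (assuming, for illustration, 1.4 solar masses and a 10 km radius; this is the textbook Schwarzschild formula, not anything specific to the conjecture).

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def surface_time_dilation(mass_kg, radius_m):
    """Schwarzschild time dilation factor at radius_m: the rate at which a
    clock there ticks relative to a distant observer (1.0 = no slowing)."""
    return math.sqrt(1 - 2 * G * mass_kg / (radius_m * C**2))

# A typical neutron star: ~1.4 solar masses packed into a ~10 km radius.
factor = surface_time_dilation(1.4 * M_SUN, 10_000)
print(f"clock rate at the surface: {factor:.2f} of the far-away rate")
# Roughly 0.77: time runs about a quarter slower than far away, and a
# maximum density cap would keep this factor from getting arbitrarily
# close to zero outside a compact object.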
Why is there a gap of medium sized maximum density objects?
A maximum density conjecture doesn't itself solve another empirical question: why are there no intermediate sized maximum density objects? For an answer to that, one might need to resort to dynamical arguments.
There do not appear to be any objects even approaching maximum density at volumes between that of large atomic nuclei, which the weak force and the limited range of the strong force tend to shatter and in which gravity is negligible in effect, and that of collapsed stars with masses on the same order of magnitude as the sun, countless orders of magnitude more massive. The strong force, the weak force, the electromagnetic force, and gravity all seem to lack the capacity to compact matter to that degree at the intermediate scales.
The conventional argument for an absence of superdense objects.
The reason that there is nothing with a greater volume that is more dense is fairly straightforward. Anything with a volume much greater than a neutron star and the same mass per volume would be a black hole. And, once something is a black hole, its event horizon which defines its volume, is purely a function of its mass. And, the more massive a black hole happens to be, the lower its mass per volume becomes.
The reason that there is nothing with a lesser volume than a neutron star that has more mass per volume is less obvious. The conventional account goes something like this:
Non-black hole mass that existed after the Big Bang had cooled down a bit congeals over time into bigger and bigger lumps through gravity that sometime form bigger and bigger starts that, if they are large enough, give rise to stellar black holes caused by the collapse of big stars.
But, until enough mass congeals in this way, gravity isn't strong enough to make the glob of matter collapse in upon itself, and so a new black hole doesn't form.
Maybe there were smaller black holes once upon a time, but Hawking radiation from these smaller black holes would have caused them to gradually lose mass over the more than thirteen billion years since the Big Bang, so now any remaining ones are tiny or have ceased to exist.
The ansatz supporting a maximum density conjecture.
In this post, I'd like to consider an alternative explanation. Perhaps it is not possible for matter that is stable for more than a moment to have density greater than a neutron star.
The maximum density conjecture explored in this point is one that I suggested in a previous post at this blog.
This post expands on the hypothesis by suggesting a mechanism arising from the asymptotic freedom demostrated by the strong force and the fact that most of the ordinary mass in the universe comes from the gluons in protons and neutrons, to explain how a theoretical maximum density might arise from ordinary standard model physics.
If this is true, it follows that there are not now and never have been at any time in the last thirteen billion years or so, black holes with the same volume as neutron star, or a smaller volume.
There are a few key observations that suggest this alternative explanation:
1. Dark energy appears to have a uniform density throughout the universe, which is infinitesimally small, or may not be a substance at all and may instead be simply a cosmological constant which is one part of the equations of general relativity. So, mass-energy attributable to dark energy can be ignored in this analysis.
2. Virtually all of the ordinary matter in the universe whose composition we understand comes from baryonic matter in the form of protons and neutrons. The proportion of the mass in the universe that comes from electrons orbiting atomic nuclei or in free space, and from neutrinos appears to be negligible by comparison. There is also no evidence whatsoever that there is any place in the universe outside of a black hole in which there exist stable baryons or mesons other than protons and neutrons that make up a meaningful percentage of matter.
There is also no evidence that dark matter plays an important role in giving rise to highly compact objects in space, and that the amount of ordinary matter relative to dark matter is partially due to an undercounting of ordinary matter. The role of dark matter is one of the weaker links in this analysis, however, and it limits its rigor and generality. If dark matter is a neutrino condensate, for example, that type of matter might or might not escape the maximum density limitations considered here. There would have to be a completely separate analysis to determine the impact of dark matter on this conjecture. The analysis that follows is really only a weak version of the conjecture applicable to cases in which dark matter does not play a material role.
3. Within a proton or neutron, the vast majority of the composite particle's rest mass is not attributable to the three quarks that make up the particle. Instead, the bulk of the rest mass of these baryons is attributable to the energy in the strong force interactions of the three component quarks which come quantized in gluons which are the strong force equivalent of photons. (While not obviously relevant to this particular result, it is also worth noting as an aside that most of the strong force field is localized a the center of the three quark system, rather than at the edges.)
4. The strength of the strong force between two quarks is a function of the distance between them. When two quarks are sufficiently close, the strength of the strong force between them declines and they become what is known as asymptotically free (a crude sketch of this running coupling follows this list). Thus, if you reduce the average distance between a group of quarks, the amount of energy in the strong force field declines. A proton whose quarks are momentarily closer to each other, with a given combined kinetic energy, should weigh less than a proton whose quarks are at the usual average distance from each other with the same combined kinetic energy.
5. Thus, it would seem that the rest mass of a group of tightly spaced quarks bound to each other by gluons is lower than that of a less tightly spaced group. Unlike the substances we are familiar with, if you could squeeze quarks close enough together they would become less dense, not more dense the way all ordinary matter does.
6. The quark spacing at which the mass per volume of the quark-gluon system is greatest is the spacing found in a neutron star. If the same number of quarks as in a neutron star were squished together more tightly, with the same average kinetic energy they have within a neutron star, they would have less mass per volume.
7. But, if they had more kinetic energy per quark on average than in a neutron star, they would rapidly spread out until the asymptotic freedom that the strong force affords to closely spaced quarks faded away and the strong force got stronger.
8. A black hole smaller in volume than a neutron star must be much denser than a neutron star (a back-of-the-envelope comparison also follows this list). But most of a neutron star's mass comes from the strong force fields between its quarks, and with such a large number of particles the law of averages becomes overwhelming even over very short time frames. So momentary conversions of strong-force-sourced mass into kinetic energy can never randomly reduce the system's volume enough, over a large enough region, for a system as large as a neutron star to "tunnel" into a density high enough to create a black hole.
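To make point 4 a bit more concrete, here is a crude sketch of the standard one-loop formula for the running of the strong coupling; the Lambda value and the fixed flavor count are simplifying assumptions rather than fitted numbers, so treat the outputs as rough:

# One-loop running of the strong coupling:
#   alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2))
# Crude illustration only: a single Lambda_QCD and flavor count, no threshold matching.
import math

LAMBDA_QCD = 0.2   # GeV, rough value (assumption)
N_FLAVORS  = 5     # treated as fixed for simplicity

def alpha_s(Q_GeV):
    return 12 * math.pi / ((33 - 2 * N_FLAVORS) * math.log(Q_GeV**2 / LAMBDA_QCD**2))

for q in (1.0, 2.0, 10.0, 91.2, 1000.0):   # momentum transfers in GeV
    print(f"Q = {q:7.1f} GeV  ->  alpha_s ~ {alpha_s(q):.3f}")

# The coupling falls as the momentum transfer grows (i.e. as quarks get closer),
# which is the asymptotic freedom invoked in point 4.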
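And to put a rough number on point 8, the following back-of-the-envelope comparison uses typical textbook values for a neutron star's mass and radius (both are assumptions chosen for illustration):

# Compare the mean density of a typical neutron star with the mean density
# a black hole of the same mass would have inside its Schwarzschild radius.
import math

G, c  = 6.674e-11, 2.998e8
M_SUN = 1.989e30                     # kg
M_NS  = 1.4 * M_SUN                  # typical neutron star mass (assumption)
R_NS  = 11e3                         # typical neutron star radius in meters (assumption)

def mean_density(mass, radius):
    return mass / (4.0 / 3.0 * math.pi * radius**3)

r_schwarzschild = 2 * G * M_NS / c**2

print(f"Neutron star mean density:          {mean_density(M_NS, R_NS):.2e} kg/m^3")
print(f"Schwarzschild radius for same mass: {r_schwarzschild / 1e3:.1f} km")
print(f"Mean density inside that radius:    {mean_density(M_NS, r_schwarzschild):.2e} kg/m^3")

# A neutron-star-mass black hole is roughly twenty times denser than the neutron
# star itself, and the required density only grows as the mass (and radius) shrink,
# since the mean density inside the horizon scales as 1/M^2.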
Is the loophole for asymptotically free quark masses real?
Now, there is a conceivable loophole here, because if you get enough quarks close to each other (packing them ten to a hundred times more tightly than in an ordinary proton or neutron), the increase in mass from squishing the quarks together would exceed the decrease in mass from the virtual disappearance of the strong force field between them.
But the problem with this loophole is that there is nothing in the strong force, the weak force, or gravity that should squish so many quarks together so tightly over any substantial volume for any meaningful length of time, and presumably at some point the repulsive electromagnetic force between like-charged quarks would kick in as well. And at the scale of a single proton or neutron's volume, this concentrated bunch of asymptotically free quarks would have to be dense indeed to give rise to a black hole (a rough estimate follows below). Even if this could theoretically happen, the time frame for it to happen once on average may be much longer than the age of the universe.
Also, the fact that some quarks in such a system would still be at distances from each other at which the strong force is at full strength could be problematic. It isn't clear what happens with the strong force when large numbers of quarks are spaced this closely together.
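For a sense of the scale involved in the loophole discussed above, here is a rough estimate of what a black hole occupying the volume of a single proton would require; the ~0.84 fm proton radius is the usual approximate figure, and the calculation is an illustration rather than part of the original argument:

# How much mass would have to sit inside a proton-sized volume to form a black hole?
import math

G, c     = 6.674e-11, 2.998e8
R_PROTON = 0.84e-15                  # meters, approximate proton charge radius
M_PROTON = 1.673e-27                 # kg

m_needed = R_PROTON * c**2 / (2 * G)          # mass whose Schwarzschild radius is R_PROTON
volume   = 4.0 / 3.0 * math.pi * R_PROTON**3

print(f"Mass required:           {m_needed:.2e} kg")
print(f"That is about {m_needed / M_PROTON:.1e} proton masses in one proton volume")
print(f"Required density:        {m_needed / volume:.2e} kg/m^3")
print(f"Ordinary proton density: {M_PROTON / volume:.2e} kg/m^3")

In other words, something like 10^38 protons' worth of mass would have to be packed into a single proton's volume, which no known force comes close to arranging.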
A maximum density conjecture could resolve the theoretical inconsistency between quantum mechanics and general relativity by rendering the inconsistent possibilities unphysical.
In addition to explaining the absence of any observed small black holes, and removing any concern that something like the LHC could ever create a mini-black hole, this analysis could also have another useful theoretical benefit.
A maximum density conjecture cordons off as unphysical many, if not all, of the situations in which there is a conflict between general relativity's assumption that space-time is continuous and quantum mechanics' assumption that fundamental particles are point-like. The conflicts are deepest when general relativity produces a singularity, and with a maximum density conjecture this seems to be avoided in most if not all systems small enough to exhibit strongly quantum mechanical behavior.
Also, it may be that if one attributes reality, for the purposes of general relativity calculations, to the notion that point-like quantum mechanical particles are actually smeared over the entire volume of the probability distribution for their location at any given point in time, rather than being truly point-like, then the problem of anything that has mass and is truly a point giving rise to a singularity (i.e. a black hole) of infinite density in general relativity could be resolved. Indeed, this principle might even provide a theoretical limit on the maximum mass of any fundamental particle. I have to assume that someone has looked at this issue before, but I don't know what the analysis showed and have never seen any secondary source provide an answer one way or the other, even though the calculations would seem to be pretty elementary in most cases. The probability distribution needs to be roughly on the order of 10^-41 meters for a fundamental particle (also here) to escape this inconsistency, which is much smaller than the Planck length, so intuitively it seems like this smearing principle should suffice to resolve the infinite density of a point-like particle singularity.
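As a much cruder cross-check (not the calculation behind the 10^-41 meter figure above), one can compare a heavy fundamental particle's Schwarzschild radius to its reduced Compton wavelength, which is a rough floor on how tightly its wave function can be localized:

# Schwarzschild radius vs. reduced Compton wavelength for two fundamental particles.
# A crude comparison only; it is not the 10^-41 m criterion quoted in the text.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34

particles = {                      # masses in kg
    "electron":  9.109e-31,
    "top quark": 3.08e-25,         # ~173 GeV
}

for name, m in particles.items():
    r_s       = 2 * G * m / c**2   # Schwarzschild radius of a point mass
    l_compton = hbar / (m * c)     # reduced Compton wavelength
    print(f"{name:10s}  r_s ~ {r_s:.1e} m   Compton ~ {l_compton:.1e} m")

# Even for the heaviest known fundamental particle, the minimum quantum "smearing"
# exceeds the Schwarzschild radius by dozens of orders of magnitude.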
In effect, this maximum density conjecture, if correct, might imply that it is not necessary to modify general relativity at all in order to make it theoretically consistent with quantum mechanics in every physically possible circumstance, allowing a sort of unification without unification of the two sides of fundamental physics. It has long been known that the quantum mechanical assumptions that are theoretically at odds with general relativity generally aren't a problem in the real world situations where we apply general relativity, and vice versa, but we've never had any good way to draw a boundary between the two, which would give theorists a way to explore the very limits of the physically possible circumstances where the two can overlap. A maximum density conjecture might resolve that point.
Implications of a maximum density conjecture for inflation in cosmology
A maximum density conjecture also suggests that it may not be sound to extrapolate the Big Bang back any further than the point at which the universe was compressed to the volume at which it reached maximum density. Indeed, the aggregate mass of the universe would give rise to a black hole at a volume at which its mass per volume would be much, much less than that of a neutron star. The notion of inflation in Big Bang cosmology could simply be a mathematical artifact of extrapolating back to a point in time with more than the maximum density, and hence of assuming an unphysical state for the infant universe. Arguably, it doesn't make sense to extrapolate further back than the first point at which the universe's mass per volume would be insufficient to give rise to a black hole, which would be much later in time than the point at which a maximum density threshold would be reached. Unless inflation happens after this point in time, inflation itself may be unphysical.
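To see how weak the required density is at cosmological masses, take a commonly quoted rough figure of about 10^53 kg for the ordinary matter in the observable universe (an order-of-magnitude assumption used purely for illustration):

# Mean density inside the Schwarzschild radius falls off as 1/M^2, so at the mass
# of the observable universe it is far below neutron star density.
import math

G, c             = 6.674e-11, 2.998e8
M_UNIVERSE       = 1e53              # kg, rough order-of-magnitude estimate (assumption)
RHO_NEUTRON_STAR = 5e17              # kg/m^3, typical mean density

r_s = 2 * G * M_UNIVERSE / c**2
rho = M_UNIVERSE / (4.0 / 3.0 * math.pi * r_s**3)

print(f"Schwarzschild radius: {r_s:.2e} m  (~{r_s / 9.46e15:.1e} light years)")
print(f"Mean density:         {rho:.2e} kg/m^3")
print(f"Ratio to neutron star density: {rho / RHO_NEUTRON_STAR:.1e}")

# The result is of order 1e-26 kg/m^3 -- dozens of orders of magnitude below
# neutron star density, as claimed above.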
This wouldn't resolve the question of why there was something rather than nothing at the moment of the Big Bang, but the current formulation of general relativity without a maximum density doesn't answer that question either. The only real difference would be in the initial conditions of the two models.
Implications for causality
A theoretical and practical maximum density would also act to preserve causality, and even to strengthen it beyond the limitations that would otherwise apply, by putting a cap on the maximum extent to which gravity slows down time outside a black hole, with the cap being particularly strict outside the vicinity of large maximum density objects like neutron stars. The effect is tiny in the vicinity of atomic nuclei, despite their comparable density, because they have so much less volume and mass.
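One way to put a number on that cap is the Schwarzschild time dilation factor at the surface of a static object; the neutron star mass and radius below are typical assumed values, and the nucleus is just a generic heavy one:

# Gravitational time dilation outside a static mass: clocks run slow by a factor
# sqrt(1 - 2GM/(r c^2)). A density ceiling bounds 2GM/(r c^2) below 1 outside the
# object, so the slowdown is capped rather than running to zero as at a horizon.
import math

G, c  = 6.674e-11, 2.998e8
M_SUN = 1.989e30

def compactness(mass, radius):          # the 2GM/(r c^2) term
    return 2 * G * mass / (radius * c**2)

# Typical neutron star (assumed 1.4 solar masses, 11 km): a large but bounded slowdown.
x_ns = compactness(1.4 * M_SUN, 11e3)
print(f"Neutron star:  2GM/rc^2 ~ {x_ns:.2f}, surface clock rate ~ {math.sqrt(1 - x_ns):.2f}")

# Heavy nucleus (~240 nucleons, ~7 fm radius): the same term is ~1e-37, utterly negligible.
x_nuc = compactness(240 * 1.67e-27, 7e-15)
print(f"Heavy nucleus: 2GM/rc^2 ~ {x_nuc:.1e}")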
Why is there a gap where medium sized maximum density objects would be?
A maximum density conjecture doesn't itself answer another empirical question: why are there no intermediate sized maximum density objects? For the answer to that one, one might need to resort to dynamical arguments.
There do not appear to be any objects even approaching maximum density at volumes between those of large atomic nuclei, which the weak force and the limited range of the strong force tend to shatter and in which gravity's effect is negligible, and collapsed stars with masses on the same order of magnitude as the sun, countless orders of magnitude more massive. The strong force, the weak force, the electromagnetic force, and gravity all seem to lack the capacity to compact matter to that degree at those intermediate scales.
Sunday, April 1, 2012
Ark of Covenant Sold For Storage Lien
An unnamed federal government employee at the General Services Administration has disclosed that the Lost Ark of the Covenant, placed in a secret private warehouse for safekeeping in the 1940s, was sold to an anonymous buyer at a storage lien sale.
The United States Department of State had failed to pay the annual registration fee of the Colorado shell corporation that had been paying the warehouse storage fees since the 1940s, because the Colorado Secretary of State stopped sending paper copies of the renewal notice to registered agents. An undisclosed civil servant made a typo when entering the contact e-mail address for the Colorado shell corporation when the transition took place, causing the notice to go to an unrelated import-export corporation that ignored it. As a result of its lapsed registration, the Colorado shell corporation did not receive the notice of enforcement of the storage lien that would otherwise have been sent through its registered agent to the individual at the General Services Administration responsible for the Secretary of State's domestic office and storage space. So, in 2008, the warehouse company sold the Ark of the Covenant at a storage lien auction, for two hundred dollars, to a cash buyer who saw only the large, unlabeled wooden crate containing it when making the bid. Two other bidders had dropped out of the bidding at $40 and $160 respectively.
The General Services Administration discovered the incident in early 2011, and its Inspector General conducted a confidential investigation. But the Inspector General's office has been unable to locate any sign of the highly significant religious artifact in public market listings or eBay or Craigslist postings since then, and there has been no rumor of the discovery among private artifact dealers that the Inspector General's office could find.
Given that the adverse possession period for personal property is three years in Nevada, and that a storage lien sale to a bona fide purchaser for value generally extinguishes all other claims to title, it isn't clear that the sale could be undone even if the buyer were discovered. The U.S. Antiquities Act does not apply either, because the Ark is not a U.S.-source antiquity and may not even have been in the lawful possession of the United States government, which reputedly seized it as war spoils, without any claim of title, from the German government during World War II, and reputedly knew that Nazi Germany had in turn taken it, without any claim of title, from its previous lawful owners.