Thursday, March 31, 2016
Peanuts Are Native To South America
The modern peanut is a hybrid of two kinds of wild peanuts native to the Andes Mountains in and around Bolivia.
Progress Made In Determining CP Violation Angle Of Standard Model
You have to run to establish the pass.
In particle physics, that means that you have to have solid measurements of the fundamental constants of the Standard Model to support all other more glamorous predictions of the Standard Model and to constrain any possible beyond the Standard Model physics (the Hail Mary pass of the physics world).
One of the least precisely known physical constants is gamma, one of the three angles of the "unitary triangle" implied by the CKM matrix that governs quark flavor changes in weak force interactions. It is measured mostly in the decays of B mesons through intermediate D meson states, is known only to about +/- 10% accuracy, and plays a large role in CP violation (i.e. discrepancies in the rate at which interactions happen going "forward" and "backward" in time). Gamma is also insensitive to beyond the Standard Model physics and top quark physics.
While the unitary triangle partial parameterization of the CKM matrix doesn't do so, it is possible to explain all CP violation in the Standard Model with just one of the four CKM matrix parameters chosen appropriately, unifying myriad observations of CP violation in many possible kinds of hadron decays that are not obviously related to each other until you understand the Standard Model.
A new paper summarizes the data from 20 million B meson decays, focusing on the roughly 600,000 cases involving intermediate D meson states, allowing for precision measurements of rare four body decays and minimizing statistical error. These are the most accurate measurements ever of twenty-one observables related to these decays.
CP violation is established at the five sigma discovery level in some decays where it was expected but not previously demonstrated experimentally. Weaker, but still strong, evidence of CP violation is seen where expected in some other decays with very small branching fractions and/or in which CP violation is suppressed at the "tree level" for some (predicted and well understood) reason, reducing the statistical power associated even with the observation of 20 million decays.
Most of these results are either a good fit to the existing world average measurements of gamma and related derived constants, or are in only very modest tension with it. Indeed, only one out of 19 results is inconsistent with the world average at the two-sigma level (the chart in the paper shows one sigma intervals). This is about par for the course for two-sigma confidence intervals, which, on average, produce one in twenty results outside the confidence interval.
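That expectation is easy to make concrete: with 19 independent results and roughly a 4.6% chance that any one Gaussian-distributed result lands outside its two-sigma interval, about one outlier is par for the course. A quick sketch:

```python
import math

# Two-sided probability that a Gaussian measurement falls outside
# +/- 2 sigma of the true value.
p_outside = 2 * (1 - 0.5 * (1 + math.erf(2 / math.sqrt(2))))  # ~0.0455

n_results = 19
expected_outliers = n_results * p_outside

# Probability of seeing at least one 2-sigma outlier among 19
# independent results (binomial complement).
p_at_least_one = 1 - (1 - p_outside) ** n_results

print(f"P(outside 2 sigma) = {p_outside:.4f}")
print(f"expected outliers in 19 results = {expected_outliers:.2f}")  # ~0.86
print(f"P(>=1 outlier) = {p_at_least_one:.2f}")  # ~0.59
```

So a single two-sigma outlier in 19 results is closer to the expected outcome than to a surprise.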
Perhaps unsurprisingly, the greatest tension appears in the observed CP violation in four body decays that produce three pions and one kaon, which constitute only about 678 of the 600,000 or so relevant decays (the smallest number of events of any of the possibilities measured). This could simply be due to underestimated error bars that assume a Gaussian distribution when some other distribution with somewhat fatter or asymmetric tails (perhaps a Poisson distribution) would more accurately describe the true expected probability distribution. Or, it could just be that small samples are more prone to be quirky, all other things being equal, than bigger ones.
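One way to sanity check the Gaussian approximation for a sample of this size is to compute the exact Poisson probability content of the usual sqrt(N) interval. This pure-Python toy check uses only the 678 event count from the post; it is not the experiment's actual error treatment:

```python
import math

def poisson_pmf(k, mu):
    # Computed in log space so large mu doesn't overflow the factorial.
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

n = 678                     # the K pi pi pi event count from the post
sigma_gauss = math.sqrt(n)  # Gaussian (sqrt-N) error estimate, ~26 events

# Exact Poisson probability of a count landing within +/- 1 Gaussian
# sigma of the mean, taking the true rate to equal the observed count.
lo, hi = round(n - sigma_gauss), round(n + sigma_gauss)
coverage = sum(poisson_pmf(k, n) for k in range(lo, hi + 1))

print(f"sqrt(N) error: {sigma_gauss:.1f} events")
print(f"Poisson coverage of the +/- 1 sigma interval: {coverage:.3f}")
```

At 678 events the two descriptions are already fairly close; discreteness and tail differences grow as event counts shrink.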
The new paper does not derive a new world average for gamma or any more fundamental Standard Model constant from its new data, as opposed to comparing its results to the old world average, although all the data required to do so is available in the paper. But, within a year or two, the precision with which we know the value of gamma should improve somewhat and the world average best fit value should change modestly. It would probably take me a couple of weeks of research and calculations to work this out, as I'm rusty when it comes to the relationships between the various observables and the fundamental constants from which they are derived in principle.
If the Standard Model is correct, the sum of the three angles of the "unitary triangle" calculated using the true values of the relevant fundamental constants in the CKM matrix should be 180 degrees. Prior to this study, the data showed that the sum of the three angles (alpha, beta and gamma) was 175 +/- 9 degrees, which is consistent with 180 degrees at roughly the one-half sigma level, with the 7 degree uncertainty in gamma dominating the uncertainty in the total. A global fit of the three angles suggests that the true value of gamma should be at the high end of its +/- 7 degree range of experimentally measured values.
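The half-sigma consistency claim is simple arithmetic:

```python
# Tension between the measured angle sum and the Standard Model
# expectation of 180 degrees, in standard deviations.
angle_sum = 175.0   # alpha + beta + gamma, degrees (from the measurements)
uncertainty = 9.0   # degrees, dominated by the uncertainty in gamma
expected = 180.0

tension_sigma = abs(expected - angle_sum) / uncertainty
print(f"tension: {tension_sigma:.2f} sigma")  # ~0.56, about half a sigma
```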
I look forward to seeing the revised estimate based on this data soon. If the best fit value and margin of error for gamma both go down, this hints that the Standard Model CKM matrix may be insufficient to explain the observed phenomena and that beyond the Standard Model theories may be needed. If the best fit value of gamma rises and the margin of error decreases, but not so much that the sum is inconsistent with 180 degrees, then it is likely that the four parameters of the Standard Model CKM matrix completely describe quark flavor transitions via the weak force, and that any beyond the Standard Model particles don't interact with Standard Model particles via the weak force.
New Dates Suggest Only Brief Hobbit-Human Cohabitation On Flores
A new paper in the journal Nature makes the case that "Hobbit" remains on the island of Flores are 110,000 to 60,000 years old and that the associated stone tools are no younger than 50,000 years BP. Thus, hobbits and modern humans may have co-existed no longer than modern humans and Neanderthals did in any one place, rather than co-existing for 30,000+ years as suggested by previously estimated dates.
This actually makes a great deal of sense. In every other case where modern humans co-existed with another hominin species, or even with another highly diverged and technologically disparate population, the "less advanced" population quickly went extinct, leaving only minor genetic traces in the surviving population and perhaps some tiny isolated relict populations.
Hobbits (a.k.a. Homo floresiensis) are certainly the most plausible candidates to have admixed with the early Papuans and Australians who arrived in Flores ca. 50,000 years ago, giving them high levels of Denisovan admixture (although the hobbits may have been close genetic relations of a taller and larger Denisovan species from the mainland that experienced island dwarfism on Flores, and this size disparity may have made love rather than war a more palatable option upon encountering them, since they wouldn't have been as threatening to modern humans).
But, there is no genetic evidence to suggest a sustained period of admixture as recently as 12,000 to 18,000 years ago, with 32,000-38,000 years of co-existence, which is when the previous dates suggested that Hobbits went extinct on Flores. We would also expect significant change in the hobbit phenotype over that time period as hybrid individuals were born over sustained periods of time. If that were the case, modern humans on Flores would have much higher proportions of archaic admixture than Australian Aborigines or Papuans, who swiftly moved on from Flores to more distant destinations on what was apparently a one way voyage. Yet, this is not what we observe.
This does leave up in the air the speculative linguistic evidence that Hobbit language learners may have influenced the language of Flores (making it simpler), and the plausibility of the oldest oral histories regarding people who might have been Hobbits; each of these seems more doubtful at a time depth of 50,000 years than at a time depth of as little as 12,000 years.
One possibility that could partially reconcile the two lines of evidence is that a minority of Hobbits survived their initial, negative encounters with modern humans, relocated to less accessible locales, became more wary of modern humans, and persisted in small, isolated relict populations, whose remains have not yet been located, that gave rise to the oral histories at least.
Monday, March 28, 2016
Elevated Levels Of Denisovan Ancestry In South Asians
Razib Khan captures the key findings of a new study of Denisovan DNA in a variety of populations succinctly (read every sentence twice and ponder it; he packs a lot of content in this paragraph):
The South Asian groups consistently jump well above the trend line for inferred Denisovan as a function of shared ancestry with Australians. Also, if you look at the admixture patterns for Denisovan ancestry in South Asia you see they follow the ANI-ASI cline. That is, it seems to come into the South Asian populations through the “Ancestral South Indians.” Interestingly, the Onge sample of Andaman Islanders has less Denisovan than low caste South Asian groups, reminding us that though the Onge and their kin are the closest modern populations to the ASI, they are not descended from the ASI. The highest fraction of inferred Denisovan is in the Sherpa people of Nepal. . . . The proportion of Denisovan in low caste South Asians indicates that the fraction in ASI was about at the same level as the Sherpa. I suspect that ASI and the Tibetan groups got their Denisovan via different paths, but it doesn’t seem like we know yet.

(The comparison is actually as a function of non-West Eurasian ancestry and not Australian ancestry, but his description is otherwise on point.)
The paper notes with respect to this finding that:
[W]e were surprised to detect a peak of Denisovan ancestry estimates in South Asians, both in the Himalayan region and in South and Central India. The highest estimate is in Sherpas (0.10%), who have a Denisovan point estimate about one-tenth of that seen in Papuans (1.12%). Although this is notable in light of the likely Denisovan origin of the EPAS1 allele that confers high-altitude adaptation in Tibetans, EPAS1 is not sufficient to explain the observation as Sherpas have the highest point estimate even without chromosome 2, on which EPAS1 resides. To determine whether the peak of Denisovan ancestry in South Asia is significant, we tested whether the Denisovan ancestry proportion in diverse mainland Eurasians can be explained by differential proportions of non-West Eurasian ancestry (as it is already known that there is more Denisovan ancestry in East Eurasians than in West Eurasians). For each Eurasian population X, we computed an allele frequency correlation statistic that is proportional to eastern non-African ancestry. . . . South Asian groups as a whole have significantly more Denisovan ancestry than expected (block jackknife Z score for residuals = 3.2, p = 0.0013 by a two-sided test for the null hypothesis that the Denisovan ancestry estimate in South Asians is predicted by their proportion of non-West Eurasian ancestry[.] . . .

The signal remains significant (Z = 3.1) when we remove from the analysis five populations that have ancestry very different from the majority of South Asians (Tibetan, Sherpa, Hazara, Kusunda, and Onge); however, the signals are non-significant for Central Asians (Z = 1.2) and Native Americans (Z = 0.1). Taken together, the evidence of Denisovan admixture in modern humans could in theory be explained by a single Denisovan introgression into modern humans, followed by dilution to different extents in Oceanians, South Asians, and East Asians by people with less Denisovan ancestry. If dilution does not explain these patterns, however, a minimum of three distinct Denisovan introgressions into the ancestors of modern humans must have occurred.

To take this in stride, however, the estimated percentage of Denisovan ancestry in South Asians is only 0.06% +/- 0.03% (and 0.01% +/- 0.03% on the X chromosome) v. 0.06% +/- 0.02% (and 0.00% +/- 0.01% on the X chromosome) for East Asians. Native Americans and Central Asians are at 0.05% +/- 0.01% (and 0.00% +/- 0.00% on the X chromosome). West Eurasians average 0.02% +/- 0.01% (and 0.00% +/- 0.00% on the X chromosomes).
Thus, on average, South Asians, East Asians, Native Americans, Central Asians and the Onge all have mean Denisovan ancestry between 0.05% and 0.06%, while Oceanians are at 0.85% +/- 0.43% (and 0.18% +/- 0.17% on the X chromosome), and West Eurasians are at 0.02%.
The results are generally consistent with admixture into non-West Eurasians at the same rate, subject to minor further dilution by West Eurasians in Central Asians and Native Americans, with additional admixture in Papuans and Australians (which enhances the percentages of populations admixed with them) that was only mildly diluted, and additional admixture (pre-dilution) in Tibetans. If so, this would imply that Denisovan admixture took place somewhere in the vicinity of Afghanistan and the Indian subcontinent, rather than further east, right around the time of the West Eurasian-East Eurasian population split.
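The dilution story can be sketched as a toy two-component mixing model. The 0.06% and 0.02% figures are the post's rounded point estimates; the West Eurasian ancestry fractions fed into it are purely illustrative, not estimates from the paper:

```python
# Toy dilution model: assume all eastern non-Africans started with the
# same Denisovan fraction, later diluted by admixture with West
# Eurasians, who carry much less Denisovan ancestry.
BASE = 0.06  # % Denisovan in undiluted eastern Eurasians (rounded figure)
WEST = 0.02  # % Denisovan in West Eurasians (rounded figure)

def expected_denisovan(west_eurasian_fraction):
    """Predicted Denisovan %, given an (illustrative) West Eurasian
    ancestry fraction; a simple linear two-component mixture."""
    w = west_eurasian_fraction
    return (1 - w) * BASE + w * WEST

# The ancestry fractions below are purely illustrative, not from the paper:
for label, w in [("undiluted East Eurasian", 0.0),
                 ("toy 25% West Eurasian mix", 0.25)]:
    print(f"{label}: {expected_denisovan(w):.3f}% Denisovan")
```

Under these toy numbers, a quarter of West Eurasian ancestry takes a population from 0.06% down to 0.05%, the sort of mild dilution seen in Central Asians and Native Americans.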
Razib Khan also notes this language from the abstract of the paper (emphasis his):
In Oceanians, the average size of Denisovan fragments is larger than Neanderthal fragments, implying a more recent average date of Denisovan admixture in the history of these populations (p = 0.00004). We document more Denisovan ancestry in South Asia than is expected based on existing models of history, reflecting a previously undocumented mixture related to archaic humans (p = 0.0013). Denisovan ancestry, just like Neanderthal ancestry, has been deleterious on a modern human genetic background, as reflected by its depletion near genes. Finally, the reduction of both archaic ancestries is especially pronounced on chromosome X and near genes more highly expressed in testes than other tissues (p = 1.2 × 10−7 to 3.2 × 10−7 for Denisovan and 2.2 × 10−3 to 2.9 × 10−3 for Neanderthal ancestry even after controlling for differences in level of selective constraint across gene classes). This suggests that reduced male fertility may be a general feature of mixtures of human populations diverged by >500,000 years.

The paper explains that the raw data call for ridiculously recent dates of admixture (1,000 +/- 8 generations for Denisovans and 1,121 +/- 16 generations for Neanderthals), but explains that all likely sources of bias (mostly incomplete genome samples and dramatic population events) that could make the linkage disequilibrium estimates seem younger than they are would impact both admixture date estimates proportionately. So, Denisovan admixture happened about 10% fewer years ago than Neanderthal admixture, which, based upon other estimates of Neanderthal admixture dates, would suggest a date about 6,000 years after Neanderthal admixture (ca. 44,000-54,000 years ago).
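The back-of-the-envelope arithmetic implicit in that comparison can be written out explicitly; the 50,000 and 60,000 year Neanderthal admixture dates here are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope rescaling of the paper's raw (biased, but
# proportionately biased) admixture date estimates.
denisovan_gens = 1000    # paper's raw Denisovan estimate, in generations
neanderthal_gens = 1121  # paper's raw Neanderthal estimate, in generations

ratio = denisovan_gens / neanderthal_gens  # ~0.89, i.e. ~10% more recent

# Anchor to externally estimated Neanderthal admixture dates; the
# 50,000 and 60,000 year figures are illustrative assumptions.
for neanderthal_years in (50_000, 60_000):
    denisovan_years = neanderthal_years * ratio
    print(f"Neanderthal admixture {neanderthal_years:,} ya -> "
          f"Denisovan admixture ~{denisovan_years:,.0f} ya")
```

Because the unknown biases cancel in the ratio, the relative timing is far better constrained than either raw date.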
We know a lot about Denisovans (with some people having as much as 5% Denisovan DNA) for a population that we have not yet managed to associate with skeletons that could tell us which archaic hominin species they belonged to, or what they looked like in broad general outline.
The big new open question is "How did South Asians end up with elevated levels of Denisovan ancestry in a scenario in which the Onge did not?"
I don't have an answer to that one yet.
Raw Data By Population Below The Break
Precision Hadron Mass Measurement
The bottom lambda baryon is a short lived (its mean lifetime is roughly 1.41*10^-12 seconds), heavy hadron (only eight baryons and one meson have heavier measured masses, although many more that have not been produced and measured in experiments are predicted to be heavier) made of an up quark, a down quark, and a bottom quark. Its existence is predicted by the Standard Model, and it has all of the properties that it is predicted to have in the Standard Model.
A new paper describing some of its (completely expected) properties, including its mass, is mostly exceptional for the precision of the mass measurement, which has a margin of error of less than half an electron mass (the combined error is +/- 0.24 MeV/c2, while the electron mass in the same units is 0.511), or about one part in 23,515. (The relative frequencies of the rare branching fractions examined are measured to a precision of roughly 5%.)
M(Λ0b)=5619.65±0.17±0.17 MeV/c2
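As a quick check on the quoted precision, the statistical and systematic uncertainties combine in quadrature (the usual convention for independent error sources) to just under half an electron mass:

```python
import math

# Combine the statistical and systematic uncertainties on the mass
# in quadrature, the usual convention for independent error sources.
stat, syst = 0.17, 0.17            # MeV/c^2, from the quoted result
combined = math.hypot(stat, syst)  # sqrt(stat**2 + syst**2), ~0.24

ELECTRON_MASS = 0.511  # MeV/c^2
print(f"combined error: {combined:.2f} MeV/c^2")
print(f"half an electron mass: {ELECTRON_MASS / 2:.3f} MeV/c^2")
```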
What's Special About The Λ0b?
The mass of a hadron can be decomposed into the sum of the quark rest masses and the mass-energy of the gluon fields binding the quarks together (with a small adjustment for the electroweak fields between the quarks). Since all quarks have a color charge of the same magnitude, the gluon field contribution is, to first order, roughly the same in all hadrons with the same number of quarks and the same spin and electric charge. This isn't a perfect approximation (the gluon field in a Λ0b has roughly 50% more mass-energy than the gluon field in a neutron, which is identical to it except that the bottom quark is replaced by a down quark), but it is a good heuristic starting point for thinking about the question.
The Λ0b is special in terms of comparing measurement to theory, because in hadrons with a bottom quark, the quark rest masses make up a majority of the mass (unlike protons and neutrons where the sum of the quark rest masses make up less than 2% of the total mass). Since the precision of the measurements is roughly the same in absolute terms regardless of the mass of the hadron measured, heavier hadrons can be measured with greater precision on a percentage basis than lighter hadrons.
Even measured in terms of absolute accuracy, for which 1 MeV is the norm, 1/4 of an MeV is still excellent. As the paper notes at page 10: "This is the most precise measurement of any b-hadron mass reported to date. . . . Previous direct measurements of the Λ0b mass by LHCb were made using the decay Λ0b → J/ψΛ0 and are statistically independent of the results of this study. The combination obtained here is consistent with, and more precise than, the results of these earlier studies." Also notably:
From the value of the Λ0b mass . . . and a precise measurement of the mass difference between the Λ0b and B0 hadrons reported in Ref. [6], the mass of the B0 meson is calculated to be:
M(B0) = 5279.93 ± 0.39 MeV/c2,
where the correlation of 41% between the LHCb measurements of the Λ0b mass and the Λ0b–B0 mass splitting has been taken into account. This is in agreement with the current world average of 5279.61 ± 0.16 MeV/c2.

A neutral B meson is a two quark composite particle which (to oversimplify) is made up of a bottom quark and an anti-down quark.
QCD Still Has Great Experimental Accuracy And Poor Theoretical Accuracy
The precision of the experimental measurement is much greater than the precision of the theoretically predicted mass of the baryon, which is in the vicinity of one part per 100 to one part per 1,000.
We can get predictions for hadron masses that rival the accuracy of state of the art first principles QCD by playing around with linear regression models of existing hadron mass data sets (including a few non-linear terms of the properties in the equation, such as the square of the quark masses, and dummy variables for factors like hadron spin), although the best predictions using either approach don't rival the precision of the measurements themselves. And, knowing which parts of a first principles calculation are important isn't obvious; it often implicates factors that are only discernible after the fact.
The theoretical estimates are imprecise because, while we have the exact equations of QCD that are used to determine it* and have more than enough high precision calibration points in the form of measurements of myriad hadron properties (for example, we know the proton and neutron masses with a precision of roughly one part per 100,000,000), the relevant QCD calculations are extremely difficult to conduct even with supercomputers.
While this is most obvious in the imprecision of the QCD coupling constant, the imprecision in that constant is almost entirely due to the imprecision in QCD theoretical calculations, as opposed to measurement error. Since quarks are confined, the only way to measure the QCD coupling constant is to calculate hadron properties as a function of the QCD coupling constant and then to calibrate the results against the measured hadron properties.
The hadron measurements are very precise, but the theoretical calculations from which the QCD coupling constant can be reverse engineered using the measured hadron properties have great uncertainties, because you have to calculate the values of gillions of path integral terms to get to even a several loop level, and the infinite series that gives the exact QCD value doesn't converge very rapidly. The most precise QCD calculations done from scratch these days are done numerically, rather than analytically, to a two to four loop level, and use one of several simplifications of the full QCD calculations.
We could increase the accuracy with which we know the QCD coupling constant to the same accuracy as the theoretical calculation, if we could calculate just one experimentally well calibrated hadron property to that level of accuracy theoretically. And, this would also greatly improve our ability to precisely measure the quark masses, since one of the inputs into the equation would be much more precisely known. But, even if we were able to improve the precision of these constants due to an isolated highly symmetric or cancellation prone setup, our generalized QCD calculation precision would improve only a little, because the main limiting factor is our inability to add in a sufficient number of terms to the slowly converging infinite series approximation and not the imprecision with which we know the physical constants.
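This calibration logic (extracting the coupling from measured hadron properties) can be illustrated with a deliberately fake one-parameter model. Nothing below is real QCD; the prediction function, the measured value, and both error estimates are invented purely to show why the coupling inherits the theoretical rather than the experimental uncertainty:

```python
# A deliberately fake one-parameter "theory" showing why the coupling
# constant inherits the theoretical rather than the experimental
# uncertainty. Nothing here is real QCD.

def predicted_mass(alpha_s):
    # Hypothetical monotonic prediction: mass as a function of coupling.
    return 1000.0 * alpha_s + 800.0

measured_mass = 918.0     # toy measurement
sigma_measurement = 0.01  # toy: hadron masses are measured very precisely
sigma_theory = 5.0        # toy: truncated-series theory error dominates

# Invert the prediction by bisection to extract the coupling.
lo, hi = 0.0, 1.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    if predicted_mass(mid) < measured_mass:
        lo = mid
    else:
        hi = mid
alpha_s = (lo + hi) / 2

# Propagate both error sources through the inverse (local slope = 1000):
slope = 1000.0
sigma_alpha = (sigma_measurement**2 + sigma_theory**2) ** 0.5 / slope

print(f"alpha_s = {alpha_s:.4f} +/- {sigma_alpha:.4f}")
```

Shrinking the toy measurement error by another factor of ten would not visibly change the extracted uncertainty; only shrinking the theory error would, which is the situation described above.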
* This is a widely held belief among physicists who do QCD, and in Standard Model particle physics more generally, for a variety of reasons, some theoretical and some empirical: we haven't seen the kind of systematic deviations from its predictions that we would expect if the equations were wrong. We know them to be true to the level of precision at which we can do the calculations. For all practical purposes, a theoretical calculation precision of one part in 1,000,000 or better (i.e. a 1,000 to 10,000 fold improvement in precision) would be exact relative to current experimental precision. Any adjustment due to the admittedly ignored effects of gravity between the particles would be smaller than that level of precision.
Friday, March 25, 2016
Lucy's Kind Had A Range That Extended Outside Africa's Rift Valley
The primate species Australopithecus afarensis for which the type fossil is "Lucy" was a diminutive archaic species intermediate between other great apes and modern humans (it isn't clear if this species was directly ancestral to modern humans or merely a cadet branch of the lineage) that existed roughly 3.7 million to 3 million years ago.
Previously, remains of this species had been found only in the African Rift Valley, but a new find from the outskirts of Nairobi, Kenya demonstrates that Lucy's kind had a wider geographic and ecological range.
Monday, March 21, 2016
The Extinct Megafauna Blog
Usually, I write about when megafauna extinction happened and how. But, the megafauna that went extinct when modern humans expanded were themselves pretty cool and there is a blog about them called TwilightBeasts. It is worth your time to peruse.
The image below is anachronistic but illustrates scale well:
Friday, March 18, 2016
The LHC Is That Incredible
The Large Hadron Collider produces gamma rays with energies just two orders of magnitude short of those produced by the intense collisions that take place near the event horizon of the supermassive black hole at the center of the Milky Way galaxy. That black hole has a mass roughly four million times that of the Sun, which in turn has roughly 333,000 times the mass of the Earth.
For more than ten years the H.E.S.S. observatory in Namibia, run by an international collaboration of 42 institutions in 12 countries, has been mapping the center of our galaxy in very-high-energy gamma rays. These gamma rays are produced by cosmic rays from the innermost region of the Galaxy. A detailed analysis of the latest H.E.S.S. data reveals for the first time a source of this cosmic radiation at energies never observed before in the Milky Way: the supermassive black hole at the center of the Galaxy, likely to accelerate cosmic rays to energies 100 times larger than those achieved at the largest terrestrial particle accelerator.

While the linked story emphasizes how intense the gamma rays in the vicinity of Sagittarius A* at the center of our galaxy are (this is the most energetically intense environment that exists in nature for perhaps millions of light years or more around us), honestly, I would pretty much expect the environment next to such a huge black hole to be incredibly intense.
But, the fact that we have already created conditions 1% as intense in an Earth based laboratory environment, and could fairly easily scale the existing technology up to be 100 times more powerful for something on the order of tens of billions to hundreds of billions of dollars over a decade or so (not cheap, but a tiny percentage of the world's GDP), is really pretty stunning.
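Using the round figures above (4 million solar masses, 333,000 Earth masses per Sun, cosmic rays at roughly 100 times LHC energies), the scale comparison works out as follows; all of these numbers are order-of-magnitude approximations, not precise values:

```python
import math

# Mass of Sagittarius A* expressed in Earth masses (round figures from the post).
sgr_a_in_solar_masses = 4.0e6
sun_in_earth_masses = 3.33e5
sgr_a_in_earth_masses = sgr_a_in_solar_masses * sun_in_earth_masses  # ~1.3 trillion Earths

# Energy scales: LHC collisions at 13 TeV vs. cosmic rays accelerated near
# the black hole to roughly 100x that.
lhc_collision_tev = 13.0
cosmic_ray_tev = 100.0 * lhc_collision_tev
orders_of_magnitude_gap = math.log10(cosmic_ray_tev / lhc_collision_tev)
```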
Friday, March 11, 2016
Jati As Communities Of Mutual Support
A post at the economics blog Marginal Revolution (original source in the link) sheds some light on the social role of jati in India which is important to understanding the institution, even though this is looking only at its modern manifestation (emphasis added):
[T]he real wage gap [rural to urban] in India is at least 16 percentage points larger than it is in China and Indonesia. There is evidently some friction that prevents rural Indian workers from taking advantage of more remunerative job opportunities in the city.

Indian migration to the cities is much lower than for China or Indonesia. Here is part of the answer:

The explanation that we propose for India's low mobility is based on a combination of well-functioning rural insurance networks and the absence of formal insurance, which includes government safety nets and private credit.…

In rural India, informal insurance networks are organized along caste lines. The basic marriage rule in India (which recent genetic evidence indicates has been binding for nearly two thousand years) is that no individual is permitted to marry outside the sub-caste or jati (for expositional convenience, we use the term caste interchangeably with sub-caste). Frequent social interactions and close ties within the caste, which consists of thousands of households clustered in widely dispersed villages, support very connected and exceptionally extensive insurance networks.

Households with members who have migrated to the city will have reduced access to rural caste networks…

Analogies
The analogy that comes to mind is that the jati served some of the functional roles of religious denominations (particularly Roman Catholics and Mormons) and political party machines in American history, and the Tong (a.k.a. "Benevolent Association") in Chinese immigration history. Koreans have something more ephemeral, but analogous, where a group of a dozen or two people meets regularly to pool monthly savings that go to one member as start up capital for a business, with the recipient rotating each meeting.
Even more apt analogies may be to medieval guilds and professions, and to unions organized on an industry level such as the Actors' Equity Association and the Screen Actors Guild. The latter, because their industries are "gig based" with each production organized as a separate firm, have historically provided a central clearinghouse of job opportunities as well as many of the fringe benefits, like health insurance and retirement vehicles, that would have been secured from an employer during the era when lifetime employment with a single firm was the norm.
Imagine the India in which these institutions came into place as one in which all professions and occupations worth the name were unionized, and in which only people engaged in work so unskilled and menial that its practitioners did not unionize, and were instead atomized in society, became the Dalits. The U.S. economy was at one point headed down a similar path, but ultimately veered away from it as private sector unionization slowly faded; the model remains alive and well in places like Germany, where even being a waiter is a regulated profession.
Needless to say, what distinguishes jati from these Western institutions is the strong tradition of endogamy at the fine grained level of Indian jati, while the West's traditions of endogamy were weaker and operated at the more general level of social class (roughly as coarse grained as Indian varna). But, in many other respects the analogy is quite strong.
Adding a kinship dimension to jati, as well as a professional one, may have allowed this institution of Indian civil society to hold fast in the face of a weak state that did not have the capacity to enforce binding long term mutual support obligations, or to organize a welfare state of its own as many of the Bronze Age and Iron Age states of West Eurasia did in the "bread and circus" systems of palace/state based taxation and food rationing that emerged throughout the Mediterranean.
The kinship element can be seen as analogous to the clans common in societies with cultures of honor, often with herding economies, which are kinship based but not professionally based.
In contrast, in China, rice farming estates were largely required to support themselves and to share any excess with the state, without substantial social welfare food assistance from outside the rice farming estate, creating incentives for rice farming groups to produce so they would not starve.
Thus, the Indian caste system can be seen as a civil society based tool that facilitated an economy more complex than that of feudal estates in Medieval Europe or of clan based herder societies, one more akin to Europe's "free cities", even in the absence of the strong state institutions or compact walled cities that would otherwise be necessary to organize such an economy.
Wednesday, March 9, 2016
Another Top Quark Mass Measurement
The latest lepton decay based measurement of the top quark mass from CMS at the LHC is 173.8 +1.8/-1.7 GeV. This tends to pull up the LHC and world average measurements of this key parameter of the Standard Model.
The state of the art prior to this measurement was recapped in a December 2015 post at this blog:
The best available estimate of the mass of the top quark from the Large Hadron Collider (LHC) combining data from both the CMS and ATLAS experiments is now 172.38 +/- 0.66 GeV. The final Tevatron mass measurement for the top quark was 174.34 +/- 0.64 GeV. This brings the error weighted world average mass measurement of the top quark to about 173.35 GeV, which is consistent with both the LHC measurement and the Tevatron measurement at the 1.5 sigma level.

The new measurement is right on the button of the value that I would expect given measurements of the Higgs boson to date, suggesting that the sum of the squares of the fundamental particle masses does indeed equal the square of the Higgs vacuum expectation value.
The previous top quark mass estimate from ATLAS (as of April of 2015) was 172.99 +/- 0.91 GeV. The latest combined LHC measurement excluding that ATLAS estimate was 173.34 +/- 0.76 GeV. Thus, the LHC mass measurement is trending down.
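The "error weighted world average" in the quote is an inverse-variance weighted mean. A back-of-the-envelope combination of the two headline numbers (a sketch that ignores the correlations a real combination must model) lands near the quoted ~173.35 GeV:

```python
# Inverse-variance weighted combination of two top quark mass measurements.
measurements = [
    (172.38, 0.66),  # LHC combination (central value in GeV, uncertainty)
    (174.34, 0.64),  # Tevatron final measurement
]

weights = [1.0 / sigma**2 for _, sigma in measurements]
mean = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
combined_sigma = (1.0 / sum(weights)) ** 0.5
# mean is roughly 173.4 GeV with an uncertainty of roughly 0.46 GeV
```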
As noted in the Tevatron mass estimate post:
The expected value of the top mass from the formula that the sum of the square of each of the fundamental particle masses equals the square of the Higgs vacuum expectation value, given the state of the art Higgs boson mass measurement (and using a global fit value of 80.376 GeV for the W boson rather than the PDG value) is 173.73 GeV. . . . If the sum of the square of the boson masses equals the sum of the square of the fermion masses, the implied top quark mass is 174.03 GeV if pole masses of the quarks are used, and 174.05 GeV if MS masses at typical scales are used.

Thus, there are theoretical conjectures that pull the expected value of the top quark mass up from the current estimates, although those estimates are not in great tension with the current global average.
It is also consistent with the refined possibility that the sum of the square of the fundamental fermion masses equals the sum of the square of the fundamental boson masses.
This leaves very little room for new beyond the Standard Model particles (at least if they interact with the 125 GeV Higgs boson's field).
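The sum-of-squares conjecture discussed above is easy to check numerically with approximate circa-2015 mass values (the lighter fermions contribute negligibly at this precision):

```python
# Conjecture: sum of squares of fundamental particle masses ~ (Higgs VEV)^2.
# Masses in GeV, approximate values; lighter quarks and leptons are negligible.
fermions = {"top": 173.34, "bottom": 4.18, "charm": 1.27, "tau": 1.777}
bosons = {"higgs": 125.09, "Z": 91.1876, "W": 80.385}
higgs_vev = 246.22

total = sum(m**2 for m in fermions.values()) + sum(m**2 for m in bosons.values())
ratio = total / higgs_vev**2  # a fraction of a percent below 1

# Top quark mass that would make the relation exact, holding everything else fixed.
implied_top = (higgs_vev**2 - (total - fermions["top"]**2)) ** 0.5
```

The ratio comes out just under one, and solving for the top mass that would make the relation exact roughly reproduces the ~173.7 GeV figure in the quote above.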
Thursday, March 3, 2016
Work In Process Analyzing Deep South Asian History And South Asian Population Genetics
I've been reading a lot of articles on the history and population genetics of the South Asian caste system, and have bookmarked them, but it is taking time to get to the point where I feel that I have a firm enough command of the material to synthesize that information into a well referenced post here.
The story at the top of the caste pyramid is pretty well understood and is fairly familiar to me (although still more complex than conventional wisdom would suggest), as this is deeply interrelated to the relatively familiar topic of Indo-European and Dravidian linguistic origins.
In the middle, the main story seems to be one of regional variation, and of the extent to which geography, as opposed to caste, is the more relevant determinant of a particular population's genetic makeup. This story is less familiar, but still seems amenable to an analysis not unlike that done at the top of the pyramid.
But, I'm particularly intrigued by and still coming to terms with understanding the distinction between middle to lower caste individuals and people who are beneath the main varna structure entirely, which consists of two distinct populations: "untouchables" a.k.a. Dalits a.k.a. Scheduled Castes, on one hand, and "tribal" populations, on the other.
Dipping my toe in so far, the population genetics of the Scheduled Castes seem to be dominated by distinctively South Asian features not found in any other population on Earth and properly characterized as indigenous. This raises the question of how the "otherness" of these castes came to be. A tentative hypothesis is that the Scheduled Castes may have been mostly made up of the descendants of prisoners of war, captured in conflicts between South Asian micro-states when the subcontinent had a balkanized political landscape, who were held in a slave-like status at some point.
In contrast, so far, the population genetics of "tribal" populations in India appear to vary greatly from tribe to tribe, are sometimes not dominated by genetic features private to South Asia, and do not always have genetics that reflect their current language, suggesting that there have been instances of language shift in tribal populations.
It tentatively appears that many of the tribal populations that have been foragers in historical time periods reverted to that status after having ancestors who were probably food producers who migrated to South Asia from outside the subcontinent at some point during the latter half of the Holocene era. This is contrary to the conventional wisdom that sees tribal populations as the indigenous hunter-gatherers of India, which is true if one looks only to the historic era, but may not be true when viewed from the sweeping perspective of the entire Holocene era.
As I note, these tentative conclusions are subject to change, undigested, unrefined, and not carefully sourced to the references that support them at this point, as I go through the process of assimilating and making sense of the rather large literature on the subject, not all of which is consistent. But, this post does provide some work in process update to what I've been looking at and thinking about on that score.
I'd welcome comments on these tentative ideas, pointers to different well supported hypotheses, and sources that address these questions from various disciplinary perspectives.