Saturday, September 30, 2017

Blue Eyes Are A Recent Mutation

New research shows that people with blue eyes have a single, common ancestor. Scientists have tracked down a genetic mutation which took place 6,000-10,000 years ago and is the cause of the eye color of all blue-eyed humans alive on the planet today.
From here.

The paper is:

Hans Eiberg, Jesper Troelsen, Mette Nielsen, Annemette Mikkelsen, Jonas Mengel-From, Klaus W. Kjaer, Lars Hansen. "Blue eye color in humans may be caused by a perfectly associated founder mutation in a regulatory element located within the HERC2 gene inhibiting OCA2 expression." 123(2) Human Genetics 177 (2008) DOI: 10.1007/s00439-007-0460-x

Thursday, September 28, 2017

Experimental Confirmation Of Koide's Rule And Lepton Universality In Tau Leptons

Koide's Rule Confirmed Again

Koide's rule, a formula proposed in 1981, six years after the tau lepton was discovered and when its mass was known much less accurately, predicts the mass of the tau lepton from the masses of the electron and the muon. The prediction, using current electron and muon mass measurements, is:

1776.96894 ± 0.00007 MeV/c^2.

The uncertainty is entirely due to uncertainty in the electron and muon mass measurements. The low uncertainty in the Koide's rule prediction reflects the fact that the electron and muon mass have been measured much more precisely than the tau lepton mass.
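Koide's relation says that (m_e + m_mu + m_tau) divided by the square of (√m_e + √m_mu + √m_tau) is exactly 2/3. The following is a minimal numerical check of that statement (a sketch only: the electron and muon masses below are recent PDG central values rather than the exact inputs behind the quoted prediction, so the last digits may differ slightly):

```python
import math

M_E, M_MU = 0.51099895, 105.6583745   # MeV/c^2 (assumed PDG central values)

def koide_q(m_tau):
    """Koide's Q ratio; exactly 2/3 when the relation holds."""
    total = M_E + M_MU + m_tau
    root_sum_sq = (math.sqrt(M_E) + math.sqrt(M_MU) + math.sqrt(m_tau)) ** 2
    return total / root_sum_sq

# Bisect for the tau mass at which Q = 2/3 (Q increases with m_tau in this range).
lo, hi = 1000.0, 3000.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if koide_q(mid) < 2 / 3 else (lo, mid)

m_tau_koide = (lo + hi) / 2
print(f"Koide-predicted tau mass: {m_tau_koide:.5f} MeV/c^2")   # ~1776.969

# Pulls of the measurements discussed below against that prediction.
print((m_tau_koide - 1776.91) / 0.17)   # BESIII: ~0.35 standard deviations
print((m_tau_koide - 1776.83) / 0.12)   # PDG 2016: ~1.2 standard deviations
```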

The latest measurement from BESIII, which is the most precise single experimental measurement to date (UPDATE: from 2014), is:

1776.91 ± 0.12 (stat.) +0.10/− 0.13 (syst.) MeV/c^2 (the combined error is ± 0.17).

This result is 0.06 MeV less than the Koide's rule prediction, which means that the measurement is within less than one-half of a standard deviation of experimental uncertainty of the predicted value.

The new result is closer to the Koide's rule prediction than the Particle Data Group (PDG) value for 2016 which is:

1776.83 ± 0.12 MeV/c^2

The PDG value is within about 1.2 standard deviations of the Koide's rule prediction. This new result will probably push the next PDG value closer to the Koide's rule prediction.

Koide's rule is one of the most accurate phenomenological hypotheses in existence that has no Standard Model theoretical explanation. Given the precision to which it holds, however, there is almost certainly some explanation for this correspondence based upon new physics beyond (or really "within") the Standard Model.

Another Confirmation Of Lepton Universality

The same experiment also analyzed its data to determine whether it was consistent with lepton universality between tau leptons and muons. The Standard Model predicts lepton universality, but some B meson decay data seem to show weak evidence of a lepton universality violation.

Lepton universality means that all charged leptons have precisely the same properties except mass. The experiment confirmed this prediction of the Standard Model by comparing a ratio of two experimental results that should be 1.0 if lepton universality is correct. The measured value of that ratio is:

1.0016 ± 0.0042

Thus, the experimental outcome was again within less than half a standard deviation of experimental uncertainty of the predicted value, and lepton universality is confirmed.

Wednesday, September 27, 2017

The Source Of the Proto-Chadic Y-DNA R1b-V88 People Of Africa

For reasons that I spell out in several comments to a post at the Bell Beaker blog, I think there is very good reason to believe that the Y-DNA R1b-V88 bearing Chadic people are derived from migrants who originated in the Bug-Dniester culture of Ukraine, departing about 5500 BCE.

They made their way southwest to Western Anatolia, then along the Levantine coast to Cairo, Egypt, then up the Nile to Upper Egypt or Sudan, and then along what is now a dry riverbed, but was then the White Nile, tracing it to its source, before finally crossing the low mountains and hills there to Lake Chad, where they settled around 5200 BCE, having picked up Cushitic wives not long before making the final leg of the 300 year long folk migration.

The cultural changes, the language shift to the language of their wives, and the great time depth of this journey limit our ability to discern what they were like culturally during the long journey over which proto-Chadic ethnogenesis took place.

This trip was probably made in several spurts, as these cattle herders were probably periodically forced to uproot themselves and move on by climate events happening at the time and by unfriendly, and possibly more militarily powerful, farmer neighbors.

I will copy some of the analysis that supports this and throw in some maps as well, as time allows.

Ultimately, however, I'm really quite comfortable that this is now a solved mystery to the fullest extent it may ever be possible to solve it.

UPDATE September 29, 2017

After rereading Chapter 8 of David Anthony's "The Horse, Wheel and Language" (2007), it was possible to further narrow the window of space and time.

Cattle were only widely adopted by the Bug-Dniester culture in the Dniester River Valley, on the western edge of the culture's range, and became a majority food source in the period from 5400 BCE to 5000 BCE. This means that the departure had to take place between 5400 BCE and 5200 BCE and could have taken much less than the previously estimated 300 years, definitely no more than about 200 years and possibly quite a bit less than that. This would have put them at the tail end of the introduction of cattle into the region where they arrived.



Map of the Dniester basin, per Wikipedia.

The Bug-Dniester people of the Dniester basin were concentrated in a region of forest-steppe roughly corresponding to the northern half of the eastern border of Moldova, a stretch of river about 100 miles long and perhaps 20 miles wide. This is an area between the size of the state of Rhode Island and that of the state of Delaware.

David Anthony speculates that the Bug-Dniester people may have spoken a language that was part of the same language family that was ancestral to the Indo-European languages, but he does not make a particularly confident pronouncement on that point.

Unlike their Cris culture neighbors to their immediate west in Moldova, from whom they acquired cattle husbandry and some limited farming of several ancient kinds of wheat (a smaller number of crops than the Cris people cultivated), they did not eat sheep. The Bug-Dniester people provided more of their diet through hunting than the Cris did. The Cris people were early European farmers derived from Anatolia and were ultimately replaced by LBK (Linear Pottery) farmers, who never crossed the cultural dividing line of the Dniester River.

This narrow window in time and space for the source population is comparable in precision to the known places of origin of (1) the Bantu expansion (near the coastal southern border of Nigeria), (2) the Asian component of the founding population of Madagascar (a particular part of the southern coast of the island of Borneo), and (3) the founding population of the Austronesian people (a particular part of the island of Formosa). The window for the place of origin of the European Roma is also quite specific in time and place. END UPDATE

A Nice Standard Model Illustration


The only flaw here is that it doesn't distinguish between neutrinos, which don't couple to photons but do couple to the W and Z bosons (and probably to the Higgs boson, although their coupling to the Higgs boson is really an unresolved issue in the Standard Model), and charged leptons, which demonstrably couple to photons, the W and Z bosons, and the Higgs boson.

A hypothetical graviton would couple to all of the particles in the diagram including itself.

Friday, September 22, 2017

Easter Island Had A Peak Population Of 17,500

Easter Island, known as Rapa Nui by its inhabitants, has been surrounded in mystery ever since the Europeans first landed in 1722. Early visitors estimated a population of just 1,500-3,000, which seemed at odds with the nearly nine hundred giant statues dotted around the Island. How did this small community construct, transport and erect these large rock figures?
From Science Daily.

According to a new study, this was possible because the island had a peak population of 17,500 people. The determination was based upon the number of people that the island could support if it were farmed in accordance with the Austronesian farming practices used at the time.

The paper is:

Cedric O. Puleston, et al., "Rain, Sun, Soil, and Sweat: A Consideration of Population Limits on Rapa Nui (Easter Island) before European Contact." 5 Frontiers in Ecology and Evolution (2017). DOI: 10.3389/fevo.2017.00069

Wednesday, September 20, 2017

Settling Oceania



Oceania Map via Wikipedia.

A major study of the current autosomal genetics of Oceania largely supports the existing paradigm, with some fairly subtle nuances in the post-Austronesian period, and also demonstrates that there was linguistic diversity among the founding populations of the Solomon Islands.
A widely accepted two-wave scenario of human settlement of Oceania involves the first out-of-Africa migration ca 50,000 ya, and one of the most geographically-widespread dispersals of people, known as the Austronesian expansion, which reached the Bismarck Archipelago by about 3,450 ya. While earlier genetic studies provided evidence for extensive sex-biased admixture between the incoming and the indigenous populations, some archaeological, linguistic and genetic evidence indicates a more complicated picture of settlement. To study regional variation in Oceania in more detail, we have compiled a genome-wide dataset of 823 individuals from 72 populations (including 50 populations from Oceania) and over 620,000 autosomal SNPs. 
We show that the initial dispersal of people from the Bismarck Archipelago into Remote Oceania occurred in a "leapfrog" fashion, completely by-passing the main chain of the Solomon Islands, and that the colonization of the Solomon Islands proceeded in a bi-directional manner. Our results also support a divergence between western and eastern Solomons, in agreement with the sharp linguistic divide known as the Tryon-Hackman line. 
We also report substantial post-Austronesian gene flow across the Solomons. In particular, Santa Cruz (in Remote Oceania) exhibits extraordinarily high levels of Papuan ancestry that cannot be explained by a simple bottleneck/founder event scenario.
From here (links added editorially).

What Is The Minimum Size Of A Hunter-Gatherer Society's Population Bottleneck?

In order to have a sustainable hunter-gatherer population (i.e. one that is not moribund), you need a gender-balanced population of about 150 people, although "less restrictive marriage rules" allow a somewhat smaller population (as low as 40-60 people) to remain viable. 
A non-spatial agent-based model is used to explore how marriage behaviors and fertility affect the minimum population size required for hunter-gatherer systems to be demographically viable. The model incorporates representations of person- and household-level constraints and behaviors affecting marriage, reproduction, and mortality. Results suggest that, under a variety of circumstances, a stable population size of about 150 persons is demographically viable in the sense that it is largely immune from extinction through normal stochastic perturbations in mortality, fertility, and sex ratio. Less restrictive marriage rules enhance the viability of small populations by making it possible to capitalize on a greater proportion of the finite female reproductive span and compensate for random fluctuations in the balance of males and females.
Andrew White, "A Model-Based Analysis of the Minimum Size of Demographically-Viable Hunter-Gatherer Populations" 20(4) Journal of Artificial Societies and Social Simulation 9 (published online September 20, 2017 in advance of October 31, 2017 print publication) (open access). DOI: 10.18564/jasss.3393
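To give a concrete flavor of the kind of question the model answers, here is a deliberately crude Monte Carlo sketch. It is not White's agent-based model: it has no age structure, marriage rules, or household behavior, and every rate in it is purely illustrative, so its output should not be read as reproducing the paper's thresholds. It simply asks how often a small population with roughly balanced births and deaths survives 400 years of random demographic fluctuation.

```python
import numpy as np

rng = np.random.default_rng(0)

def survival_fraction(start_pop, years=400, trials=500,
                      death_rate=0.03, birth_rate_per_pair=0.06):
    """Fraction of simulated populations still alive after `years` years.
    Toy dynamics (illustrative rates only): each year every individual dies
    with probability `death_rate`, and each male/female pair produces one
    child with probability `birth_rate_per_pair`, so births and deaths are
    roughly balanced and only stochastic drift matters."""
    survived = 0
    for _ in range(trials):
        males = start_pop // 2
        females = start_pop - males
        for _ in range(years):
            males = rng.binomial(males, 1 - death_rate)
            females = rng.binomial(females, 1 - death_rate)
            births = rng.binomial(min(males, females), birth_rate_per_pair)
            boys = rng.binomial(births, 0.5)
            males += boys
            females += births - boys
            if males + females == 0:
                break
        survived += (males + females) > 0
    return survived / trials

for size in (25, 50, 150):
    print(size, survival_fraction(size))
```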

The study does not look carefully at inbreeding concerns other than assuming a basic incest taboo.

This is compatible, as an order of magnitude, with known extreme founder events such as the founding populations of Madagascar, the Americas, Australia, and New Zealand, and the paper notes that colonizing populations such as these could actually be considerably smaller and still be viable, although at that point a serious look at inbreeding depression issues would be necessary. This is particularly an issue since the available data suggest that colonizing populations often involve a small number of extended families.

Ethnographically, widespread (but not universal) cousin marriage appears to be sustainable, but it starts to become problematic over multiple generations, as cousins become increasingly more closely related over time than two cousins from otherwise non-inbred populations would be. If more closely related couples are allowed, e.g. sibling marriage (as in Egyptian royal dynasties) or aunt-nephew marriage (seen in a number of contexts), the available ethnographic evidence suggests that this is not sustainable, as congenital defects rise to problematically high levels.

It isn't entirely clear what impact the founding population's means of food production has on the total. Would the minimum size of a population of herders, farmers, or fishermen be higher or lower, respectively? What about a technologically advanced civilization?

More material from the body of the paper:
In this paper, a "viable" population is defined as one that survives over a span of 400 years. . . . Many ethnographically-documented hunter-gatherer populations spend at least part of the year in dispersed, autonomous foraging groups generally containing less than 35 persons (Binford 2001; Kelly 1995). The size and mobility behaviors of these foraging groups are strongly conditioned by subsistence ecology, which often makes it impossible to support larger aggregations of persons for extended periods of time (Binford 1980, 1983, 2001; Kelly 1995). While these small foraging groups constitute functional economic units for subsistence purposes, they are not presumed to be demographically self-sustaining over the course of many generations. Thus groups large enough to be demographically viable are too large to be economically viable, while groups small enough to be economically viable on a day-to-day basis are not demographically viable over the long term. Ethnographic hunter-gatherer systems "solve" this dilemma by using cultural mechanisms (e.g., kinship, exchange, and personal/group mobility) to build and maintain a social system that binds together a dispersed population, facilitating the transfer of information and persons between groups (see White 2012). . . . 
Birdsell (1953, 1958, 1968) approached the problem of determining equilibrium size by considering ethnographic data from Australia. He noted that the size of "dialectical tribes" of Australian hunter-gatherers had a central tendency of around 500 persons, and reasoned that endogamous social systems of this size were probably also typical of Paleolithic hunter-gatherers. It is worth pointing out, as Birdsell (1953, p. 197) himself did, that the number 500 was a statistical abstraction produced from disparate sources of ethnographic data: the documented size range of Australian "tribes," while varying around a central tendency of about 500 persons, commonly included groups of less than 200 (see Birdsell 1953, Figure 8, 1958, p. 186; Kelly 1995, pp. 209-210). . . .
Other settings being equal, polygynous systems in the model generally have smaller MVP sizes than monogamous ones. Polygynous systems are associated with higher mean total fertilities (the number of children born to a female over the course of her reproductive years), lower mean female ages at marriage, and lower mean inter-birth intervals (see Table 6 and Figure 3). The negative relationship between total fertility and female age at marriage is clear when the two variables are plotted against one another (Figure 10). The polygynous model systems provide more opportunities for females to marry earlier in life and potentially bear more offspring during the course of their reproductive spans, enhancing the viability of small populations by increasing fertility and compensating for imbalances in sex ratio. Imposition of marriage divisions has less effect on MVP size in polygynous systems than monogamous ones. . . .
Results from the model suggest that populations limited to as few as 40-60 people can be demographically viable over long spans of time when cultural marriage rules are no more restrictive than the imposition of a basic incest taboo. Even under the most restrictive marriage rules and most severe mortality conditions imposed during model experiments, populations of at least 150 persons were demographically viable over 400 years of model time. These estimates of MVP size were calculated for populations with no logistical constraints (e.g., no impediments to information flow and no spatial component of interaction) to identifying and obtaining marriage partners. 
With regard to MVP size, my results are generally consistent with the conclusions of Wobst (1974). Forty runs of his model under varying conditions returned values of MES (minimal equilibrium size) between 79 and 332 people (Wobst 1974, pp. 162-163, 166). As discussed above, Wobst’s higher, often-quoted estimate of MES range of 175-475 persons was tied to specific assumptions about both population density and the arrangement of the population in space. Neither density nor spatial distribution of population are represented in my model. It is possible that those differences alone account for the majority of differences in the results from the two models. It is also possible that differences in the representation of marriage, mortality, fertility, and family structure are important. One way to investigate this would be to implement Wobst’s model as an ABM and make a direct comparison. 
Keeping in mind that MVP and MES are not strictly synonymous, the broad agreement between my results and those of Wobst is noteworthy. Both sets of results are consistent with the idea that, in the absence of strong cultural restrictions on marriage, populations limited to perhaps 150 people are of sufficient size to nearly ensure survival over long spans of time and populations limited to as few as 75 persons have a good chance of long-term survival under the same conditions.
Although my conclusion is at odds with the generality offered by Moore and Moseley (2001, p. 526) that "bands of such small size [175 persons, 50 persons, or 25 persons] usually do not constitute viable mating systems that would guarantee the reproductive future of the band," it is actually in broad agreement with the data that they present. Figure 11 compares the percentage of populations of a given initial size range that survive a span of 400 years in data provided by Moore (2001, Table 6) and two sets of my own data. Moore’s (2001) results are similar to mine in terms of the relationship between initial population size and survivorship. Although Moore (2001) does not provide data for initial populations larger than 60 persons, the trends in his data clearly suggest that populations of around 100 persons would have a very high survival rate over a 400-year span in his model, even under the low fertility regime he employed to produce the results in his Table 6 (i.e., a mean of 2.56 births during a full female reproductive span [see Moore 2001, Table 1]). . . .
While this analysis wasn’t aimed directly at issues of the viable size of colonizing populations, it is worth noting that the results do not necessarily conflict with the idea that successful colonizing populations of hunter-gatherers could initially be much smaller than MVP size. The experiments discussed here were performed under conditions where population sizes were stabilized through a feedback mechanism to determine at what population threshold the inherent stochastic variability in human mortality/fertility was not a threat to long-term survival. In the absence of feedbacks to constrain population size, conditions where fertility exceeds mortality could allow very small populations to grow to a point where normal stochastic perturbations no longer pose a threat of extinction. Exploration of that subject will require further experimentation.

Friday, September 15, 2017

The Case For A Funnel Beaker Substrate In Germanic Languages

A new paper makes the case that the Funnel Beaker people of Southern Scandinavia, the urheimat of the Germanic languages, provided the non-Indo-European substrate in the Germanic languages.
In this article, we approach the Neolithization of southern Scandinavia from an archaeolinguistic perspective. Farming arrived in Scandinavia with the Funnel Beaker culture by the turn of the fourth millennium B.C.E. It was superseded by the Single Grave culture, which as part of the Corded Ware horizon is a likely vector for the introduction of Indo-European speech. As a result of this introduction, the language spoken by individuals from the Funnel Beaker culture went extinct long before the beginning of the historical record, apparently vanishing without a trace. However, the Indo-European dialect that ultimately developed into Proto-Germanic can be shown to have adopted terminology from a non-Indo-European language, including names for local flora and fauna and important plant domesticates. We argue that the coexistence of the Funnel Beaker culture and the Single Grave culture in the first quarter of the third millennium B.C.E. offers an attractive scenario for the required cultural and linguistic exchange, which we hypothesize took place between incoming speakers of Indo-European and local descendants of Scandinavia’s earliest farmers.
Rune Iversen, Guus Kroonen, Talking Neolithic: Linguistic and Archaeological Perspectives on How Indo-European Was Implemented in Southern Scandinavia, 121(4) American Journal of Archaeology 511-525 (October 2017) DOI: 10.3764/aja.121.4.0511

One problem with the analysis is that proto-Germanic appears to be much more recent than the third millennium B.C.E. So, any substrate probably had to, at a minimum, penetrate an intermediate Indo-European language and then persist before proto-Germanic arose.

Also, for what it is worth, all of my citation forms at this blog, when in doubt, follow the Bluebook conventions applicable to law review articles and legal briefs, albeit with some simplification re typesetting.

Some Dubious Numerology About The Set Of Fundamental Particles

The possibility of physics beyond the standard model is studied. The sole requirement of cancellation of the net zero point energy density between fermions and bosons or the requirement of Lorentz invariance of the zero point stress-energy tensor implies that particles beyond the standard model must exist. Some simple and minimal extensions of the standard model such as the two Higgs doublet model, right handed neutrinos, mirror symmetry and supersymmetry are studied. If, the net zero point energy density vanishes or if the zero point stress-energy tensor is Lorentz invariant, it is shown that none of the studied models of beyond the standard one can be possible extensions in their current forms.
Damian Ejlli, "Beyond the standard model with sum rules" (September 14, 2017).

The paper argues that there are three respects in which a weighted sum of terms related to fundamental fermions should equal a weighted sum of terms related to fundamental bosons.

Each fundamental particle is assigned a "degeneracy factor" that serves as its weight.

Purportedly:

(1) The sum of the fermion degeneracy factor for each of the fundamental fermions should be equal to the sum of the boson degeneracy factor for each of the fundamental bosons.

(2) The sum of the fermion degeneracy factor times the square of the mass of each of the fundamental fermions should be equal to the sum of the boson degeneracy factor times the square of the mass of each of the fundamental bosons.

(3) The sum of the fermion degeneracy factor times the fourth power of the mass of each of the fundamental fermions should be equal to the sum of the boson degeneracy factor times the fourth power of the mass of each of the fundamental bosons.
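
In symbols (my notation, not the paper's: g_i denotes the degeneracy factor of particle i and m_i its mass), the three conditions read:

\sum_f g_f = \sum_b g_b, \qquad \sum_f g_f m_f^2 = \sum_b g_b m_b^2, \qquad \sum_f g_f m_f^4 = \sum_b g_b m_b^4,

where the left-hand sums run over the fundamental fermions and the right-hand sums over the fundamental bosons.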

The trouble is that, except for some trivial cases that bear no similarity to reality, it appears that this will never be true. 

Naively, it appears to me that a sum of raw weights, a sum of squared masses with the same weights, and a sum of fourth power masses with the same weights are never going to simultaneously balance unless all of the fundamental particle masses are identical.

In that special case, the sum of the weights for the fermions equals the sum of the weights for the bosons, so if every particle on the fermion side has the same mass as every particle on the boson side, then mass squared on each side will be the same and mass to the fourth power on each side will be the same.

But, if the masses are different for each particle, as in real life, it isn't at all obvious that the weighted sums of mass squared and the weighted sums of mass to the fourth power can ever balance simultaneously, because squaring is not a linear transformation: a set of weights that balances the mass squared sums will not, in general, also balance the mass to the fourth power sums.

There is also reason to doubt that formula (1) for the weights is correct; it was formulated in 1951 by Pauli, before most second and third generation particles were known to exist, before quarks and gluons were discovered, before the modern graviton was conceived, and before neutrino mass was known to exist.

Each quark counts 12 points. Each charged lepton counts 4 points. A massive Dirac neutrino counts 4 points, while a massive Majorana neutrino or a massless neutrino counts 2 points. The W bosons count 6 points, the Z boson counts 3 points, the Higgs boson counts 1 point, the photon counts 2 points, and gluons apparently count 2 points each for the 8 color variations of gluon.

The fermion side apparently has 68 more points than the boson side. If massive Dirac neutrinos are assumed then each generation of fermions is worth 32 points, so the second and third generations are combined worth 64 points. If these higher generations were disregarded as distinct from the first generation, since they have the same quantum numbers and could be considered excited states, then the fermion side only leads by 4 points.
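
A quick tally of those point assignments (a sketch only; the per-particle values simply restate the numbers quoted in this post, not anything taken independently from the paper) reproduces both the 68 point and the 4 point gaps:

```python
# Degeneracy-factor "points" as quoted in this post (assumed values).
QUARK, CHARGED_LEPTON, DIRAC_NEUTRINO = 12, 4, 4

# Fermions: three generations, assuming massive Dirac neutrinos.
fermions_per_generation = 2 * QUARK + CHARGED_LEPTON + DIRAC_NEUTRINO   # 32
fermions_three_generations = 3 * fermions_per_generation                # 96

# Bosons: W+ and W-, Z, Higgs, photon, and eight gluons.
bosons = 6 + 3 + 1 + 2 + 8 * 2                                          # 28

print(fermions_three_generations - bosons)   # 68 point fermion surplus
print(fermions_per_generation - bosons)      # 4 point surplus with one generation
```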

The basic point calculation, modified for color and the existence of distinct antiparticles, is 2S+1 for massive particles and 2 for massless particles. But, both known massless particles are spin-1, and it could be that the formula for massless particles should actually be 2S, in which case a massless graviton would add 4 additional points to the boson side and balance (1).

Another way that formula (1) could balance, if the second and third generations of fermions were disregarded, would be the addition of a spin-3/2 gravitino singlet. But, while this can come close to balancing (2) and (3) with the right mass, the gravitino needs a mass of about 530 GeV to balance (2) and a mass of about 560 GeV to balance (3) (an approach that also ignores the higher generation fermion contributions, which is perhaps defensible in an equation that doesn't include masses, but not in one that does). Ignoring the graviton might actually be appropriate because it does not enter the stress-energy tensor in general relativity.

As far as I can tell, there is simply no way that both (2) and (3) can be simultaneously true in a non-trivial case. Empirically, (2) is approximately true and not inconsistent with the evidence within existing error bars, but only without any weighting. 

It seems more likely that the cancellation of the net zero point energy density between fermions and bosons is simply not true, and that the requirement of Lorentz invariance of the zero point stress-energy tensor is ill defined or non-physical.

Thursday, September 14, 2017

New Top Quark Width Measurement Globally Confirms Standard Model

Background

The decay width of a particle (composite or fundamental) is inversely proportional to its mean lifetime, but has units of mass-energy rather than units of time. A large decay width implies a more ephemeral particle, while a small decay width implies a longer lived particle. Decay width also has the virtue that it can be determined directly from the observed shape of a resonance in a plot of the events detected in each mass bin of an experiment.
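
The conversion between the two is just the reduced Planck constant: tau = hbar / Gamma. A minimal sketch, plugging in the two widths quoted later in this post (the hbar value in GeV*s is a standard constant; nothing else is assumed):

```python
HBAR_GEV_S = 6.582119569e-25          # reduced Planck constant in GeV*s

def mean_lifetime_seconds(width_gev):
    """Mean lifetime tau = hbar / Gamma for a total decay width in GeV."""
    return HBAR_GEV_S / width_gev

print(mean_lifetime_seconds(1.322))   # SM NNLO top-quark width      -> ~5.0e-25 s
print(mean_lifetime_seconds(1.76))    # ATLAS measured central value -> ~3.7e-25 s
```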

In the Standard Model, decay width can be calculated from other properties of a particle. One first lists every means by which a decay of the particle is permitted in the Standard Model, then calculates the probability per unit time of each such decay occurring, and then adds up all of the possible decays.

If you omit a possible means of decay when doing the calculation, your decay width will be smaller and you will predict that the particle decays more slowly than it does in reality. If you include a decay path that does not actually occur, your decay width will be larger and you will predict that the particle decays more rapidly than it does in reality.

As a result, the decay width of a heavy particle like the top quark is sensitive, in a relatively robust and model-independent manner, to the completeness and accuracy of the Standard Model with respect to all possible particles with masses less than the top quark that it could decay into. It bounds the extent to which your model could be missing something at lower energy scales.

As a new pre-print from ATLAS explains in the body text of its introduction (references omitted):
The top quark is the heaviest particle in the Standard Model (SM) of elementary particle physics, discovered more than 20 years ago in 1995. Due to its large mass of around 173 GeV, the lifetime of the top quark is extremely short. Hence, its decay width is the largest of all SM fermions. A next-to-leading-order (NLO) calculation evaluates a decay width of Γt = 1.33 GeV for a top-quark mass (mt) of 172.5 GeV. Variations of the parameters entering the NLO calculation, the W-boson mass, the strong coupling constant αS, the Fermi coupling constant GF and the Cabibbo–Kobayashi–Maskawa (CKM) matrix element Vtb, within experimental uncertainties yield an uncertainty of 6%. The recent next-to-next-to-leading-order (NNLO) calculation predicts Γt = 1.322 GeV for mt = 172.5 GeV and αS = 0.1181. 
Any deviations from the SM prediction may hint at non-SM decay channels of the top quark or nonSM top-quark couplings, as predicted by many beyond-the-Standard-Model (BSM) theories. The top quark decay width can be modified by direct top-quark decays into e.g. a charged Higgs boson or via flavour-changing neutral currents and also by non-SM radiative corrections. Furthermore, some vector-like quark models modify the |Vtb| CKM matrix element and thus Γt . Precise measurements of Γt can consequently restrict the parameter space of many BSM models
The last time that the top quark decay width was directly measured precisely was at Tevatron (references omitted):
A direct measurement of Γt , based on the analysis of the top-quark invariant mass distribution was performed at the Tevatron by the CDF Collaboration. A bound on the decay width of 1.10 < Γt < 4.05 GeV for mt = 172.5 GeV was set at 68% confidence level. Direct measurements are limited by the experimental resolution of the top-quark mass spectrum, and so far are significantly less precise than indirect measurements, but avoid model-dependent assumptions.
Thus, the Tevatron one sigma margin of error was 1.475 GeV.

The New Result

The ATLAS experiment at the LHC has a new direct measurement of the top quark decay width (reference omitted):
The measured decay width for a top-quark mass of 172.5 GeV is 
Γt = 1.76 ± 0.33 (stat.) +0.79/−0.68 (syst.) GeV = 1.76 +0.86/−0.76 GeV 
in good agreement with the SM prediction of 1.322 GeV. A consistency check was performed by repeating the measurement in the individual b-tag regions and confirms that the results are consistent with the measured value. A fit based only on the observable m_lb leads to a total uncertainty which is about 0.3 GeV larger.  
In comparison to the previous direct top-quark decay width measurement, the total uncertainty of this measurement is smaller by a factor of around two. However, this result is still less precise than indirect measurements and, thus, alternative (BSM) models discussed in Section 1 cannot be ruled out with the current sensitivity.  
The impact of the assumed top-quark mass on the decay width measurement is estimated by varying the mass around the nominal value of mt = 172.5 GeV. Changing the top-quark mass by ±0.5 GeV leads to a shift in the measured top-quark decay width of up to around 0.2 GeV.
Analysis

The margin of error in the ATLAS result is roughly half the margin of error of the Tevatron result.

A measured decay width 0.43 GeV larger than the Standard Model prediction leaves open the possibility that there could be beyond the Standard Model decay paths in top quark decays, but it strictly limits their magnitude, although the result is perfectly consistent with the Standard Model prediction at well under a one standard deviation level. The heavier the omitted particle, the stronger the bound from this result becomes.
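
A quick check of the arithmetic behind those statements (a sketch; the asymmetric errors are handled crudely with one-sided or averaged values):

```python
atlas_width, err_up, err_down = 1.76, 0.86, 0.76   # GeV, from the ATLAS result above
sm_width = 1.322                                   # GeV, NNLO SM prediction
tevatron_half_width = (4.05 - 1.10) / 2            # ~1.48 GeV, from the CDF 68% CL interval

print(atlas_width - sm_width)                      # ~0.4 GeV above the SM prediction
print((atlas_width - sm_width) / err_down)         # ~0.6 standard deviations
print(tevatron_half_width / ((err_up + err_down) / 2))   # ~1.8, roughly twice as precise
```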

The deviation above the Standard Model prediction could also result (1) from underestimation of the top quark mass (172.5 GeV is at the low end of the top quark masses that are consistent with experimental measurements), (2) from inaccuracy in the strength of the strong force coupling constant (which is only known to a precision of several parts per thousand), or (3) from inaccuracy in the top-to-bottom quark element of the CKM matrix. (The uncertainties in the W boson mass and the weak force coupling constant are also relevant, but are much smaller than the uncertainties in the other three quantities.)

In particular, this width measurement suggests that the 172.5 GeV mass estimate for the top quark is more likely to be too low than too high.

The result also disfavors the possibility that any Standard Model permitted decay doesn't happen, which is consistent with the fact that almost all (if not all) of the permitted Standard Model decays have been observed directly, placing a lower bound on the possible decay width of the top quark.

In general, this measurement is a good, robust, indirect global test that the Standard Model as a whole is an accurate description of reality at energy scales up to the top quark mass. Any big omissions in its particle content would result in an obvious increase in the top quark's decay width that is not observed.

Tuesday, September 12, 2017

Deur Considers Dark Energy As A Form Of Gravitational Shielding

Completing The Proof Of Concept

Deur's basic thesis is that dark matter and perhaps dark energy as well can arise from the self-interaction of gravitons, using the analogy of quantum chromodynamics (QCD) as a model and starting with a static/high temperature scalar case as a first approximation. He claims that this is consistent with general relativity, but given canonical results in the field, I suspect that his theory is a subtle modification of GR.

Earlier papers worked out, to a back of napkin level of precision, that his hypothesis can explain dark matter in both galaxies and galactic clusters, and can explain why elliptical galaxies that are less spherical appear to have more dark matter. The self-interaction effects cancel out in spherically symmetric systems and grow stronger as the total mass of the system grows.

His latest pre-print, for which the abstract appears below, makes a similar back of napkin precision estimate of the dark energy effects of his hypothesis, to see whether the cosmology data can be fit without dark energy entirely, and he concludes that it can: the apparent dark energy effects are initially minimal, but grow as large scale structure gives rise to dark matter effects that screen gravitons from exiting those structures.

Taken together, his several papers on the topic argue that his theory can, to back of napkin precision, describe all significant dark matter and dark energy effects by correctly modeling the self-interaction of the graviton in a way that other dark matter and dark energy theorists have neglected.

If correct, the only beyond the Standard Model particle that needs to exist is the graviton, and a graviton based theory can dispense with the cosmological constant, at least in principle. In short, "core theory" would pretty much completely describe all observed phenomena except short range, extremely strong gravitational field quantum gravity phenomena, and the right path to studying that would be established. This could all play out on the Standard Model's flat Minkowski space background, which recognizes the existence of special relativity but does not have the curved space-time in which the mathematics of quantum mechanics doesn't work.

For what it is worth, I think he is right, even though he is currently a lone voice in the wilderness without the time, funding, or community of colleagues who buy into his hypothesis needed to rigorously and thoroughly implement this paradigm in a way that is sufficient to achieve a scholarly consensus in the field. Particle dark matter theories are in trouble. There are a few modified gravity theories that rise to the occasion, but none as elegant, simple, and broad in their domains of applicability as this one. He hasn't worked out a way to get all of the constants from first principles, but the vision is there and it is a powerful one.

Gone are the epicycles. Gone are unobservable substances that in the mainstream lambda CDM model account for something like 93%-95% of the stuff in the universe. Vexing aspects of ordinary GR, like the inability to localize gravitational energy and the seeming irrelevance of its self-interactions, are gone. The "coincidence problem" is solved. Fundamentally, one coupling constant is sufficient to describe it all, even if it is easier to empirically estimate some of the constants derived from it in the meantime. We have a complete set of fundamental particles (without ruling out the possibility that they might derive from something even more fundamental). We have strong analogies between QCD and GR, some of which have long been observed, to guide us. We have a theory that is corroborated by original predictions that empirical evidence supports and that aren't easily explained by other theories.

It doesn't address matter-antimatter asymmetry (for which I have identified another paper with a good explanation). It isn't clear how it interacts with "inflation" theories. But, it would be a huge, unifying step forward in gravity theory, the greatest since general relativity was devised a century ago.

There is so much to like about this approach that it deserves dramatically more resources than it has received to develop further, because it is the most promising avenue to a fundamental break through in physics in existence today. If it pans out, it is work far more significant than typical Nobel prize material.

The Pre-Print
Numerical calculations have shown that the increase of binding energy in massive systems due to gravity's self-interaction can account for galaxy and cluster dynamics without dark matter. Such approach is consistent with General Relativity and the Standard Model of particle physics. The increased binding implies an effective weakening of gravity outside the bound system. In this article, this suppression is modeled in the Universe's evolution equations and its consequence for dark energy is explored. Observations are well reproduced without need for dark energy. The cosmic coincidence appears naturally and the problem of having a de Sitter Universe as the final state of the Universe is eliminated.

Monday, September 11, 2017

Why Did Solomon's Temple Have Two Pillars At Its East Facing Entrance?

The Old European Culture blog has a fascinating hypothesis about the Biblically described features of Solomon's Temple related to the traditional solar astronomy function of threshing floors. 

Basically, he argues that the Temple, which was built on a threshing floor, had two pillars because that was how the summer and winter solstices were marked, causing the entry to align to true east. He also explains the grain processing and astronomical functions of placing a threshing floor on high ground and its associated function as a sacred gathering place, all of which would have predated Judaism.

Saturday, September 9, 2017

The Voynich Manuscript Deciphered

Nicholas Gibbs convincingly argues in the Times Literary Supplement that he has deciphered the 16th century illustrated manuscript known as the Voynich manuscript. He argues that it is a copied anthology of medical texts, focused on women's health, that traces to classical period sources for the most part, and that the text mostly consists of abbreviations of Latin words found in the source texts.

The manuscript has eluded researchers' previous efforts to decode it for many decades, if not centuries.

Friday, September 8, 2017

Funny Math Jokes

As a former math major, I think these are all hilarious, but I'll only include two here and leave a review of the rest at the link as an exercise for the reader:
An engineer, a physicist and a mathematician are staying in a hotel. The engineer wakes up and smells smoke. He goes out into the hallway and sees a fire, so he fills a trash can from his room with water and douses the fire. He goes back to bed. Later, the physicist wakes up and smells smoke. He opens his door and sees a fire in the hallway. He walks down the hall to a fire hose and after calculating the flame velocity, distance, water pressure, trajectory, etc. extinguishes the fire with the minimum amount of water and energy needed. Later, the mathematician wakes up and smells smoke. He goes to the hall, sees the fire and then the fire hose. He thinks for a moment and then exclaims, "Ah, a solution exists!" and then goes back to bed.
A biologist, a physicist and a mathematician were sitting in a street cafe watching the crowd. Across the street they saw a man and a woman entering a building. Ten minutes later they reappeared together with a third person.
- They have multiplied, said the biologist.
- Oh no, an error in measurement, the physicist sighed.
- If exactly one person enters the building now, it will be empty again, the mathematician concluded.
Hat tip: 4Gravitons.

LHC and XENON-100 Further Constrain Dark Matter Parameter Space

A review of the data from the Large Hadron Collider's ATLAS and CMS experiments shows that Higgs portal dark matter (or any other kind of dark matter that the LHC could detect) is pretty much completely ruled out in mass ranges from near zero to the multiple TeV range. There is one little blip at about 2.75 TeV in the data, but it is not significant enough to be worthy of much interest (particularly because this mass range is already strongly disfavored for stable dark matter candidates).

Meanwhile the Xenon-100 direct dark matter detection experiment has basically ruled out "Bosonic Super-WIMPs" at the heavy end of the warm dark matter spectrum. The abstract (not in blockquotes because it messes up the formatting):

"We present results of searches for vector and pseudo-scalar bosonic super-WIMPs, which are dark matter candidates with masses at the keV-scale, with the XENON100 experiment. XENON100 is a dual-phase xenon time projection chamber operated at the Laboratori Nazionali del Gran Sasso. A profile likelihood analysis of data with an exposure of 224.6 live days × 34\,kg showed no evidence for a signal above the expected background. We thus obtain new and stringent upper limits in the (8125)\,keV/c2 mass range, excluding couplings to electrons with coupling constants of gae>3×1013 for pseudo-scalar and α/α>2×1028 for vector super-WIMPs, respectively. These limits are derived under the assumption that super-WIMPs constitute all of the dark matter in our galaxy."

The most promising mass range for warm dark matter is about 2 keV to 8 keV, so this study rules out heavier candidates; of course, only if they are bosons, rather than fermions, and only if they have any electroweak couplings at all, as opposed to being "sterile". In principle, one could imagine a tiny fractional weak force coupling, but there is absolutely nothing in the empirical evidence to support a weak force coupling for such particles any stronger than roughly a millionth of the weak force coupling constant of every other Standard Model particle that has weak force interactions.

A truly sterile dark matter candidate is problematic because it can't explain why ordinary matter and dark matter distributions are so tightly correlated, something that, it is increasingly clear, unmodified gravity alone can't cause. But, there is also no empirical or theoretical motivation for an ultra-small weak force coupling for a class of matter that would vastly exceed all other matter in the universe by mass or particle count.

A new paper also strongly constrains dark matter that only interacts with right handed up-type quarks (which the authors call "Charming Dark Matter"). Another paper looks at how to more rigorously distinguish between a single component dark matter scenario and one with more than one component - early simulation data strongly disfavored multi-component solutions but didn't necessarily rigorously prove that they were ruled out.

One by one, experimental and astronomical observations continue to narrow the parameter space for dark matter particles to essentially zero, leaving modified gravity theories, most likely arising from infrared quantum gravity effects, as the only possible explanation for dark matter phenomena.

Wednesday, September 6, 2017

Constraining Beyond The Standard Model Physics With Big Bang Nucleosynthesis

Background: Big Bang Nucleosynthesis

One of the most impressive cosmology theories in existence is Big Bang Nucleosynthesis. It is a theory that assumes a starting point, not long after the Big Bang, at which the universe is at a high average temperature (i.e. particles are moving with high average levels of kinetic energy) and all atomic nuclei are initially simple protons and neutrons.

The theory then uses statistics to consider all possible collisions of those protons and neutrons that give rise to nuclear fusion or fission along all possible pathways. It assumes that, at the end of the nucleosynthesis period, nuclear fusion to create light elements becomes dramatically less common, both because the temperature of the universe falls as kinetic energy is captured and converted into nuclear binding energy (an indirect form of the strong nuclear force), and because collisions become less common as the size of the universe within the Big Bang light cone grows relative to the number of particles in it.

For the most part, the predictions of Big Bang Nucleosynthesis are confirmed by experiment. The relative abundance of light element isotopes in the universe is a reasonably close match to what we would expect if Big Bang Nucleosynthesis is an accurate description of what actually happened. The biggest discrepancy is in the abundance of Lithium-7, which differs significantly from the predicted value even though it still has the right order of magnitude.

Using Big Bang Nucleosynthesis To Constrain Beyond The Standard Model Physics

Many predictions of Big Bang Nucleosynthesis are sensitive to the existence of relatively long lived particles (e.g. those with mean lifetimes on the order of seconds or more) beyond those of the Standard Model. Collisions of ordinary protons and neutrons with these particles would cause the relative abundances of light element isotopes to be greater or smaller, although the relationship isn't straightforward, because some decays of such particles will tend to increase element abundances, while other decays of the same particles will tend to decrease the abundances of some of the same elements.

But, plugging a hypothetical new long lived decaying particle into the Big Bang Nucleosynthesis model involves straightforward, well understood physics. If a long lived decaying particle with certain properties exists, it will decay in a very predictable way and it will have a very precisely discernible impact on light element isotope frequencies.

Thus, beyond the Standard Model long lived decaying particles of a very general type can be ruled out, in a manner that is not very strongly model dependent, if they would give rise to deviations from the Big Bang Nucleosynthesis predictions significantly larger than the existing margins of error in the theoretical calculations and the astronomical measurements of these quantities.

The Results

A new pre-print does just that, and reaches the following conclusion:
We have revisited and updated the BBN constraints on long-lived particles. . . .
We have obtained the constraints on the abundance and lifetime of long-lived particles with various decay modes. They are shown in Figs. 11 and 12. The constraints become weaker when we include the p ↔ n conversion effects in inelastic scatterings because energetic neutrons change into protons and stop without causing hadrodissociations. On the other hand, inclusion of the energetic anti-nucleons makes the constraints more stringent. In addition, the recent precise measurement of the D abundance leads to stronger constrains. Thus, in total, the resultant constraints become more stringent than those obtained in the previous studies.  
We have also applied our analysis to unstable gravitino. We have adopted several patterns of mass spectra of superparticles and derived constraints on the reheating temperature after inflation as shown in Fig. 15. The upper bound on the reheating temperature is ∼ 10^5 − 10^6 GeV for gravitino mass m3/2 less than a several TeV and ∼ 10^9 GeV for m3/2 ∼ O(10) TeV. This implies that the gravitino mass should be ∼ O(10) TeV for successful thermal leptogenesis.  
In obtaining the constraints, we have adopted the observed 4He abundance given by Eq. (2.4) which is consistent with SBBN. On the other hand, if we adopt the other estimation (2.3), 4He abundance is inconsistent with SBBN. However, when long-lived particles with large hadronic branch have lifetime τX ∼ 0.1 − 100 sec and abundance mXYX ∼ 10^−9 , Eq. (2.3) becomes consistent with BBN. 
In this work, we did not use 7Li in deriving the constraints since the plateau value in 7Li abundances observed in metal-poor stars (which had been considered as a primordial value) is smaller than the SBBN prediction by a factor 2–3 (lithium problem) and furthermore the recent discovery of much smaller 7Li abundances in very metal-poor stars cannot be explained by any known mechanism. However, the effects of the decaying particles on the 7Li and 6Li abundances are estimated in our numerical calculation. Interestingly, if we assume that the plateau value represents the primordial abundance, the decaying particles which mainly decays into e +e − can solve the lithium problem for τX ∼ 10^2 − 10^3 sec and mXYX ∼ 10^−7.
Figures 11 and 12 basically rule out any decaying particles with a lifetime of more than a fraction of a second in the mass range of 30 GeV to 1000 TeV for hadronically decaying particles (Figure 11), and impose similar constraints for radiatively decaying particles (Figure 12). This is a nice complement to results from the LHC and other colliders, which exclude beyond the Standard Model particles that are lighter than hundreds of GeV with lifetimes up to roughly a fraction of a second. Big Bang Nucleosynthesis constraints, generally speaking, are more sensitive to masses much heavier than the LHC can reach and mean lifetimes longer than the LHC is designed to measure. This also strengthens and makes more robust exclusions based upon an entirely different methodology involving the cosmic microwave background radiation of the universe explored by astronomy experiments such as Planck 2015. This basically rules out any relatively long lived, remotely "natural" supersymmetric particle, unless supersymmetric particles have extremely weak interactions with ordinary matter.

A thermal relic gravitino in a supersymmetry (SUSY) model with a mass on the order of 10 TeV suggests a very high energy characteristic supersymmetry scale which, while not directly ruling out some other SUSY particle as a dark matter candidate, makes a SUSY theory that could both supply a dark matter candidate and explain leptogenesis extremely "unnatural."

The potential Lithium problem solution, a particle with a mean lifetime of 100 to 1000 seconds (on the same order of magnitude as a free neutron), favors a quite light, predominantly radiatively decaying (i.e. decaying via photons, electrons and positrons) particle. The main problem with this is that such a particle ought to have shown up in collider experiments if it existed. Yet, there are no known fundamental particles or hadrons that have a mean lifetime of the right length but decay primarily radiatively. The muon and pion are both tens of millions of times too short lived (or more), the neutron has primarily hadronic decays (to a proton, an electron and an anti-neutrino), and all other hadrons and fundamental particles are much shorter lived. Ergo, a beyond the Standard Model particle is unlikely to be the solution to the Lithium problem of Big Bang Nucleosynthesis.

Bottom Line

Big Bang Nucleosynthesis indirectly constrains the parameter space of beyond the Standard Model physics in a manner that strongly complements other methods while not overlapping their exclusions at all. 

This makes the case against supersymmetry models, one of the most popular kinds of beyond the Standard Model theories, significantly stronger than it already was based on other lines of reasoning from experimental evidence such as the strong evidence disfavoring a SUSY dark matter candidate.

The Paper

The pre-print and its abstract are as follows:
We study effects of long-lived massive particles, which decay during the big-bang nucleosynthesis (BBN) epoch, on the primordial abundances of light elements. Compared to the previous studies, (i) the reaction rates of the standard BBN reactions are updated, (ii) the most recent observational data of light element abundances and cosmological parameters are used, (iii) the effects of the interconversion of energetic nucleons at the time of inelastic scatterings with background nuclei are considered, and (iv) the effects of the hadronic shower induced by energetic high energy anti-nucleons are included. We compare the theoretical predictions on the primordial abundances of light elements with latest observational constraints, and derive upper bounds on relic abundance of the decaying particle as a function of its lifetime. We also apply our analysis to unstable gravitino, the superpartner of the graviton in supersymmetric theories, and obtain constraints on the reheating temperature after inflation.
Masahiro Kawasaki, et al., "Revisiting Big-Bang Nucleosynthesis Constraints on Long-Lived Decaying Particles" (September 5, 2017).

Tuesday, September 5, 2017

Lost Languages Found In Monastery In Sinai

At Saint Catherine's monastery in the Sinai in Egypt, monks erased old works to copy new ones over them when parchment was scarce. Now, imaging and computers can restore the erased works, and this has revealed a couple of lost languages which were almost completely unattested in writing before now.
Since 2011, researchers have photographed 74 palimpsests, which boast 6,800 pages between them. And the team’s results have been quite astonishing. Among the newly revealed texts, which date from the 4th to the 12th century, are 108 pages of previously unknown Greek poems and the oldest-known recipe attributed to the Greek physician Hippocrates.

But perhaps the most intriguing finds are the manuscripts written in obscure languages that fell out of use many centuries ago. Two of the erased texts, for instance, were inked in Caucasian Albanian, a language spoken by Christians in what is now Azerbaijan. According to Sarah Laskow of Atlas Obscura, Caucasian Albanian only exists today in a few stone inscriptions. Michael Phelps, director of the Early Manuscripts Electronic Library, tells Gray of the Atlantic that the discovery of Caucasian Albanian writings at Saint Catherine’s library has helped scholars increase their knowledge of the language’s vocabulary, giving them words for things like “net” and “fish.”

Other hidden texts were written in a defunct dialect known as Christian Palestinian Aramaic, a mix of Syriac and Greek, which was discontinued in the 13th century only to be rediscovered by scholars in the 18th century. “This was an entire community of people who had a literature, art, and spirituality,” Phelps tells Gray. “Almost all of that has been lost, yet their cultural DNA exists in our culture today. These palimpsest texts are giving them a voice again and letting us learn about how they contributed to who we are today.”