Friday, September 15, 2017

The Case For A Funnel Beaker Substrate In Germanic Languages

A new paper makes the case that the Funnel Beaker people of Southern Scandinavia, the urheimat of the Germanic languages, provided the non-Indo-European substrate in the Germanic languages.
In this article, we approach the Neolithization of southern Scandinavia from an archaeolinguistic perspective. Farming arrived in Scandinavia with the Funnel Beaker culture by the turn of the fourth millennium B.C.E. It was superseded by the Single Grave culture, which as part of the Corded Ware horizon is a likely vector for the introduction of Indo-European speech. As a result of this introduction, the language spoken by individuals from the Funnel Beaker culture went extinct long before the beginning of the historical record, apparently vanishing without a trace. However, the Indo-European dialect that ultimately developed into Proto-Germanic can be shown to have adopted terminology from a non-Indo-European language, including names for local flora and fauna and important plant domesticates. We argue that the coexistence of the Funnel Beaker culture and the Single Grave culture in the first quarter of the third millennium B.C.E. offers an attractive scenario for the required cultural and linguistic exchange, which we hypothesize took place between incoming speakers of Indo-European and local descendants of Scandinavia’s earliest farmers.
Rune Iversen, Guus Kroonen, Talking Neolithic: Linguistic and Archaeological Perspectives on How Indo-European Was Implemented in Southern Scandinavia, 121(4) American Journal of Archaeology 511-525 (October 2017) DOI: 10.3764/aja.121.4.0511

One problem with the analysis is that Proto-Germanic appears to be much more recent than the third millennium B.C.E. So, any substrate probably had to, at a minimum, penetrate an intermediate Indo-European language and then persist there before Proto-Germanic arose.

Also, for what it is worth, all of my citation forms at this blog, when in doubt, follow the Bluebook conventions applicable to law review articles and legal briefs, albeit with some typesetting simplifications.

Some Dubious Numerology About The Set Of Fundamental Particles

The possibility of physics beyond the standard model is studied. The sole requirement of cancellation of the net zero point energy density between fermions and bosons or the requirement of Lorentz invariance of the zero point stress-energy tensor implies that particles beyond the standard model must exist. Some simple and minimal extensions of the standard model such as the two Higgs doublet model, right handed neutrinos, mirror symmetry and supersymmetry are studied. If, the net zero point energy density vanishes or if the zero point stress-energy tensor is Lorentz invariant, it is shown that none of the studied models of beyond the standard one can be possible extensions in their current forms.
Damian Ejlli, "Beyond the standard model with sum rules" (September 14, 2017).

The paper argues that there are three respects in which a weighted sum of terms related to fundamental fermions should equal a weighted sum of terms related to fundamental bosons.

Each fundamental particle is assigned a "degeneracy factor" that serves as its weight.

Purportedly:

(1) The sum of the fermion degeneracy factor for each of the fundamental fermions should be equal to the sum of the boson degeneracy factor for each of the fundamental bosons.

(2) The sum of the fermion degeneracy factor times the square of the mass of each of the fundamental fermions should be equal to the sum of the boson degeneracy factor times the square of the mass of each of the fundamental bosons.

(3) The sum of the fermion degeneracy factor times the fourth power of the mass of each of the fundamental fermions should be equal to the sum of the boson degeneracy factor times the fourth power of the mass of each of the fundamental bosons.
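Written out symbolically (my notation, with g for the degeneracy factors and m for the masses), the three purported sum rules are:

```latex
\sum_{f \in \mathrm{fermions}} g_f = \sum_{b \in \mathrm{bosons}} g_b, \qquad
\sum_{f} g_f\, m_f^2 = \sum_{b} g_b\, m_b^2, \qquad
\sum_{f} g_f\, m_f^4 = \sum_{b} g_b\, m_b^4 .
```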

The trouble is that, except in some trivial cases that bear no resemblance to reality, it appears that this can never be true.

Naively, it appears to me that a sum of raw weights, of squared masses with the same weights, and of fourth-power masses with the same weights is never going to simultaneously balance, unless all of the fundamental particle masses are identical.

In that special case, the sum of the weights for the fermions equals the sum of the weights for the bosons, so if every particle on the fermion side has the same mass as every particle on the boson side, then mass squared on each side will be the same and mass to the fourth power on each side will be the same.

But, if the masses are different for each particle, as in real life, it isn't at all obvious that the weighted sums can ever balance for both mass squared and mass to the fourth power at once, because squaring the mass-squared terms is not a linear transformation: weights that balance the mass-squared sums will not, in general, also balance the fourth-power sums.

There is also reason to doubt that the formula (1) for the weights is correct: it was formulated by Pauli in 1951, before second and third generation particles were known to exist, before quarks and gluons were discovered, before the modern graviton was conceived, and before neutrino mass was known to exist.

Each quark counts 12 points. Each charged lepton counts 4 points. A massive Dirac neutrino counts 4 points, while a massive Majorana neutrino or a massless neutrino counts 2 points. The W bosons count 6 points, the Z boson counts 3 points, the Higgs boson counts 1 point, the photon counts 2 points, and gluons apparently count 2 points each for the eight color variations of gluon.

The fermion side apparently has 68 more points than the boson side. If massive Dirac neutrinos are assumed, then each generation of fermions is worth 32 points, so the second and third generations are together worth 64 points. If these higher generations were not counted separately from the first generation (they have the same quantum numbers and could be considered excited states), then the fermion side leads by only 4 points.

The basic point calculation, modified for color and for the existence of distinct antiparticles, is 2S+1 for massive particles and 2 for massless particles. But both known massless particles are spin-1, and it could be that the formula for massless particles should actually be 2S, in which case a massless graviton would add 4 additional points to the boson side and balance (1).
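The tally above can be reproduced mechanically. This is a sketch of my reading of the counting rules (2S+1 points if massive, 2 if massless, times 3 for quark colors, times 2 where the antiparticle is distinct); the function name and structure are my own:

```python
# Degeneracy-point tally for the Standard Model particles, following the
# counting rules described above (a sketch of my reading of the paper).
def points(spin, massive, colors=1, distinct_antiparticle=True):
    """Points for one particle species: (2S+1) if massive, 2 if massless,
    multiplied by the number of colors and doubled for a distinct antiparticle."""
    base = (2 * spin + 1) if massive else 2
    return int(base * colors * (2 if distinct_antiparticle else 1))

quark = points(0.5, True, colors=3)                      # 12 points per quark flavor
charged_lepton = points(0.5, True)                       # 4 points
dirac_neutrino = points(0.5, True)                       # 4 points
fermions_per_generation = 2 * quark + charged_lepton + dirac_neutrino  # 32 points

w_bosons = points(1, True)                               # W+ and W- together: 6 points
z_boson = points(1, True, distinct_antiparticle=False)   # 3 points
higgs = points(0, True, distinct_antiparticle=False)     # 1 point
photon = points(1, False, distinct_antiparticle=False)   # 2 points
gluons = 8 * points(1, False, distinct_antiparticle=False)  # 16 points
bosons = w_bosons + z_boson + higgs + photon + gluons    # 28 points

print(3 * fermions_per_generation - bosons)  # 68-point fermion surplus, three generations
print(fermions_per_generation - bosons)      # 4-point surplus, first generation only
```

With one fermion generation, a hypothetical massless graviton worth 2S = 4 points would close the 4-point gap exactly, as discussed above.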

Another way that formula (1) could balance, if the second and third generations of fermions were disregarded, would be the addition of a spin-3/2 gravitino singlet. But, while this comes close to balancing (2) and (3) with the right mass, the gravitino needs a mass of about 530 GeV to balance (2) and a mass of about 560 GeV to balance (3). (This approach also disregards the higher generation fermion masses, which perhaps makes sense in equation (1), which contains no masses, but not in the equations that do.) Ignoring the graviton might actually be appropriate because it does not enter the stress-energy tensor in general relativity.

As far as I can tell, there is simply no way that both (2) and (3) can be simultaneously true in any non-trivial case. Empirically, (2) is approximately true, and is not inconsistent with the evidence within existing error bars, but only without any weighting.

It seems more likely that the premises are flawed: that the cancellation of the net zero point energy density between fermions and bosons is simply not true, and that the requirement of Lorentz invariance of the zero point stress-energy tensor is ill-defined or non-physical.

Thursday, September 14, 2017

New Top Quark Width Measurement Globally Confirms Standard Model

Background

The decay width of a particle (composite or fundamental) is inversely proportional to its mean lifetime, but has units of mass-energy, rather than units of time. A large decay width implies a more ephemeral particle, while a small decay width implies a more long lived particle. Decay width also has the virtue that it can be determined directly from observation of a graph of a resonance plotted in events detected in each mass bin of an experiment.
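The width-lifetime relation is just τ = ħ/Γ. A quick illustration (the function name is mine; the constant is the reduced Planck constant in GeV·s):

```python
# Convert a decay width in GeV to a mean lifetime in seconds via tau = hbar / Gamma.
HBAR_GEV_S = 6.582e-25  # reduced Planck constant, in GeV * s

def lifetime_from_width(width_gev):
    """Mean lifetime in seconds for a particle with the given decay width in GeV."""
    return HBAR_GEV_S / width_gev

# A ~1.32 GeV top quark width corresponds to a lifetime of roughly 5e-25 seconds.
print(lifetime_from_width(1.322))
```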

In the Standard Model, decay width can be calculated from other properties of a particle. One first lists every possible means by which a decay of the particle is permitted in the Standard Model, then one calculates the probability per unit time of that decay occurring, then one adds up all of the possible decays.

If you omit a possible means of decay when doing the calculation, your decay width will be smaller and you will predict that the particle decays more slowly than it does in reality. If you include a decay path that does not actually occur, your decay width will be larger and you will predict that the particle decays more rapidly than it does in reality.
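The bookkeeping in the last two paragraphs can be sketched in a few lines. The channel names and partial widths below are purely illustrative placeholders, not measured values:

```python
# Sketch: the total decay width is the sum of the partial widths over all
# allowed decay channels. Numbers here are illustrative, not real values.
def total_width(partial_widths):
    """Total width (GeV) from a dict of channel -> partial width (GeV)."""
    return sum(partial_widths.values())

channels = {"t -> W+ b": 1.30, "t -> W+ s": 0.02, "t -> W+ d": 0.001}  # GeV, made up
full = total_width(channels)
# Omitting a real channel yields a smaller predicted width (a slower predicted
# decay); including a spurious channel would inflate it.
missing_one = total_width({k: v for k, v in channels.items() if k != "t -> W+ s"})
```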

As a result, the decay width of a heavy particle like the top quark is sensitive, in a relatively robust and model-independent manner, to the completeness and accuracy of the Standard Model with respect to all possible particles with masses less than the top quark that it could decay into. It bounds the extent to which the model could be missing something at lower energy scales.

As a new pre-print from ATLAS explains in the body text of its introduction (references omitted):
The top quark is the heaviest particle in the Standard Model (SM) of elementary particle physics, discovered more than 20 years ago in 1995. Due to its large mass of around 173 GeV, the lifetime of the top quark is extremely short. Hence, its decay width is the largest of all SM fermions. A next-to-leading-order (NLO) calculation evaluates a decay width of Γt = 1.33 GeV for a top-quark mass (mt) of 172.5 GeV. Variations of the parameters entering the NLO calculation, the W-boson mass, the strong coupling constant αS, the Fermi coupling constant GF and the Cabibbo–Kobayashi–Maskawa (CKM) matrix element Vtb, within experimental uncertainties yield an uncertainty of 6%. The recent next-to-next-to-leading-order (NNLO) calculation predicts Γt = 1.322 GeV for mt = 172.5 GeV and αS = 0.1181. 
Any deviations from the SM prediction may hint at non-SM decay channels of the top quark or non-SM top-quark couplings, as predicted by many beyond-the-Standard-Model (BSM) theories. The top quark decay width can be modified by direct top-quark decays into e.g. a charged Higgs boson or via flavour-changing neutral currents and also by non-SM radiative corrections. Furthermore, some vector-like quark models modify the |Vtb| CKM matrix element and thus Γt. Precise measurements of Γt can consequently restrict the parameter space of many BSM models.
The last time that the top quark decay width was directly measured precisely was at Tevatron (references omitted):
A direct measurement of Γt , based on the analysis of the top-quark invariant mass distribution was performed at the Tevatron by the CDF Collaboration. A bound on the decay width of 1.10 < Γt < 4.05 GeV for mt = 172.5 GeV was set at 68% confidence level. Direct measurements are limited by the experimental resolution of the top-quark mass spectrum, and so far are significantly less precise than indirect measurements, but avoid model-dependent assumptions.
Thus, the Tevatron one sigma margin of error was 1.475 GeV.

The New Result

The ATLAS experiment at the LHC has a new direct measurement of the top quark decay width (reference omitted):
The measured decay width for a top-quark mass of 172.5 GeV is 
 Γt = 1.76 ± 0.33 (stat.) +0.79/−0.68 (syst.) GeV = 1.76 +0.86/−0.76 GeV
in good agreement with the SM prediction of 1.322 GeV. A consistency check was performed by repeating the measurement in the individual b-tag regions and confirms that the results are consistent with the measured value. A fit based only on the observable mℓb leads to a total uncertainty which is about 0.3 GeV larger.
In comparison to the previous direct top-quark decay width measurement, the total uncertainty of this measurement is smaller by a factor of around two. However, this result is still less precise than indirect measurements and, thus, alternative (BSM) models discussed in Section 1 cannot be ruled out with the current sensitivity.  
The impact of the assumed top-quark mass on the decay width measurement is estimated by varying the mass around the nominal value of mt = 172.5 GeV. Changing the top-quark mass by ±0.5 GeV leads to a shift in the measured top-quark decay width of up to around 0.2 GeV.
Analysis

The margin of error in the ATLAS result is roughly half the margin of error of the Tevatron result.

A measured decay width 0.43 GeV larger than the Standard Model prediction leaves open the possibility that there could be beyond the Standard Model decay paths in top quark decays, but strictly limits their magnitude, although the result is perfectly consistent with the Standard Model prediction at well under one standard deviation. The heavier the omitted particle, the stronger the bound from this result becomes.
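As an arithmetic cross-check, combining the quoted statistical and systematic uncertainties in quadrature (the standard convention, which I assume ATLAS used) reproduces the quoted totals and shows the excess over the NNLO prediction is well under one sigma:

```python
import math

# ATLAS top quark width measurement: combine uncertainties in quadrature.
stat = 0.33
syst_up, syst_down = 0.79, 0.68
total_up = math.hypot(stat, syst_up)      # ~0.86 GeV, matching the quoted +0.86
total_down = math.hypot(stat, syst_down)  # ~0.76 GeV, matching the quoted -0.76

measured, predicted = 1.76, 1.322  # GeV; NNLO prediction for mt = 172.5 GeV
# The excess over the prediction, in units of the downward uncertainty
# (the relevant side, since the measurement sits above the prediction):
print((measured - predicted) / total_down)  # ~0.58 sigma
```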

The deviation above the Standard Model prediction could also result (1) from underestimation of the top quark mass (172.5 GeV is at the low end of the top quark masses that are consistent with experimental measurements), (2) from inaccuracy in the strength of the strong force coupling constant (that is only known to a several parts per thousand precision), (3) from inaccuracy in the top to bottom quark element of the CKM matrix. (The uncertainties in the W boson mass and weak force coupling constant are also relevant but are much smaller than the uncertainties in the other three quantities.)

In particular, this width measurement suggests that the 172.5 GeV mass estimate for the top quark is more likely to be too low than too high.

The result also disfavors the possibility that some Standard Model permitted decay doesn't happen, which is consistent with the fact that almost all (if not all) of the permitted Standard Model decays have been observed directly, placing a lower bound on the possible decay width of the top quark.

In general, this measurement is a good, robust, indirect global test that the Standard Model as a whole is an accurate description of reality at energy scales up to the top quark mass. Any big omissions in its particle content would result in an obvious increase in the top quark's decay width that is not observed.