Friday, September 15, 2017

The Case For A Funnel Beaker Substrate In Germanic Languages

A new paper makes the case that the Funnel Beaker people of Southern Scandinavia, the urheimat of the Germanic languages, provided the non-Indo-European substrate in the Germanic languages.
In this article, we approach the Neolithization of southern Scandinavia from an archaeolinguistic perspective. Farming arrived in Scandinavia with the Funnel Beaker culture by the turn of the fourth millennium B.C.E. It was superseded by the Single Grave culture, which as part of the Corded Ware horizon is a likely vector for the introduction of Indo-European speech. As a result of this introduction, the language spoken by individuals from the Funnel Beaker culture went extinct long before the beginning of the historical record, apparently vanishing without a trace. However, the Indo-European dialect that ultimately developed into Proto-Germanic can be shown to have adopted terminology from a non-Indo-European language, including names for local flora and fauna and important plant domesticates. We argue that the coexistence of the Funnel Beaker culture and the Single Grave culture in the first quarter of the third millennium B.C.E. offers an attractive scenario for the required cultural and linguistic exchange, which we hypothesize took place between incoming speakers of Indo-European and local descendants of Scandinavia’s earliest farmers.
Rune Iversen, Guus Kroonen, Talking Neolithic: Linguistic and Archaeological Perspectives on How Indo-European Was Implemented in Southern Scandinavia, 121(4) American Journal of Archaeology 511-525 (October 2017) DOI: 10.3764/aja.121.4.0511

One problem with the analysis is that proto-Germanic appears to be much more recent than the third millennium B.C.E. So, a substrate probably had to, at a minimum, penetrate an intermediate Indo-European language and then persist there before proto-Germanic arose.

Also, for what it is worth, all of my citation forms at this blog, when in doubt, follow the Bluebook conventions applicable to law review articles and legal briefs, albeit with some simplification re typesetting.

Some Dubious Numerology About The Set Of Fundamental Particles

The possibility of physics beyond the standard model is studied. The sole requirement of cancellation of the net zero point energy density between fermions and bosons or the requirement of Lorentz invariance of the zero point stress-energy tensor implies that particles beyond the standard model must exist. Some simple and minimal extensions of the standard model such as the two Higgs doublet model, right handed neutrinos, mirror symmetry and supersymmetry are studied. If, the net zero point energy density vanishes or if the zero point stress-energy tensor is Lorentz invariant, it is shown that none of the studied models of beyond the standard one can be possible extensions in their current forms.
Damian Ejlli, "Beyond the standard model with sum rules" (September 14, 2017).

The paper argues that there are three respects in which a weighted sum of terms related to the fundamental fermions should equal a weighted sum of terms related to the fundamental bosons.

Each fundamental particle is assigned a "degeneracy factor" that serves as its weight.

Purportedly:

(1) The sum of the fermion degeneracy factor for each of the fundamental fermions should be equal to the sum of the boson degeneracy factor for each of the fundamental bosons.

(2) The sum of the fermion degeneracy factor times the square of the mass of each of the fundamental fermions should be equal to the sum of the boson degeneracy factor times the square of the mass of each of the fundamental bosons.

(3) The sum of the fermion degeneracy factor times the fourth power of the mass of each of the fundamental fermions should be equal to the sum of the boson degeneracy factor times the fourth power of the mass of each of the fundamental bosons.
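Stated compactly (in my notation, not necessarily the paper's), with g denoting a particle's degeneracy factor and m its mass, the three conditions are:

\[
\sum_{\text{fermions}} g_f = \sum_{\text{bosons}} g_b, \qquad
\sum_{\text{fermions}} g_f m_f^2 = \sum_{\text{bosons}} g_b m_b^2, \qquad
\sum_{\text{fermions}} g_f m_f^4 = \sum_{\text{bosons}} g_b m_b^4 .
\]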

The trouble is that except for some trivial cases that bear no similarity to reality, it appears that this will never be true. 

Naively, it appears to me that a sum of raw weights, a sum of squared masses with the same weights, and a sum of fourth-power masses with the same weights are never going to balance simultaneously, unless all of the fundamental particle masses are identical.

In that special case, the sum of the weights for the fermions equals the sum of the weights for the bosons, so if every particle on the fermion side has the same mass as every particle on the boson side, then mass squared on each side will be the same and mass to the fourth power on each side will be the same.

But, if the masses are different for each particle, as in real life, it isn't at all obvious that the weighted sums of mass squared and the weighted sums of mass to the fourth power can ever balance at the same time, because squaring is not a linear transformation: masses tuned to make (2) balance will not, in general, make (3) balance.
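A toy example of my own (purely illustrative particle content, nothing physical) makes the tension concrete: two fermions of weight 1 with masses 1 and 3, against a single boson of weight 2, can satisfy (1) and (2), but (3) then fails:

```python
# Toy illustration only: two fermions of weight 1, masses 1 and 3,
# versus one boson of weight 2 whose squared mass is tuned to satisfy (2).
fermions = [(1, 1.0), (1, 3.0)]                   # (weight, mass)
sum_w_f = sum(w for w, m in fermions)             # = 2, matches the boson weight, so (1) holds
sum_m2_f = sum(w * m**2 for w, m in fermions)     # = 10
sum_m4_f = sum(w * m**4 for w, m in fermions)     # = 82

boson_w = 2
boson_m2 = sum_m2_f / boson_w                     # = 5, so (2) holds by construction
sum_m4_b = boson_w * boson_m2**2                  # = 50, but (3) would need 82

print(sum_w_f, boson_w)                           # 2 2       -> (1) balanced
print(sum_m2_f, boson_w * boson_m2)               # 10.0 10.0 -> (2) balanced
print(sum_m4_f, sum_m4_b)                         # 82.0 50.0 -> (3) fails
```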

There is also reason to doubt that the weighting scheme used in formula (1) is correct. It was formulated by Pauli in 1951, before second and third generation particles were known to exist, before quarks and gluons were discovered, before the modern graviton was conceived, and before neutrino mass was known to exist.

Each quark counts 12 points. Each charged lepton counts 4 points. A massive Dirac neutrino counts 4 points, while a massive Majorana neutrino or a massless neutrino counts 2 points. The W bosons count 6 points, the Z boson counts 3 points, the Higgs boson counts 1 point, the photon counts 2 points, and gluons apparently count 2 points each for the 8 color variations of gluon.

The fermion side apparently has 68 more points than the boson side. If massive Dirac neutrinos are assumed then each generation of fermions is worth 32 points, so the second and third generations are combined worth 64 points. If these higher generations were disregarded as distinct from the first generation, since they have the same quantum numbers and could be considered excited states, then the fermion side only leads by 4 points.

The basic point calculation, modified for color and the existence of distinct antiparticles, is 2S+1 for massive particles and 2 for massless particles. But, both known massless particles are spin-1, and it could be that the formula for massless particles should actually be 2S, in which case a massless graviton would add 4 additional points to the boson side and balance (1).
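Here is a minimal sketch of the point tally described in the last few paragraphs, hard-coding the per-particle weights quoted above (Dirac neutrinos assumed) rather than deriving them from the 2S+1 rule:

```python
# Point tally using the per-particle weights described above (Dirac neutrinos assumed).
# Quark points already fold in color and antiparticles; the 8 gluons are counted separately.
per_generation = 2 * 12 + 4 + 4          # 2 quarks, 1 charged lepton, 1 Dirac neutrino = 32
all_generations = 3 * per_generation     # = 96

bosons = 6 + 3 + 1 + 2 + 8 * 2           # W pair, Z, Higgs, photon, 8 gluons = 28

print(all_generations - bosons)          # 68: fermion lead counting all three generations
print(per_generation - bosons)           # 4: fermion lead if only one generation counts

# If massless particles counted 2S rather than 2 points, a massless spin-2 graviton
# would contribute 4 points and close the remaining 4-point gap.
print(per_generation - (bosons + 4))     # 0
```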

Another way that (1) could balance, if the second and third generations of fermions were disregarded, would be the addition of a spin-3/2 gravitino singlet. But, while this can come close to balancing (2) and (3) with the right mass, the gravitino needs a mass of about 530 GeV to balance (2) and a mass of about 560 GeV to balance (3) (and this approach also drops the higher generation fermion weights, which is perhaps defensible in an equation that doesn't include masses, but not in one that does). Ignoring the graviton itself might actually be appropriate because it does not enter the stress-energy tensor in general relativity.

As far as I can tell, there is simply no way that both (2) and (3) can be simultaneously true in a non-trivial case. Empirically, (2) is approximately true and not inconsistent with the evidence within existing error bars, but only without any weighting.

It seems more likely that the requirement of cancellation of the net zero point energy density between fermions and bosons is simply not true, and that the requirement of Lorentz invariance of the zero point stress-energy tensor is ill-defined or non-physical.

Thursday, September 14, 2017

New Top Quark Width Measurement Globally Confirms Standard Model

Background

The decay width of a particle (composite or fundamental) is inversely proportional to its mean lifetime, but has units of mass-energy rather than units of time. A large decay width implies a more ephemeral particle, while a small decay width implies a longer lived particle. Decay width also has the virtue that it can be read directly off the width of a resonance in a plot of detected events per mass bin in an experiment.

In the Standard Model, decay width can be calculated from other properties of a particle. One first lists every possible means by which a decay of the particle is permitted in the Standard Model, then one calculates the probability per unit time of that decay occurring, then one adds up all of the possible decays.
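A sketch of that bookkeeping, with purely hypothetical partial widths as placeholders (the channel names and numbers below are not from any real calculation); the mean lifetime then follows from τ = ħ/Γ:

```python
# Total decay width = sum of the partial widths of every allowed decay channel.
# The channels and partial widths below are hypothetical placeholders.
HBAR_GEV_S = 6.582e-25   # reduced Planck constant in GeV*s

partial_widths_gev = {
    "channel_A": 0.9,    # hypothetical
    "channel_B": 0.3,    # hypothetical
    "channel_C": 0.1,    # hypothetical
}

total_width_gev = sum(partial_widths_gev.values())
mean_lifetime_s = HBAR_GEV_S / total_width_gev

print(f"total width = {total_width_gev:.2f} GeV, mean lifetime = {mean_lifetime_s:.2e} s")
```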

If you omit a possible means of decay when doing the calculation, your decay width will be smaller and you will predict that the particle decays more slowly than it does in reality. If you include a decay path that does not actually occur, your decay width will be larger and you will predict that the particle decays more rapidly than it does in reality.

As a result, the decay width of a heavy particle like the top quark is sensitive, in a relatively robust and model-independent manner, to the completeness and accuracy of the Standard Model with respect to all possible particles with masses less than the top quark that it could decay into. It bounds the extent to which your model could be missing something at lower energy scales.

As a new pre-print from ATLAS explains in the body text of its introduction (references omitted):
The top quark is the heaviest particle in the Standard Model (SM) of elementary particle physics, discovered more than 20 years ago in 1995. Due to its large mass of around 173 GeV, the lifetime of the top quark is extremely short. Hence, its decay width is the largest of all SM fermions. A next-to-leading-order (NLO) calculation evaluates a decay width of Γt = 1.33 GeV for a top-quark mass (mt) of 172.5 GeV. Variations of the parameters entering the NLO calculation, the W-boson mass, the strong coupling constant αS, the Fermi coupling constant GF and the Cabibbo–Kobayashi–Maskawa (CKM) matrix element Vtb, within experimental uncertainties yield an uncertainty of 6%. The recent next-to-next-to-leading-order (NNLO) calculation predicts Γt = 1.322 GeV for mt = 172.5 GeV and αS = 0.1181. 
Any deviations from the SM prediction may hint at non-SM decay channels of the top quark or nonSM top-quark couplings, as predicted by many beyond-the-Standard-Model (BSM) theories. The top quark decay width can be modified by direct top-quark decays into e.g. a charged Higgs boson or via flavour-changing neutral currents and also by non-SM radiative corrections. Furthermore, some vector-like quark models modify the |Vtb| CKM matrix element and thus Γt . Precise measurements of Γt can consequently restrict the parameter space of many BSM models
The last time that the top quark decay width was directly measured precisely was at Tevatron (references omitted):
A direct measurement of Γt , based on the analysis of the top-quark invariant mass distribution was performed at the Tevatron by the CDF Collaboration. A bound on the decay width of 1.10 < Γt < 4.05 GeV for mt = 172.5 GeV was set at 68% confidence level. Direct measurements are limited by the experimental resolution of the top-quark mass spectrum, and so far are significantly less precise than indirect measurements, but avoid model-dependent assumptions.
Thus, the Tevatron one sigma margin of error was 1.475 GeV.

The New Result

The ATLAS experiment at the LHC has a new direct measurement of the top quark decay width (reference omitted):
The measured decay width for a top-quark mass of 172.5 GeV is 
 Γt = 1.76 ± 0.33 (stat.) +0.79 −0.68 (syst.) GeV = 1.76 +0.86 −0.76 GeV 
in good agreement with the SM prediction of 1.322 GeV. A consistency check was performed by repeating the measurement in the individual b-tag regions and confirms that the results are consistent with the measured value. A fit based only on the observable mℓb leads to a total uncertainty which is about 0.3 GeV larger.
In comparison to the previous direct top-quark decay width measurement, the total uncertainty of this measurement is smaller by a factor of around two. However, this result is still less precise than indirect measurements and, thus, alternative (BSM) models discussed in Section 1 cannot be ruled out with the current sensitivity.  
The impact of the assumed top-quark mass on the decay width measurement is estimated by varying the mass around the nominal value of mt = 172.5 GeV. Changing the top-quark mass by ±0.5 GeV leads to a shift in the measured top-quark decay width of up to around 0.2 GeV.
Analysis

The margin of error in the ATLAS result is roughly half the margin of error of the Tevatron result.

A measured decay width 0.43 GeV larger than the Standard Model prediction leaves open the possibility that there could be beyond the Standard Model decay paths in top quark decays, but strictly limits their magnitude, although the result is perfectly consistent with the Standard Model prediction at well under the one standard deviation level. The heavier the omitted particle, the stronger the bound from this result becomes.
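A quick check of that consistency claim, using the NLO prediction of 1.33 GeV quoted in the introduction and the downward uncertainty on the ATLAS measurement:

```python
# ATLAS: Gamma_t = 1.76 +0.86 -0.76 GeV; NLO SM prediction ~1.33 GeV for m_t = 172.5 GeV.
measured, err_down = 1.76, 0.76
predicted_nlo = 1.33

excess = measured - predicted_nlo            # about 0.43 GeV
print(f"excess = {excess:.2f} GeV, significance ~ {excess / err_down:.1f} sigma")
# roughly 0.6 sigma, i.e. well under one standard deviation
```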

The deviation above the Standard Model prediction could also result (1) from an underestimate of the top quark mass (172.5 GeV is at the low end of the top quark masses that are consistent with experimental measurements), (2) from inaccuracy in the strong force coupling constant (which is only known to a precision of several parts per thousand), or (3) from inaccuracy in the top to bottom quark element of the CKM matrix. (The uncertainties in the W boson mass and the weak force coupling constant are also relevant, but are much smaller than the uncertainties in the other three quantities.)

In particular, this width measurement suggests that the 172.5 GeV mass estimate for the top quark is more likely to be too low than too high.

The result also disfavors the possibility that any Standard Model permitted decay doesn't happen, which is consistent with the fact that almost all (if not all) of the permitted Standard Model decays have been observed directly, placing a lower bound on the possible decay width of the top quark.

In general, this measurement is a good, robust, indirect global test that the Standard Model as a whole is an accurate description of reality at energy scales up to the top quark mass. Any big omissions in its particle content would result in an obvious increase in the top quark's decay width that is not observed.

Tuesday, September 12, 2017

Deur Considers Dark Energy As A Form Of Gravitational Shielding

Completing The Proof Of Concept

Deur's basic thesis is that dark matter and perhaps dark energy as well can arise from the self-interaction of gravitons, using the analogy of quantum chromodynamics (QCD) as a model and starting with a static/high temperature scalar case as a first approximation. He claims that this is consistent with general relativity, but given canonical results in the field, I suspect that his theory is a subtle modification of GR.

Earlier papers worked out, to a back of napkin level of precision, that his hypothesis can explain dark matter in both galaxies and galactic clusters, and can explain why elliptical galaxies that are less spherical appear to have more dark matter. The self-interaction effects cancel out in spherically symmetric systems and grow stronger as the total mass of the system grows.

His latest pre-print, for which the abstract is below, makes a similar back of napkin precision estimate of the dark energy effects of his hypothesis, to see if the cosmology data can be fit without dark energy entirely. He concludes that it can, with dark energy effects initially minimal but growing as large scale structure gives rise to dark matter effects that screen gravitons from exiting those structures.

Taken together, his several papers on the topic argue that his theory can, to back of napkin precision, describe all significant dark matter and dark energy effects by correctly modeling the self-interaction of the graviton in a way that other dark matter and dark energy theorists have neglected.

If correct, the only beyond the Standard Model particle that needs to exist is the graviton, and a graviton based theory can dispense with the cosmological constant, at least in principle. In short, "core theory" would pretty much completely describe all observed phenomena except short range, extremely strong gravitational field quantum gravity phenomena, and the right path to studying that would be established. This could all play out on the Standard Model's flat Minkowski space background, which recognizes the existence of special relativity but does not involve the curved space-time in which the mathematics of quantum mechanics doesn't work.

For what it is worth, I think he is right, even though he is currently a lone voice in the wilderness without the time, funding, or community of colleagues who buy into his hypothesis to rigorously and thoroughly implement this paradigm in a way that is sufficient to achieve a scholarly consensus in the field. Particle dark matter theories are in trouble. There are a few modified gravity theories that rise to the occasion, but none as elegant, simple, and broad in its domain of applicability as this one. He hasn't worked out a way to get all of the constants from first principles, but the vision is there and it is a powerful one.

Gone are the epicycles. Gone are the unobservable substances that in the mainstream lambda CDM model account for something like 93%-95% of the stuff in the universe. Vexing aspects of ordinary GR, like the inability to localize gravitational energy and the seeming irrelevance of its self-interactions, are gone. The "coincidence problem" is solved. Fundamentally, one coupling constant is sufficient to describe it all, even if it is easier to empirically estimate some of the constants derived from it in the meantime. We have a complete set of fundamental particles (without ruling out the possibility that they might derive from something even more fundamental). We have strong analogies between QCD and GR, some of which have long been observed, to guide us. We have a theory corroborated by original predictions, borne out by empirical evidence, that aren't easily explained by other theories.

It doesn't address matter-antimatter asymmetry (for which I have identified another paper with a good explanation). It isn't clear how it interacts with "inflation" theories. But, it would be a huge, unifying step forward in gravity theory, the greatest since general relativity was devised a century ago.

There is so much to like about this approach that it deserves dramatically more resources than it has received to develop further, because it is the most promising avenue to a fundamental breakthrough in physics in existence today. If it pans out, it is work far more significant than typical Nobel prize material.

The Pre-Print
Numerical calculations have shown that the increase of binding energy in massive systems due to gravity's self-interaction can account for galaxy and cluster dynamics without dark matter. Such approach is consistent with General Relativity and the Standard Model of particle physics. The increased binding implies an effective weakening of gravity outside the bound system. In this article, this suppression is modeled in the Universe's evolution equations and its consequence for dark energy is explored. Observations are well reproduced without need for dark energy. The cosmic coincidence appears naturally and the problem of having a de Sitter Universe as the final state of the Universe is eliminated.

Monday, September 11, 2017

Why Did Solomon's Temple Have Two Pillars At Its East Facing Entrance?

The Old European Culture blog has a fascinating hypothesis about the Biblically described features of Solomon's Temple related to the traditional solar astronomy function of threshing floors. 

Basically, he argues that the Temple, which was built on a threshing floor, had two pillars because that was how the summer and winter solstices were marked, causing the entry to align to true east. He also explains the grain processing and astronomical functions of placing a threshing floor on high ground and its associated function as a sacred gathering place, all of which would have predated Judaism.

Saturday, September 9, 2017

The Voynich Manuscript Deciphered

Nicholas Gibbs convincingly argues in the Times Literary Supplement that he has deciphered the 16th century illustrated manuscript known as the Voynich manuscript. He argues that it is a copied anthology of medical texts, focused on women's health, that traces to classical period sources for the most part, and that the text mostly consists of abbreviations of Latin words found in the source texts.

The manuscript has eluded previous efforts to decode it for many decades, if not centuries.

Friday, September 8, 2017

Funny Math Jokes

As a former math major, I think these are all hilarious, but I'll only include two here and leave a review of the rest at the link as an exercise for the reader:
An engineer, a physicist and a mathematician are staying in a hotel. The engineer wakes up and smells smoke. He goes out into the hallway and sees a fire, so he fills a trash can from his room with water and douses the fire. He goes back to bed. Later, the physicist wakes up and smells smoke. He opens his door and sees a fire in the hallway. He walks down the hall to a fire hose and after calculating the flame velocity, distance, water pressure, trajectory, etc. extinguishes the fire with the minimum amount of water and energy needed. Later, the mathematician wakes up and smells smoke. He goes to the hall, sees the fire and then the fire hose. He thinks for a moment and then exclaims, "Ah, a solution exists!" and then goes back to bed.
A biologist, a physicist and a mathematician were sitting in a street cafe watching the crowd. Across the street they saw a man and a woman entering a building. Ten minutes later they reappeared together with a third person.
- They have multiplied, said the biologist.
- Oh no, an error in measurement, the physicist sighed.
- If exactly one person enters the building now, it will be empty again, the mathematician concluded.
Hat tip: 4Gravitons.

LHC and XENON-100 Further Constrain Dark Matter Parameter Space

A review of the data from the Large Hadron Collider's ATLAS and CMS experiments shows that Higgs portal dark matter (or any other kind of dark matter that the LHC could detect) is pretty much completely ruled out in mass ranges from near zero to the multiple TeV range. There is one little blip at about 2.75 TeV in the data, but not significant enough to be worthy of much interest (particularly because this mass range is already strongly disfavored for stable dark matter candidates).

Meanwhile the Xenon-100 direct dark matter detection experiment has basically ruled out "Bosonic Super-WIMPs" at the heavy end of the warm dark matter spectrum. The abstract (not in blockquotes because it messes up the formatting):

"We present results of searches for vector and pseudo-scalar bosonic super-WIMPs, which are dark matter candidates with masses at the keV-scale, with the XENON100 experiment. XENON100 is a dual-phase xenon time projection chamber operated at the Laboratori Nazionali del Gran Sasso. A profile likelihood analysis of data with an exposure of 224.6 live days × 34\,kg showed no evidence for a signal above the expected background. We thus obtain new and stringent upper limits in the (8125)\,keV/c2 mass range, excluding couplings to electrons with coupling constants of gae>3×1013 for pseudo-scalar and α/α>2×1028 for vector super-WIMPs, respectively. These limits are derived under the assumption that super-WIMPs constitute all of the dark matter in our galaxy."

The most promising mass range for warm dark matter is about 2 keV to 8 keV, so this study rules out heavier candidates. Of course, that is only if they are bosons, rather than fermions, and only if they have any electroweak couplings at all, as opposed to being "sterile". In principle, one could imagine a tiny fractional weak force coupling, but there is absolutely nothing in the empirical evidence to support a weak force coupling for dark matter unless its coupling constant were at least a million times weaker than the weak force coupling constant of every other Standard Model particle that has weak force interactions.

A truly sterile dark matter candidate is problematic because it can't explain why ordinary matter and dark matter distributions are so tightly correlated, something that, it is increasingly clear, unmodified gravity alone can't cause. But, there is also no empirical or theoretical motivation for an ultra-small weak force coupling for a class of matter that would vastly exceed all other matter in the universe by mass or particle count.

A new paper also strongly constrains dark matter that only interacts with right handed up-type quarks (which the authors call "Charming Dark Matter"). Another paper looks at how to more rigorously distinguish between a single component dark matter scenario and one with more than one component; early simulation data strongly disfavored multi-component solutions but didn't necessarily rigorously prove that they were ruled out.

One by one, experimental and astronomy observation data points continue to narrow the parameter space for dark matter particles to essentially zero, leaving modified gravity theories, most likely due to infrared quantum gravity effects, as the only possible explanation for dark matter phenomena.

Wednesday, September 6, 2017

Constraining Beyond The Standard Model Physics With Big Bang Nucleosynthesis

Background: Big Bang Nucleosynthesis

One of the most impressive cosmology theories in existence is Big Bang Nucleosynthesis. It is a theory that assumes a starting point, not long after the Big Bang, at which the universe is at a high average temperature (i.e. particles are moving with high average levels of kinetic energy) and all atomic nuclei are initially simple protons and neutrons.

The theory then uses statistics to consider all possible collisions of those protons and neutrons that give rise to nuclear fusion or fission through all possible pathways. It assumes that, at the end of the nucleosynthesis period, nuclear fusion into light elements becomes dramatically less common, both because the temperature of the universe falls as kinetic energy is captured and converted into nuclear binding energy (an indirect form of the strong nuclear force), and because collisions become less frequent as the size of the universe within the Big Bang light cone grows relative to the number of particles in it.
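As a flavor of the kind of calculation involved, here is a textbook-style toy estimate (not the full reaction network) of the primordial helium-4 mass fraction from the neutron-to-proton ratio at weak freeze-out. The freeze-out temperature and the time at which deuterium formation finally proceeds are rough, assumed round numbers:

```python
import math

# Toy estimate of the primordial helium-4 mass fraction Y_p.
# Assumed round numbers: freeze-out at ~0.75 MeV and ~180 s until the deuterium
# bottleneck breaks; the real calculation integrates a full network of reaction rates.
DELTA_M_MEV = 1.293        # neutron-proton mass difference
T_FREEZE_MEV = 0.75        # assumed weak-interaction freeze-out temperature
T_DEUTERIUM_S = 180.0      # assumed time until deuterium can survive
TAU_NEUTRON_S = 880.0      # free neutron mean lifetime

# Neutron-to-proton ratio at freeze-out (thermal equilibrium value) ...
n_over_p = math.exp(-DELTA_M_MEV / T_FREEZE_MEV)
# ... approximately reduced by free-neutron decay before fusion locks neutrons into helium-4.
n_over_p *= math.exp(-T_DEUTERIUM_S / TAU_NEUTRON_S)

# Essentially all surviving neutrons end up in helium-4.
Y_p = 2 * n_over_p / (1 + n_over_p)
print(f"Y_p ~ {Y_p:.2f}")   # roughly 0.25, in the ballpark of the observed value
```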

For the most part, the predictions of Big Bang Nucleosynthesis are confirmed by experiment. The relative abundance of light element isotopes in the universe is a reasonably close match to what we would expect if Big Bang Nucleosynthesis is an accurate description of what actually happened. The biggest discrepancy is in the abundance of Lithium-7, which differs significantly from the predicted value even though it still has the right order of magnitude.

Using Big Bang Nucleosynthesis To Constrain Beyond The Standard Model Physics

Many predictions of Big Bang Nucleosynthesis are sensitive to the existence of relatively long lived particles (e.g. those with mean lifetimes on the order of seconds or more) beyond those of the Standard Model. Collisions of ordinary protons and neutrons with these particles would cause the relative abundances of light element isotopes to be greater or smaller, although the relationship isn't straightforward, because some decays of such particles will tend to increase element abundances, while other decays of the same particles will tend to decrease the abundances of some of the same elements.

But, plugging a hypothetical new long lived decaying particle into the Big Bang Nucleosynthesis model involves straightforward, well understood physics. If a long lived decaying particle with certain properties exists, it will decay in a very predictable way and it will have a very precisely discernible impact on light element isotope frequencies.

Thus, long lived decaying particles beyond the Standard Model, of a very general type that is not strongly model dependent, can be ruled out if they would give rise to deviations from the Big Bang Nucleosynthesis predictions significantly larger than the existing margins of error in the theoretical calculations and the astronomical measurements of these predictions.

The Results

A new pre-print does just that, and reaches the following conclusion:
We have revisited and updated the BBN constraints on long-lived particles. . . .
We have obtained the constraints on the abundance and lifetime of long-lived particles with various decay modes. They are shown in Figs. 11 and 12. The constraints become weaker when we include the p ↔ n conversion effects in inelastic scatterings because energetic neutrons change into protons and stop without causing hadrodissociations. On the other hand, inclusion of the energetic anti-nucleons makes the constraints more stringent. In addition, the recent precise measurement of the D abundance leads to stronger constrains. Thus, in total, the resultant constraints become more stringent than those obtained in the previous studies.  
We have also applied our analysis to unstable gravitino. We have adopted several patterns of mass spectra of superparticles and derived constraints on the reheating temperature after inflation as shown in Fig. 15. The upper bound on the reheating temperature is ∼ 10^5 − 10^6 GeV for gravitino mass m3/2 less than a several TeV and ∼ 10^9 GeV for m3/2 ∼ O(10) TeV. This implies that the gravitino mass should be ∼ O(10) TeV for successful thermal leptogenesis.  
In obtaining the constraints, we have adopted the observed 4He abundance given by Eq. (2.4) which is consistent with SBBN. On the other hand, if we adopt the other estimation (2.3), 4He abundance is inconsistent with SBBN. However, when long-lived particles with large hadronic branch have lifetime τX ∼ 0.1 − 100 sec and abundance mXYX ∼ 10^−9 , Eq. (2.3) becomes consistent with BBN. 
In this work, we did not use 7Li in deriving the constraints since the plateau value in 7Li abundances observed in metal-poor stars (which had been considered as a primordial value) is smaller than the SBBN prediction by a factor 2–3 (lithium problem) and furthermore the recent discovery of much smaller 7Li abundances in very metal-poor stars cannot be explained by any known mechanism. However, the effects of the decaying particles on the 7Li and 6Li abundances are estimated in our numerical calculation. Interestingly, if we assume that the plateau value represents the primordial abundance, the decaying particles which mainly decays into e +e − can solve the lithium problem for τX ∼ 10^2 − 10^3 sec and mXYX ∼ 10^−7.
Figures 11 and 12 basically rule out any decaying particles with a lifetime of more than a fraction of a second in the mass range of 30 GeV to 1000 TeV for hadronically decaying particles (Figure 11), and impose similar constraints for radiatively decaying particles (Figure 12). This is a nice complement to results from the LHC and other colliders, which exclude beyond the Standard Model particles that are lighter than hundreds of GeV with lifetimes up to roughly a fraction of a second. Big Bang Nucleosynthesis constraints, generally speaking, are more sensitive to masses much heavier than the LHC can reach and to mean lifetimes longer than the LHC is designed to measure. This also strengthens and makes more robust the exclusions based upon an entirely different methodology involving the cosmic microwave background radiation of the universe explored by astronomy experiments such as Planck 2015. This basically rules out any relatively long lived, remotely "natural" supersymmetric particle unless supersymmetric particles have extremely weak interactions with ordinary matter.

A thermal relic gravitino in a supersymmetry (SUSY) model with a mass on the order of 10 TeV suggests a very high energy characteristic supersymmetry scale which, while not directly ruling out some other SUSY particle as a dark matter candidate, makes a SUSY theory that could both supply a dark matter candidate and explain leptogenesis extremely "unnatural."

The potential Lithium problem solution, a particle with a mean lifetime of 100 to 1000 seconds (on the same order of magnitude as a free neutron), favors a quite light, predominantly radiatively decaying (i.e. decaying via photons, electrons and positrons) particle. The main problem with this is that such a particle ought to have shown up in collider experiments if it existed. Yet, there are no known fundamental particles or hadrons that have a mean lifetime of the right length but decay primarily radiatively. The muon and the pion are both tens of millions of times too short lived (or more), the neutron has primarily hadronic decays (to a proton, an electron and an anti-neutrino), and all other hadrons and fundamental particles are much shorter lived. Ergo, a beyond the Standard Model particle is unlikely to be the solution to the Lithium problem of Big Bang Nucleosynthesis.
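For concreteness, a quick comparison of some known mean lifetimes (standard particle data values, rounded) against that 100 to 1000 second window:

```python
# Mean lifetimes in seconds (standard particle data values, rounded).
window = (100.0, 1000.0)         # lifetime range that could address the lithium problem
lifetimes_s = {
    "muon": 2.2e-6,
    "charged pion": 2.6e-8,
    "free neutron": 880.0,       # inside the window, but its decay (p + e- + anti-nu) is not radiative
}

for name, tau in lifetimes_s.items():
    if window[0] <= tau <= window[1]:
        print(f"{name}: {tau:.1e} s -- inside the window")
    else:
        print(f"{name}: {tau:.1e} s -- too short by a factor of about {window[0] / tau:.0e}")
```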

Bottom Line

Big Bang Nucleosynthesis indirectly constrains the parameter space of beyond the Standard Model physics in a manner that strongly complements other methods while not overlapping their exclusions at all.

This makes the case against supersymmetry models, one of the most popular kinds of beyond the Standard Model theories, significantly stronger than it already was based on other lines of reasoning from experimental evidence such as the strong evidence disfavoring a SUSY dark matter candidate.

The Paper

The pre-print and its abstract are as follows:
We study effects of long-lived massive particles, which decay during the big-bang nucleosynthesis (BBN) epoch, on the primordial abundances of light elements. Compared to the previous studies, (i) the reaction rates of the standard BBN reactions are updated, (ii) the most recent observational data of light element abundances and cosmological parameters are used, (iii) the effects of the interconversion of energetic nucleons at the time of inelastic scatterings with background nuclei are considered, and (iv) the effects of the hadronic shower induced by energetic high energy anti-nucleons are included. We compare the theoretical predictions on the primordial abundances of light elements with latest observational constraints, and derive upper bounds on relic abundance of the decaying particle as a function of its lifetime. We also apply our analysis to unstable gravitino, the superpartner of the graviton in supersymmetric theories, and obtain constraints on the reheating temperature after inflation.
Masahiro Kawasaki, et al., "Revisiting Big-Bang Nucleosynthesis Constraints on Long-Lived Decaying Particles" (September 5, 2017).

Tuesday, September 5, 2017

Lost Languages Found In Monastery In Sinai

At Saint Catherine's monastery in the Sinai in Egypt, monks erased old works to copy new ones over them when parchment was scarce. Now, imaging and computers can restore the erased works, and this has revealed a couple of lost languages which were almost completely unattested in writing before now.
Since 2011, researchers have photographed 74 palimpsests, which boast 6,800 pages between them. And the team’s results have been quite astonishing. Among the newly revealed texts, which date from the 4th to the 12th century, are 108 pages of previously unknown Greek poems and the oldest-known recipe attributed to the Greek physician Hippocrates.

But perhaps the most intriguing finds are the manuscripts written in obscure languages that fell out of use many centuries ago. Two of the erased texts, for instance, were inked in Caucasian Albanian, a language spoken by Christians in what is now Azerbaijan. According to Sarah Laskow of Atlas Obscura, Caucasian Albanian only exists today in a few stone inscriptions. Michael Phelps, director of the Early Manuscripts Electronic Library, tells Gray of the Atlantic that the discovery of Caucasian Albanian writings at Saint Catherine’s library has helped scholars increase their knowledge of the language’s vocabulary, giving them words for things like “net” and “fish.”

Other hidden texts were written in a defunct dialect known as Christian Palestinian Aramaic, a mix of Syriac and Greek, which was discontinued in the 13th century only to be rediscovered by scholars in the 18th century. “This was an entire community of people who had a literature, art, and spirituality,” Phelps tells Gray. “Almost all of that has been lost, yet their cultural DNA exists in our culture today. These palimpsest texts are giving them a voice again and letting us learn about how they contributed to who we are today.”