Friday, August 29, 2014

More On Gravitational Field Self-Interaction

After nearly a century spent trying to make sense of general relativity and quantum physics together, it looks quite possible that the missing pieces have been present and available for almost forty years (except for the late-breaking discovery that neutrinos have tiny but non-zero masses).

The answer may very well lie in a more detailed mathematical analysis of the non-Newtonian aspects of gravity that follow directly from the principles upon which the theory was formulated (although not necessarily from precisely the same formulation of the equations derived from those principles).

Deur's Remarkable Work On Gravity

I previously noted at this blog Alexandre Deur's very important work on the phenomenological implications of the fact that massless spin-2 gravitons in a particle based realization of quantum gravity interact not only with other particles carrying mass-energy, but also with each other.

GR and QCD Inspired Self-Interactions Of Gravity With Gravitational Energy May Explain Dark Matter

Deur's analysis suggests that this effect should give rise to essentially all of the phenomena attributed to dark matter, and that its strength is a function of the extent to which baryonic matter distributions are not spherically symmetric and of their scale, with more massive systems exhibiting proportionately stronger dark matter effects and less spherically symmetric systems exhibiting stronger dark matter effects.

His results, which rely on a low order approximation of the gravitational self-interaction term of a general relativity Lagrangian that is constructed by analogy to the Lagrangian of gluon self-interactions in QCD, reproduce the successes and predictions of theories like Milgrom's MOND in galactic scale systems from elliptical galaxies to spiral galaxies to dwarf galaxies, while overcoming MOND's failures in galactic clusters and the Bullet Cluster.

Moreover, while MOND adds one new fundamental parameter to general relativity and the Standard Model, Deur, as noted below, either adds none or takes one away.

In contrast, all dark matter models need, at a minimum, an average dark matter density (which has been determined empirically), and an average dark matter particle mass.  Many also need a mass for a massive dark matter self-interaction boson, and its coupling constant, and/or a cross-section of interaction between one or more dark sector particles and non-dark sector particles.

Subtle Non-Newtonian GR Effects May Explain Dark Energy Without A Cosmological Constant

Furthermore, in a point that I didn't emphasize sufficiently before, he explains how graviton self-interactions, which pull gravitons emitted from ordinary matter towards strong gravitational field lines and away from the destinations we would expect them to take in the absence of gravitational self-interactions, weaken the gravitational pull between systems that show dark matter effects and other clumps of matter outside those systems.

This effect is equivalent to dark energy, and according to his heuristic analysis it should have the right order of magnitude, although he has not reconciled this observation in detail with the astronomical data.

A suggestion that the order of magnitude of the non-Newtonian implications of General Relativity (possibly generalized slightly) may be sufficient to explain the entire dark sector comes from HongSheng Zhao in a preprint (arXiv:0805.4046 [gr-qc], originally submitted on May 27, 2008 and last modified on June 9, 2008), which observes that "the negative pressure of the cosmological dark energy coincides with the positive pressure of random motion of dark matter in bright galaxies."

Another indication that these effects may be of the right order of magnitude to explain dark energy as well as dark matter comes from Greek scientists K. Kleidis and N.K. Spyrou in their paper "A conventional approach to the dark-energy concept" (arXiv:1104.0442 [gr-qc], April 4, 2011).  They too note that energy from the internal motions of the matter in the universe (both baryonic and dark) in a collisional dark matter model is of the right scale to account for existing observational data without dark energy or the cosmological constant.

It is also worth noting that the cosmological constant is small enough that careful analysis of other sources of dark energy effects, in the Standard Model and in the non-Newtonian aspects of general relativity, may explain some or all of it without a cosmological constant.

For example, Ralf Schutzhold, in an April 4, 2002 preprint at arXiv:gr-qc/0204018 entitled "A cosmological constant from the QCD trace anomaly," noted that "non-perturbative effects of self-interacting quantum fields in curved space times may yield a significant contribution" to the observed cosmological constant.  The calculations in his four page paper conclude that: "Focusing on the trace anomaly of quantum chromo-dynamics (QCD), a preliminary estimate of the expected order of magnitude yields a remarkable coincidence with the empirical data, indicating the potential relevance of this effect."

This Approach Eliminates One Fundamental Measured Physical Constant And Adds No New Ones

Even more powerfully, these non-Abelian quantum gravity effects are derived from first principles, without introducing any non-Standard Model particles other than the plain vanilla massless spin-2 graviton that has been widely expected to exist for many decades, and without introducing any new experimentally measured physical constants that weren't already present in the Standard Model and General Relativity.  Indeed, it is quite possible that this analysis could reduce the number of fundamental physical constants in the two combined theories, by eliminating the need for a separate cosmological constant, which is one of the two physical constants specific to General Relativity.

Everything else would just be math.  Hard math, admittedly, but probably nothing beyond what can be accomplished using numerical simulations of the type currently used for Lattice QCD with existing or near-future computing power.

The Only Physical Constants Left To Measure (In Neutrino Physics) May Be Known In A Decade

If this analysis is correct, then the only meaningful gaps that remain in our measurements of the physical constants in the fundamental laws of physics that govern everything in the universe are five quantities related to neutrino physics: (1) the mass hierarchy of the neutrino masses ("normal" or "inverted"), (2) the absolute mass of at least one of the neutrino mass eigenstates (something already quite constrained by Planck data), (3) the quadrant of one of the theta angle parameters of the PMNS matrix (there are two possibilities), (4) the CP violating phase of the PMNS matrix, and (5) the Majorana or Dirac nature of neutrino masses.  This task is likely to be accomplished in the next four to ten years by experiments that are currently being conducted, or that have been designed and funded and are currently in the process of being constructed.

The Standard Model Particle Set, Plus The Graviton, May Be The Complete Set

If this worked, it would strongly suggest that the Standard Model particles plus the graviton are the complete set of particles that exist in the universe:

Three generations of four kinds of fermions, each with their matter and antimatter counterparts, in three color charge variations each for the two kinds of quarks, and in two parity variations each for the three kinds of charged fermions, plus photons, eight color charge variants of gluons, the Z boson, the W+ boson, the W- boson, the Higgs boson and the graviton.
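To make that tally concrete, here is a minimal counting sketch (in Python; the grouping into "kinds" and "parity variations" simply follows the enumeration above, so the total is just one way of counting distinct states, not a canonical figure):

```python
# Tally of the distinct particle states enumerated above.
# "Parity variations" here means left/right chirality, which the
# neutrinos lack in this counting.

generations = 3

# Quarks: 2 kinds x 3 colors x 2 chiralities x 2 (matter/antimatter)
quark_states = generations * 2 * 3 * 2 * 2        # 72

# Charged leptons: 2 chiralities x 2 (matter/antimatter)
charged_lepton_states = generations * 2 * 2       # 12

# Neutrinos: 2 (matter/antimatter), no chirality doubling in this count
neutrino_states = generations * 2                 # 6

# Photon, 8 gluons, Z, W+, W-, Higgs boson, graviton
boson_states = 1 + 8 + 1 + 1 + 1 + 1 + 1          # 14

print(quark_states + charged_lepton_states + neutrino_states + boson_states)  # 104
```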

Thus, any additional fundamental particle content in any beyond the Standard Model theory seeking to unify these forces, other than preons that can produce this and only this set of composite particles, could be rejected immediately as incorrect.

Is Deur's Analysis A Modification of General Relativity?

While Deur's analysis of gravitational self-interactions follows the principles of General Relativity very closely, it is ultimately an incomplete quantum gravity theory, rather than classical General Relativity itself.

Deur's treatment of gravitational self-interaction is non-standard, and can probably be more fairly described as a modification of general relativity, rather than a straight application of Einstein's equations, manipulated into a different form.

As the leading textbook on general relativity, Gravitation by C.W. Misner, K.S. Thorne and J.A. Wheeler (1973), explains in Section 20.4, there are several arguments in favor of the proposition that the energy of a gravitational field cannot be localized and hence cannot be treated in the same way as all other mass-energy in the energy-momentum tensor of general relativity.

This widely held textbook assumption that gravitational energy cannot be localized may be the best explanation for why it has taken so long to seriously explore approaches that incorporate gravitational self-interaction into phenomenological analysis: the self-interaction effects that Deur argues can be deduced from a graviton model of gravity can only arise if gravitational energy is localized, whether in quanta, in classical fields, or in classical curvature self-interactions.

But, as A.I. Nikishov of the P.N. Lebedev Physical Institute in Moscow states in an updated July 23, 2013 version of an October 13, 2003 preprint (arXiv:gr-qc/0310072), these arguments "do not seem convincing enough."  For example, Feynman's lectures on gravitation assumed that gravity was mediated by a graviton that could be localized, with a self-interaction coupling strength proportional to the graviton's energy, just as the graviton couples to any other particle.  String theory and supergravity theories generically make the same assumptions.

Nikishov also made the same assumption as Deur in his paper "Problems in field theoretical approach to gravitation" (arXiv:gr-qc/0410099, originally submitted October 20, 2004 and last revised February 4, 2008), stating in the first sentence of his abstract that:
We consider gravitational self interaction in the lowest approximation and assume that graviton interacts with gravitational energy-momentum tensor in the same way as it interacts with particles.
Deur and Nikishov are not the only investigators to note the potential problems with the anomalous way that conventional General Relativity treats gravitational self-interactions.  Carl Brannen has also pursued some similar ideas.

As another example, consider this statement by A.L. Koshkarov from the University of Petrozavodsk, Russia in his November 4, 2004 preprint (arXiv:gr-qc/0411073) in the introduction to his paper entitled "On General Relativity extension."
But in what way, the fact that gravitation is nonabelian does get on with widely spread and prevailing view the gravity source is energy-momentum and only energy-momentum?  And how about nonabelian self-interaction?  Of course, here we touch very tender spots about exclusiveness of gravity as physical field, the energy problem, etc. . . . All the facts point out the General Relativity is not quite conventional nonabelian theory.
Koshkarov then goes on to look at what one would need to do in order to formulate gravity as a conventional nonabelian theory like conventional Yang-Mills theory.

Alexander Balakin, Diego Pavon, Dominik J. Schwarz, and Winfried Zimdahl, in their paper "Curvature force and dark energy," published at New J. Phys. 5:85 (2003), preprint at arXiv:astro-ph/0302150, similarly noted that "curvature self-interaction of the cosmic gas is shown to mimic a cosmological constant or other forms of dark energy."

Balakin, et al., reach their conclusions using the classical geometric expression of general relativity, rather than a quantum gravity analysis, suggesting that the overlooked self-interaction effects do not depend upon whether one's formulation of gravity is classical or quantum.  The implication, once again, is that a failure to adequately account for the self-interaction of gravitational energy with itself may account for all or most dark sector phenomena.

As noted above, there is a rich academic literature expressing dissatisfaction with the precise way that General Relativity was formulated by Einstein, on the grounds that it lacks one or another subtle aspect of rigor or theoretical consistency, makes a subtle assumption that is unnecessary, or needs to be tweaked to formulate gravity in a quantum manner.

Some of the modifications proposed are more ambitious than others, and there are at least half a dozen serious contenders for ways to reformulate General Relativity, adopting all or most of the core theoretical axioms from which its equations were derived, in a way that is usually indistinguishable from Einstein's equations in all contexts that have been experimentally measured.

What makes the effort by Deur stand out is that his quite simple and naive approach, despite being incomplete and not very rigorous, manages to draw out non-Newtonian effects that replicate much or all of the observed phenomenology of dark matter and dark energy, without adopting any core axioms that aren't extremely well motivated and natural.

The only axiom he adds to Einstein's formulation is that gravitational energy is localized in massless spin-2 gravitons that couple to each other with a strength proportional to their mass-energy, the same coupling that gravitons are proposed to have to all of the Standard Model's other particles.

This assumption is pretty much the most conservative and banal axiom that one could add to General Relativity, and it has been a mainstream assumption of physicists for decades (the graviton was named in 1934, and considerable research has gone into endowing it with properties that replicate most features of general relativity).  Honestly, there is really no other principled stance to take than one in which gravitational energy self-interacts with a strength proportionate to its energy, in just the way that gravity interacts with all other forms of matter and energy, as a wide swath of investigators assume.

And yet, remarkably, it turns out that this subtle additional axiom which naively wouldn't seem to have any phenomenological impact on general relativity at all, actually seems to have shockingly immense phenomenological consequences when analyzed properly.  The impact is so great that this insight may be enough to tie up all of the loose ends in fundamental physics, even without formulating a complete and rigorous theory of quantum gravity.

Indeed, even a slightly weaker assumption that gravitational energy is localized and that gravity self-interacts with this energy at a strength proportional to its energy just as it does to all other forms of mass and energy (which avoids the need to formulate the assumption as part of a quantum gravity theory), is a sufficient assumption to produce the same phenomenological consequences, although that approach is less intuitive and makes the math harder.

While the core concepts of General Relativity are a key pillar of fundamental physics, the exact manner in which Einstein expressed those core concepts mathematically commands far less respect within the ranks of expert physicists than it does among educated laymen.  If Deur hasn't missed something really basic in his analysis, in a part of general relativity theory that has been widely avoided based upon the textbook lore that it was a dead end, then this scorn will have been well deserved.

Prospects For Future Fundamental Physics Research

Renormalization With Quantum Gravity

One of the notable features of the Standard Model is that many of the physical constants in it are functions of the energy scale of the interactions in which they are measured.  All of the Standard Model's mass constants (with the possible exception of the neutrino masses if they are not Dirac masses), and all of its coupling constants "run" with the energy scale at which they are measured.

In a quantum gravity theory, there is every reason to expect that the gravitational coupling constant (or equivalently, the Planck mass, which is the inverse of the square root of eight pi times the gravitational coupling constant G) runs with energy scale as well.  In one theoretically well motivated analysis, the square of the Planck mass at energy scale "k" is expected to run to a value of the square of the zero energy Planck mass plus k^2 times a physical constant approximately equal to 0.05.
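Written out as an equation (a direct transcription of the sentence above; in Shaposhnikov and Wetterich's notation the constant is 2ξ0, with ξ0 ≈ 0.024, hence roughly 0.05):

```latex
M_P^2(k) = M_P^2(0) + c\,k^2, \qquad c \approx 0.05
```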

The incorporation of gravity into the Standard Model would also add a term to each of the other beta functions of the Standard Model that govern the running of each of its physical constants that run with energy scale.  At energy scales much smaller than the Planck scale, the impact of these additional terms in the beta functions is negligible.  But, at high energies approaching the Planck scale, these contributions would be appreciable.

Mikhail Shaposhnikov and Christof Wetterich, in their groundbreaking paper "Asymptotic safety of gravity and the Higgs boson mass" (arXiv:0912.0208, submitted January 12, 2010), made one of the most accurate predictions of the Higgs boson mass (126 GeV +/- 2.2 GeV) before it had been measured, using this kind of analysis with an incomplete fragment of a hypothetical quantum gravity theory.

The Higgs boson self-coupling, which determines the Higgs boson mass, falls as energy scales get higher.  They assumed, for some well motivated reasons, that it would hit zero at the Planck scale, and then used the Standard Model beta function for the Higgs boson, modified by a term to account for the gravitational impact on the running of the beta functions, to calculate the minimum Higgs boson mass that follows from that boundary condition, using the top quark mass and strong force coupling constant as calibration points.
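For intuition, here is a toy sketch (in Python) of the boundary-condition logic only: it runs the plain one-loop Standard Model couplings (with no gravitational correction terms, so it is emphatically not Shaposhnikov and Wetterich's calculation), imposes a vanishing Higgs self-coupling at the Planck scale, and reads off the corresponding Higgs boson mass; the input coupling values at the top quark mass are illustrative round numbers.

```python
import math

# Toy one-loop Standard Model running of the Higgs quartic coupling
# ("lambda"), from the top quark mass up to the Planck scale.  This is
# NOT Shaposhnikov and Wetterich's calculation (their gravitational
# correction terms are omitted entirely); it only illustrates the
# boundary-condition logic: impose lambda(M_Planck) = 0 and read off
# the corresponding Higgs boson mass below.

MT, MPL, VEV = 173.0, 1.2e19, 246.22  # GeV: top mass, Planck mass, Higgs vev

def betas(lam, yt, g1, g2, g3):
    """One-loop SM beta functions, d/d(ln mu); g1 is the hypercharge g'."""
    k = 1.0 / (16 * math.pi ** 2)
    b_lam = k * (24 * lam ** 2 - 6 * yt ** 4
                 + 0.375 * (2 * g2 ** 4 + (g2 ** 2 + g1 ** 2) ** 2)
                 + lam * (12 * yt ** 2 - 9 * g2 ** 2 - 3 * g1 ** 2))
    b_yt = k * yt * (4.5 * yt ** 2 - 8 * g3 ** 2
                     - 2.25 * g2 ** 2 - (17.0 / 12.0) * g1 ** 2)
    b_g1 = k * (41.0 / 6.0) * g1 ** 3
    b_g2 = k * (-19.0 / 6.0) * g2 ** 3
    b_g3 = k * (-7.0) * g3 ** 3
    return b_lam, b_yt, b_g1, b_g2, b_g3

def lambda_at_planck(lam0, steps=4000):
    """Euler-integrate the couplings from mu = MT to mu = MPL."""
    lam, yt, g1, g2, g3 = lam0, 0.94, 0.36, 0.65, 1.17  # rough values at MT
    dt = math.log(MPL / MT) / steps
    for _ in range(steps):
        b_lam, b_yt, b_g1, b_g2, b_g3 = betas(lam, yt, g1, g2, g3)
        lam += b_lam * dt
        yt += b_yt * dt
        g1 += b_g1 * dt
        g2 += b_g2 * dt
        g3 += b_g3 * dt
    return lam

# Bisect on lambda(MT) so that lambda at the Planck scale comes out to zero.
lo, hi = 0.05, 0.22  # brackets the zero crossing
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if lambda_at_planck(mid) < 0:
        lo = mid
    else:
        hi = mid

m_higgs = math.sqrt(2 * lo) * VEV  # tree-level relation m_H = sqrt(2*lambda)*v
print(f"lambda(MT) = {lo:.3f}  ->  m_H = {m_higgs:.0f} GeV")  # ~130 GeV ballpark
```

At one loop with these rough inputs this lands around 130 GeV; the paper's 126 GeV figure comes from a far more complete treatment (higher loop orders, threshold corrections, and the gravitational contributions themselves).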

If the gravitational coupling constant does run and does influence the running of the other Standard Model physical constants at high energies, which the success of Shaposhnikov and Wetterich's prediction would tend to support, and the complete set of fundamental laws of physics really does amount to the Standard Model plus a GR-like graviton, then the only thing we need to add to our knowledge to predict how the laws of physics work at high energies approaching the Planck scale is the correct form of the gravitational terms added to the Standard Model beta functions.

Shaposhnikov and Wetterich didn't need to determine those to reach their conclusion, because they found a way to make all of the beta function terms that they didn't know cancel out in their calculations.  But, if the universe turns out to be fully explained by the Standard Model plus a graviton, then these calculations become the biggest unsolved problem in fundamental physics apart from neutrino physics, and fall almost exclusively to the theoretical physics community rather than to the experimentalists.

There still isn't a really solid analysis of how inserting gravity terms into the beta functions of the Standard Model, as Shaposhnikov and Wetterich did at an ansatz level, impacts gauge coupling unification in the Standard Model.  If the introduction of these terms can be shown to lead to a gauge unification of the Standard Model coupling constants at the Planck scale (a feature that SUSY and SUGRA theories achieve at the GUT scale), then the case that the Standard Model really is complete, even if a more elegant underlying preon or string model may explain it, becomes truly compelling.

Impact On Astronomy Research

If Deur's work is mathematically validated and confirmed when the observational evidence is re-evaluated in light of his analysis, it will dramatically take the wind out of the sails of large numbers of direct dark matter detection experiments, and the order of the day will be to reconsider a wealth of astronomy data already existing in this new framework and to identify new astronomy observations that could test it.

For example, it makes observations designed to test the strength of gravity between isolated pairs of galaxies or galactic clusters, whose masses can be estimated accurately by looking at the behavior of isolated intergalactic stars, a critical priority for astronomers, when such observations otherwise might not have been pursued with any real precision.

It would also prompt a minor revolution in cosmology that would likely lead to the demise of the reigning six parameter lambda CDM model, even though it fits the available data very well.

Impact On High Energy Physics And Theoretical Physics

While Deur's work alone wouldn't directly challenge any of the beyond the Standard Model physics theories that are being tested at the LHC, most of which already include the massless spin-2 gravitons that are necessary in his General Relativity extension, it would eliminate the single most powerful motivating reason to look for beyond the Standard Model physics: the near certainty that there must be one or more new beyond the Standard Model particles out there in the dark sector that need to be explained.

Suddenly, there is no longer a need for even a singlet keV scale mass sterile neutrino to explain the experimental evidence.  Suddenly, it is possible, without further experiment, to calculate how Standard Model particles should behave all of the way up to the Planck scale in a manner that fully incorporates all four known forces in the universe.

Suddenly, SUSY and supergravity theories that preserve or nearly preserve R-parity in a way that gives rise to a long lived lightest supersymmetric particle look like they have a bug, rather than a feature.  Similarly, stable axions, which are now touted as a potential explanation of both dark matter and the strong CP problem, no longer look nearly as attractive.

The hierarchy problem, in light of the analysis of Shaposhnikov and Wetterich, strengthened in a Bayesian manner by their accurate Higgs boson mass prediction and by the strong inference from Deur's analysis that gravitational energy really is localized in gravitons, looks like a natural side effect of asymptotic safety at the Planck scale.

More generally, the experimental motivation for pursuing any kind of theoretical research program that suggests the existence of beyond the Standard Model particles or forces collapses.  It may take a generation of old guard members of the academy dying off for this conclusion to really sink in, but the next generation of string theorists and GUT modelers will likely find it much more attractive to ruthlessly limit their inquiries to models that can exactly reproduce the Standard Model plus graviton particle set than the generation that preceded them did.

Less glamorous challenges, like trying to understand the physics of dense condensed matter objects in space like neutron stars, looking for exotic hadrons, explaining scalar and axial vector mesons, and developing the theory of parton distribution functions, will start to look more promising than the grand race to find a GUT theory that merely post-dicts Standard Model and gravitational physics that we already understand, except for the "why" we have the laws of physics that we do instead of something different.  Increased precision in the measurements of the soon to be completely measured set of fundamental physical constants will also narrow down the options even for those kinds of inquiries.

In principle, these conclusions should only slightly refine research into baryogenesis, leptogenesis, and cosmological inflation.  But, psychologically, it is going to make research programs that try to tease these phenomena out of the already known laws of the universe, like Higgs inflation theories, look considerably more attractive relative to those that propose new physics than is the case today.

Obviously, if Deur's work does pan out, the attractiveness of working on turning incomplete theories of quantum gravity into comprehensive ones blossoms, in light of a powerful hint in the right direction, while many existing branches of theoretical work in general relativity and quantum cosmology, such as massive gravity theories, wither on the vine.

[UPDATE September 2, 2014]  Physics Forum has a nice analysis of the issue of self-gravitation in Einstein's equations at a thread here.

Tuesday, August 26, 2014

Still No Experimental Evidence Of Charged Lepton LFV

Experimental evidence continues to confirm the Standard Model rule that lepton flavor violation does not occur in the charged lepton sector (e.g. a Z boson cannot decay to an electron and an anti-muon, which would not conserve lepton flavor number, even though this decay would conserve total lepton number).

According to the ATLAS experiment at the LHC, the maximum branching fraction of lepton flavor violating decays from Z bosons is experimentally bounded to be not more than 7.5*10^-7 (out of a total combined branching fraction of all possible decays of 1), compared to probabilities on the order of 3.363*10^-2 for the least common decays into a pair of Standard Model fundamental particles.  This bound is about 2.5 times stronger than the one currently listed by the Particle Data Group.

Of course, neutrinos oscillate, which is fundamentally a lepton flavor violating process. But, neutrino oscillation according to the PMNS matrix does not give rise indirectly to experimentally discernible lepton flavor violation in the charged lepton sector in the Standard Model.

Friday, August 22, 2014

Ideogenesis and Ideocide

Ideocide is the intentional extinction of an idea or a way of thinking. Related concepts are ethnocide and civicide, the intentional cessation of an ethnicity or a civilization. All three are conceptually distinct from genocide, the intentional killing of a people, in that they go to the death not of human beings, but of ways that human beings live.

Mostly, these terms have been used derisively by critics of globalisation on the radical left. At the moment, anyway, my interest is in the history of these events, and I am only secondarily interested in the intentions of those who bring these events about. In other words, I am interested in what causes ideas, ethnicities or civilizations to cease to exist.

The Rise and Fall of Polytheistic Paganism

One of the most striking of these events, and a favorite subject for English Romantic poets, is the death of polytheistic paganism outside Hindu India and a handful of other communities around the world (some with source populations from India, and others outside that tradition such as the Mari people of Russia). We know a fair amount about how the pagan religions of Europe and the Mediterranean came to cease to be practiced because it happened in the historical era.

The Roots of Polytheistic Paganism

The fact that this kind of paganism apparently wasn't practiced in sub-Saharan Africa, or further east than India (the exception that proves the rule, Hindu religious practice in Bali, dates from the 1st century CE), lends support to the argument that the proto-Indo-Europeans and the proto-Semites (the people whose languages ultimately gave rise to both Hebrew and Arabic) were polytheistic pagans, and that polytheism was probably developed in the Middle East.

Evidence of proto-Hindu religious practice dates to the pre-Indo-European farming people of the Indus River Valley (now mostly in Pakistan), as far back as 3300 BCE, which coincides with the oldest historical records of polytheistic Egypt and Sumer (in what is now Southern Iraq). The Middle Eastern origins of some of the domesticated crops used in the Indus River Valley civilization, and the stronger links that both the Dravidian and the Indo-European languages of India have to languages with Middle Eastern origins than to languages spoken further to the east of India, suggest that the people of India who practiced polytheism had Middle Eastern roots.

The absence of polytheistic paganism among the peoples who farm crops originally domesticated in what is now the Sahara desert, before it was desiccated, suggests that polytheistic paganism arose after the Sahara desert formed. This started to happen around 4000 BCE.

This puts the likely date for the development of polytheism very close in time to the first written records and the first known kings in human history, about six or seven millennia after the Neolithic Revolution, sometime between 4000 BCE and 3300 BCE, as noted before, probably somewhere in the Middle East or closely adjacent to it (perhaps Anatolia, Southwestern Iran or along the Nile).

The evidence indicates that the people of Southeast Asia and East Asia, who did not develop polytheism in the way that the people of Europe, North Africa, the Near East and India did, independently developed agriculture around the time of the Neolithic revolution; their shared ties to the first modern humans to leave Africa for the Middle East (probably around 100,000 BCE) pre-date the Neolithic revolution (around 11,000 BCE).

We also know that the ancestors of the Native Americans, who are not polytheistic (with the possible exception of the gods of the early farming empires), left Asia for the Americas sometime after the domestication of the dog (25,000 BCE to 15,000 BCE) and before any plants or animals were domesticated (around 11,000 BCE), after which rising sea levels isolated them from Eurasia for more than ten millennia.

And, we know that the ancestors of the indigenous people of Australia, who are not polytheistic, arrived there around 50,000 BCE and were isolated from the rest of the world, with very minor exceptions that left little impact other than the introduction of the dingo to Australia from a small founding population, until the 19th century.

The Transition To Monotheism

The Hebrew Bible is predominantly a story of the transformation of the Jewish people from polytheistic pagan and animist beliefs to monotheism in the area from Babylon to Egypt over a period of about fifteen hundred years. It describes the Jewish people as a people surrounded by and persecuted by polytheistic pagans from the time that the Jewish people made a covenant with their God until the Hebrew Bible's narrative ends.

While the Hebrew Bible is hard to view as an entirely factual historical account (it differs from other historical records in some particulars, is ambiguous in other particulars, and describes events that are implausible or fantastic in yet others), there is good reason to believe that there was a largely monotheistic kingdom or pair of kingdoms made up of Hebrews for a significant part of the first millennium BCE.

The principal scriptural source for Muslims, the Koran, likewise makes clear that the Arab peoples who formed the initial core of Muslims in the 600s were also predominantly polytheistic pagans prior to adopting Islam.

The Rise of Monotheism

There is some evidence for a brief period of monotheistic or near monotheistic religion arising from Egyptian polytheism for the life of one or two rulers in Egypt at a time when some of the ancient Hebrews may have lived there or been in contact with Egypt.

There is no evidence of other major monotheistic religions in the Middle East in this era, although the dualistic Zoroastrian religion, with its roots in Iran, was encountered by and probably influential in the religious beliefs of Jews exiled in what is modern day Iraq in the period after the Hebrew Bible's narrative ends and before the destruction of the Jewish Temple in Jerusalem by the Romans in 70 CE.

Christianity as an organized religion didn't come into being until the middle of the first century CE, and was not very prominent in Roman society for another century. The destruction of the Temple gave rise to the Jewish diaspora and Judaism in its modern Rabbinic form, and the formative periods of both Christianity and Rabbinic Judaism were contemporaneous. There would be no Jewish communities larger than villages or neighborhoods in a large non-Jewish city again until the rise of Israel in the 20th century.

The End of the Pagan Era

Prior to Roman Emperor Constantine's decree legalizing the practice of Christianity on a basis remarkably similar to the American First Amendment protections for religion, the religious ethos organized around multiple deities was the state religion of the Roman Empire. The Greeks, the Egyptian Coptic civilization, the Sumerians, the Norse and India's Hindus all had comparable religious systems with different deities in each case.

Less than two hundred years later, not long before the Western Roman Empire fell to barbarian invaders, the Roman Empire banned polytheistic pagan practice and vigorously rooted it out, enacting civil forfeiture laws to dispossess people of property used for pagan practice, destroying pagan texts, and making it a crime to conduct pagan religious practice. The late Roman Empire also sought to stamp out heretical Christian doctrines (mostly Arian doctrine and to a lesser extent Gnostic Christian beliefs, which had already been greatly marginalized in the Western Roman Empire). By the time that the empire fell, living pagan religious practice was all but dead. The former Romans, and many of the barbarian invaders closest in space to the former Western Roman Empire, as well as essentially all of the residents of the Eastern Byzantine Empire (it would not acquire the name until later), were at least nominally affiliated with some branch of the Christian faith or were part of a small Rabbinic Jewish diaspora.

Starting about a century after the fall of Rome, and continuing for about two centuries after that, the Islamic Empire expanded dramatically. It grew to include much of the former Roman and Byzantine Empires, including the Levant (which made up much of the southern half of the Byzantine Empire), the North African coast, and Iberia (the peninsula that is home to modern Spain and Portugal).

The Islamic empire tolerated Jews and Christians as "people of the book" and spiritual ancestors of Islam, but like the Roman Empire in its last century or so, banned and punished polytheistic pagan practice.

The Islamic empire ultimately broke up, but all of modern day North Africa, the modern day Middle East (outside Israel and a few pockets of Christianity in the Levant), Turkey and parts of the Balkans remain overwhelmingly Muslim, and have been predominantly Muslim without interruption other than a few brief periods of Crusader rule in the Levant in the Middle Ages. Polytheistic paganism never resurfaced in any of these territories before the advent of late 19th century and 20th century neo-paganism.

Tuesday, August 19, 2014

More Dark Matter Exclusions (Updated August 20, 2014)

Background

The Dark Matter Hypothesis

Dark matter theory proposes that one or more nearly collisionless particles give rise to the phenomena attributed to dark matter in the lambda CDM standard model of cosmology, in models of large scale structure formation in the universe, in the flat rotation curves of essentially all observed galaxies, and in the large discrepancies between the inferred masses of galaxies and galactic clusters (observed in their kinematics and in relativistic lensing measurements) and the masses of the luminous matter in those galaxies and galactic structures.

In principle, dark matter theory does not necessarily require new fundamental particles to give rise to dark matter, which the six parameter lambda CDM model fits from experiments such as the Planck satellite measurements of the cosmic background radiation suggest constitutes about three-quarters of all matter in the universe.

But, ordinary matter made of baryons or of any of the Standard Model leptons or bosons (such as interstellar gas made of hydrogen and helium atoms, and interstellar dust) has largely been ruled out by astronomy observations, or because all known fundamental and composite particles lack the necessary particle properties.

So dark matter theory seems to require new, beyond the Standard Model physics that gives rise to new kinds of particles that have thus far not been detected experimentally.

The State of the Effort To Detect Dark Matter Particles

Scientists have not directly observed such dark matter particles, but they have ruled out large swaths of the parameter space for such particles that seem to be inconsistent with the empirical data.  There have recently been false alarms in which dedicated direct dark matter detection experiments, the Fermi experiment, and, most recently, observations of a 3.55-3.57 keV monochromatic X-ray emission line (recapped below) seemed to provide direct evidence of new physics dark matter particles.  But, each time, these false alarms have been subsequently discredited.

This post recounts the most recent false alarm, and some of those dark matter parameter space exclusions.

The Impending Existential Crisis For Dark Matter

The non-detection of dark matter is not for want of a large community of physicists and astronomers devoting immense efforts, in often reasonably well funded experiments, to look for it.

The experimental quest for new physics dark matter particles is barreling towards an existential crisis point at which we either (1) identify and characterize dark matter fairly precisely, given the myriad experimental and observational bounds on its existence, using direct or indirect evidence, or (2) find that dark matter theory is overconstrained by the data and disproven, because all possible dark matter candidates can be ruled out.

The Problem With Gravitational Equation Based Understandings Of Dark Matter Phenomena

The trouble is that compelling data points like the Bullet Cluster observations also rule out many versions of the main competitors to new physics dark matter particles to explain dark matter phenomena, which involve modifications to gravity, or alternatively, to refined understandings of the non-Newtonian aspects of general relativity.  Many kinds of gravitational force effects are likewise ruled out as an explanation of dark matter phenomena.

The non-Newtonian implications of general relativity have so far only been explored in selective simplifications of the equations of general relativity themselves, because these equations have proven too intractable mathematically to apply directly to complex real world systems.

But, it is possible, for example, that assumptions involved in simplifying the equations of general relativity to apply them to real world complex systems assume away non-Newtonian implications of these equations that would be revealed if other assumptions were made, or that the equations of general relativity are incorrect in some subtle way not revealed by experimental tests of General Relativity conducted to date. So far, experimental tests of the non-Newtonian features of general relativity have all resoundingly confirmed the real world validity of these equations, although not at anything approaching the levels of precision found in experimental confirmations of the Standard Model.

The New Data

* Jester at Résonaances has noted indications that the 3.55-3.57 keV monochromatic X-ray emission from galactic clusters and Andromeda is probably just excited potassium and chlorine atom emissions and not actually dark matter. In support of this conclusion, he cited the pre-print by Tesla E. Jeltema and Stefano Profumo, "Dark matter searches going bananas: the contribution of Potassium (and Chlorine) to the 3.5 keV line" (August 7, 2014).  The abstract for this paper states that:
We examine the claimed excess X-ray line emission near 3.5 keV with a new analysis of XMM-Newton observations of the Milky Way center and with a re-analysis of the data on M31 and clusters. In no case do we find conclusive evidence for an excess. 
We show that known plasma lines, including in particular K XVIII lines at 3.48 and 3.52 keV, provide a satisfactory fit to the XMM data from the Galactic center. We assess the expected flux for the K XVIII lines and find that the measured line flux falls squarely within the predicted range based on the brightness of other well-measured lines in the energy range of interest. 
We then re-evaluate the evidence for excess emission from clusters of galaxies, including a previously unaccounted for Cl XVII line at 3.51 keV, and allowing for systematic uncertainty in the expected flux from known plasma lines and for additional uncertainty due to potential variation in the abundances of different elements. We find that no conclusive excess line emission is present within the systematic uncertainties in Perseus or in other clusters. 
Finally, we re-analyze XMM data for M31 and find no statistically significant line emission near 3.5 keV to a level greater than one sigma.
A response has been posted here and in the comments at Résonaances.

There had previously been strong claims that the 3.5 keV X-ray emission line detected by Bulbul and others was a warm dark matter annihilation signal.

Another pre-print just released August 20, 2014, casts further doubt on this signal because it is not seen in observations of 170 other galaxies.
We conduct a comprehensive search for X-ray emission lines from sterile neutrino dark matter, motivated by recent claims of unidentified emission lines in the stacked X-ray spectra of galaxy clusters and the centers of the Milky Way and M31. Since the claimed emission lines lie around 3.5 keV, we focus on galaxies and galaxy groups (masking the central regions), since these objects emit very little radiation above ~2 keV and offer a clean background against which to detect emission lines. We develop a formalism for maximizing the signal-to-noise of sterile neutrino emission lines by weighing each X-ray event according to the expected dark matter profile. 
In total, we examine 81 and 89 galaxies with Chandra and XMM-Newton respectively, totaling 15.0 and 14.6 Ms of integration time. We find no significant evidence of any emission lines, placing strong constraints on the mixing angle of sterile neutrinos with masses between 4.8-12.4 keV. In particular, if the 3.57 keV feature from Bulbul et al. (2014) were due to 7.1 keV sterile neutrino emission, we would have detected it at 4.4 sigma and 11.8 sigma in our two samples. Unlike previous constraints, our measurements do not depend on the model of the X-ray background or on the assumed logarithmic slope of the center of the dark matter profile.
From Michael E. Anderson, Eugene Churazov, and Joel N. Bregman, "Non-Detection of X-Ray Emission From Sterile Neutrinos in Stacked Galaxy Spectra" (18 Aug 2014) (emphasis added).

The bottom line is that even if warm dark matter sterile neutrinos do exist, they do not annihilate in X-ray emitting events of the kinds that would have been produced by the "Bulbulon" dark matter candidate.

Given that ordinary "fertile" neutrinos and anti-neutrinos do not directly annihilate in a manner that produces photons, in the way that collisions of charged matter-antimatter particle pairs do, this isn't too surprising. (A neutrino and anti-neutrino pair could "annihilate" and give rise to a Z boson, to which both neutral leptons couple via the weak force.  But, the neutrinos would need a combined 91.2 GeV of kinetic energy to make this possible, which would require that an ultra-relativistic neutrino and antineutrino collide, and such energetic neutrinos are largely restricted to rare cosmic ray neutrinos such as those recently detected by the IceCube experiment that are probably emitted by blazars.  Alternately, a neutrino and antineutrino pair could annihilate into particle-antiparticle decay products of a Z boson (including low energy photons, which would be the only Z boson decay products that are energetically permitted in many cases) with less combined mass-energy than the neutrino-antineutrino pair via a virtual Z boson.  But, while this quantum tunneling effect is possible, it is suppressed in frequency due to the large amount of mass-energy that must be "borrowed" to cross the 91.2 GeV Z boson mass threshold, if the colliding neutrinos are less energetic.)
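To make the threshold arithmetic concrete, here is a minimal sketch (neutrino masses neglected and a head-on collision assumed; the relic neutrino energy below is only an illustrative order of magnitude):

```python
import math

M_Z = 91.1876  # GeV, the Z boson mass

def sqrt_s_head_on(e1, e2):
    """Center-of-mass energy, in GeV, for a head-on collision of two
    effectively massless neutrinos with energies e1 and e2 (GeV)."""
    return 2 * math.sqrt(e1 * e2)

# A symmetric head-on collision reaches the Z pole when each neutrino
# carries about 45.6 GeV:
print(sqrt_s_head_on(45.6, 45.6) >= M_Z)  # True

# By contrast, hitting a slow background neutrino (taking ~1e-13 GeV as
# an illustrative energy) requires an absurdly energetic partner:
print((M_Z / 2) ** 2 / 1e-13)             # ~2e+16 GeV
```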

But, the strongest direct evidence for the detection of warm dark matter now seems to be discredited.

* Meanwhile, Tommaso Dorigo reports on dark matter exclusions from the CMS experiment at the Large Hadron Collider which, when added to the exclusions from the LUX direct dark matter detection experiment, rule out a huge part of the dark matter parameter space.



Between CMS and LUX, scalar dark matter particles are ruled out in the mass range of about 1 GeV to 1 TeV down to cross sections of interaction of 10^-44 cm^2, and down to cross sections of interaction of 10^-45 cm^2 for dark matter particles in the mass range of 1 GeV to 200 GeV. For vector dark matter particles, the corresponding cross section of interaction exclusions are 10^-38 cm^2 and 10^-40 cm^2, respectively.

The new LHC data exclude more of the parameter space mostly in the dark matter boson mass ranges from 1 GeV to 10 GeV.

These direct dark matter exclusions add another nail to the coffins of the CDM (cold dark matter) and WIMP (weakly interacting massive particle) dark matter paradigms.  All but the heaviest forms of CDM are excluded, and the cross sections of interaction are too slight for any particle that has the same weak force coupling constant as a neutrino and no other Standard Model interactions.

Neither the LHC, nor the various direct dark matter detection experiments, are sensitive enough at low masses to rule out WDM (warm dark matter, generically in the mass vicinity of a keV) with the same certainty, and astronomy data tend to prefer WDM models over CDM models.  Other very light dark matter particle candidates, such as the hypothetical particles called axions, are in the "hot dark matter" mass range that has been experimentally ruled out, but fit within an exception to that exclusion because axions, unlike most proposed forms of dark matter, would not be produced as thermal relics.  But, precision electroweak data from LEP do exclude the existence of any new weakly interacting particles with masses of less than 45 GeV that are heavier than the three Standard Model massive neutrinos.

Supersymmetry (SUSY) models, generically, predict the existence of beyond the Standard Model particles that are likewise too heavy, given current experimental bounds.

Other Constraints On New Dark Matter Sectors

As I noted in a post in January of this year, the astronomy data, in addition to the direct detection and collider data, severely constrain the dark matter parameter space. Some key conclusions from that post:

The purest form of sterile neutrino, with a particular mass and no non-gravitational interactions at all, is ruled out by observational evidence from the shape of dark matter halos.

* No particles that could produce the right kind of dark matter halo are produced in the decays of W and Z bosons, ruling out, for example, any neutrino-like particle with a mass of 45 GeV or less. In other words, no light dark matter candidate can be "weakly interacting".

* Direct detection of dark matter experiments such as XENON and LUX rule out essentially all of the cold dark matter mass parameter space (from below 10 GeV to several hundreds of GeV, with the exclusion most definitive at around 50 GeV) down to cross-sections of interaction on the order of 10^-43 to 10^-45 cm^2, which is a far weaker cross section of interaction than the neutrino has via the weak force.

The data rule out any kind of interaction between cold dark matter and ordinary matter via any recognizable version of the three Standard Model forces (electromagnetic, weak and strong). Of course, by hypothesis, dark matter and ordinary matter interact via gravity just like any other massive particles. Thus, interactions between dark matter and ordinary matter other than via gravity are strongly disfavored.

* Dark matter has zero net electric charge (if dark matter is composite and confined, in principle, its components might still have electric charge) and is not produced or inferred in any strong force interactions observed to date in collider experiments.

* XENON also places strong limits on interactions between ordinary photons and "dark photons" found in some self-interacting dark matter theories.

To explain dark matter phenomena, one needs at least a new dark matter fermion and a new massive boson carrying a new force, because purely collisionless dark matter models don't fit the data.

* Purely collisionless dark matter (i.e. dark matter that interacts with other dark matter only via the gravitational force), with any particular mass anywhere from the keV range to the TeV+ range, produces cuspy halos inconsistent with observational evidence. (But, quantum mechanical effects when dark matter halos become dense, and gravitational interactions between ordinary baryonic matter and dark matter, could mitigate these problems.  Arguments that Fermi pressure in the case of fermionic dark matter can solve the "cuspy core problem" were discussed at a recent conference on Warm Dark Matter, for example, in this power point presentation by de Vega citing work supporting the WDM paradigm.  These papers also argued that angular momentum can discourage, but not sufficiently prevent, clumpiness, and that collisionless GeV mass cold dark matter simulations always over predict the abundance of dark matter in the centers of galaxies.)

* Models with multiple kinds of collisionless dark matter simultaneously present in the universe at the same time produce worse fits to the data than single variety of collisionless dark matter models.

* Collisionless bosonic dark matter, as well as fermionic collisionless dark matter, is likewise excluded over a wide range of parameters.

* Self-interactions between dark matter particles with cross-sections of interaction on the order of 10^-23 to 10^-24 greatly improve the fit to the halo models observed (self-interactions on the order of 10^-22 or more, or of 10^-25 or less, clearly don't produce the inferred dark matter halos that are observed). Notably, this cross section of self-interaction is fairly similar to the cross-section of interaction of ordinary matter (e.g. helium atoms) with each other. So, if dark matter halos are explained by self-interaction, the strength of that self-interaction ought to be on the same order of magnitude as electromagnetic interactions. But, our observations and simulations are now sufficiently precise that we can determine that a simple constant coupling constant between dark matter particles, or even a velocity dependent coupling constant between dark matter particles, fails to fit the inferred pseudo-isothermal ellipsoid shaped dark matter halos that are observed. Generically, these simple self-interacting dark matter models generate shallow spherically symmetric halos which are inconsistent with the comparatively dense and ellipsoidal halos that are observed.

* Experimental evidence has not yet ruled out next generation self-interacting dark matter models that look at a more general Yukawa potential generated by dark matter to dark matter forces with massive force carriers (often called "dark photons") that have masses which empirically need to be on the order of 1 MeV to 100 MeV (i.e. between the mass of an electron and a muon, and less than the lightest hadron, the pion, which has a mass on the order of 135-140 MeV) to produce dark halos that are a better fit to the dark matter halos that are observed (the generic form of the assumed potential is written out just after this list). Sean Carroll was a co-author on one of the early dark photon papers in 2008.

* Rapidly accumulating evidence regarding the properties of the Higgs boson disfavors new heavy particles that gain their mass via the Higgs mechanism. But, these measurements are not so precise that they could disfavor new light particles that gain their mass via the Higgs mechanism. Current experimental uncertainties in this equivalence could accommodate both a new massive boson of 100 MeV and a new massive fundamental fermion of up to about 3 GeV, so both particles could couple to the Higgs boson and obtain their mass entirely from interactions with it, even though they don't couple to the other Standard Model forces. But, reduced margins of error in measurements of the Higgs boson mass and top quark mass could tighten this constraint.
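For reference, the generic form of the Yukawa potential that the dark photon models in the list above assume, written out (here α_X is the dark-sector coupling strength and m_φ is the dark photon mass; the symbols are illustrative, not taken from any one paper):

```latex
V(r) = \pm \frac{\alpha_X}{r} \, e^{-m_\phi r}
```

The exponential factor cuts the force off beyond a range of roughly 1/m_φ, which is what lets these models reshape halo interiors without altering long range behavior.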

Conclusion

Two Plausible Corners of Dark Matter Parameter Space Remain In Play

A simple warm dark matter scenario with a 2-3 keV dark matter particle that interacts only via gravity and Fermi contact forces is not yet ruled out.  It may avoid, via Fermi contact forces, the cuspy core problems that are generically a problem in heavier cold dark matter scenarios.

Another part of the dark matter parameter space (also here) generates the appropriate halos with a fairly light dark matter candidate with a dark photon of a mass of ca. 1-100 MeV and a fermionic dark matter particle under 3 GeV, neither of which has any meaningful non-gravitational interaction with ordinary matter. [Update August 27, 2014 - the linked papers actually suggest a 1 TeV mass dark matter particle and manage to make a complicated mixed dark matter spectrum, with additional sterile neutrinos that couple to dark photons, work as well.]

De Vega, in the power point presentation linked above, however, makes a pretty convincing argument that axion dark matter is also a poor fit to the data.

On the other hand, if both of the small valid remaining corners of the dark matter parameter space are ruled out, then the experimental data is close to ruling out dark matter models entirely.

In either case, dark matter would clearly be almost entirely outside the domain of Standard Model physics.  There must be a dark sector that has almost no interactions with the Standard Model sector.

Gravitational Approaches Aren't As Definitively Ruled Out As They Seemed To Have Been

Physicist Alexandre Deur claims to have identified a non-linear, non-Newtonian aspect of the canonical equations of General Relativity, interpreted in the context of a graviton field theory involving the non-Abelian self-interactions of gravitons, that could explain all or most dark matter phenomena in a manner that evades the limitations arising from observations of the Bullet Cluster which have dealt a serious blow to gravity modification theories, because his gravitational effect is suppressed in spherically symmetric systems (an assumption found in many attempts to analyze the non-Newtonian effects of general relativity).

Even if Deur is wrong in concluding that these effects are present in the general relativity equations themselves, a conclusion which has not been replicated by the many other general relativity theorists of the last hundred years, he demonstrates that the Bullet Cluster data is not necessarily an insurmountable barrier to explanations of dark matter phenomena via the equations of gravity. So, the primary alternative to the dark matter hypothesis is not entirely dead.

Despite Deur's obscurity, and the lack of consensus regarding his findings (although I am not aware of any categorical refutation of his conclusions either), as the parameter space of potential dark matter candidates grows more narrow and requires more elaborate new physics to explain it, and as potential signals of direct dark matter detection continue to be false alarms, Occam's razor is beginning to favor gravitational alternatives to dark matter.

A Northeast Asian Leif Erikson In Peru?

Five hundred years before Columbus, and contemporaneous with the Viking Vinland settlement in maritime Canada, new genetic evidence points to a small population with Northeast Asian genetic affinities in Peru, another chapter in the fascinating Y1K moment in world history.

Japanese physical anthropologist Ken-ichi Shinoda performed DNA tests on the remains of human bodies found in the East Tomb and West Tomb in the Bosque de Pomas Historical Sanctuary, which are part of the Sican Culture Archaeological Project, funded by Japan’s government. The director of the Sican National Museum, Carlos Elera, told the daily that Shinoda found that people who lived more than 1,000 years ago in what today is the Lambayeque region, about 800 kilometers (500 miles) north of Lima, had genetic links to the contemporaneous populations of Ecuador, Colombia, Siberia, Taiwan and to the Ainu people of northern Japan.

Neither incursion from the Old World seems to have had much of a demic impact on the Americas, and it isn't obvious that any of these Asian settlers made a round trip back to Asia to share their discoveries either. The only migration in the Y1K era that stuck in the New World was the circumpolar Inuit migration. A global cold snap took out the rest.

Thursday, August 14, 2014

Beta Decay Rates Vary Seasonally Due To Solar Neutrinos

A new study analyzing beta decay measurements from 1991-1996 detects seasonal variations in beta decay rates with the same amplitude and phase over the course of the year across many different radioactive isotopes (the statistical significance of the result is eleven sigma).

A number of prior studies have seen hints of seasonal variation in beta decay rates, but none have been so definitive.

The researchers suggest an impact from solar neutrinos, whose flux through Earth varies seasonally, as a likely cause of the effect.  They also consider why the methodology used in the experimental results they studied was more sensitive to these effects than other experiments, which have not disclosed such a clear trend.

Footnote: The result also suggests an engineering application - either bombarding atoms with neutrinos, or shielding them from neutrinos, to tweak their beta decay rates, either making stable isotopes radioactive, or making unstable isotopes temporarily stable while in those conditions.

Tuesday, August 12, 2014

Dineutron Bound States

A neutron is a composite particle composed of one up quark and two down quarks. It is a wee bit heavier than the combined rest masses of a proton (which is composed of two up quarks and one down quark), an electron, and the lightest anti-neutrino. The rest mass of a neutron is 939.565378(21) MeV/c^2.

A proton that is not bound into an atomic nucleus is stable (its mean lifetime is at least 1.29*10^34 years, which implies that no more than about 1 in 10^24 protons would have decayed over the entire 1.38*10^10 year lifetime of the universe), a result which follows naturally in the Standard Model from conservation of baryon number, quark confinement, and the fact that the proton is the baryon with the lowest rest mass.
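
As a quick back-of-the-envelope check of that inference (a minimal sketch; at these scales the exponential decay law is linear to an excellent approximation):

```python
import math

# With a mean proton lifetime of at least 1.29*10^34 years, what fraction of
# protons could have decayed over the 1.38*10^10 year lifetime of the universe?
TAU_PROTON_YEARS = 1.29e34    # lower bound on the mean lifetime
AGE_UNIVERSE_YEARS = 1.38e10

# expm1 avoids catastrophic cancellation for exponents this tiny.
fraction_decayed = -math.expm1(-AGE_UNIVERSE_YEARS / TAU_PROTON_YEARS)
print(f"fraction decayed <= {fraction_decayed:.2e}")  # ~1.1e-24, i.e. ~1 in 10^24
```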

When a neutron is not bound into an atomic nucleus and is at rest, it has a mean lifetime of about 14 minutes and 42 seconds +/- 1.5 seconds (which is equivalent to a half-life of 10 minutes and 11 seconds +/- 1.0 seconds), and it naturally decays via the weak force into a proton, an electron, and an electron anti-neutrino, each of which carries kinetic energy (and, in about one in 1,000 cases, electromagnetic energy is also released in the form of a photon). The kinetic energy and the photon energy combined equal the difference between the rest mass of the neutron and the combined rest masses of the proton, electron and electron anti-neutrino, times the speed of light squared, which turns out to be about 0.782343 MeV.
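
The same arithmetic in code form (a minimal sketch using PDG-era rest masses; the anti-neutrino mass is negligible at this precision, and the final digits depend on which mass determinations are used, which is why this differs slightly in the last decimal places from the figure quoted above):

```python
# Q-value of free neutron beta decay: n -> p + e- + anti-nu_e.
# Rest masses in MeV/c^2; the anti-neutrino mass is treated as zero.
M_NEUTRON = 939.565379
M_PROTON = 938.272046
M_ELECTRON = 0.510999

q_value = M_NEUTRON - (M_PROTON + M_ELECTRON)  # energy released, in MeV
print(f"Q-value: {q_value:.6f} MeV")  # ~0.782 MeV of kinetic (and photon) energy
```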

This mean lifetime is incredibly long compared to that of every other unstable particle in physics (except oscillating neutrinos). No other baryon (except the proton) has a mean lifetime of more than 10^-10 seconds. No meson has a mean lifetime of more than about 10^-8 seconds (the record holder being the charged pion). The mean lifetimes of the muon (about 10^-6 seconds) and of the tau lepton (about 10^-13 seconds) are far shorter, and the mean lifetimes of unhadronized top quarks, the W boson, the Z boson and the Higgs boson are all much less than 10^-20 seconds. Gluons don't have a fixed lifetime, per se, but they are typically exchanged between other confined color-charged particles on similar time frames, because they travel at speeds approximating the speed of light (a bit less than 3*10^8 meters per second) over distances on the order of the proton and neutron charge radius (i.e. about 0.8*10^-15 meters), a process far quicker than the mean muon lifetime.

The process by which free neutrons normally decay, called beta decay, is what causes nuclear radiation. Beta decay also occurs, although at lower rates, when neutrons are bound in an atomic nucleus in which the number of neutrons is high in relation to the number of protons.

In contrast, when protons are bound with a number of neutrons that produces a stable atomic isotope, beta decay does not occur and neutrons are stable. In part, this is because the decay of neutrons in a stable nucleus is offset by "inverse beta decay", in which an energetic proton converts into a neutron while emitting a positron (i.e. an anti-electron) and an electron neutrino, and in part by electron capture, in which an energetic proton and an electron (perhaps one created in the ordinary decay of neutrons in the same atomic nucleus) merge to form a neutron and an electron neutrino (the anti-matter counterpart of the electron anti-neutrino produced in ordinary beta decay). There are also higher order (i.e. much less frequent) possible paths by which energetic neutrons and protons can form other kinds of hadrons, with or without leptonic decay products.

Empirically, the neutron to proton ratio needed to make an atomic nucleus stable is between 1 and 1.537 (gradually increasing with larger atomic numbers), except in the degenerate cases of hydrogen-1 (a bare proton without a neutron) and helium-3 (two protons and one neutron). As Wikipedia explains:

Neutron-proton ratio (N/Z ratio or nuclear ratio) is the ratio of the number of neutrons to protons in an atomic nucleus. The ratio generally increases with increasing atomic numbers due to increasing nuclear charge due to repulsive forces of protons. Light elements, up to calcium (Z = 20), have stable isotopes with N/Z ratio of one except for beryllium (N/Z ratio=1.25), and every element with odd proton numbers from fluorine to potassium. Hydrogen-1 (N/Z ratio=0) and helium-3 (N/Z ratio=0.5) are the only stable isotopes with neutron–proton ratio under one. Uranium-238 and plutonium-244 have the highest N/Z ratios of any primordial nuclide at 1.587 and 1.596, respectively, while lead-208 has the highest N/Z ratio of any known stable isotope at 1.537.
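
A quick check of the ratios quoted above, computed from standard proton and mass numbers (textbook nucleon counts, nothing controversial):

```python
# N/Z ratios for the nuclides mentioned in the quoted passage.
# Each entry is (Z = proton count, A = mass number); N = A - Z.
nuclides = {
    "hydrogen-1": (1, 1),
    "helium-3": (2, 3),
    "lead-208": (82, 208),
    "uranium-238": (92, 238),
    "plutonium-244": (94, 244),
}
for name, (z, a) in nuclides.items():
    n = a - z  # neutron count
    print(f"{name}: N/Z = {n}/{z} = {n / z:.3f}")
# Prints 0.000, 0.500, 1.537, 1.587, 1.596 - matching the quoted figures.
```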

One can imagine an atomic nucleus which has zero protons and two neutrons, which is called a "bound dineutron". An atomic nucleus with no protons and an arbitrary number of neutrons is known, in general, as neutronium. A bound dineutron, like a free neutron, would have no electrons and would therefore be chemically inert. It would have a rest mass of about 1.879 GeV, before adjusting for the nuclear binding energy of the two neutrons. Since the strong nuclear force and the weak nuclear force act only at short ranges, at distances greater than the radius of a typical helium nucleus it would be collisionless and influenced only by gravity. Thus, if it were stable, it would be an excellent cold dark matter candidate.

Dineutron states were observed in 2012, but these were transitory states that were not quite bound, rather than a particle made up of bound neutrons which could in principle be stable. (Incidentally, unlike some neutral mesons, neutrons do not oscillate between neutrons and their anti-matter counterparts, which follows quite naturally from the conservation of baryon number in the Standard Model.) There appears to be a modest shortfall of nuclear binding force between a bound atomic nucleus and the dineutron state, but one could imagine some special factor that is ignored in other circumstances but is material in a nearly trivial two neutron system. For example, at the high energy scales present in Big Bang nucleosynthesis, the running of the Standard Model parameters with energy scale might permit the existence of bound dineutron states. Similarly, lattice QCD studies with higher than physical pion masses (also here) find that bound dineutrons can exist. And other studies can't definitively rule out their existence; dineutrons might even play a role in determining alpha decay rates (although experimental evidence pretty clearly rules out truly stable dineutrons).

There also appear to be a set of phenomena common to not quite bound systems of baryons.

CKM and PMNS Matrix Numerology

The mixing angles of the CKM matrix, which governs weak force flavor changing probabilities for quarks, and of the PMNS matrix, which governs neutrino oscillation, show the following empirical relationship, to the limits of current experimental precision:

(θ12^PMNS/θ12^CKM) * (θ23^PMNS/θ23^CKM) = (θ13^PMNS/θ13^CKM).

This tends to imply that the probability of a first-to-third generation transition in the PMNS matrix is a function of the probabilities of a first-to-second generation transition and of a second-to-third generation transition, in the same way that it is in the CKM matrix.

This, in turn, suggests that the three mixing angles in each of the matrices actually involve only two degrees of freedom (i.e. the three mixing angles are not fully independent of each other). If this were true, the number of mixing matrix parameters of the Standard Model would be six instead of eight.

This is also mathematically equivalent to the following:

(θ12^PMNS * θ23^PMNS)/(θ12^CKM * θ23^CKM) = (θ13^PMNS/θ13^CKM).
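
As a rough numerical sanity check, here is the relation evaluated with illustrative circa-2014 central values of the mixing angles, in degrees (these inputs are my own assumptions for the sketch, not a global fit, and the CKM angles in particular carry meaningful uncertainties):

```python
# Does (th12_PMNS/th12_CKM) * (th23_PMNS/th23_CKM) ~= th13_PMNS/th13_CKM?
# Illustrative mixing angles in degrees (assumed central values, not a fit).
ckm = {"th12": 13.04, "th23": 2.38, "th13": 0.201}
pmns = {"th12": 33.6, "th23": 42.0, "th13": 8.9}

lhs = (pmns["th12"] / ckm["th12"]) * (pmns["th23"] / ckm["th23"])
rhs = pmns["th13"] / ckm["th13"]
print(f"LHS = {lhs:.1f}, RHS = {rhs:.1f}")  # ~45.5 vs ~44.3 for these inputs
```

For these particular inputs, the two sides agree to within a few percent, comfortably within the experimental uncertainties in the underlying angles.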

In other neutrino physics, a new study claims to rule out (light) sterile neutrinos based upon cosmology data.

Friday, August 8, 2014

The Case Against Multiple Higgs Bosons

Paul H. Frampton (notorious in his personal life, but respected as a physicist) and Thomas W. Kephart have a new pre-print out related to the fact that the sum of the squares of the Standard Model fermion masses is equal to one half of the square of the Higgs vacuum expectation value, a subject that I discussed in my most recent post and in others. That relationship is not inconsistent with experimental data, although it isn't confirmed to particularly high precision either, since the margin of error in the top quark mass estimate is significant.

In particular, they illustrate how this relationship of the fermion masses to the Higgs vacuum expectation value, and the predicted decays of the Standard Model Higgs boson, imply that any theory with more than one scalar Higgs boson that couples to Standard Model particles cannot reproduce the Standard Model Higgs boson's decay properties.  Yet, multiple Higgs bosons are a generic prediction of all supersymmetry (SUSY) theories.  So, to the extent that the Higgs boson decays in the manner predicted by the Standard Model, SUSY theories are ruled out.

Frampton explains the argument less technically in a guest blog post here.

Basically, so long as the W boson mass is a function of the Higgs vacuum expectation value, and the Higgs boson's branching fraction to a given Standard Model fermion is a function of that fermion's mass, if the Yukawa coupling of a fermion is too low, then the Higgs vev must be higher, and the W boson mass will no longer conform to the experimental data.
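
For reference, the tree-level Standard Model relations that drive this argument are as follows (standard textbook formulas, stated here as a reminder rather than quoted from the pre-print):

```latex
% Tree-level Standard Model relations among the W boson mass, the fermion
% masses, the Yukawa couplings y_f, the vev v, and the Higgs decay widths.
\begin{align}
  m_W &= \tfrac{1}{2}\, g\, v, \qquad
  m_f = \frac{y_f\, v}{\sqrt{2}}, \\
  \Gamma(H \to f\bar{f}) &\propto \frac{m_f^2}{v^2}\, m_H
  \quad \text{(up to color and phase-space factors)}.
\end{align}
```

The measured W boson mass pins down v (given the weak coupling g), and the measured fermion masses then pin down the Yukawa couplings, so splitting those couplings among multiple Higgs bosons generically spoils the predicted branching fractions.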

As measurements of those branching fractions grow more precise, new physics theories with multiple Higgs bosons are increasingly disfavored.

Tuesday, August 5, 2014

Balancing Standard Model Fermion and Boson Squared Masses

The sum of the squares of the pole masses of the Standard Model fermions (the quarks, charged leptons and neutrinos) plus the sum of the squares of the pole masses of the Standard Model bosons (the W, the Z and the Higgs) is almost exactly equal to the square of the Higgs vacuum expectation value given current experimental data.  Equivalently, the coefficients that are multiplied by the square of the Higgs vacuum expectation value to get the squares of the fundamental particle masses in the Standard Model sum to one.

Using these values, the contribution from the fermions is about 2-3% smaller than the contribution from the bosons.  But, in the Standard Model, particle masses run with energy scale.  In general, at higher energy scales, fermion masses get lower, but the boson masses (or at least the Higgs boson mass) fall much more rapidly.  As I've mused before, the energy scale at which the sum of the squares of the fermion masses is equal to the sum of the squares of the boson masses in the Standard Model may be a natural energy scale with some sort of significance.
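
A minimal sketch of that arithmetic, using round 2014-era pole masses in GeV (the light quarks, light charged leptons and neutrinos contribute negligibly at this precision, and the percentages shift noticeably with the assumed top quark mass):

```python
# Squared-mass sum rule: do the fermion and boson squared masses sum to v^2?
# Round 2014-era pole masses in GeV; lighter fermions are negligible here.
fermions = {"top": 173.3, "bottom": 4.18, "tau": 1.777, "charm": 1.275}
bosons = {"W": 80.385, "Z": 91.1876, "Higgs": 125.9}
v = 246.22  # Higgs vacuum expectation value, in GeV

f2 = sum(m**2 for m in fermions.values())
b2 = sum(m**2 for m in bosons.values())
print(f"sum of fermion masses^2 = {f2:.0f} GeV^2")  # ~30055
print(f"sum of boson masses^2   = {b2:.0f} GeV^2")  # ~30628
print(f"(f2 + b2) / v^2 = {(f2 + b2) / v**2:.4f}")  # ~1.0010
print(f"f2 / b2 = {f2 / b2:.3f}")  # ~0.981, i.e. fermions ~2% smaller
```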

The Higgs boson self-coupling constant, lambda, which at tree level is fixed by the Higgs boson mass and the vev (see the check below), falls by about 50% from its value of 0.13 at the 125-126 GeV energy scale by 10,000 GeV (i.e. 10 TeV), and falls to zero in the vicinity of the GUT scale.
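
The 0.13 figure follows from the tree-level relation m_H^2 = 2*lambda*v^2 in the usual convention (a one-line check):

```python
# Tree-level Higgs self-coupling: lambda = m_H^2 / (2 * v^2), usual convention.
m_h = 125.9  # GeV, 2014-era Higgs boson mass
v = 246.22   # GeV, Higgs vacuum expectation value
print(f"lambda at the electroweak scale: {m_h**2 / (2 * v**2):.3f}")  # ~0.131
```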

The running of the W and Z boson masses should correspond to the running of the coupling constants g and g' in the W and Z boson mass formulas, which are related to the weak and electromagnetic force coupling constants; these run in opposite directions from each other at higher energy scales and converge at about 4*10^12 GeV.

In contrast, the running of the charged lepton masses is almost 2% from the Z boson mass to the top quark mass, and just 3.6% over fourteen orders of magnitude.  Quark masses also run more slowly than the Higgs boson mass and the Higgs vev (which also runs with energy scale).

Eyeballing the numbers, it looks like this crossover is in the rough vicinity of the top quark mass (about 173.3 GeV in the latest global average) and the Higgs vacuum expectation value (about 246.22 GeV).  Certainly, this equivalence is reached somewhere around the electroweak scale, and surely below 1 TeV.  An equivalence at the Higgs vev would be particularly notable and is within the range of possibility.

Sunday, August 3, 2014

Finland From Its Prehistory To The Modern Era (UPDATED August 8, 2014)

Razib Khan has a nice little post summing up, among other things, the broad outlines of the prehistory of Finland and the means by which it arrived at its present population genetic and linguistic composition.

For full disclosure, I note that I am 50% Swede-Finn (my Lutheran Swede-Finn ancestors, who are on my maternal side, migrated to the Upper Peninsula of Michigan in the late 1800s), and it is possible that my account may be influenced by this fact. Some key points (with some of my own additions):

Mesolithic Finland

Finland was repopulated after the Last Glacial Maximum (which covered all of Northern Eurasia in an ice sheet ca. 20,000 BCE and completely ended modern human and any other hominin occupation of this region) only around 9,000 BCE, in an era sometimes called the Mesolithic or Epipaleolithic era. (Some sources indicate a date closer to 7300 BCE, which would be about 9300 BP; the discrepancy could be caused by misunderstandings regarding the units in which dates are quoted.)

Mesolithic Finnish Genetics

Uniparental genetics suggest that the source population for the original resettlement, which has left a strong serial founder effect imprint on the Saami population (albeit with strong Uralic contributions), made its way up the Atlantic coast (e.g. the Dutch and Belgian coast), probably from a Franco-Cantabrian refugium. As that refugium's population expanded southward as well, it also contributed to the proto-Berber populations of what is now Algeria and Morocco, with which the Saami share two of their three predominant mtDNA haplogroups.

The rare mtDNA haplogroup V is found all along the maritime path of this group of Mesolithic people's expansion (although later studies have shown mtDNA V in Eastern and Central Europe as well).  For example, a new study documents a Kurgan burial of an individual from the Novosvobodnaya culture, in a part of what is now Southern Russia in the Northern Caucasus mountains, around 3,000 BCE, with mtDNA haplogroup V7.  The author of this new paper argues from this one ancient DNA sample that the mtDNA V7 points to a link with the contemporaneous Funnelbeaker culture (often abbreviated TRB from its German spelling) further North, which was characterized by herding, fishing, marginal grain farming, and the use of imported copper but not bronze.  He does so on the grounds that there are some indications of cultural linkage between the two cultures, and that mtDNA V is found in some remains from the Linear Pottery culture (often abbreviated LBK), who were the first farmers of the region and from whom the author argues that the TRB is derived.  (The lack of references to ancient TRB DNA to back up this claim, and reliance on a very outdated 1999 paper from Brian Sykes for the largely disproven hypothesis that most European mtDNA haplogroups have been widespread in Europe since the Upper Paleolithic era as a foundation for his analysis, undermine confidence in his conclusion, however.)  Also, who is to say that mtDNA V didn't introgress from pre-existing Mesolithic European hunter-gatherers, of whom the early Finns were a part, into farmers on the frontier of the Neolithic revolution, rather than the other way around?

The other leading mtDNA haplogroup in the Saami is U5b, which is common in all European hunter-gatherer populations. But Saami U5b is predominantly U5b1b1 (284 out of 292 individuals with U5b, with all 8 outliers located in Southern Sweden at the fringe of the area where Saami people are found), compared to about 20% of Continental Europeans with mtDNA U5b.

Disregarding the editorial white lines of distribution provided by the author (source here), I would say that the U5b1b1 distribution is quite a good fit for the coastal migration route.

On the Y-DNA side, one of the three most common Y-DNA haplogroups of the Finns, I1, is associated with the pre-Neolithic population of Europe (or, more precisely, with its sister Y-DNA haplogroup I2), much like mtDNA U5b and, to a lesser extent, mtDNA V (which is also found in ancient DNA from some early Neolithic populations in Europe, although possibly as a result of the first farmers of Europe taking wives from prior hunter-gatherer populations).

These people may have been the source population designated "Western European Hunter-Gatherers" based on ancient autosomal DNA, one of the three populations, together with "Early European Farmers" and "Ancestral Northern Eurasians", that were the main contributors to modern European population genetics in most of Europe (although, as explained below, Finland had another, Uralic population that contributed to its population genetics).

Mesolithic Archaeological Cultures In Finland

From 9000 BCE (or 7300 BCE if the earlier date is inaccurate) until about 2500 BCE, Finland was inhabited by the descendants of these people, who fed themselves mostly by fishing, supplemented by hunting and gathering, without farming or herding.

Ultimately, the Comb Ceramic culture, which was closely akin to the Pitted Ware people of mainland Europe nearby on the Baltic Sea, emerged from these people around 5300 BCE, as they engaged in trade with Neolithic cultures to the South.
For example, flint from Scandinavia and the Valdai Hills, amber from Scandinavia and the Baltic region and slate from Scandinavia and Lake Onega found their way into Finnish archeological sites, while asbestos and soap stone from e.g. the area of Saimaa spread outside of Finland. Rock paintings—apparently related to shamanistic and totemistic belief systems—have been found, especially in Eastern Finland, e.g. Astuvansalmi.

Farming and Herding Arrive In Finland

Around 2500 BCE (or perhaps as late as 2300 BCE), Finland's Comb Ceramic maritime culture merged with the linguistically Indo-European, Corded Ware culture derived Battle Axe culture from the south (which made its first inroads into Southwest Finland ca. 3200 BCE) to give rise to the Kiukainen culture. The Battle Axe culture brought dairy farming and greatly disrupted the local culture, although less than ideal climate conditions impaired the desirability of the area for herding and farming, which ultimately led to a resurgence of the maritime fishing peoples' contribution to the local culture, so that both cultures made significant contributions.

It hadn't been clear until this week how much of a contribution this archaeological culture made to the more northerly Saami people, but the sudden appearance of dairy farming, as evidenced by residues on pottery appearing around 2500 BCE even above the Arctic Circle, puts to rest the possibility that dairying didn't reach the far northern reaches of Finland at that time.  It is now clear that almost everyone in the region was affected as the Neolithic revolution finally reached Finland, even if some populations subsequently reverted to a different subsistence mode.

Indo-European Farmer Genetics

The Battle Axe culture is the likely source of Y-DNA haplogroup R1a in Finns, and possibly also of mtDNA H, most of which arrived at this time (although the antiquity of mtDNA H in ancient DNA from the region suggests that this haplogroup could have arrived in the Mesolithic era, rather than the Neolithic).

Notably, in Japan, whose Jomon culture was also maritime fishing based before the arrival of the rice farming Yayoi with their horse riding warriors, the indigenous maritime culture likewise made a considerable population genetic contribution, rather than experiencing the predominant replacement that terrestrial hunter-gatherers saw elsewhere.  More sedentary fishing cultures may have had more staying power vis-a-vis early farmers than terrestrial hunters and gatherers.

The Case For An Indo-European Language Shift

At this point in time, the population of Finland probably came to speak an Indo-European pre-Slavic Baltic language. (A Finnish linguist argues that this Corded Ware language was proto-Germanic, but in my view a Baltic language is more likely.)

The ancestral language of the Comb Ceramic and Pitted Ware peoples was probably lost at this time in what was one of its last outposts in Europe (after earlier episodes of Neolithic replacement or conquest in Continental Europe starting ca. 5500-4600 BCE), except for some place names and perhaps some twice-removed substrate influences.

This first farmer Neolithic era probably persisted until the Finnish Bronze Age (ca. 1500 BCE-500 BCE).

The Finnish Bronze Age

Another Language Shift

Sometime after 2500 BCE, Finland received an influx of a Uralic language speaking population from Siberia, not closely related to either of its source populations.

This influx gave rise to a language shift during which the Finnish language (or at least the Finno-Saami language family) probably emerged. The Finnish Bronze Age exactly coincides with a statistical estimate of the time of origin of the Finno-Saami branch of the Uralic languages.

Realistically, the Uralic language probably arrived as part of the advent of the Finnish Bronze Age ca. 1500 BCE, which arrived with Bronze-using cultures from Northern and Eastern Russia according to Wikipedia, which is in accord with this source regarding the earliest appearance of Bronze artifacts being located inland. But an unsourced Internet resource claims a Western rather than an Eastern source for the Finnish Bronze Age, and the appearance of the Western practice of cremation around this time in Finland argues for a Western source in coastal areas.  These differences of opinion regarding the origin of the Finnish Bronze Age had nationalist implications for Finland.

Like Razib, I disagree with the Wikipedia analysis that, following older scholarship, puts the arrival of the Uralic languages in Finland ca. 4000 BCE.  The Wikipedia analysis acknowledges the possibility that this is inaccurate, but suggests an Iron Age arrival for Uralic, which I suspect is too late, although a differentiation of Finnish from neighboring Uralic languages may date to the Iron Age ca. 500 BCE.

The Genetic Impact of Bronze Age Uralic Migration Into Finland

Razib notes that they left an east Asian autosomal genetic contribution (about 5%-8% in Saami populations today), an mtDNA legacy (haplogroups D5 and Z, which make up a similar percentage of Saami mtDNA, with the mtDNA Z1 subclade contribution dated to 2,000-3,000 years ago by mtDNA mutation rate dating, i.e. the Iron Age), and a Y-DNA legacy (e.g. Y-DNA haplogroups N1b and N1c1). This legacy is distinct from the "ancestral North Asian" autosomal genetic legacy in Europe (which may have arrived with the first and/or subsequent waves of Indo-Europeans together with, for example, Y-DNA haplogroup R1a).

About the Uralic Language Family

The Uralic language family is shared by the people of Estonia, the Karelian region of Russia, and the Saami people of Finland (as well as the small Livonian minority of Latvia). A more distant branch of this language family is Hungarian, and another more distant branch is shared by many ethnic populations of the Volga region and Siberia, including the Mari, who are arguably the last population to have continuously practiced the pagan religions of Northern Eurasia into the present.

Some linguists, such as Michael Fortescue writing in 1998, have argued that the Eskimo-Aleut languages, including Inuit, are a sister language family to the Uralic languages, together forming a circumpolar mega-language family.  Morris Swadesh in 1962, Holst in 2005, and others dating back to 1746 CE have made similar proposals.

From a historical timing perspective, however, the Saqqaq culture (the first wave of Arctic Paleo-Eskimos), which was present ca. 2500-2000 BCE (the best ancient DNA example of which had Y-DNA Q1a, mtDNA D2a1, and autosomal DNA similar to that of the modern Koryak people of the Northeast Asian coast), or the Dorset culture (the second wave of Arctic Paleo-Eskimos), would be a better match, and perhaps left a substrate influence on the later Eskimo-Aleut languages that arrived with the Thule around 1000 CE (although there are genetic indications that these earlier populations were almost totally replaced in the Americas). Inuits lack the Y-DNA N1b and N1c found in modern Finns. According to the ancient DNA samples, they contributed mostly mtDNA haplogroup A (which is absent in the Finns) to the existing indigenous American mtDNA pool, while the Dorset contributed mostly mtDNA D3 (found in modern Paleo-Siberian populations and a sister clade to Finnish and Asian mtDNA D5).

The Bronze Age Transition Was Demographically More Important Than The Iron Age Transition

The fact that the Finnish Bronze Age arrived from the Uralic heartland with a probable demic component, while the Finnish Iron Age appears to have included more cultural exchange than demic migration, together with the likely powerful capacity of a metal age culture to overwhelm a pre-metal age culture militarily, argues for an arrival ca. 1500 BCE, rather than earlier or later.  Certainly, this Siberian-centered linguistic family would not have appeared at a time when the cultural influences in Finland were from Rome or Western Europe, as they were from 0 CE onward.

One archaeological site from the period is this one, and a number of Finnish Bronze Age papers from 2009 can be found here.

The Finnish Iron Age and the Middle Ages

The Finnish Iron Age began around 500 BCE (several hundred years later than in the Mediterranean) and, around 0 CE, began to show the influence of trade with the Roman Empire, which persisted until about 400 CE. The Migration Period, during which there were mass folk migrations of "barbarian" Germanic tribes like the Goths and the Visigoths through Europe, extended to Finland's Iron Age from ca. 400 CE to 575 CE, and showed increasing Germanic influence in cultural artifacts.

The Migration period and the Merovingian period that followed also coincide historically with Slavic expansion into what is now Orthodox Christian Eastern Europe.

From 575 CE to 800 CE, "The Merovingian period in Finland gave birth to distinctive fine crafts culture of its own, visible in the original decorations of domestically produced weapons and jewelry. Finest luxury weapons were, however imported from Western Europe. The very first Christian burials are found from the latter part of this era as well. The Leväluhta burial findings suggest that the average height of a man was 158 cm [i.e. 5'2"] and that of a woman was 147 cm [i.e. 4'10"]."

Trade with the linguistically Germanic (Indo-European) Vikings (many from Sweden, some of whom started to colonize Finland) followed from 800 CE to 1025 CE, during which hill forts started to be erected in Southern Finland in the earliest signs of urbanization.

The Christianization of Finland began in earnest around 1150 CE which was also around the time that Finland begins to appear in the written historic record. Swedish colonization efforts directed at Finland were stepped up in Northern Crusades in the early 1200s CE, bringing with them the other of Finland's two major languages and adding a significant Swedish population genetic component to the overall mix. According to Wikipedia the story then continued as follows:
In the early 13th century, Bishop Thomas became the first bishop of Finland. There were several secular powers who aimed to bring the Finns under their rule. These were Sweden, Denmark, the Republic of Novgorod in Northwestern Russia and probably the German crusading orders as well. Finns had their own chiefs, but most probably no central authority. Russian chronicles indicate there were conflict between Novgorod and the Finnic tribes from the 11th or 12th century to the early 13th century.

The name "Finland" originally signified only the southwestern province that has been known as "Finland Proper" since the 18th century. Österland (lit. Eastern Land) was the original name for the Swedish realm's eastern part, but already in the 15th century Finland began to be used synonymously with Österland. The concept of a Finnish "country" in the modern sense developed only slowly during the period of the 15th–18th centuries.

It was the Swedish regent, Birger Jarl, who established Swedish rule in Finland through the Second Swedish Crusade, most often dated to 1249, which was aimed at Tavastians who had stopped being Christian again. Novgorod gained control in Karelia, the region inhabited by speakers of Eastern Finnish dialects. Sweden however gained the control of Western Karelia with the Third Finnish Crusade in 1293. Western Karelians were from then on viewed as part of the western cultural sphere, while eastern Karelians turned culturally to Russia and Orthodoxy. While eastern Karelians remain linguistically and ethnically closely related to the Finns, they are considered a people of their own by most. Thus, the northern border between Catholic and Orthodox Christendom came to lie at the eastern border of what would become Finland with the Treaty of Nöteborg in 1323.

During the 13th century, Finland was integrated into medieval European civilization. The Dominican order arrived in Finland around 1249 and came to exercise huge influence there. In the early 14th century, the first documents of Finnish students at Sorbonne appear. In the south-western part of the country, an urban settlement evolved in Turku. Turku was one of the biggest towns in the Kingdom of Sweden, and its population included German merchants and craftsmen. Otherwise the degree of urbanization was very low in medieval Finland. Southern Finland and the long coastal zone of the Bothnian Gulf had a sparse farming settlement, organized as parishes and castellanies. In the other parts of the country a small population of Sami hunters, fishermen and small-scale farmers lived. These were exploited by the Finnish and Karelian tax collectors.
Trade exchanges during the early Iron Age probably did not lead to language shift, and even the influx of Swedish and other Scandinavian colonists starting around 1200 CE only led to a bilingual situation, with coastal colony towns speaking Germanic Swedish dialects with a Finnish substrate, while Finnish and Saami remained living languages elsewhere.

Finland In The Early Modern Era

Swedish domination would continue for centuries. In the Reformation, the Swedes sided with the Protestant Lutherans against the Catholics, and had their own round of witch hunting in the 1600s, as well as a short-lived effort to establish a Swedish colony in America near the Delaware-Pennsylvania area from 1638-1655 CE, with at least half of the colonists coming from Finland.

Very hard times followed for the next quarter century, resulting in the death of a third of the population in a four year long famine, followed by the death of nearly half of the remaining population in a twenty-one year long war.  The population of Finland fell by about two-thirds in a single generation.
In 1696–1699, a famine caused by climate decimated Finland. A combination of an early frost, the freezing temperatures preventing grain from reaching Finnish ports, and a lackluster response from the Swedish government saw about one-third of the population die. Soon afterwards, another war determining Finland's fate began (the Great Northern War of 1700–21). The Great Northern War (1700–1721) was devastating, as Sweden and Russia fought for control of the Baltic. Harsh conditions—worsening poverty and repeated crop failures—among peasants undermined support for the war, leading to Sweden's defeat. Finland was a battleground as both armies ravaged the countryside, leading to famine, epidemics, social disruption and the loss of nearly half the population. By 1721 only 250,000 remained.
Constitutional monarchy with a powerful parliament followed in Sweden, but "Finland by this time was [still] depopulated, with a population in 1749 of 427,000." Potato farming (which was a dietary staple of my ancestors) arrived after the 1750s.

In 1809, Finland was annexed to Russia with the assent of a popular assembly.

Mass migration to the United States in the late 19th century was also accompanied by hard times at home in Finland.

Finland secured independence in 1917, followed by a brief civil war in 1918, as a result of the Russian revolution.