Tuesday, May 4, 2021

Greek Ancient DNA Fleshes Out The Historical Linguistics Narrative

Razib Khan and Bernard have posts on a new paper with Greek ancient DNA. Razib's summary hits the nail on the head:

1) the main pulse of Indo-Europeans, the proto-Greeks, arrived ~2300 BCE to “mainland Greece” (i.e., the north). This notwithstanding other earlier contacts noted in the text between the Pontic steppe and the Balkans

2) The Minoans and other peoples of the Aegean did not have this ancestry. This is not surprising. But, this work seems to confirm a likely pulse of ancestry into the Aegean ~4000 BCE with roots in eastern Anatolia and/or the Caucasus. This is a minority component, but seems correlated with the arrival of Y chromosomal group haplogroup J2, and has been detected as far west as Sicily.

3) The above component is related to the contributor to about half the ancestry among the Yamnaya samples. But, the Yamnaya samples themselves are about half “Eastern Hunter-Gatherer” (EHG), which itself can be decomposed as 25% “Western Hunter-Gatherer” (WHG) and 75% “Ancient North Eurasian” (ANE). This EHG component was lacking entirely in the Minoans of the Bronze Age and is lacking in modern Cypriots (who are mostly ethnically Greek). In contrast, the EHG component begins to increase in the Balkans during the late Neolithic.

4) There seems to have been a further dilution of the steppe component among the Bronze Age Greeks as they moved from the north to the south. The largest component of Greek ancestry then, and now, remains “Early European Farmer” (EEF), related to and descended from “Anatolian Farmer” (AF).

5) Modern Greek samples have more steppe than late Bronze Age samples (Mycenaeans). I am confident this is due to early medieval Slav tribes, who moved as far south as the Peloponnese in large numbers. I’ve looked at a fair number of Greek samples, and some of them have way less steppe ancestry than others, with the latter matching those labeled “northern Greek” by the Estonian Biocentre dataset. I think many of these former are likely island Greeks from the Aegean or Greeks who descend from early 20th century migrants from Anatolia.
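
As a quick arithmetic aside on point 3, here is a minimal sketch (in Python) using the round numbers quoted above, which are approximations rather than fitted values, to make the implied Yamnaya ancestry components explicit:

# Round numbers from point 3 above (approximations, not fitted values).
yamnaya_ehg_share = 0.5   # Yamnaya are about half Eastern Hunter-Gatherer (EHG)
ehg_whg_share = 0.25      # EHG decomposes as ~25% Western Hunter-Gatherer (WHG)
ehg_ane_share = 0.75      # ... and ~75% Ancient North Eurasian (ANE)

# Implied shares of total Yamnaya ancestry.
yamnaya_whg = yamnaya_ehg_share * ehg_whg_share   # ~12.5%
yamnaya_ane = yamnaya_ehg_share * ehg_ane_share   # ~37.5%

print(f"Implied Yamnaya WHG share: {yamnaya_whg:.1%}")
print(f"Implied Yamnaya ANE share: {yamnaya_ane:.1%}")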

So, basically, you have a four wave model:

1. Anatolian Neolithic migrants.

2. Copper Age/Early Bronze Age migrants from West Asian highlands.

3. Indo-Europeans moving from north to south, petering out as they went.

4. Medieval Slavs (who are Indo-European), also on a north-to-south cline.

The West Asian highland wave largely didn't make it into Europe (apart from Sicily, a few other places in Italy, Malta, European Turkey, and Greece).

The presence of this substrate in the places where the Anatolian languages emerged, and its absence elsewhere, is, in my humble opinion, what makes the Anatolian Indo-European languages (most famously Hittite) so divergent, despite the fact that they are roughly contemporaneous with Mycenaean Greek, Sanskrit, Avestan Persian, Tocharian, and other ancient Indo-European macro-language families.

This West Asian highland wave also tends to support my hypothesis that there is a macro-language family associated with it that includes Minoan, Hattic, Hurrian, Kassite, and probably some of the Caucasian languages, in addition to providing a substrate for the Anatolian Indo-European languages.

I'd also guess that, at a time depth too great to reconstruct using standard linguistic methods, all or most of the Caucasian languages, Sumerian, and Elamite are also part of this macro-language family, for which ergative grammar is probably a good litmus test.

These data and Razib's analysis also suggest using Y-DNA J2 as a tracer of this language family's range, even in cases where subsequent language shift can't be ruled out, and as a way to narrow down which Caucasian languages were most likely associated with this wave (most likely the Northeast Caucasian languages like Chechen and Ingush, where Y-DNA J2 is present in a majority of men who speak those languages).

Map via Wikipedia

If Y-DNA J2 is a reliable tracer of this wave, then it also suggests that Basque and the extinct Vasconic languages are probably not associated with it. See, e.g., here. This favors the alternative hypothesis that Basque is derived from the language of the first farmers of Iberia, probably descended from the languages of West Anatolian early Neolithic farmers, rather than from languages related to, for example, Minoan, Hattic and Hurrian.


Map via Wikipedia


Map via Eupedia

Indeed, quite likely both the West Anatolian language family of the Neolithic first farmers and the West Asian highland languages of the early metal age were remote cousins within this macro-language family, with Y-DNA G (associated with the first wave of Anatolian farmers), J1 and J2 tracing the deep historic outlines of some of its main branches (with J1 populations eventually undergoing language shift or language evolution to Afro-Asiatic languages like Arabic, Coptic, Berber and the Ethiopian Semitic languages).

Further Reading

Wednesday, April 28, 2021

No Evidence Of Higgs Channel Dark Matter

The Results

The ATLAS experiment, using the full Run-2 dataset of 13 TeV LHC collisions, has looked for Higgs portal dark matter and reports in a new preprint that its search has come up empty.

The ATLAS results were an almost perfect match to the predicted Standard Model backgrounds in a search that used four different event selection criteria to make the result robust. All of the experimental results were within one sigma of the prediction, up to rounding to the nearest discrete measurable outcome.

This leaves no meaningful signal of a beyond the Standard Model Higgs portal dark matter decay. ATLAS compared its results to several different Higgs portal dark matter models that it considered as benchmarks.

The exclusion is stronger than that from direct dark matter detection experiments for dark matter particle masses under 2 GeV, for the dark matter model it examined. In the paper's words:

Additionally, the results for the 𝑍'B model are interpreted in terms of 90% CL limits on the DM–nucleon scattering cross section, as a function of the DM particle mass, for a spin-independent scenario. For a DM mass lower than 2 GeV, the constraint with couplings sin 𝜃 = 0.3, 𝑔𝑞 = 1/3, and 𝑔𝜒 = 1 placed on the DM–nucleon cross section is more stringent than limits from direct detection experiments at low DM mass, showing the complementarity between the several approaches trying to unveil the microscopic nature of DM.

As decisively as this study rules out Higgs portal dark matter, the analysis is actually generous toward the possibility of Higgs portal dark matter.

If there really were Higgs portal dark matter, it would also significantly throw off the share of all Higgs boson decays going to Standard Model predicted decay modes. That effect isn't analyzed in this paper, but it also is not observed at the level this kind of tweak to the Standard Model would cause.

The Paper

The paper and its abstract are as follows:

A search for dark-matter particles in events with large missing transverse momentum and a Higgs boson candidate decaying into two photons is reported. The search uses 139 fb−1 of proton-proton collision data collected at √s = 13 TeV with the ATLAS detector at the CERN LHC between 2015 and 2018. No significant excess of events over the Standard Model predictions is observed. The results are interpreted by extracting limits on three simplified models that include either vector or pseudoscalar mediators and predict a final state with a pair of dark-matter candidates and a Higgs boson decaying into two photons.

ATLAS Collaboration, "Search for dark matter in events with missing transverse momentum and a Higgs boson decaying into two photons in pp collisions at √s = 13 TeV with the ATLAS detector" arXiv:2104.13240 (April 27, 2021).

Implications For Dark Matter Particle Searches

This study doesn't entirely rule out Higgs portal dark matter. Higgs portal dark matter could conceivably appear only in decays that don't involve photon pairs, if you twist your model to have that property. But it continues the relentless march of the LHC to rule out the existence of new particles of several TeV mass or less that could be dark matter candidates on multiple fronts.

Between the LHC and direct dark matter detection experiments, there are fewer and fewer places in the parameter space where a dark matter particle candidate that has any interactions whatsoever with Standard Model matter could be hiding.

Yet truly "sterile" dark matter particle candidates that interact with ordinary matter only via gravity, whether or not they have dark sector self-interactions, are very hard to square with the observed tight alignment between inferred dark matter halo distributions and ordinary matter (as previous posts at this blog have explained).

Furthermore, as previous posts at this blog have also explained, dark matter candidates with masses above the several TeV that the LHC can search for are also disfavored.

These candidates are too "cold" (i.e., they would have too low a mean velocity) to fit the astronomy observations of galaxy dynamics and large scale structure if they were produced as thermal relics (for which dark matter mean velocity and dark matter particle mass are related via the virial theorem).

But conventional lambda CDM cosmology demands a nearly constant amount of dark matter for almost the entire history of the universe (and observations place significant model independent bounds on variations in the amount of inferred dark matter in the universe), even if dark matter particles are not produced as thermal relics. So, any dark matter candidate that is not a thermal relic has to be produced by some non-thermal process that remains in equilibrium in the current universe. There are really no plausible candidates for such a process that aren't ruled out observationally, however.

What Was ATLAS Looking For And Why?

As discussed below, one promising kind of dark matter candidate is Higgs portal dark matter. If Higgs portal dark matter exists, there should be decays producing a pair of photons plus something invisible, which should yield (for the reasons discussed below) a very clean experimental signal.

Some kinds of Higgs portal dark matter models also involve decays of beyond the Standard Model electrically neutral bosons, such as an additional heavy Z boson, called a Z', predicted in a variety of beyond the Standard Model theories, or a pseudo-scalar Higgs boson, called the A, predicted in "two Higgs doublet model" (2HDM) extensions of the Standard Model, which appear in a variety of beyond the Standard Model theories including essentially all supersymmetry theories.

By looking for missing energy (which is a typical way to search for new particles, as discussed below) in diphoton decays of Higgs bosons, the Large Hadron Collider (LHC) can search for Higgs portal dark matter, and the ATLAS Collaboration at the LHC did just that.

Why Search For Higgs Portal Dark Matter Candidates?

One class of dark matter candidates which has attracted a great deal of attention is "Higgs portal" dark matter candidates. These dark matter candidates would not have electromagnetic charge, strong force color charge, or weak force interactions, and hence would be nearly "sterile" dark matter candidates. But they would couple to the Higgs boson like all other massive Standard Model particles, which would provide a process by which these dark matter particles are created and would allow for very slight interactions (indirectly, through virtual Higgs boson loops) with ordinary matter. These are properties that we would generically like dark matter candidates to have, and this doesn't preclude the interaction of dark matter particles with each other via some force that only interacts with particles in the dark sector.

Thus, it could be "self-interacting dark matter" with a means of formation shortly after the Big Bang and a slight interaction with baryonic matter via virtual Higgs boson exchange, all of which are properties that a particle dark matter candidate would need in order to come close to reproducing reality.

Another desirable feature of Higgs portal dark matter candidates is that if they exist, they would be observable at sufficiently powerful particle colliders like the LHC.

Searches For Beyond The Standard Model "Invisible" Decay Products

The LHC has searched at the highest available energies for the production of particles that its detectors aren't able to see (e.g., because the decay products are long lived enough to exit the detector before decaying and/or are not electrically charged) in a wide variety of processes.

This is done by looking for "missing transverse momentum", i.e., by looking at a decay of a known Standard Model particle, adding up the mass-energy of the observed decay products, adding to the detected mass-energy the predicted mass-energy carried away by neutrinos, which are "invisible" to the detectors in a collider because they are electrically neutral and long lived, and adjusting the result for the fact that the detectors are known to miss a well quantified percentage of detectable particles produced in ordinary decays because the detectors aren't 100% perfect.

Missing transverse momentum is a generic signal of long lived, electrically neutral beyond the Standard Model particles, including dark matter candidates, so LHC experiments are constantly vigilant for statistically significant amounts of missing transverse momentum produced in their collisions.
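
For illustration, here is a minimal sketch (not ATLAS code; the object list and numbers are invented) of the basic idea: missing transverse momentum is essentially the negative vector sum of the transverse momenta of everything the detector did see, so a large imbalance points to something invisible recoiling against the visible particles.

import math

# Each visible object is summarized by its transverse momentum pT (GeV) and
# azimuthal angle phi (radians). These example values are invented.
visible_objects = [
    {"pt": 62.0, "phi": 0.4},    # e.g. a photon
    {"pt": 58.0, "phi": 0.9},    # e.g. a second photon
    {"pt": 25.0, "phi": -2.8},   # e.g. a jet
]

# Vector sum of the transverse momenta of everything that was detected.
sum_px = sum(obj["pt"] * math.cos(obj["phi"]) for obj in visible_objects)
sum_py = sum(obj["pt"] * math.sin(obj["phi"]) for obj in visible_objects)

# Whatever is needed to balance the event is the missing transverse momentum.
met = math.hypot(-sum_px, -sum_py)
print(f"Missing transverse momentum: {met:.1f} GeV")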

Why Look At High Energy Photon Pairs?

One of the less common decays of the Higgs boson, predicted by the Standard Model, is a decay to a pair of photons. It is an attractive one to study because it produces a very "clean" signal, and it was one of the Higgs boson decay channels central to the discovery of the Higgs boson in 2012 in the first place, even though other kinds of Higgs boson decays are much more common.

The signal is "clean" because there are very few "background" processes that generate photon pairs. The main other Standard Model particles, fundamental or composite, that produce photon-pair signatures are the Z boson, at about 91.19 GeV, and the neutral pion, at about 135 MeV. The frequency and character of the Standard Model predicted decays are very well characterized and understood, and putting a floor on the energy of the photon pairs considered in one's event selection easily screens out these decays.

The Standard Model Higgs boson has a mass of about 125 GeV, all of which is translated into photon energy in its Standard Model decay into photon pairs, so focusing on photon pairs with a combined invariant mass of 120 GeV to 130 GeV, as this study did, screens out essentially all of the non-Higgs backgrounds.
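
A toy version of that selection is sketched below (a minimal illustration with invented photon kinematics, not the actual ATLAS event selection, which also applies photon identification, isolation, and missing transverse momentum requirements):

import math

def diphoton_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass (GeV) of two effectively massless photons from pT, eta, phi."""
    return math.sqrt(2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

# Invented example photon pair (pT in GeV, eta dimensionless, phi in radians).
m_gg = diphoton_mass(60.0, 0.2, 0.15, 65.0, -0.3, 2.79)

# Keep only events whose diphoton mass falls in the Higgs window.
if 120.0 <= m_gg <= 130.0:
    print(f"m(gamma gamma) = {m_gg:.1f} GeV: candidate Higgs -> gamma gamma event")
else:
    print(f"m(gamma gamma) = {m_gg:.1f} GeV: rejected as non-Higgs background")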

Thursday, April 22, 2021

The Search For Intelligent Life Other Than On Earth

It would be a worldview and Copernican perspective revolution if we discovered intelligent life somewhere other than Earth. The Drake equation is the leading way of trying to estimate how common such a possibility might be.

One related question that receives less attention is the likelihood that, if we encounter evidence of intelligent life outside Earth, that form of life will already be extinct when we encounter it.

Without doing formal math, my intuition is that the probability of us encountering evidence of an extinct form of intelligent life before we encounter intelligent life that is not extinct is rather high.
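
For readers who haven't seen it, the Drake equation is simply a product of factors. Here is a minimal sketch; every parameter value below is an assumption chosen only for illustration (none of these numbers come from this post or from any measurement), and the civilization lifetime L is the term most relevant to the extinct-versus-extant intuition above.

# Drake equation: N = R* x f_p x n_e x f_l x f_i x f_c x L
# All parameter values are illustrative assumptions, not estimates I'm defending.
r_star = 1.5     # average rate of star formation in our galaxy (stars per year)
f_p    = 0.9     # fraction of stars that have planets
n_e    = 0.5     # potentially habitable planets per star with planets
f_l    = 0.1     # fraction of habitable planets that develop life
f_i    = 0.01    # fraction of life-bearing planets that develop intelligence
f_c    = 0.1     # fraction of intelligent species that emit detectable signals
L      = 10_000  # years such a civilization remains detectable

n_detectable = r_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Civilizations detectable right now (toy numbers): {n_detectable:.2f}")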

Wednesday, April 21, 2021

A Short Guide To Beyond The Standard Model Physics Proposals That Are Wrong

A June 4, 2019 paper from the LHC reviews a plethora of beyond the Standard Model theories, most of which are motivated by supersymmetry or grand unified theory (GUT) models. It is one of the more comprehensive and up to date available reviews of such models.

Every single one of these theories is completely wrong.

These theories are the Lotus eater traps of modern physics that should be dead on arrival and summarily rejected in any serious effort to understand our reality.

Not a single one of them can even reproduce all of the particles of the Standard Model without also creating particles that are not observed, or phenomena that have been experimentally ruled out.

The more compliant supersymmetry (SUSY) models are with experimental constraints, the less well they achieve the purposes for which they were designed.

It shocks me, to some extent, that it seems as if almost no one has even seriously tried to produce a model with no right handed neutrinos, no extra Higgs bosons, no other new fermions or bosons, no dark matter candidates, three generations of fundamental fermions, and that preserves baryon number and lepton number. 

But perhaps this is simply because the entire enterprise is doomed and ill-motivated by observational evidence in the first place, and it simply can't be done.

Some of these models explain charged lepton flavor universality violations (usually with leptoquarks), which is pretty much the only significant anomaly between the Standard Model and observation that hasn't been ruled out (or that doesn't seem likely to be ruled out soon based upon what we know). But even if the anomaly turns out to be real, my own sense is that the solutions proposed to explain it are almost certainly not the correct ones.

Friday, April 16, 2021

Gravity and Locality

This post is a restatement of a Physics Forums post answering the question "is gravity action at a distance or is it a local force?" and a specific subquestion related to string theory and loop quantum gravity, which I include in my answer:

Gravitational Waves and Hypothetical Gravitons Propagate At The Speed Of Light

The affirmative detection of gravitational waves by LIGO and other gravitational wave detectors, coinciding to within error bars with the photons from a neutron star merger, strongly supports the view that gravity is a local effect that propagates at the speed of light, rather than an instantaneous "at a distance" effect.

General relativity and every graviton-based quantum gravity theory adopt this position.

Localization Issue In Classical General Relativity And The Issues It Poses For Quantum Gravity

This said, however, there are issues like the localization of gravitational energy, and the self-interactions arising from gravitational fields, which general relativity as conventionally applied does not recognize.

Similarly, while individual gravitational interactions in general relativity conserve mass-energy, in general relativity with a cosmological constant the aggregate amount of mass-energy in the universe (apart from gravitational potential energy) increases with time at a global level. It is arguably possible to define a gravitational potential energy concept to address this, which IMHO doesn't really work, but which a minority of GR theorists say overcomes the mass-energy conservation issue.

The theoretical aspects of GR that disfavor localization or don't conserve mass-energy are particularly problematic when trying to quantize gravity, because a quantum particle based theory pretty much has to be a bottom up theory that derives all global properties from the individual local interactions that the theory permits.

Entanglement and Locality

In quantum gravity theories, it is conceivable that gravitons could become entangled with each other leading to correlations in the properties of the entangled particles analogous to those seen in quantum mechanics in other contexts.

But entanglement effects always involve particles that share a light-cone in space-time, and as a practical matter, while we can measure the correlated properties of entangled photons or other Standard Model particles, we do not have the capacity to measure the properties of individual gravitons, either individually, or statistically, so there is no way to resolve entanglement questions related to quantum gravity in the straightforward direct way that we do with Standard Model particles.

The gist of entanglement is that the correlations observed require the sacrifice of at least one of three axioms that we can usually resort to in physics: causality, locality, or realism. So, while sacrificing locality is one way to get entanglement-like effects, it is not the only one. Arguably, they are all equivalent ways of expressing the same concept and the fact that this is not manifestly obvious simply indicates that our language is not aligned with underlying concepts of how Nature really works.

In the case of interactions involving photons, gravitons and gluons, I personally often find it convenient to, and tend to favor, sacrificing "causality" (i.e., the arrow of time) rather than locality. Fundamental massless bosons always move at the speed of light, and hence do not experience the passage of time in their own reference frame, so it makes sense that interactions mediated by fundamental massless bosons should not experience an arrow of time either, thus disallowing CP violation (which empirically is not observed) in interactions mediated by these massless bosons, and essentially stating that the line in space-time that entangled massless bosons follow basically amounts to simultaneous points in the space-time coordinates that are best suited to judging causality. But to some extent the decision regarding which axiom to sacrifice in entanglement situations is a stylistic one with no measurable real world effects.

But lots of loop quantum gravity oriented quantum gravity theories adopt causality as a bedrock axiom and treat the dimension of time as in some sense distinct from the space dimensions, so one must sacrifice either locality or realism to some degree in these causation-affirming LQG theories.

Similarly, setting up a graviton entanglement experiment (or even a "natural experiment" that would entangle gravitons somehow so that we could measure these effects) is beyond our practical experimental and observational capacity.

Decoherence

Another possible angle on this issue which is attracting attention is to look at phenomena in which a group of Standard Model particles acts "coherently" in the absence of outside interactions. In ordinary daily life, we are bombarded by all sorts of particles that lead to rapid decoherence except in rarefied circumstances. But in a deep space vacuum, a coherent group of particles can be expected to travel vast distances with only slight non-gravitational interactions with the outside environment.

Theorists can use even very incomplete quantum gravity theories, in which many quantities can't be calculated, to calculate the extent to which a flux of gravitons would cause that group of Standard Model particles to decohere sooner than it would in the absence of such interactions (see, e.g., here).

The rate at which decoherence emerges in objects in the deep vacuum is thus a physical observable that helps us tease out the mechanism by which gravity works.

Non-Local Gravity Theories

There are explicitly non-local formulations of gravity and papers on this topic address a lot of the issues that the OP question seems to be getting at. Rather than try to explain them myself, I'll defer to the articles from the literature below that discuss these theories.

Literature For Further Reading

Some recent relevant papers include:

* Ivan Kolář, Tomáš Málek, Anupam Mazumdar, "Exact solutions of non-local gravity in class of almost universal spacetimes" arXiv: 2103.08555

* Reza Pirmoradian, Mohammad Reza Tanhayi, "Non-local Probes of Entanglement in the Scale Invariant Gravity" arXiv: 2103.02998

* J. R. Nascimento, A. Yu. Petrov, P. J. Porfírio, "On the causality properties in non-local gravity theories" arXiv: 2102.01600

* Salvatore Capozziello, Maurizio Capriolo, Shin'ichi Nojiri, "Considerations on gravitational waves in higher-order local and non-local gravity" arXiv: 2009.12777

* Jens Boos, "Effects of Non-locality in Gravity and Quantum Theory" arXiv: 2009.10856

* Jens Boos, Jose Pinedo Soto, Valeri P. Frolov, "Ultrarelativistic spinning objects in non-local ghost-free gravity" arXiv: 2004.07420

The work of Erik Verlinde, for example, here, also deserves special mention. He has hypothesized that gravity may not actually be a distinct fundamental force, and may instead be an emergent interaction that arises from the thermodynamic laws applicable to entropy and/or entanglement between particles arising from Standard Model interactions.

His theories approximate the observed laws of gravity, sometimes including reproduction of dark matter and/or dark energy-like effects, although some early simple attempts that he made to realize this concept have been found to be inconsistent with observational evidence.

Particular Theories
Does string theory or loop quantum gravity have hypotheses on what gravity is or how it arises?
In string theory, either a closed or an open string gives rise, in certain vibration patterns, to gravitons, which carry the gravitational force between particles in a manner rather analogous to photons, while exploiting a "loophole" in key "no go theorems" related to quantum gravity that make a naive point-particle analogy to the photon not viable.

This is generally done in a 10- or 11-dimensional space, although the way that the deeper 10-11 dimensions are distilled to the three dimensions of space and one dimension of time that we observe varies quite a bit. In many versions, the Standard Model forces as manifested through string theory are confined to a four dimensional manifold or "brane" while gravitons and gravity can propagate in all of the dimensions.

The distinction between the dimensions in which the Standard Model forces can operate and those in which gravity can operate helps string theorists explain why gravity is so weak relative to the other forces, given the generic naive expectation that if all forces ultimately derive from universal string-like objects, they ought to be more similar in strength, especially at high energies.

There are multiple problems with string theory, but the biggest one is that it is really a class of vast numbers of possible theories that do not uniquely give rise to a single low energy approximation that resembles the Standard Model. Nobody has figured out how to thin out the universe of possible low energy approximations of string theory to find even one that contains everything that the Standard Model contains, while lacking everything that we have no experimental evidence for at experimentally testable energies. So basically, there are almost no observables that can be calculated from string theory.

String theory, for example, tends to favor (and arguably requires) that its low energy approximations be supergravity theories (a class of theories that integrates supersymmetry with gravity), with Majorana neutrinos that undergo neutrinoless double beta decay, proton decay, and models in which the initial state of the Universe at the Big Bang has baryon number zero and lepton number zero and engages in baryogenesis and leptogenesis soon after the Big Bang via a high energy process showing CP violation, baryon number violation and lepton number violation that generates far more particles than the only known Standard Model processes that do so. The existence of a massless spin-2 graviton is pretty much the only prediction of string theory that has any support from observational evidence, and of course, that support is itself only indirect and in its infancy. But the mathematical intractability of quantum gravity by other means under various "no go theorems" has been one important motivation for string theory's popularity.

In loop quantum gravity, the universe is fundamentally made up of nodes that have a small finite number of connections to other nodes, and gravity is quantized primarily by quantizing space-time, rather than primarily by quantizing particles against a background that is smooth, continuous and local.

In LQG, locality is ill defined at this most fundamental level and is only an emergent property of the collective interactions of all of the nodes, in a sense similar to that in which temperature and pressure in the thermodynamics of gases are emergent properties of individual gas atoms randomly moving around at particular speeds that can be described globally in a statistical fashion. Particles move from node to node according to simple rules.

For example, LQG imagines that in space-time, most nodes in what we perceive to be a local area of space-time will connect to other, basically adjacent, nodes in the same local area, but there is no fundamental prohibition on a node having some connections to nodes in what we perceive to be the same local area, and other connections to nodes in what we perceive to be a local area billions of light years away.

The number of space-time dimensions is likewise an emergent property in LQG, and the number of space-time dimensions that emerges in this fashion isn't necessarily an integer. A system of nodes can instead be assigned a non-integer dimension, defined in a manner similar or identical to the mathematical definition of a fractal dimension.
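
For reference, this is just the standard mathematical notion (not anything specific to a particular LQG paper): a box-counting dimension is defined as

$$ d = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)}, $$

where N(ε) is the number of cells of size ε needed to cover the structure; an analogous counting over graph neighborhoods in a network of nodes can yield a non-integer d.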

Some edge examples of LQG theories think of matter and particles themselves as deformations of space-time itself that are emergent, rather than as something separate that is placed within a space called "space-time."

As in classical general relativity, gravity is fundamentally a function of the geometry of space-time, but in LQG that geometry is discrete and broken rather than smooth and continuous, and locality is (as discussed above) ill defined. In LQG, the "background independence" of the theory, realized by not having a space-time distinct from gravity, is a hallmark axiom of the field and of the line of reasoning involved. This has the nice feature of "automatically" and obviously giving LQG properties like covariance, which impose tight constraints on the universe of gravity theories formulated with more conventional equations of gravity, like the Einstein field equations; those theories have this property too, but there it is not obvious without extended and non-obvious mathematical proofs. But it has the downside of expressing how gravity works in equations that are not very conceptually natural to the uninitiated, which can make it challenging to understand what LQG really says in more familiar contexts.

One of the biggest practical challenges for LQG when confronted with experimental evidence is that many naive versions of it should give rise to slight Lorentz Invariance Violations (i.e., deviations from special relativity) at the Planck scale due to the discrete rather than continuous nature of space-time, because Lorentz Invariance is formulated as a continuous space-time concept. Strong experimental constraints disfavor Lorentz Invariance Violations to levels that naively extend below Planck length scale distances. But the problem of discrete formulations of space-time leading to minor deviations from Lorentz Invariance is not a universal property of all LQG theories and can be overcome with different conceptualizations that evade this problem.
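
As a point of reference, quantum gravity phenomenology often parametrizes such Planck-scale Lorentz invariance violation with a leading-order correction to the dispersion relation (a standard form from that literature, not something taken from the LQG papers discussed here), in natural units:

$$ E^2 \simeq p^2 + m^2 + \eta \, \frac{p^3}{M_{\mathrm{Pl}}} + \cdots $$

where M_Pl is the Planck mass and η is a dimensionless coefficient; the strong experimental constraints mentioned above amount to bounds pushing η well below the naive order-one expectation.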

Like string theory, LQG is very much a work in progress that is striving to find ways, within its general approach and family of theories, to reproduce either classical general relativity in the classical limit, or a plausible modification of classical general relativity that can't be distinguished from general relativity with current observational evidence. There are a host of intermediate baby steps and confirmations of what is predicted that have to be surmounted before it can gain wide acceptance and produce a full spectrum of "big picture" results, in part because so much of this class of theories is emergent and has to be discovered, rather than being put in by hand as in higher level theories that are operational and useful in practical situations.

Footnote Regarding Loop Quantum Gravity Terminology

There are two senses in which the term "loop quantum gravity" (LQG) is used, and I'm not being very careful about distinguishing the two in this post. Some of what I say about LQG is really specific only to the narrow sense theory that I discuss below, while other things that I say about LQG apply to the entire family of LQG-style quantum gravity theories.

In a strict and narrow sense, loop quantum gravity refers to a specific quantum gravity theory that involves quantizing space-time that is largely attributed to Lee Smolin and Carlo Rovelli, although assigning credit to any scientific theory that a community of interacting researchers help formulate is a perilous and always somewhat controversial thing to do.

But the term is also frequently used as a catchall term for quantum gravity theories that share, with the narrow sense type example of loop quantum gravity, the feature that space-time itself or closely analogous concepts are quantized. LQG theories are distinguishable from quantum gravity theories, like string theory, that simply insert graviton particles that carry the gravitational force into a distinct pre-determined space-time with properties sufficient to be Lorentz invariant and to satisfy other requirements of gravity theories that approximate general relativity in the classical limit.

For example, "causal dynamical triangulation" (CDT) is a quantum gravity theory that is in the loop quantum gravity family of theories, but is not precisely the same theory as the type example of LQG after which this family of theories is named.

"Spin foam" theories are another example of LQG family quantum gravity theories.

Thursday, April 15, 2021

The Wool Road

Language Log discusses a paper looking at the spread of wool and wool related technologies in Bronze Age Asia. The bottom line chronology is as follows:
—After 3300 calBC: early exchanges of prestige goods across Near East and the North Caucasus, with wool-cotton textiles moving as part of the elite exchange networks; mixed wool-cotton textile dates around 2910–2600 calBC.

—The mid third millennium BC: spread of wool textile technologies and associated management out of the Near East / Anatolia and into the southern Caucasus; according to 14C data obtained for textiles and synchronous samples this happened between 2550–1925 calBC; an almost synchronous date was obtained from the dates of the northern steppe regions, suggesting that the spread of innovative technology from the South Caucasus to the steppe zone and further north up to the forest zone occurred as part of the same process between 2450–1900 calBC.

—Between 1925–1775 calBC there was rapid eastward transmission of the wool (and associated technologies) across the steppe and forest-steppe of the Volga and southern Urals, out across Kazakhstan and into western China between 1700–1225 calBC. This same process of transmission through the steppe ultimately brought woven wool textiles into societies around the western Altai and the Sayan Mountains of southern Siberia.
The Language Log post continues by observing that:
The findings for the timing and spread of wool technology comport well with those for bronze, chariots, horse riding, iron, weapon types, ornamentation and artwork, and other archeologically recovered cultural artifacts that we have examined in previous posts. Moreover, they are conveniently correlated with archeological cultures such as Andronovo[.]

Wednesday, April 14, 2021

Another Way Of Thinking About Neutrino Oscillation

This is one of the deepest and most thought provoking articles I've seen about neutrinos in a long time. This passage from the body text stands out:

[W]e have obtained the PMNS matrix without having to ever talk about mass diagonalization and mismatches between flavor and mass basis.
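
For context, the conventional textbook definition that the paper manages to sidestep writes each flavor-basis neutrino field as a PMNS-weighted combination of the mass-basis fields:

$$ \nu_{\alpha} = \sum_{i=1}^{3} U_{\alpha i}\, \nu_{i}, \qquad \alpha \in \{e, \mu, \tau\}, $$

where U is the unitary PMNS mixing matrix; the paper's point is that the same matrix emerges directly from on-shell three- and four-point amplitudes.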

The abstract and the paper are as follows: 

We apply on-shell amplitude techniques to the study of neutrino oscillations in vacuum, focussing on processes involving W bosons. 
We start by determining the 3-point amplitude involving one neutrino, one charged lepton and one W boson, highlighting all the allowed kinematic structures. The result we obtain contains terms generated at all orders in an expansion in the cutoff scale of the theory, and we explicitly determine the lower dimensional operators behind the generation of the different structures. 
We then use this amplitude to construct the 4-point amplitude in which neutrinos are exchanged in the s-channel, giving rise to oscillations. 
We also study in detail how flavor enters in the amplitudes, and how the PMNS matrix emerges from the on-shell perspective.
Gustavo F. S. Alves, Enrico Bertuzzo, Gabriel M. Salla, "An on-shell perspective on neutrino oscillations and non-standard interactions" arXiv (March 30, 2021).

Tuesday, April 13, 2021

Lepton Universality Violation Considered Again

At Moriond 2021, LHCb reported new results on theoretically clean observables in rare B-meson decays that are 3.1 sigma away from the SM prediction of lepton universality in the B+→K+μμ vs. B+→K+ee comparison. The deviation is again in the same direction: muons are less common than electrons. The analysis uses 9/fb, i.e., the whole LHCb dataset so far. The preprint is on arXiv.

Statistics Issues

Statistically, my main issue is cherry picking.

They've found several instances where you have LFV and they combine those to get their significance in sigma. They ignore the many, many other instances where you have results that are consistent with LFU, even though the justification for excluding those results is non-obvious.

For example, lepton universality violations are not found in tau lepton decays or pion decays, and are not found in anti-B meson and D* meson decays or in Z boson decays. There is no evidence of LFV in Higgs boson decays either.

As one paper notes: "Many new physics models that explain the intriguing anomalies in the b-quark flavour sector are severely constrained by Bs-mixing, for which the Standard Model prediction and experiment agreed well until recently." Luca Di Luzio, Matthew Kirk and Alexander Lenz, "One constraint to kill them all?" (December 18, 2017).

Similarly, see Martin Jung, David M. Straub, "Constraining new physics in b→cℓν transitions" (January 3, 2018) ("We perform a comprehensive model-independent analysis of new physics in b→cℓν, considering vector, scalar, and tensor interactions, including for the first time differential distributions of B→D∗ℓν angular observables. We show that these are valuable in constraining non-standard interactions.")

An anomaly disappeared between Run-1 and Run-2 as documented in Mick Mulder, for the LHCb Collaboration, "The branching fraction and effective lifetime of B0(s)→μ+μ− at LHCb with Run 1 and Run 2 data" (9 May 2017) and was weak in the Belle Collaboration paper, "Lepton-Flavor-Dependent Angular Analysis of B→K∗ℓ+ℓ−" (December 15, 2016).

When you are looking at deviations from a prediction, you should include all experiments that implicate that prediction.

In a SM-centric view, all leptonic or semi-leptonic W boson decays arising when a quark decays to another kind of quark should be interchangeable parts (subject to mass-energy caps on final states determined by the initial state), since all leptonic or semi-leptonic W boson decays (whether at tree level or removed one step at the one loop level) are deep down the same process. See, e.g., Simone Bifani, et al., "Review of Lepton Universality tests in B decays" (September 17, 2018). So, you should be lumping them all together to determine the significance of the evidence for LFV.

Their justification for not pooling the anomalous results with the non-anomalous ones is weak and largely not stated expressly. At a minimum, the decision to draw a line regarding what should be looked at in the LFV bunch of results to get the 3.1 sigma and what should be looked at in the LFU bunch of results that isn't used to moderate the 3.1 sigma in any way is highly BSM model dependent, and the importance of that observation is understated in the analysis (and basically just ignored).

The cherry picking also gives rise to look elsewhere effect issues. If you've made eight measurements in all, divided among three different processes, the look elsewhere effect is small. If the relevant universe is all leptonic and semi-leptonic W and Z boson decays, in contrast, there are hundreds of measurements out there and even after you prune the matter-energy conservation limited measurements, you still have huge look elsewhere effects that trim one or two sigma from the significance of your results.
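
To make the look elsewhere point concrete, here is a crude trials-factor correction (a minimal sketch; the number of comparable measurements is a made-up assumption, and a real global-significance calculation would have to account for correlations between measurements):

from scipy.stats import norm

z_local = 3.1     # local significance reported (sigma)
n_trials = 100    # assumed number of comparable independent measurements (made up)

p_local = norm.sf(z_local)                      # one-sided local p-value
p_global = 1.0 - (1.0 - p_local) ** n_trials    # chance of at least one such fluctuation anywhere
z_global = norm.isf(p_global)                   # equivalent global significance

print(f"local: {z_local:.1f} sigma; global with {n_trials} trials: {z_global:.2f} sigma")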

Possible Mundane Causes

It is much easier to come up with a Standard Model-like scenario in which there are too many electron-positron pairs produced than it is to come up with one in which there are too many muon or tau pairs produced.

The ratios seem to be coming up at more than 80% but less than 90% of the expected Standard Model number of muon pair decays relative to electron-positron decays.

The simplest answer would be that there are two processes. 

One produces equal numbers of electron-positron and muon pair decays together with a positively charged kaon in each case, as expected.  The pre-print linked above states this about this process:
The B+ hadron contains a beauty antiquark, b, and the K+ a strange antiquark, s, such that at the quark level the decay involves a b → s transition. Quantum field theory allows such a process to be mediated by virtual particles that can have a physical mass larger than the mass difference between the initial- and final-state particles. In the SM description of such processes, these virtual particles include the electroweak-force carriers, the γ, W± and Z bosons, and the top quark. Such decays are highly suppressed and the fraction of B+ hadrons that decay into this final state (the branching fraction, B) is of the order of 10^−6.
A second process, with a branching fraction of about 1/6th that of the primary process, produces a positively charged kaon together with an electromagnetically neutral particle that has more than about 1.022 MeV of mass (enough to decay to a positron-electron pair), but less than the roughly 211.3 MeV of mass necessary to produce a muon pair when it decays.

It turns out that there is exactly one such known particle, fundamental or composite: the neutral pion, with a mass of about 134.9768(5) MeV.

About 98.8% of the time, a πº decays to a pair of photons, and that decay would be ignored as the end product doesn't match the filtering criteria. But about 1.2% of the time, it decays to an electron-positron pair together with a photon, and all other possible decays are vanishingly rare by comparison.

So, we need a decay of a B+ meson to a K+ meson and a neutral pion with a branching fraction of about (10^-6)*(1/6)*(1/0.012)= 1.4 * 10^-5.

It turns out that B+ mesons do indeed decay to K+ mesons and neutral pions with a branching fraction of 1.29(5)*10^-5 which is exactly what it needs to be to produce the apparent violation of lepton universality.
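
Spelling out the arithmetic above (every number is one quoted in this post; the 1/6 figure is my reading of the measured deficit):

# Numbers quoted in this post.
bf_signal     = 1e-6    # B+ -> K+ l+ l- branching fraction, order of magnitude
deficit_share = 1 / 6   # size of the second process relative to the primary one
bf_pi0_dalitz = 0.012   # share of neutral pion decays yielding an e+ e- pair (~1.2%)

# Branching fraction a B+ -> K+ pi0 style process would need to fake the deficit.
bf_needed = bf_signal * deficit_share / bf_pi0_dalitz
print(f"needed:   {bf_needed:.2e}")   # ~1.4e-05

# Measured B+ -> K+ pi0 branching fraction quoted above.
bf_measured = 1.29e-5
print(f"measured: {bf_measured:.2e}")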

It also appears to me that the theoretical calculation of the K+µ+µ- to K+e+e- ratio isn't considering this decay, although it seems mind boggling to me that so many physicists studying such a carefully examined process would somehow overlook the impact of the B+ → K+πº decay channel on their expected outcome, which is the obvious way to reverse engineer the anomaly.

Time will tell if this will amount to anything. I've posted this analysis at the thread at the Physics Forums linked above, to get some critical review of this hypothesis.

If by some crazy twist of fate, this analysis isn't flawed, then it resolves almost all of one of the biggest anomalies in high energy physics outstanding today.

A Universal Maximum Energy Density

I've explored this line of thought before, so seeing it in a paper caught my eye. The ratio of mass to event horizon volume in black holes also conforms to this limitation, reaching its maximum at the minimum mass of any observed black hole.

This matters, in part, because quantum gravity theories need infrared or ultraviolet fixed point boundaries to be mathematically consistent, and this might be a way to provide that boundary.

One generic consequence of such a hypothesis is that primordial black holes smaller than stellar black holes don't simply happen not to exist; they theoretically can't exist.

The paper's abstract states:

Recent astronomical observations of high redshift quasars, dark matter-dominated galaxies, mergers of neutron stars, glitch phenomena in pulsars, the cosmic microwave background and experimental data from hadronic colliders do not rule out, but even support, the hypothesis that the energy-density in our universe most likely is upper-limited by ρ(unimax), which is predicted to lie between 2 to 3 times the nuclear density ρ0.
Quantum fluids in the cores of massive neutron stars with ρ≈ρ(unimax) reach the maximum compressibility state, where they become insensitive to further compression by the embedding spacetime and undergo a phase transition into the purely incompressible gluon-quark superfluid state. 
A direct correspondence between the positive energy stored in the embedding spacetime and the degree of compressibility and superfluidity of the trapped matter is proposed.
In this paper relevant observation signatures that support the maximum density hypothesis are reviewed, a possible origin of ρ(unimax) is proposed and finally the consequences of this scenario on the spacetime's topology of the universe as well as on the mechanisms underlying the growth rate and power of the high redshift QSOs are discussed.

Friday, April 9, 2021

A Few Notable Points About Charlemagne

The Basics

Charlemagne was King of the Franks, in an empire centered more or less on modern France but extending further, from 768 CE to his death in 814 CE (co-ruling with his brother Carloman I until Carloman's death in 771 CE).

He was a close ally of the Pope, and was crowned "Emperor of the Romans" by the Pope in 800 CE in what came to be called the Carolingian Empire.

This was right in the middle of the "Middle Ages" of Europe and towards the end of what are sometimes known as the "Dark Ages" of Europe. His rule preceded the Great Schism of 1054 in which the Roman Catholic Church split from the Eastern Orthodox churches.

Personal Life

He was born before his parents were married in the eyes of the church. They may, however, have had a Friedelehe (a.k.a. a "peace" marriage), a Germanic form of quasi-marriage not accepted by the Christian church, with the following characteristics:
  • The husband did not become the legal guardian of the woman, in contrast to the Muntehe, or dowered marriage (some historians dispute the existence of this distinction).
  • The marriage was based on a consensual agreement between husband and wife, that is, both had the desire to marry.
  • The woman had the same right as the man to ask for divorce.
  • Friedelehe was usually contracted between partners of different social status.
  • Friedelehe was not synonymous with polygyny, but enabled it.
  • The children of a Friedelehe were not under the control of the father, but only that of the mother.
  • Children of a Friedelehe initially enjoyed full inheritance rights; under the growing influence of the church their position was continuously weakened.
  • Friedelehe came into being solely by public conveyance of the bride to the groom's domicile and the wedding night consummation; the bride also received a Morgengabe (literally "morning gift", a gift of money given to a wife upon consummation of a marriage).
  • Friedelehe was able to be converted into a Muntehe (dowered or guardianship marriage), if the husband later conveyed bridewealth (property conveyed to the wife's family). A Muntehe can also be characterized as a secular legal sale of a woman by her family clan's patriarch to her husband (sometimes with the requirement that the consummation of the marriage be witnessed).

Other alternative relationship forms that existed in that era included the Kebsehe with an unfree "concubine" in the Middle Ages, the morganatic marriage (a marriage without inheritance rights, usually of a noble to a commoner lover after the death of a first legitimate wife), the angle marriage (a "secret marriage" entered into without clergy involvement, comparable to modern "common law marriage", banned by the church in 1215 but continuing in practice into the 1400s), and a robbery or kidnapping marriage (a forced marriage by abduction of the bride, sometimes with her or her family's tacit connivance to avoid an arranged marriage or because the couple lacked the economic means to arrange a conventional marriage).

Charlemagne "had eighteen children with eight of his ten known wives or concubines. . . . Among his descendants are several royal dynasties, including the Habsburg, and Capetian dynasties. . . . most if not all established European noble families ever since can genealogically trace some of their background to Charlemagne." The accounts are not entirely consistent.

He was mostly a serial monogamist, although he had two successive concubines at the same time as the marriage that produced most of his children, for about two years. 

His first relationship was with Himiltrude. After the fact, a later Pope declared it a legal marriage (despite the fact that this would logically have invalidated Charlemagne's later marriages, as she lived until age 47). But she appears to have been a daughter of a noble family (as would be expected if she had an opportunity to have a relationship with a king's son), so she wasn't an unfree serf concubine of the Kebsehe type either. An informal marriage-like relationship along the lines of a Friedelehe that was not recognized by the church probably best characterizes her status at the time. This relationship produced one son, who suffered from a spinal deformity, was called "the Hunchback", and spent his life confined to care in a convent.

Charlemagne's relationship with Himiltrude was put aside two years later when he legally married Desiderata, the daughter of the King of the Lombards, but the relationship with Desiderata produced no children and was formally annulled about a year later. 

He then married Hildegard of the Vinzgau in 771 with whom he had nine children before Hildegard died twelve years later in 783. 

Two of the children were named Kings (one of Aquitaine and one of Italy), and one was made a Duke of Maine (a region in northwestern France that is home to the city of Le Mans). Three died as infants. One daughter, who never married, died at age 25 after having one son out of wedlock with an abbot; one daughter died at age 47 after having had three children out of wedlock with a court official who remained in good standing in Charlemagne's court; and one daughter probably died at age 27 having never married or had any children, although the time of her death is not well documented and she may have spent her final years in a convent.

During his marriage to Hildegard he had two concubines, apparently successively, with whom he had one child each: Gersuinda, starting in 773 and producing a child in 774, and Madelgard, starting in 774 and producing a daughter in 775 who was made an abbess.

He then married Fastrada in 784 and she died ten years later in 794, after having two daughters with him, one of whom became an abbess. 

He then married Luitgard in 794 who died childless six years later in 800. 

After Luitgard's death, Charlemagne had two subsequent successive concubines. The first was Regina, starting in 800, with whom he had two sons (in 801 and 802), one of whom was made a bishop and then an abbot, and the other of whom became the archchancellor of the empire. The second was Ethelind, starting in 804, with whom he had two sons, in 805 and 807, the first of whom became an abbot.

A Female Byzantine Rival

His main competitor for the title of Emperor was the Byzantine Empire's first female monarch, Irene of Athens.

Brutal Conversions Of European Pagans And Wars

Charlemagne was engaged in almost constant warfare throughout his reign and often personally led his armies in these campaigns, accompanied by elite royal guards called the scara.
  • He conquered the Lombard Kingdom of Northern Italy from 772-776. He briefly took Southern Italy in 787, but it soon declared independence and he didn't try to recapture it.
  • He spent most of his rule fighting pitched campaigns to rule mostly Basque Aquitaine and neighboring regions, and the northern Iberian portion of Moorish Spain.
  • In the Saxon Wars, spanning thirty years ending in 804 and eighteen battles, he conquered Saxony in what is now northwestern Germany and proceeded to convert these pagan peoples to Christianity. He also took control of Bavaria, solidifying it in 794.
  • He went beyond them to fight the Avars and Slavs further to the east, taking Slavic Bohemia, Moravia, Austria and Croatia.
In his campaign against the Saxons to his east, he Christianized them upon penalty of death, leading to events such as the Massacre of Verden. There he had 4,500 Saxons, who had been involved in a rebellion against him in Saxon territory that he had previously conquered, executed by beheading in a single day.

According to historian Alessandro Barbero in "Charlemagne: Father of a Continent" (2004) at pgs. 46-47, "the most likely inspiration for the mass execution of Verden was the Bible" and Charlemagne desiring "to act like a true King of Israel", citing the biblical tale of the total extermination of the Amalekites and the conquest of the Moabites by Biblical King David.

A royal chronicler, commenting on Charlemagne's treatment of the Saxons a few years after the Massacre of Verden, records with regard to the Saxons that "either they were defeated or subjected to the Christian religion or completely swept away."