
Wednesday, March 30, 2022

New Study Rewrites The Story Of Food Production In The Americas

A new paper extracting ancient DNA from remains in a tropical area greatly reshapes our understanding of the Neolithic Revolution in the Americas, which developed around the "three sisters": corn, squash, and beans.

While corn was first domesticated ca. 7000 BCE, it made up only about 10% of the Mesoamerican diet (it was probably used mostly to make alcohol). Its share of the diet started to increase around 3600 BCE, and it reached a majority of the local diet around 2700 BCE with the dawn of the Mayan civilization.

Around 3600 BCE, a population sourced from southern Central America and South America, which was probably linguistically distinct, migrated to Mesoamerica with a new, far superior variety of corn developed in northern Peru around 5000 BCE to 4000 BCE (and possibly with other improved domesticated crops). By about 2700 BCE, when the migration and population fusion had fully run its course, the newcomers had replaced about 70% of the local gene pool.

The process echoes the demic diffusion models that explain the spread of agriculture in Europe, in North Africa, in South Asia, and in China and Southeast Asia.

By about 2700 BCE, this resulted in the clearing of land to grow corn and in corn making up a majority of the local diet. Only then did the people there really become a food-producing farming society, which became the Maya civilization ca. 2600 BCE, a society that continues to leave its linguistic legacy in the region:
Maya occupation at Cuello (modern-day Belize) has been carbon dated to around 2600 BC. Settlements were established around 1800 BC in the Soconusco region of the Pacific coast, and the Maya were already cultivating the staple crops of maize, beans, squash, and chili pepper.

Eventually, these crops would spread to essentially all of the Mississippi River basin with the Mississippian culture, and beyond, replacing a less successful package of food production crops in what is now the American Southeast.

This chronology also helps to harmonize the amount of time between the development of a full-fledged Neolithic culture in the Americas and the subsequent development of metalworking there with the corresponding interval seen in the Old World.

It also disfavors the hypothesis that technological development in food production and otherwise in the Americas was triggered by the arrival of the ancestors of the Na-Dene people in North America via Alaska fairly close in time to the development of the Mayan civilization.

Between the ethnogenesis of the fused Maya population and the present, the region has experienced an introgression of about 25% of the gene pool from the Mesoamerican highlands. Only about 23% of the modern Mayan gene pool is traceable to the population present in the lowlands of Mesoamerica before this demic diffusion of agriculture occurred, while 52% of the modern Mayan gene pool is attributable to this migration of South American corn farmers (exclusive of the 7% European and 1% African ancestry in modern Maya populations from the post-Columbian era).


Figure caption (from the paper): Top: dates of individuals with genetic data (individual 95.4% confidence intervals and total summed probability density; MHCP.19.12.17 [low-coverage] omitted for scale). *Date based on association with familial relative.
Bottom: earliest radiocarbon dates associated with microbotanical evidence for maize, manioc, and chili peppers in the Maya region and adjacent areas at Lake Puerto Arturo, Guatemala (GT); Cob Swamp, Belize (BZ); Rio Hondo Delta; Caye Coco, BZ; and Lake Yojoa, Honduras (HN), together with summed probability distribution of the earliest maize cobs (n = 11) in southeastern Mesoamerica from El Gigante rock-shelter, HN. Also shown in yellow is the known transition to staple maize agriculture based on dietary stable isotope data from MHCP and ST.

This reflux model of corn domestication is new. Most previous scholarship had implicitly assumed that corn domestication was refined in situ in the Mesoamerican region where its wild type ancestor is found and was at something of a loss to explain why it took 3,400 years for it to really take hold as a major food source.

The introduction to the paper explains that:

Maize domestication began in southwest Mexico ~9000 years ago and genetic and microbotanical data indicate early dispersal southward and into South America prior to 7500 cal. BP as a partial domesticate before the complete suite of characteristics defining it as a staple grain had fully developed. 
Secondary improvement of maize occurred in South America, where selection led to increased cob and seed size beyond the range of wild teosinte progenitor species. Maize cultivation was widespread in northwestern Colombia and Bolivia by 7000 cal. BP. 
The earliest evidence for maize as a dietary staple comes from Paredones on the north coast of Peru, where dietary isotopes from human teeth suggest maize shifted from a weaning food to staple consumption between 6000–5000 cal. BP consistent with directly dated maize cobs. Maize cultivation was well-established in some parts of southern Central America (e.g., Panama) by ~6200 cal. BP.

Starch grains and phytoliths recovered from stone tools from sites in the southeastern Yucatan near MHCP and ST indicate that maize (Zea mays), manioc (Manihot sp.) and chili peppers (Capsicum sp.) were being processed during the Middle Holocene, possibly as early as 6500 cal. BP. 
Paleoecological data (pollen, phytoliths, and charcoal) from lake and wetland cores point to increases in burning and land clearance associated with early maize cultivation after ~5600 cal. BP. 
Maize was grown at low levels after initial introduction, and there is greater evidence for forest clearing and maize-based horticulture after ~4700 cal. BP. Increases in forest clearing and maize consumption coincide with the appearance of more productive varieties of maize regionally, which have been argued based on genetic and morphological data to be reintroduced to Central America from South America.

The increases in burning, forest disturbance, and maize cultivation in southeastern Yucatan evident after ~5600 cal. BP, as well as the subsequent shift to more intensified forms of maize horticulture and consumption after ~4700 cal. BP, can be plausibly linked to: 1) the adoption of maize and other domesticates by local forager-horticulturalists, 2) the intrusion of more horticulturally-oriented populations carrying new varieties of maize, or 3) a combination of the two. 
Dispersals of people with domesticated plants and animals are well documented with combined archeological and genome-wide ancient DNA studies in the Near East; Africa; Europe; and Central, South, and Southeast Asia. The spread of populations practicing agriculture into the Caribbean islands from South America starting ~2500 years ago is also well documented archeologically and genetically. 
However, in the American mainland, the generally accepted null hypothesis is that the spread of horticultural and later farming systems typically resulted from diffusion of crops and technologies across cultural regions rather than movement of people. Here, we examined the mode of horticultural dispersal into the southeastern Yucatan with genome-wide data for a transect of individuals from MHCP and ST with stable isotope dietary data and direct AMS 14C dates between 9600 and 3700 cal. BP.

The paper continues in its discussion section to state that:

[O]ur results support a scenario in which Chibchan-related horticulturalists moved northward into the southeastern Yucatan carrying improved varieties of maize, and possibly also manioc and chili peppers, and mixed with local populations to create new horticultural traditions that ultimately led to more intensive forms of maize agriculture much later in time (after 4700 cal. BP).
Bernard's blog summarizes the genetic evidence in this new paradigm-shifting paper as follows (Google translated from French; emphasis mine):
The oldest individuals from Belize dated between 9600 and 7300 years old cluster together, close to the ancient individuals from South America. 
More recent individuals from Belize dated between 5600 and 3700 years old are shifted to the left in the direction of present-day populations speaking a Chibchan language from northern Colombia and Venezuela. Current populations speaking a Maya language are located close to the ancient individuals from Belize dated between 5600 and 3700 years ago, but slightly shifted towards the Mexican populations of the highlands. . . . [T]he population represented by the ancient individuals from Belize dated between 5600 and 3700 years ago can be modeled as resulting from a genetic mixture between a population represented by the ancient individuals from Belize dated between 9600 and 7300 years ago (31%) and a population ancestral to the Chibchan populations (69%). These results suggest Chibchan gene flow into the ancient population of Belize. They are also supported by the discontinuity of mitochondrial haplogroups between 7300 and 5600 years ago. . . .

This flow of genes from the south is linked to the arrival of a Chibchan population from the south of the Maya region, which probably contributed to the development of the cultivation of maize, and perhaps also of other domesticated plants.

On the other hand, the current Maya population can be modeled as resulting from a genetic mixture between the ancient population of Belize dated between 5600 and 3700 years ago (75%) and a population related to the ancestors of the Mexican highland populations (25%), such as the Mixe, Zapotec, and Mixtec.
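To make the arithmetic behind the ancestry percentages quoted earlier in this post explicit, here is a minimal sketch (my own back-of-the-envelope composition, not code from the paper or from Bernard's blog) that simply chains the two admixture models together, assuming the proportions multiply through:

```python
# Back-of-the-envelope composition of the two admixture models quoted above.
# Assumptions (for illustration only): the 5600-3700 BP Belize population is
# 31% "old Belize" + 69% Chibchan-related, modern Maya are 75% that population
# + 25% Mexican-highland-related, and the proportions simply multiply.

old_belize_in_ancient = 0.31  # 9600-7300 BP lowland ancestry in the 5600-3700 BP group
chibchan_in_ancient = 0.69    # Chibchan-related ancestry in the 5600-3700 BP group

ancient_in_modern = 0.75      # 5600-3700 BP Belize ancestry in the modern Maya
highland_in_modern = 0.25     # Mexican highland ancestry in the modern Maya

old_belize_in_modern = ancient_in_modern * old_belize_in_ancient  # 0.2325
chibchan_in_modern = ancient_in_modern * chibchan_in_ancient      # 0.5175

print(f"old lowland ancestry in modern Maya: {old_belize_in_modern:.0%}")  # ~23%
print(f"Chibchan-related ancestry:           {chibchan_in_modern:.0%}")    # ~52%
print(f"Mexican highland ancestry:           {highland_in_modern:.0%}")    # 25%
```

These products reproduce the roughly 23%, 52%, and 25% figures given earlier in this post (the post-Columbian European and African contributions are set aside here, as they are in those figures).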
The paper also engaged in some linguistic analysis to support its genetic and chemical analysis of human remains:
The distribution and history of languages in Central America provides an independent line of evidence for our proposed historical interpretation. 
Chibchan is a family of 16 extant (7–8 extinct) languages spoken from northern Venezuela and Colombia to eastern Honduras. The highest linguistic diversity of the Chibchan family occurs today in Costa Rica and Panama near the Isthmian land bridge to South America, and this is hypothesized to be the original homeland from which the languages diversified, starting roughly 5500 years ago. 
We undertook a preliminary analysis of 25 phonologically and semantically comparable basic vocabulary items to study the linguistic evidence for interaction between early Chibchan and Mayan languages, of which a subset of 9 display minimally recurring and interlocking sound correspondences. 
We also focused on possible borrowings: crucially, one of the terms for maize, #ʔayma, diffused among several languages of northern Central America (Misumalpan, Lenkan, Xinkan), as well as the branch of Mayan that broke off earlier than any other (Huastecan). This term is much more phonologically diverse and much more widely distributed (both across linguistic branches and geographically) in Chibchan than in any other non-Chibchan language of Central America or Mexico, and it is morphologically analyzable in several Chibchan languages, including a final suffix. 
Together, these traits support a Chibchan origin of this etymon, which could correspond to a variety of maize introduced from the south. Formal linguistic analysis over a much larger dataset will be necessary to understand the early sharing patterns (of both inheritance and diffusion) between these two language groups, which may provide further clues about the movements of material culture and people we discuss here.
The paper and its abstract are as follows:
The genetic prehistory of human populations in Central America is largely unexplored leaving an important gap in our knowledge of the global expansion of humans. We report genome-wide ancient DNA data for a transect of twenty individuals from two Belize rock-shelters dating between 9,600-3,700 calibrated radiocarbon years before present (cal. BP). 
The oldest individuals (9,600-7,300 cal. BP) descend from an Early Holocene Native American lineage with only distant relatedness to present-day Mesoamericans, including Mayan-speaking populations. 
After ~5,600 cal. BP a previously unknown human dispersal from the south made a major demographic impact on the region, contributing more than 50% of the ancestry of all later individuals. This new ancestry derived from a source related to present-day Chibchan speakers living from Costa Rica to Colombia. Its arrival corresponds to the first clear evidence for forest clearing and maize horticulture in what later became the Maya region.
Douglas Kennett, et al., "South-to-north migration preceded the advent of intensive farming in the Maya region" 13 Nature Communications 1530 (March 22, 2022) (open access) (supplemental data here). See also reporting on this paper at Science.

European Languages Spoken In 600 CE

This map isn't particularly groundbreaking, new, or scholarly (although I can confirm from my previous knowledge that it is generally correct in broad outline, no doubt with some debatable points). But it is pretty and thought-provoking, providing information about this issue in an easily digestible form.

I am particularly struck by how small the geographic range of the Germanic languages was this late in history (but prior to the "migration period"). The Oghur language was part of the Turkic language family.


From here

Another map, suggested in the comments to this post (via comments on the source post), is reproduced below, although it is more of a political map than a strictly linguistic one.

The intentionally archaic-looking typesetting is harder to read, so I will describe it: the colored-in parts are West, East and South Slavic, respectively. The Avars are in the "island" surrounded by the Slavic tribes. The unshaded area near the Baltic Sea contains "Balts"; "Finns" are to the east of the Eastern Slavs; "Hungarians" are to the south of the Finns (although the Hungarian ethnicity as we know it didn't really exist in 600 CE, and the Hungarian language hadn't arrived in the region by then either). Franks are to the west of the West Slavs. Romans are in Anatolia.

Recent Experimental Tests Of General Relativity

Sabine Hossenfelder surveys a variety of recent tests of General Relativity, as well as recent ideas about how to test it that have not been conducted or completed yet. She opens with an insightful point:
I have to clarify that when I say “proving Einstein wrong”, I mean proving Einstein’s theory of general relativity wrong. Einstein himself has actually been wrong about his own theory, and not only once.

For example, he originally thought the universe was static, that it remained at a constant size. He changed his mind after learning of Hubble’s discovery that the light of distant galaxies is systematically shifted to the red, which is evidence that the universe expands. Einstein also at some point came to think that gravitational waves don’t exist, and argued that black holes aren’t physically possible. We have meanwhile found evidence for both.

I’m not telling you this to belittle Einstein. I’m telling you this because it’s such an amazing example for how powerful mathematics is. Once you have formulated the mathematics correctly, it tells you how nature works, and that may not be how even its inventor thought it would work. It also tells us that it can take a long time to really understand a theory.
Deur's work on gravity, which attempts to explain dark matter and dark energy phenomena with the self-interaction of gravitational fields in the context of orthodox General Relativity (albeit not in the manner it is conventionally applied), would, if correct, be an example of the idea that "it can take a long time to really understand a theory" even after it has been fully formulated mathematically and studied by extremely intelligent scientists like Einstein himself.

None of the scholarship that Hossenfelder reviews disproves General Relativity, despite well-motivated theoretical arguments that classical General Relativity should ultimately be replaced by a quantum gravity theory, which she recaps as follows:
General Relativity is now more than a century old, and so far its predictions have all held up. Light deflection on the sun, red shift in the gravitational field, expansion of the universe, gravitational waves, black holes, they were right, right, right, and right again, to incredibly high levels of precision. But still, most physicists are pretty convinced Einstein’s theory is wrong and that’s why they constantly try to find evidence that it doesn’t work after all.

The most important reason physicists think that general relativity must be wrong is that it doesn’t work together with quantum mechanics. General relativity is not a quantum theory, it’s instead a “classical” theory as physicists say. It doesn’t know anything about the Heisenberg uncertainty principle or about particles that can be in two places at the same time and that kind of thing. And this means we simply don’t have a theory of gravity for quantum particles. Even though all matter is made of quantum particles.

Let that sink in for a moment. We don’t know how matter manages to gravitate even though the fact that matter does gravitate is the most basic observation about physics that we make in our daily life.

This is why most physicists currently believe that general relativity has a quantum version, often called “quantum gravity”, just that no one has yet managed to write down the equations for it. Another reason that physicists think Einstein’s theory can’t be entirely correct is that it predicts the existence of singularities, inside black holes and at the big bang. At those singularities, the theory breaks down, so general relativity basically predicts its own demise.
Much of the literature is familiar but one comment in the discussion clarified a term that I had not previously understood very precisely:
According to Einstein, the speed of light in vacuum doesn’t depend on the energy of the light or its polarization. If the speed depends on the energy, that’s called dispersion, and if it depends on the polarization that’s called birefringence. We know that these effects both exist in medium. If we’d also see them in vacuum, that would mean Einstein was wrong indeed.

So far, neither effect has been seen in vacuum. She doesn't cite a March 21, 2022 preprint from Arefe Abghari's team on the topic, which used cosmic microwave background measurements to severely constrain birefringence, although a March 9, 2022 release from the Planck experiment finds evidence of non-zero birefringence at a little less than the 3 sigma level, contrary to an expected value of zero. There are reasons, however, that the theoretical expectation could be wrong without any deviation from general relativity, including, potentially, some of the same sorts of phenomena that can lead to the CMB peaks associated with dark matter phenomena.

A recent paper to which she links, constraining Lorentz invariance violation from quantum gravity and the photon mass to the limits of experimental accuracy, is:

D. J. Bartlett, H. Desmond, P. G. Ferreira, and J. Jasche, "Constraints on quantum gravity and the photon mass from gamma ray bursts" 104 Phys. Rev. D 103516 (November 17, 2021).
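For orientation, the kind of leading-order parameterizations typically used in such gamma ray burst analyses (generic illustrative forms, not necessarily the exact conventions of the Bartlett et al. paper, and with cosmological distance factors omitted) look like this:

```latex
% Linear Lorentz-invariance violation: the photon speed depends on energy,
% giving an arrival-time spread over a distance D.
v(E) \simeq c\left(1 \mp \frac{E}{E_{\rm QG}}\right), \qquad
\Delta t \simeq \frac{D}{c}\,\frac{\Delta E}{E_{\rm QG}}

% A non-zero photon mass instead slows low-energy photons:
v(E) \simeq c\left(1 - \frac{m_\gamma^{2} c^{4}}{2E^{2}}\right), \qquad m_\gamma c^{2} \ll E
```

The absence of energy-dependent arrival-time differences in gamma ray bursts then translates into lower bounds on the quantum gravity scale E_QG and upper bounds on the photon mass m_γ.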
The discrepancy between the speed of gravitational waves and the speed of light is limited by an August 2017 binary neutron star merger, but the 1.7 second discrepancy seen could come from the gravitational wave causing event and the light emitting event not occurring at precisely the same time.

She also cites a couple of recent papers on "black hole echoes" that argue that there is a single observed echo from that event, but again, this is limited by a weak understanding of the detailed sequencing of the event itself, particularly given that a true "black hole echo," in which the event horizon of a black hole has physical reality, would repeat multiple times rather than just once in a gravitational wave signal.

Quantum type interactions of gravity are hard to rule out or confirm since there is so much background quantum interaction from other sources.

Short-distance gravity measurements also see nothing amiss, but only down to distances (0.057 mm) that are not terribly short by quantum physics standards (which often involve femtometer or smaller scales):
Another rather straight-forward test is to check whether the one-over-R-squared law holds at very short distances. Yes, that’s known as Newton’s law of gravity, but we also have it in general relativity. Whether this remains valid at short distances can be directly tested with high precision measurements. These are done for example by the group of Eric Adelberger in Washington DC. . . . Their most precise measurement yet was published in 2020 and confirms that one-over-R-squared law is correct all the way down to 57 micrometers.
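Those short-distance experiments are usually interpreted in terms of a Yukawa-type modification of the Newtonian potential, a standard parameterization in this literature (stated here from memory rather than quoted from the cited papers):

```latex
% Yukawa parameterization of short-range deviations from Newtonian gravity:
% alpha is the strength of the new interaction relative to gravity, and
% lambda is its range.
V(r) = -\frac{G\,m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right)
```

A null result at a given separation translates into upper limits on the strength α as a function of the range λ, with gravitational-strength (|α| of order one) Yukawa forces excluded down to ranges of a few tens of micrometers.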
Scientists have also confirmed that, to the limits of experimental precision, objects with different masses fall at the same rate in a gravitational field (one of several aspects of the similar, but not identical, equivalence principles). In particular, a recent paper looked for a difference in how two isotopes of rubidium fall in Earth's gravitational field.

Hossenfelder omits, however, the really interesting and observationally motivated area in which general relativity is being tested, which is in the very weak gravitational fields of extremely massive sources at great distances from the source (where dark matter phenomena are observed). In this area, there is evidence consistent with deviations from the Newtonian approximation of general relativity conventionally applied at these scales, and with an "external field effect" that violates one of the equivalence principles, but not others.

Wednesday, March 23, 2022

English Isn't All That Old

Early Modern English, based upon a London dialect used by bureaucrats called "Chancery English," dates to about 1470 CE and is the earliest dialect of the English language that is really mutually intelligible to modern English speakers today.

An English language intelligible to modern English speakers didn't exist for most of the history of the British Isles that is attested in writing.

Most English speakers, even quite well-read ones, can't understand the French- and Latin-influenced Middle English of more than about 550 years ago (particularly diligent English students might read, in the original and with much struggle, the Canterbury Tales of Geoffrey Chaucer, from very late in the Middle English era), let alone the Old English of more than about 950 years ago, before Middle English came into being as a result of the Norman Conquest in 1066 CE.

Before Old English, which derives from Germanic dialects closely related to Frisian (spoken in what is now the Netherlands and northwestern Germany), arrived in England in the middle centuries of the first millennium of the common era, Celtic languages were spoken in the British Isles, both before the Latin of the Roman era and after it, and as a vernacular during the Latin era itself. Of course, the Celtic languages have continued to be spoken in Cornwall, Wales, Scotland and Ireland through the present, or at least, the very recent past.

It isn't entirely clear when the Celtic languages were first spoken in the British Isles. Celtic languages were spoken in the region no later than the Iron Age, ca. 700 BCE or so, but possibly as much as six hundred years earlier; either way, they came from a Central European source, although, as I've noted before, many place names (a.k.a. toponyms) in the British Isles have Punic (i.e. pre-Arabic Tunisian) origins.

But we don't really know what language the Bell Beaker culture derived people of the Bronze Age, who preceded the Celts in the region, spoke. There was almost surely a language shift from the language of the Neolithic-derived people to that of the Bell Beaker people, since there was an immense population genetic replacement at that time.

The Bell Beaker language may have been in the same macro-linguistic family as Celtic, as the Bell Beaker culture's geographic scope lines up fairly closely with the historical range of the Celtic languages, and the population genetic shifts in the region in the late Bronze Age and Iron Age were much more subtle than the Bell Beaker era near-replacement of populations, but we don't really know.

We can, however, be pretty comfortable in assuming that none of the languages spoken in the British Isles prior to the population replacement by Bell Beaker culture derived people were Indo-European languages, and we can't be certain that the Bell Beaker people spoke an Indo-European language either, although that possibility is very plausible.

Likewise, there was almost surely another language shift when the hunter-gatherer people of the region were largely replaced by Neolithic-era derived people, who were a mix of Anatolian farmers and European hunter-gatherers. The Neolithic language may have been in the same language family as Basque, but again, we really don't know.

The Mesolithic hunter-gatherer languages of the British Isles, the Neolithic language of the British Isles, and the Bronze Age Bell Beaker language of the British Isles are all lost, because they were never attested in writing and were completely replaced by later languages before any written records or accurate oral recollections preserved them.

The languages of the last European hunter-gatherers, the only hunter-gatherer languages of Europe that are attested, come from the Arctic and sub-Arctic regions and are Uralic languages, with roots, as the name suggests, in the Ural Mountains and the Siberia beyond, rather than on the Mediterranean and Atlantic coasts of Europe where the Mesolithic hunter-gatherers of the British Isles had their origins.

Tuesday, March 22, 2022

Rethinking the W and Z Masses

A new paper notes that the formulas used to convert experimental data into the masses of unstable particles like the W boson, the Z boson, and the top quark, the three shortest-lived particles known to physics, are approximations rather than exact conversions, and it uses an exact formula to revise these mass estimates.

This results in statistically significant, slightly lower W and Z boson masses, but the adjustments to the top quark mass and the Higgs boson mass are smaller than the uncertainties in the experimental measurements.

The bottom line conclusions of the paper (from the body text, references omitted, bracketed bold text inserted editorially for ease of reading) are as follows:

[The Z boson] 

The world-average values MZ = 91.1876 ± 0.0021 GeV and ΓZ = 2.4952 ± 0.0023 GeV can be used to derive the physical Z boson mass and width from Eqs. (23) and (24), 

mZ = 91.1620 ± 0.0021 GeV (26) 

ΓZ = 2.4940 ± 0.0023 GeV . (27) 

The physical Z boson mass is about 26 MeV less than the parameter MZ, a result we derived long ago; this is about ten times greater than the uncertainty in the mass. The Z boson width is the same as the parameter ΓZ within the uncertainty. This yields a Z boson lifetime of τZ = 2.6391 ± 0.0024 × 10−25 s. 

[The W boson] 

The world-average values MW = 80.379 ± 0.012 GeV and ΓW = 2.085 ± 0.042 GeV [6] yield the physical W boson mass and width 

mW = 80.359 ± 0.012 GeV (28) 

ΓW = 2.084 ± 0.042 GeV . (29) 

The physical W boson mass is about 20 MeV less than the parameter MW, which is nearly twice the uncertainty in the mass. 

The W boson width is the same as the parameter ΓW within the uncertainty, and yields τW = 3.158 ± 0.064 × 10−25 s. 

[The top quark] 

The top-quark mass and width are also extracted from experiment using the parameterization of Eq. (19). The world-average values are Mt = 172.76 ± 0.30 GeV and Γt = 1.42+0.19−0.15 GeV. 

The width is sufficiently narrow that these values are equal to the physical top quark mass and width well within the uncertainties. In addition, the physical top quark mass is ambiguous by an amount of order ΛQCD ∼ 200 MeV.

[The Higgs Boson] 
The Higgs boson width is expected to be so narrow (∼ 4 MeV) that the difference between the physical mass and the parameter MH is negligible.

In relative terms, the downward adjustment in the Z boson mass is about 1 part per 3,500, and the downward adjustment in the W boson mass is about 1 part per 4,000, so the phenomenological consequences of the adjustment are very modest. But the W boson mass is moved closer to the amount expected in a global electroweak fit.
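The size of these shifts can be checked with a back-of-the-envelope calculation, assuming that the quoted values of MZ, ΓZ, MW and ΓW come from the running-width Breit-Wigner parameterization conventionally used at the colliders (the paper's exact relations are its Eqs. (23) and (24); the expansion below is my own leading-order approximation, not taken from the paper):

```latex
% Leading-order relation between the running-width Breit-Wigner parameters (M, \Gamma)
% and the pole mass and width defined by \mu = m - i\,\Gamma_{\rm phys}/2,
% expanded in \gamma \equiv \Gamma/M:
m \simeq M\left(1 - \tfrac{3}{8}\gamma^{2}\right), \qquad
\Gamma_{\rm phys} \simeq \Gamma\left(1 - \tfrac{5}{8}\gamma^{2}\right)

% Z boson: \gamma = 2.4952/91.1876, so \Delta m_Z \simeq \tfrac{3}{8}\gamma^{2} M_Z \approx 26 MeV.
% W boson: \gamma = 2.085/80.379,   so \Delta m_W \simeq \tfrac{3}{8}\gamma^{2} M_W \approx 20 MeV.
% Lifetimes follow from \tau = \hbar/\Gamma_{\rm phys}, with \hbar = 6.582 x 10^{-25} GeV s.
```

Plugging in the numbers reproduces the 26 MeV and 20 MeV downward shifts, the essentially unchanged widths, and the quoted lifetimes, which is a useful sanity check on the paper's bottom-line results.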

The paper and its abstract are:

We show that the mass and width of an unstable particle are precisely defined by the pole in the complex energy plane, μ=m−(i/2)Γ, by using the defining relationship between the width and the lifetime, Γ=1/τ. We find that the physical Z boson mass lies 26 MeV below its quoted value, while the physical W boson mass lies 20 MeV below. We also clarify the various Breit-Wigner formulae that are used to describe a resonance.
Scott Willenbrock, "Mass and width of an unstable particle" arXiv:2203.11056 (March 21, 2022).

Monday, March 21, 2022

Exoplanets Defined

There is a new official definition of an exoplanet, roughly speaking, a planet in a solar system other than our own.
In antiquity, all of the enduring celestial bodies that were seen to move relative to the background sky of stars were considered planets. During the Copernican revolution, this definition was altered to objects orbiting around the Sun, removing the Sun and Moon but adding the Earth to the list of known planets. The concept of planet is thus not simply a question of nature, origin, composition, mass or size, but historically a concept related to the motion of one body around the other, in a hierarchical configuration. 
After discussion within the IAU Commission F2 "Exoplanets and the Solar System", the criterion of the star-planet mass ratio has been introduced in the definition of the term "exoplanet", thereby requiring the hierarchical structure seen in our Solar System for an object to be referred to as an exoplanet. Additionally, the planetary mass objects orbiting brown dwarfs, provided they follow the mass ratio criterion, are now considered as exoplanets. Therefore, the current working definition of an exoplanet, as amended in August 2018 by IAU Commission F2 "Exoplanets and the Solar System", reads as follows:

- Objects with true masses below the limiting mass for thermonuclear fusion of deuterium (currently calculated to be 13 Jupiter masses for objects of solar metallicity) that orbit stars, brown dwarfs or stellar remnants and that have a mass ratio with the central object below the L4/L5 instability (M/Mcentral < 2/(25 + √621) ≈ 1/25) are "planets", no matter how they formed.

- The minimum mass/size required for an extrasolar object to be considered a planet should be the same as that used in our Solar System, which is a mass sufficient both for self-gravity to overcome rigid body forces and for clearing the neighborhood around the object's orbit.

Here we discuss the history and the rationale behind this definition.
A. Lecavelier des Etangs, Jack J. Lissauer, "The IAU Working Definition of an Exoplanet" arXiv:2203.09520 (March 17, 2022) (Accepted for publication in New Astronomy Reviews).

The key bit of analysis in the body text is as follows:

2.5. The mass ratio

At the time of the previous amendments to the exoplanet definition in 2003, no planetary mass objects had been found in orbit about brown dwarfs, and such objects were not considered in the 2003 definition. Members of this class have subsequently been found, and they have typically been referred to as exoplanets. Most planetary mass objects orbiting brown dwarfs seem to fall cleanly into one of two groups: (1) very massive objects, with masses of the same order as the object that they are bound to and (2) much lower mass objects. The high-mass group of companions appear akin to stellar binaries, whereas the much less massive bodies appear akin to planets orbiting stars. There is a wide separation in mass ratio between these two groupings Figures 1 and 2, so any definition with a ratio between ∼1/100 and 1/10 would provide similar results in the classification of known objects. It is noteworthy that the limiting mass ratio for stability of the triangular Lagrangian points, M/Mcentral < 2/(25 + √ 621) ≈ 1/25, falls in the middle of this range. Moreover, this ratio is a limit based on dynamical grounds, which distinguish between star-planet couples where the star dominates and the planet can “clear the neighborhood around its orbit” (when the mass ratio is below 1/25), and pairs of objects where the more massive body does not dominate the dynamics to the extent that the less massive body can be to a good approximation considered to be orbiting about an immobile primary. 
Therefore, we proposed using this dynamically-based criterion as the dividing point. The triangular Lagrangian points are potential energy maxima, but in the circular restricted three-body problem the Coriolis force stabilizes them for the secondary to primary mass ratio (m2/m1) below 1/25, which is the case for all known examples in the Solar System that are more massive than the Pluto–Charon system. The precise ratio required for linear stability of the Lagrangian points L4 and L5 is m2/m1 < 2/(25 + √ 621) ∼ 1/25 (see Danby, 1988). If a particle at L4 or L5 is perturbed slightly, it will start to librate about these points (i.e., oscillate back and forth, without circulating past the secondary). From an observational point of view, mass ratios are commonly used in statistical studies of exoplanet discoveries by microlensing (e.g., Suzuki et al., 2016, 2018) or by transit photometry (e.g., Pascucci et al., 2018). It appears that the planet-to-star mass ratio is not only the quantity that is best measured in microlensing light curves analysis, but also it may be a more fundamental quantity in some aspects of planet formation than planet mass (Pascucci et al., 2018). It can be considered as a natural criterion to be used in the definition of the term exoplanet.

2.6. The question of unbound planetary mass objects

Motion relative to the “fixed” stars was the defining aspect of the ancient definition of the term “planet”. Motion about the Sun became a requirement for planethood as a result of the adoption of the heliocentric model of the Solar System. Orbiting a star (or a similar object) is a natural extension of this requirement for exoplanets. We therefore agreed to keep that requirement for an object to be considered as an exoplanet to be “orbiting” around a more massive object. The need to orbit around a star or an analogous massive object to be considered as a planet is effectively an extension of the Copernicus revolution. 
For planetary mass objects that do not orbit around a more massive central object, the term “sub-brown dwarf” has not been adopted in the usage by the community; rather, these objects are often referred to as “free floating planetary mass objects”. These two terms are nowadays considered as synonymous. An alternative to the rather ambiguous term “free floating”, to specifically underline the presence or absence of a central object, would be to use the term “unbound”. 
Note that neither of these terms accounts for the situation when the planetary mass object is orbiting a companion whose mass is below the deuterium-burning limit and/or the minimum mass ratio for stability of the triangular Lagrangian points, but the advantages of brevity in terminology may well dictate that one of these terms is nonetheless optimal.
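For concreteness, the numerical value of the L4/L5 stability threshold that appears in both the working definition and the discussion above works out to (just arithmetic on the quoted expression):

```latex
\frac{M}{M_{\rm central}} \;<\; \frac{2}{25 + \sqrt{621}}
\;=\; \frac{2}{25 + 24.92}
\;\approx\; 0.040
\;\approx\; \frac{1}{25}
```

So, roughly speaking, an object qualifies as an exoplanet under the new definition only if its companion is at least about twenty-five times more massive than it is (in addition to the object meeting the deuterium-burning mass cap).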

Tuesday, March 15, 2022

What Do Dark Matter Theorists Think The Big Questions Are?

A Snowmass 2021 paper on "Astrophysical and Cosmological Probes of Dark Matter" sets forth in its introduction what it sees as the big open questions from a dark matter particle perspective (from the introduction, bold emphasis mine, italics in headings from the source, with my commentary outside block quotes after each major question):
Is the Cold Dark Matter paradigm correct?

In the CDM paradigm, dark matter is collision-less and non-relativistic during structure formation. A natural consequence of this is the prediction of an abundance of low-mass dark matter halos down to ∼ 10^−6 M⊙.
Observations that provide information on the matter power spectrum at small scales and various redshifts, therefore, will play a pivotal role in confirming the CDM hypothesis. 
Evidence of small-scale power suppression could, for example, suggest that dark matter is warmer (i.e., not nonrelativistic) during structure formation, is not collision-less, is wavelike rather than particle-like, or underwent non-trivial phase transitions in the early Universe
As we will discuss, upcoming astrophysical surveys have the potential to start probing halo masses to much lower values and/or higher redshifts than previously accessible, opening the opportunity of definitively testing the CDM hypothesis.

We already have abundant evidence that the simple collisionless, particle-like, cold dark matter paradigm doesn't work.

Warm dark matter, self-interacting dark matter, wave-like dark matter (which overlaps to some extent with warm dark matter, axion-like dark matter, and fuzzy dark matter), and phase-shifting dark matter are the community's main responses to this data.

Is dark matter production in the early Universe thermal?

The observed relic abundance of dark matter can be explained through a thermal freeze-out mechanism. In this picture, dark matter is kept in thermal equilibrium with the photon bath at high temperatures through weak annihilation processes. Once dark matter becomes non-relativistic, dark matter is still allowed to annihilate, but the reverse process is kinematically forbidden. The continued annihilation of dark matter causes its comoving number density to be Boltzmann suppressed, until it freezes out due to Hubble expansion overcoming the annihilation rate. This process sets the present-day dark matter abundance. 
Importantly, the predicted abundance is sensitive to the detailed dark matter physics, including its particle mass as well as its specific interactions with the Standard Model. Weakly Interacting Massive Particles (WIMPs) provide a classic example of the freeze-out paradigm. In this case, a O(GeV–TeV) mass particle that is weakly interacting yields the correct relic abundance. 
As we will demonstrate, upcoming astrophysical surveys will have the opportunity to definitively test key aspects of the WIMP hypothesis by searching for the rare dark matter annihilation and decay products that arise from the same interactions that set its abundance in the early Universe. A combination of improved instruments and the so-far non-observation of WIMPs has also led to the exploration of probing dark matter candidates that are lighter or heavier than the canonical WIMP window, and which often have a non-thermal origin in the early Universe. This broadening of the possible dark matter candidates that one can search for in indirect detection will continue to be driven by the theory community.

WIMP dark matter is largely ruled out. Candidates lighter than GeV masses are more promising than heavier ones for reasons such as galaxy dynamics. The thermal approach is also muddled by the fact that we really don't understand baryogenesis or leptogenesis either. 

Is dark matter fundamentally wave-like or particle-like?

Model-independent arguments that rely on the phase-space packing of dark matter in galaxies have been used to set generic bounds on its minimum allowed mass. In particular, a fermionic dark matter candidate can have a minimum mass of ∼ keV, while a bosonic candidate can have a minimum mass of ∼ 10^−23 eV. Moreover, when the dark matter mass is much less than ∼ eV, its number density in a galaxy is so large that it can effectively be treated as a classical field. 
Oftentimes referred to as “axions” or “axion-like particles” (ALPs), these ultra-light bosonic states can have distinctive signatures due to their wave-like nature. The QCD axion, originally introduced to address the strong CP problem, is a particularly well-motivated dark matter candidate for which there are clear mechanisms for how to generate the correct abundance today. In this framework, the axion mass and coupling are fundamentally related to each other through the symmetry-breaking scale of the theory. As we show, upcoming searches for astrophysical axions will have the sensitivity reach to probe highly-motivated mass ranges for the QCD axion.

Wave-like dark matter means keV-scale warm dark matter if it is a fermion, and much, much lighter dark matter bosons if it is bosonic. Since gravity-based theories are equivalent to theories transmitted by bosons, the line between wave-like bosonic dark matter and gravity-based theories can be thin. Also, keep in mind that even gravitons, while they would have zero rest mass in the vanilla version, would still have non-zero mass-energy that gravitates on a scale comparable to axion-like dark matter bosons. The QCD axion, by the way, is not real and is an ill-motivated theory.
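The reason ultralight bosons behave like a classical wave on galactic scales can be seen from a rough de Broglie wavelength estimate (my own illustrative numbers, not taken from the Snowmass paper):

```latex
% de Broglie wavelength of a dark matter particle with galactic virial velocity:
\lambda_{\rm dB} \;=\; \frac{\hbar}{m v} \;=\; \frac{\hbar c}{(m c^{2})(v/c)}
\;\approx\; 0.2\ {\rm kpc}
\quad\text{for}\quad m c^{2} \sim 10^{-22}\ {\rm eV},\ v \sim 100\ {\rm km/s}
```

A single particle's wavelength is then comparable to the size of a dwarf galaxy, so the dark matter in a halo overlaps coherently and has to be treated as a field rather than as a collection of point particles; for keV-scale fermions, by contrast, the wavelength is microscopic and the operative limit is the phase-space (Pauli exclusion) bound mentioned in the quote.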

Is there a dark sector containing other new particles and/or forces?

In a generic and well-motivated theory framework, dark matter can exist in a “dark sector” that communicates with the Standard Model through specific portal interactions. 
Within the dark sector, there can be multiple new states, as well as new forces that mediate interactions between the dark particles. Recent theory work has demonstrated classes of dark sector models that yield the correct dark matter abundance, oftentimes for lower dark matter masses than expected for WIMPs. Dark sector models can lead to a rich phenomenology for both astrophysical and terrestrial dark matter searches, as we will discuss. Two properties of the dark sector where upcoming astrophysical surveys will be able to make decisive statements are the presence of self interactions between dark matter particles and new light degrees of freedom.

Self-interacting dark matter theories are fifth-force theories that no longer have an edge over modified gravity theories in Occam's Razor. 

The number of degrees of freedom in dark matter phenomena is very low, as we know from MOND-like theories that manage with just one fixed parameter over a wide range of applicability, from the scaling relation in galaxy clusters that implicates only one more, and also from the earlier failures of multi-type dark matter models (citations to which I have not been able to locate again, even though I clearly recall these early papers).

How will the development of numerical methods progress dark matter searches?

Given the sheer volume and complexity of data expected from astrophysical surveys in the upcoming decade, the development of effective observational and data analysis strategies is imperative. Novel machine learning and statistical tools will play an important role in maximizing the utility of these datasets. In particular, scalable inference techniques and deep learning methods have the potential to open new dark matter discovery potential across several frontiers. 
Another critical numerical component to harness the anticipated flood of astrophysical data in the next decade is the further development of cosmological and zoom-in simulations needed to interpret the survey results. We will comment on how such simulations are essential for understanding the implications of particular dark matter models on small-scale structure formation.

There really is an anticipated flood of new astrophysical data in the next decade, which will ground and bound BSM theories of dark matter, narrowing the field either to a null set or to a very narrow range of parameters for far fewer theories.

Also, the paper acknowledges that dark matter models currently fail massively with respect to small-scale structure formation. These theorists take it as an article of faith that better simulations will reveal a dark matter particle theory answer without much of a good basis to believe that. This is not going to come naturally or easily, if it is possible at all.

Also worth noting is the Windchime experiment. This would use tiny mechanical "wind chimes" to directly detect the gravitational effects of individual dark matter particles, overcoming the direct detection barriers associated with dark matter particle types that don't interact with ordinary matter via the Standard Model forces, a very vanilla dark matter hypothesis:

The absence of clear signals from particle dark matter in direct detection experiments motivates new approaches in disparate regions of viable parameter space. In this Snowmass white paper, we outline the Windchime project, a program to build a large array of quantum-enhanced mechanical sensors. The ultimate aim is to build a detector capable of searching for Planck mass-scale dark matter purely through its gravitational coupling to ordinary matter. In the shorter term, we aim to search for a number of other physics targets, especially some ultralight dark matter candidates. Here, we discuss the basic design, open R&D challenges and opportunities, current experimental efforts, and both short- and long-term physics targets of the Windchime project.

Monday, March 14, 2022

Persistent Tensions In Cosmology

A Snowmass 2021 contribution reminds us that there are severe tensions in LambdaCDM parameters that threaten the model itself.
In this paper we will list a few important goals that need to be addressed in the next decade, also taking into account the current discordances between the different cosmological probes, such as the disagreement in the value of the Hubble constant H0, the σ8 -- S8 tension, and other less statistically significant anomalies. 
While these discordances can still be in part the result of systematic errors, their persistence after several years of accurate analysis strongly hints at cracks in the standard cosmological scenario and the necessity for new physics or generalisations beyond the standard model. 
In this paper, we focus on the 5.0σ tension between the Planck CMB estimate of the Hubble constant H0 and the SH0ES collaboration measurements. 
After showing the H(0) evaluations made from different teams using different methods and geometric calibrations, we list a few interesting new physics models that could alleviate this tension and discuss how the next decade's experiments will be crucial. 
Moreover, we focus on the tension of the Planck CMB data with weak lensing measurements and redshift surveys, about the value of the matter energy density Ωm, and the amplitude or rate of the growth of structure (σ8, fσ8). 
We list a few interesting models proposed for alleviating this tension, and we discuss the importance of trying to fit a full array of data with a single model and not just one parameter at a time. Additionally, we present a wide range of other less discussed anomalies at a statistical significance level lower than the H0 -- S8 tensions which may also constitute hints towards new physics, and we discuss possible generic theoretical approaches that can collectively explain the non-standard nature of these signals.
Elcio Abdalla, et al., "Cosmology Intertwined: A Review of the Particle Physics, Astrophysics, and Cosmology Associated with the Cosmological Tensions and Anomalies" arXiv:2203.06142 (March 11, 2022).
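For readers unfamiliar with the notation, the S8 parameter that appears in this tension is conventionally defined as follows (a standard definition, stated here for convenience rather than quoted from the paper):

```latex
% S8 combines the matter fluctuation amplitude and the matter density:
S_{8} \;\equiv\; \sigma_{8}\,\sqrt{\Omega_{m}/0.3}
```

Here σ8 is the amplitude of matter fluctuations on 8 h⁻¹ Mpc scales and Ωm is the matter density parameter. Weak lensing surveys tend to prefer a somewhat lower S8 than the value inferred from the Planck CMB fit, which is the tension the abstract refers to.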

A New Telescope Proposal Promises A Lot

It is no secret that I am highly skeptical of the prevailing LambdaCDM paradigm in cosmology today. But I am also hopeful because the sheer torrent of new data coming in from multiple new state of the art observatories will keep theoretical proposals tethered to empirical evidence, providing the tools to rule out incorrect theories. 

The CMB-HD Collaboration is particularly ambitious and, if even half of its claims pan out, will be important in bounding theoretical proposals.
CMB-HD is a proposed millimeter-wave survey over half the sky that would be ultra-deep (0.5 uK-arcmin) and have unprecedented resolution (15 arcseconds at 150 GHz). Such a survey would answer many outstanding questions about the fundamental physics of the Universe. Major advances would be 
1) the use of gravitational lensing of the primordial microwave background to map the distribution of matter on small scales (k~10 h Mpc^(-1)), which probes dark matter particle properties. It will also allow 
2) measurements of the thermal and kinetic Sunyaev-Zel'dovich effects on small scales to map the gas density and velocity, another probe of cosmic structure. In addition, CMB-HD would allow us to cross critical thresholds: 
3) ruling out or detecting any new, light (< 0.1 eV) particles that were in thermal equilibrium with known particles in the early Universe,
4) testing a wide class of multi-field models that could explain an epoch of inflation in the early Universe, and 
5) ruling out or detecting inflationary magnetic fields. 
CMB-HD would also provide world-leading constraints on 6) axion-like particles, 7) cosmic birefringence, 8) the sum of the neutrino masses, and 9) the dark energy equation of state. 
The CMB-HD survey would be delivered in 7.5 years of observing 20,000 square degrees of sky, using two new 30-meter-class off-axis crossed Dragone telescopes to be located at Cerro Toco in the Atacama Desert. Each telescope would field 800,000 detectors (200,000 pixels), for a total of 1.6 million detectors.
CMB-HD Collaboration, "Snowmass2021 CMB-HD White Paper" arXiv:2203.05728 (March 11, 2022).

Friday, March 11, 2022

Crater In Greenland Not Associated With Younger Dryas Event

There is significant evidence to support the theory that a meteor strike caused the Younger Dryas event. 

This event turned a warming period after the Last Glacial Maximum into a thousand-year return to an ice age that destroyed the few-hundred-year-old Clovis culture, led to mass megafauna extinction in North America, and delayed by three thousand years the Neolithic Revolution that eventually emerged in the Fertile Crescent. The oldest known stone temple in the world, Gobekli Tepe, in what is now Turkey, appears to have images carved in stone that memorialize this impact.

Four years ago, scientists thought a crater in Greenland, found under its rapidly melting glacier, would be the smoking gun that would definitively prove that theory. 

But it turns out that it dated to only eight million years after an extra-terrestrial impact in the Gulf of Mexico led to the extinction of the dinosaurs and the rise of the mammals that replaced them. The genus Homo would not diverge from chimpanzees and bonobos for another 50 million years after the crater just found in Greenland formed.

So, the search for the impact site of the meteor that may have caused the Younger Dryas event continues.
In 2018, an international team of scientists announced a startling discovery: Buried beneath the thick ice of the Hiawatha Glacier in northwest Greenland is an impact crater 31 kilometers wide—not as big as the crater from the dinosaur-killing impact 66 million years ago, but perhaps still big enough to mess with the climate. Scientists were especially excited by hints in the crater and the surrounding ice that the Hiawatha strike was recent—perhaps within the past 100,000 years, when humans might have been around to witness it.

But now, using dates gleaned from tiny mineral crystals in rocks shocked by the impact, the same team says the strike is much, much older. The researchers say it occurred 58 million years ago, a warm time when vast forests covered Greenland—and humanity was not yet even a glimmer in evolution’s eye.
From Science citing this March 9, 2022 paper in Science Advances.

EDGES 21cm Result Questioned

Not so long ago, in results published in 2018, the EDGES radio telescope in Australia made an observation of a particular signal that strongly contradicted prevailing theoretical expectations about "cosmic dawn." A new paper argues, based upon another radio telescope's observations, that this interpretation of what EDGES saw was incorrect, because those observations could not replicate the EDGES signal.

This is particularly notable because the EDGES result, if correct, is one of the most damning challenges to the LambdaCDM Standard Model of Cosmology at the cosmology scale. If it has been misinterpreted, the case against the LambdaCDM model at the cosmology scale is significantly weakened (while its serious galaxy scale problems remain unresolved, and some independent cosmology scale problems, which are conceivably easier to remedy than the stark contradiction of the EDGES result, also remain in place).

The astrophysics of the cosmic dawn, when star formation commenced in the first collapsed objects, is predicted to be revealed by spectral and spatial signatures in the cosmic radio background at long wavelengths. The sky-averaged redshifted 21 cm absorption line of neutral hydrogen is a probe of the cosmic dawn. The line profile is determined by the evolving thermal state of the gas, radiation background, Lyman α radiation from stars scattering off cold primordial gas, and relative populations of the hyperfine spin levels in neutral hydrogen atoms. We report a radiometer measurement of the spectrum of the radio sky in the 55–85 MHz band, which shows that the profile found by Bowman et al. in data taken with the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) low-band instrument is not of astrophysical origin; their best-fitting profile is rejected with 95.3% confidence. The profile was interpreted to be a signature of the cosmic dawn; however, its amplitude was substantially higher than that predicted by standard cosmological models. Our non-detection bears out earlier concerns and suggests that the profile found by Bowman et al. is not evidence for new astrophysics or non-standard cosmology.
Singh, S., Nambissan T., J., Subrahmanyan, R. et al., "On the detection of a cosmic dawn signal in the radio background." Nature Astronomy (February 28, 2022). https://doi.org/10.1038/s41550-022-01610-5

The journal also has a story explaining the claims of the new paper for a wider audience. The EDGES group argues that the systematic error could just as easily be from the group behind the new paper as from their own observations.

Naturalness And Similar Delusions

"Naturalness" is not a real physics problem. The "hierarchy problem" and the "strong CP problem" and the "baryon asymmetry of the Universe problem", are likewise not real physics problems. These are just cases of unfounded conjectures about how Nature ought to be that are wrong.
At Quanta magazine, another article about the “naturalness problem”, headlined A Deepening Crisis Forces Physicists to Rethink Structure of Nature’s Laws. This has the usual problem with such stories of assigning to the Standard Model something which is not a problem for it, but only for certain kinds of speculative attempts to go beyond it. John Baez makes this point in this tweet:
Indeed, calling it a “crisis” is odd. Nothing that we really know about physics has become false. The only thing that can come crashing down is a tower of speculations that have become conventional wisdom.
James Wells has a series of tweets here, starting off with
The incredibly successful Standard Model does not have a Naturalness problem. And if by your criteria it does, then I can be sure your definition of Naturalness is useless.
He points to a more detailed explanation of the issue in section 4 of this paper.
My criticisms of some Quanta articles are motivated partly by the fact that the quality of the science coverage there is matched by very few other places. If you want to work there, they have a job open.

I share Woit's opinion that Quanta is, on average, one of the better sources of science journalism directed to educated laypersons in the English language.

A Proposal To Constrain Self-Interacting Dark Matter

Usually, I prefer to wait until an experiment or observation has results before discussing a proposal. But this time, the theoretical analysis in the paper, combined with the state of existing knowledge, is sufficient.

The key theoretical conclusion is that structure formation happens later in self-interacting dark matter models than it does with the Lambda CDM Standard Model of Cosmology's simple cold dark matter model.

If this theoretical analysis is correct, however, we can pretty much rule out self-interacting dark matter models, because structure formation has been observed to take place earlier than the LambdaCDM model predicts, as gravity-based models attempting to describe phenomena attributed to dark matter generically predict.

Observations of the high redshift universe provide a promising avenue for constraining the nature of the dark matter (DM). This will be even more true following the now successful launch of the James Webb Space Telescope (JWST). We run cosmological simulations of galaxy formation as a part of the Effective Theory of Structure Formation (ETHOS) project to compare the properties of high redshift galaxies in Cold (CDM) and alternative DM models which have varying relativistic coupling and self-interaction strengths. 
Phenomenologically, the interacting DM scenarios result in a cutoff in the linear power spectrum on small-scales, followed by a series of "dark acoustic oscillations" (DAOs). We find that DM interactions suppress the abundance of galaxies below M⋆∼10^8M⊙ for the models considered. The cutoff in the linear power spectrum generally causes a delay in structure formation relative to CDM. Objects in ETHOS that end up at the same final masses as their CDM counterparts are characterised by a more vigorous phase of early star formation. While galaxies with M⋆≲10^6M⊙ make up more than 60 per cent of star formation in CDM at z≈10, they contribute only about half the star formation density in ETHOS. These differences in star formation diminish with decreasing redshift. We find that the effects of DM self-interactions are negligible compared to effects of relativistic coupling (i.e. the effective initial conditions for galaxy formation) in all properties of the galaxy population we examine. Finally, we show that the clustering strength of galaxies at high redshifts depends sensitively on DM physics, although these differences are manifest on scales that may be too small to be measurable by JWST.
Ali Kurmus, et al., "The feasibility of constraining DM interactions with high-redshift observations by JWST" arXiv:2203.04985 (March 9, 2022) (Submitted to MNRAS).
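
As a purely qualitative illustration (my own toy parametrization, not anything from the paper) of the "cutoff in the linear power spectrum on small-scales, followed by a series of 'dark acoustic oscillations'" described in the abstract:

import numpy as np

# Qualitative toy: the ratio of an interacting-DM linear power spectrum to the
# CDM one, with a small-scale cutoff followed by damped oscillations.
# k_cut and k_dao are invented parameters in arbitrary units.
k = np.logspace(-2, 2, 500)            # wavenumber (arbitrary units)
k_cut, k_dao = 10.0, 5.0
ratio = np.cos(k / k_dao)**2 * np.exp(-2.0 * (k / k_cut)**2)

# ratio ~ 1 on large scales (small k), so big structures form much as in CDM;
# it oscillates and falls toward zero on small scales (large k), which
# suppresses low-mass halos and delays the earliest star formation.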

Tuesday, March 8, 2022

Deur Takes On The CMB

Deur's approach to gravity emphasizes gravitational field self-interactions in weak fields, which are generally neglected on the assumption that their aggregate effect is negligible. In this paper, that approach is used to explain the Cosmic Microwave Background power spectrum, the crowning achievement of the LambdaCDM model.
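
Schematically, and in a generic field-theory notation rather than the paper's own, the source of these self-interactions is that expanding the Einstein-Hilbert Lagrangian around flat space in a field h (with g = η + κh) produces an infinite tower of terms beyond the free, quadratic one:

% Schematic only; indices are suppressed and the normalization of kappa
% (kappa^2 proportional to G) depends on conventions.
\mathcal{L}_{\mathrm{EH}} \sim (\partial h)^2
  + \kappa\, h\,(\partial h)^2
  + \kappa^2\, h^2 (\partial h)^2
  + \cdots , \qquad \kappa \propto \sqrt{G}

The higher-order terms are the self-interactions that are generally neglected on the assumption that their aggregate effect is negligible; Deur's argument is that for systems like galaxies and galaxy clusters they are not.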

Figure 2 from the paper

Field self-interactions are at the origin of the non-linearities inherent to General Relativity. We study their effects on the Cosmic Microwave Background anisotropies. We find that they can reduce or alleviate the need for dark matter and dark energy in the description of the Cosmic Microwave Background power spectrum.
A. Deur, "Effect of the field self-interaction of General Relativity on the Cosmic Microwave Background Anisotropies" 
arXiv:2203.02350 (March 4, 2022).

The introduction in the body text explains:
The power spectrum of the Cosmic Microwave Background (CMB) anisotropies is a leading evidence for the existence of the dark components of the universe. This owes to the severely constraining precision of the observational data and to the concordance within the dark energy-cold dark matter model (Λ-CDM, the standard model of cosmology) of the energy and matter densities obtained from the CMB with those derived from other observations, e.g. supernovae at large redshift z. Despite the success of Λ-CDM, the absence of direct or indirect detection of dark matter particles is worrisome since searches have nearly exhausted the parameter space where candidates could reside. In addition, the straightforward extensions of particle physics’ Standard Model, e.g. minimal SUSY, that provided promising dark matter candidates are essentially ruled out. 
Λ-CDM also displays tensions with cosmological observations, e.g. it overestimates the number of dwarf galaxies and globular clusters or has no easy explanation for the tight correlations found between galactic dynamical quantities and the supposedly sub-dominant baryonic matter, e.g. the Tully-Fisher or McGaugh et al. relations. 
These worries are remotivating the exploration of alternatives to dark matter and possibly dark energy. To be as compelling as Λ-CDM, such alternatives must explain the observations suggestive of dark matter/energy consistently and economically (viz without introducing many parameters and fields). Among such observations, the CMB power spectrum is arguably the most prominent. 
Here we study whether the self-interaction (SI) of gravitational fields, a defining property of General Relativity (GR), may allow us to describe the CMB power spectrum without introducing dark components, or modifying the known laws of nature. GR’s SI already explains other key observations involving dark matter/energy: flat galactic rotation curves, large-z supernova luminosities, large structure formation, and internal dynamics of galaxy clusters, including the Bullet Cluster. It also explains the Tully-Fisher and McGaugh et al. relations. First, we recall the origin of GR’s SI and discuss when it becomes important as well as its overall effects.

Some other graphs from the conclusion to the paper pertinent to cosmology are below: 

Monday, March 7, 2022

Gravitons By Quantizing Space-Time

The spin-foam approach to quantum gravity belongs to the class of approaches, which also includes loop quantum gravity and a variety of other methods, that set out to quantize space-time rather than the gravitational force itself.

But, it turns out that "the continuum limit of spin foam dynamics does lead to massless gravitons."

The result is not at all obvious, and the analysis in the linked paper is challenging to follow. But it is also a result that is expected because both spin-foam approaches and graviton based quantum gravity approaches are trying to approximate general relativity. And, general relativity is widely believed to be the classical limit of a quantum theory with a massless spin-2 graviton.

It could be that, at some deep level, quantizing space-time and quantizing gravity in a more or less canonical way with a massless spin-2 graviton are equivalent, with each implying the other.

A New Bottom Quark Mass Determination

A new paper precisely estimates the bottom quark mass (in the MS-bar scheme, i.e. its mass evaluated at the energy scale of its own mass) to be 4180.2 ± 7.9 MeV, assuming the global average value of the strong force coupling constant at the Z boson energy scale of 0.1182.

The uncertainty in the strong force coupling constant's value is about ± 0.0010 (dimensionless), which translates into an additional uncertainty in the bottom quark mass determination of about 1.1 MeV. Combined in quadrature with the paper's 7.9 MeV uncertainty, this brings the total uncertainty in the estimate to about 8.0 MeV.

You could state the result without spurious accuracy as 4180(8) MeV. 

The Particle Data Group value is the same, but with a much more conservative margin of error of +30/-20 MeV. The FLAG19 value is 4198(12) MeV, which is consistent with the new paper's result at about 1.25 sigma. The FLAG21 value is 4203(11) MeV, which is consistent with it at about 1.7 sigma.
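
A quick back-of-the-envelope check of those consistency figures, combining the quoted uncertainties in quadrature:

from math import hypot

# Back-of-the-envelope check of the tensions quoted above, combining the
# quoted uncertainties in quadrature.
new_val, new_err = 4180.2, hypot(7.9, 1.1)   # MeV; combined error is ~8.0 MeV

for label, val, err in [("FLAG19", 4198.0, 12.0), ("FLAG21", 4203.0, 11.0)]:
    tension = abs(val - new_val) / hypot(err, new_err)
    print(f"{label}: {tension:.2f} sigma")   # prints ~1.24 and ~1.68 sigma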

This determination is based mostly upon precision measurements of the masses of excited bottomonium mesons, together with calculations in perturbative quantum chromodynamics (QCD), the strong force part of the Standard Model, using high energy physics methods rather than the lattice QCD methods that predominate in quark mass determinations at lower energies. The agreement of this perturbative QCD determination with the lattice QCD results suggests that the lattice QCD results are robust.

The two parts per thousand precision is impressive and is consistent with prior measurements.

Friday, March 4, 2022

Directly Measuring Gravitational Self-Interaction

This paper suggests a clever experimental method for investigating gravitational self-interaction and determining whether gravity is fundamentally quantum or classical. The test might be possible with a very precise double-slit experiment that could be performed in a fairly ordinary physics laboratory.
The Schrodinger-Newton equation has frequently been studied as a nonlinear modification of the Schrodinger equation incorporating gravitational self-interaction. However, there is no evidence yet as to whether nature actually behaves this way. This work investigates a possible way to experimentally test gravitational self-interaction. 
The effect of self-gravity on interference of massive particles is studied by numerically solving the Schrodinger-Newton equation for a particle passing through a double-slit. The results show that the presence of gravitational self-interaction has an effect on the fringe width of the interference that can be tested in matter-wave interferometry experiments. 
Notably, this approach can distinguish between gravitational self-interaction and environment induced decoherence, as the latter does not affect the fringe width. This result will also provide a way to test if gravity requires to be quantized on the scale of ordinary quantum mechanics.
Sourav Kesharee Sahoo, Ashutosh Dash, Radhika Vathsan, Tabish Qureshi, "Testing Gravitational Self-interaction via Matter-Wave Interferometry" arXiv:2203.01787 (March 3, 2022).
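
For reference, the single-particle Schrödinger-Newton equation in its standard form (the paper's numerical treatment of the double-slit geometry may differ in detail) couples the wave function to the Newtonian potential sourced by its own probability density:

% Standard single-particle form; conventions may differ from the paper's.
i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left[-\frac{\hbar^{2}}{2m}\nabla^{2}
  - G m^{2}\int \frac{|\psi(\mathbf{r}',t)|^{2}}{|\mathbf{r}-\mathbf{r}'|}\,d^{3}r'\right]\psi(\mathbf{r},t)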

The introduction to the body text of the paper explains that:
The emergence of classicality from quantum theory is an issue which has plagued quantum mechanics right from its inception. Schrödinger equation is linear, and thus allows superposition of any two distinct solutions. However, in our familiar classical world, the superposition of macroscopically distinct states, e.g., the state corresponding to two well separated distinct positions of a particle, is never observed. Taking into account environment induced decoherence, one may argue that pure superposition states do not survive for long, and the interaction with the environment causes the off-diagonal elements of the system reduced density matrix to vanish. The remaining diagonal terms are then interpreted as classical probabilities. However, decoherence is based on unitary quantum evolution and if one tried to explain how a single outcome results for a particular measurement, one will eventually be forced to resort to some kind of many worlds interpretation. 
Another class of approaches to address this issue invokes some kind of nonlinearity in quantum evolution, which may cause macroscopic superposition states to dynamically evolve into one macroscopically distinct state. Different theories attribute the origin of the nonlinearity to different sources, for instance, an inherent nonlinearity in the evolution equation, or gravitational self-interaction. 
Considerable effort has been put into finding ways to test any non-linearity which may lead to the destruction of superpositions. For example, an experiment in space was proposed, which would involve preparing a macroscopic mirror in a superposition state. The problem with such experiments, even if they are successfully realized, is that it is difficult to rule out the role of decoherence in destroying the superposition. What is sorely needed, is an effect which can distinguish between the effect of nonlinearity of the Schrödinger equation and the effect of environment-induced decoherence. This is the issue we wish to address in this work. 

In 1984 L. Diosi introduced a gravitational self-interaction term in the Schrödinger equation in order to constrain the spreading of the wave packet with time. The resulting integro-differential equation, the Schrödinger-Newton (S-N) equation compromised the linearity of quantum mechanics but provided localized stationary solutions. It was R. Penrose who used the S-N equation to unravel quantum state reduction phenomenon. He proposed that macroscopic gravity could be the reason for the collapse of the wave function as the wave packet responds to its self gravity. The effect of gravity and self-gravity on quantum systems have been studied by several authors. 

The coupling of classical gravity to a quantum system also addresses the question of whether gravity is fundamentally quantum or classical. While theories of semiclassical gravity have faced several theoretical objections, the ultimate test would be experimental. In such a context, providing an experimental route to test the effect of S-N non-linearity in a simple quantum mechanical context is valuable. 

In the present work, we focus on evolution of a superposition state through the non-linear Schr¨odinger-Newton equation. Any signature of non-linearities due to the gravitational self-interaction in the variation of fringe width with mass should give us an experimental handle on separating the effect of decoherence from gravitational state reduction.
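
To make the proposal concrete, here is a minimal one-dimensional sketch, in dimensionless units (hbar = m = 1), of the kind of split-step calculation the paper describes: two Gaussian packets standing in for the slits, evolved under a Schrödinger equation with a toy self-gravity term sourced by |ψ|². Every parameter, and the softened one-dimensional kernel, is invented for illustration and is not taken from the paper; setting the coupling g to zero recovers ordinary free evolution, so the fringe widths of the two runs can be compared.

import numpy as np

# Toy 1D split-step (Strang) solver for a Schrodinger-Newton-like equation in
# dimensionless units (hbar = m = 1).  Every number here is illustrative only.
N, L = 1024, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

# Two Gaussian packets standing in for the two slits.
s, d = 0.4, 4.0
psi = np.exp(-(x - d / 2)**2 / (2 * s**2)) + np.exp(-(x + d / 2)**2 / (2 * s**2))
psi = psi.astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

g = 0.05                                                  # toy self-gravity coupling; g = 0 is ordinary QM
kernel = -1.0 / (np.abs(x[:, None] - x[None, :]) + dx)    # softened 1/|x - x'| kernel (toy)

def self_potential(psi):
    """Newtonian-like potential sourced by |psi|^2 (direct O(N^2) quadrature)."""
    return g * (kernel @ np.abs(psi)**2) * dx

dt, steps = 0.002, 2000
kinetic = np.exp(-0.5j * k**2 * dt)                       # exact free propagator for one step
for _ in range(steps):
    psi *= np.exp(-0.5j * self_potential(psi) * dt)       # half potential kick
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))          # kinetic drift
    psi *= np.exp(-0.5j * self_potential(psi) * dt)       # half potential kick

# |psi|^2 now shows interference fringes; comparing their width for g = 0.05
# versus g = 0 mimics the comparison the paper proposes.
fringes = np.abs(psi)**2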