Wednesday, August 11, 2021

Tree Diversity

The eastern part of the continental United States has far more species of "wild" trees than the western part does.

This is unsurprising when it comes to the Great Plains, but it isn't common knowledge when it comes to the mountains (although high desert certainly isn't an optimal environment for tree species diversity) or the Pacific Coast.

The Deep South has the richest diversity of tree species in the continental United States.



Gravitational Self-Interaction In Lieu Of Dark Matter For Large Scale Structure

Deur's approach, gravitational field self-interaction in lieu of dark matter, succeeds once again, this time in describing large scale structure formation.
We check whether General Relativity's field self-interaction alleviates the need for dark matter to explain the universe's large structure formation. We found that self-interaction accelerates sufficiently the growth of structures so that they can reach their presently observed density. No free parameters, dark components or modifications of the known laws of nature were required. This result adds to the other natural explanations provided by the same approach to the, inter alia, flat rotation curves of galaxies, supernovae observations suggestive of dark energy, and dynamics of galaxy clusters, thereby reinforcing its credibility as an alternative to the dark universe model.
Alexandre Deur, "Effect of gravitational field self-interaction on large structure formation" arXiv: 2108.04649 (July 9, 2021) (accepted for publication in Phys. Lett. B) DOI: 10.1016/j.physletb.2021.136510

The conclusion of the paper explains that:
The consistency of the standard Λ-CDM model of the universe in explaining many observations that would be otherwise problematic is a compelling argument for the existence of dark matter and dark energy. 
Yet, there are good reasons for studying alternatives to Λ-CDM, e.g. the lack of detection of dark particles, the dwindling support from theories beyond the standard model of particle physics, observations that challenge the dark matter model such as [19], lack of observations of Λ-CDM predictions such as the dwarf galaxy problem, or the Hubble tension [20]. The credibility of an alternative approach is enhanced if, like for Λ-CDM, it can consistently explain the otherwise puzzling cosmological observations. 
One alternative approach proposes that these observations are explained by the self-interaction of gravitational fields in General Relativity. It naturally explains the galactic rotation curves [6]-[8], the supernovae observations suggestive of dark energy [10], the tight empirical relation between baryonic and observed accelerations [9, 19], the dynamics of galaxy clusters [6] and the Tully-Fisher relation [6, 10, 16]. 
The explanation is natural in the sense that a similar phenomenology is well-known in the context of QCD, a fundamental force whose Lagrangian has the same structure as that of General Relativity. Crucially, no free parameters are necessary, nor exotic matter or fields, nor modifications of the known laws of nature. 
In this article, we checked whether the approach also explains the formation of large structures. We found that field self-interaction strengthens sufficiently the gravitational force so that the small CMB inhomogeneities can grow to the density presently observed. 
Again, no free parameters were needed: the function that globally quantifies the effect of field self-interaction had been previously determined in Ref. [10] in the context of the a priori unrelated topic of dark energy.
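For orientation, here is a minimal sketch (mine, not the paper's computation) of the baseline this result is measured against: in standard cosmology, a small density perturbation δ in a matter-dominated universe obeys the textbook linear growth equation, whose growing mode scales in proportion to the scale factor a. A CMB-scale contrast of roughly 10⁻⁵ at recombination therefore grows only to roughly 10⁻² today, far short of the order-unity contrast needed for collapsed structures; that is the gap which dark matter fills in ΛCDM and which field self-interaction fills in Deur's approach.

```python
# Sketch (mine, not Deur's): integrate the textbook linear growth
# equation for a matter-dominated (Einstein-de Sitter) universe,
#   d²δ/da² + (3/(2a)) dδ/da - (3/(2a²)) δ = 0,
# whose growing mode is δ ∝ a.
from scipy.integrate import solve_ivp

def growth(a, y):
    delta, ddelta_da = y
    return [ddelta_da, -1.5 * ddelta_da / a + 1.5 * delta / a**2]

a0, a1 = 1.0 / 1100.0, 1.0            # recombination -> today
delta0 = 1e-5                         # CMB-scale density contrast
sol = solve_ivp(growth, (a0, a1), [delta0, delta0 / a0], rtol=1e-8)
print(f"density contrast today ~ {sol.y[0, -1]:.1e}")   # ~1e-2, needs ~1
```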

Commentary

Deur's work is, in my humble opinion, by far the most plausible and credible explanation of the phenomena attributed to "dark matter" and "dark energy", and the most promising foundation from which to develop a theory of quantum gravity. Dark matter is the only known phenomenon for which there seems to be overwhelming evidence that core theory (i.e. the Standard Model plus General Relativity as conventionally applied) definitely falls short. The mass of the Higgs boson, first observed in 2012, for example, is at a value that allows the Standard Model to produce sensible predictions all of the way up to Big Bang energies.

Multiple factors distinguish Deur's approach from all of the dark matter, dark energy, and modified gravity alternatives; strictly speaking, his approach isn't any of these things.

In Deur's view, "dark matter" and "dark energy" are simply the product of disregarding gravitational field self-interactions in weak gravitational fields, in favor of Newtonian approximations of General Relativity, in complex systems of galactic scale and larger. This has been done rather thoughtlessly and without much rigorous analysis, even though it is inappropriate in such systems and overlooks important observable consequences of the non-linear part of General Relativity in non-spherically symmetric systems of galactic scale or larger.

Deur reproduces the successes of MOND, while curing its phenomenological defects in galaxy clusters and other relativistic circumstances. 

Deur provides an exact equation for MOND effects in particular types of galaxies, rather than an empirically fit toy model approximation, and provides a theoretical basis for MOND behavior derived from a theory of gravity that has already worked extraordinarily well for more than a century in the face of myriad strong field observational tests.

Deur does all of this with just one universal, experimentally measured parameter, in addition to Newton's constant. This constant is determinable, in principle, from first principles (although he hasn't actually done that derivation). Meanwhile, Deur removes one of the experimentally measured physical constants, the cosmological constant, from the ranks of the fundamental constants of the "core theory" of the Standard Model plus General Relativity, thereby improving on the status quo.

Thus, Deur manages to fit all of the observational data to which his approach has been applied with no free parameters or equation terms not already fixed by General Relativity a century ago. In contrast, the best dark matter particle theory simulations require sixteen finely tuned free parameters to match what we observe.

Deur eliminates the roughly 95% of the mass-energy of the universe that is attributed to the dark sector, with ΛCDM currently estimating that the universe is about 68% dark energy and 27% dark matter (although the dark energy component may be greatly overestimated due to potential systematic error). This could also help solve the "flatness problem."

Deur's work eliminates the theoretical motivation to find primordial black holes, a search that has so far come up completely empty, without identifying even one, although a few small corners of parameter space remain to be closed before they can be ruled out as dark matter candidates or as actual phenomena at all.

Deur's cosmology can explain everything we observe entirely with Standard Model fundamental particles, with one exception that isn't, strictly speaking, required. He calls for only one particle beyond the Standard Model, and then only if his approach is expressed as a quantum gravity theory: the absolutely plain vanilla massless spin-2 graviton that couples to particles in proportion to their mass-energy with a strength set by Newton's constant, which every quantum gravity researcher predicts. But he doesn't even require that General Relativity be formulated as a quantum gravity theory to achieve his results.

Deur solves, in an elegant way that no one else I am aware of has even seriously attempted, the profound mass-energy conservation problem posed by general relativity with a cosmological constant, or by alternative dark energy theories. His may be the only possible solution to observed dark energy phenomena that conserves mass-energy. Everyone else has simply been content to make one exception to the conservation of mass-energy, or to look for systematic error in estimating it. He actually solves the problem.

Like modified gravity theories generally, Deur's approach lacks the many serious flaws of the various dark matter particle theories, which seem to grow in number every few months.

In Deur's approach, the non-detection of dark matter particles is expected. The close correlation between baryonic matter content and apparent dark matter flows directly from a formula, with the observed deviations from the scaling relations attributable to the geometry of the mass distributions. The tendency of satellite galaxies to fall in a plane with spiral galaxies is explained. The enhanced attraction of wide binary stars is explained. The apparent lack of dark matter suggested by 21cm data is explained. The impossible early galaxy problem is resolved, while correct levels of large scale structure are still produced in recent times. The behavior of galaxy cluster collisions like the Bullet Cluster is no longer problematic (it is currently a problem both in dark matter particle theories, where it is too improbable, and in some, but not all, modified gravity theories).

While he hasn't done it yet, Deur's approach should almost surely be able to reproduce the Cosmic Microwave Background radiation spectrum that a phenomenologically extremely similar relativistic MOND theory has already been able to reproduce. 

Deur's approach explains why inferred dark matter halos do not fit the NFW distribution that they should in ΛCDM theory. It also explains the "cosmic coincidence" issue.

Eighteen years have passed since Deur's first preprint on this approach was published in September of 2003. He has published nine peer reviewed scientific journal articles on the topic since then, two with co-authors, and has another co-authored article that looks likely to be published. Essentially no one else has cited his work or built upon it, but it is also true that in all that time, not a single published article or preprint comment has poked a hole in his analysis.

In contrast, many other attempts by outsiders to the sub-field of astrophysics to explain dark matter phenomena with General Relativity, or to explain dark energy phenomena with General Relativity without a cosmological constant, such as a recent attempt to explain spiral galaxy rotation curves with gravitomagnetic effects in galaxies, have been quickly shot down.

Two of Deur's most recent articles reach his conclusions from ordinary classical General Relativity, rather than from the quantum gravity formulation of General Relativity that originally motivated his analysis. This makes the claims in his papers less extraordinary at a theoretical level, even though his approach completely upsets the modern cosmology paradigm.

The resistance of the scientific establishment to Deur's work is understandable. Deur's primary professional experience and training is as a QCD physicist, not an astrophysicist or cosmologist.

Distinguished general relativity scholars have stated that General Relativity shouldn't matter in these systems, and that gravitational self-interactions shouldn't be important (without doing analysis rigorous enough to confirm this without loopholes for non-spherically symmetric systems), to the point that established researchers in astrophysics assumed this was a dead end that didn't bear serious investigation.

If Deur is right, every single dark matter particle theorist, and major collaborations like the Planck collaboration, have been fundamentally barking up the wrong tree, pursuing what amount to epicycles; even the modified gravity theorists have been somewhat wrong in believing that gravity had to be modified, when it was right all along but misapplied.

Essentially every astronomy and cosmology observation in the last half century that was previously interpreted in terms of the leading ΛCDM theory of cosmology, or the cosmological constant, or another dark matter particle theory, or an incorrect modified gravity theory, has to be revisited and reinterpreted. And the analysis of the very complex system of the universe using Deur's approach, as this most recent paper illustrates, is a lot more subtle and tricky to conduct analytically than the analysis using conventional ΛCDM theory.

General Relativity without a cosmological constant is also significantly easier to formulate as a quantum gravity theory than General Relativity with one.

The End Of Fundamental Physics?

Without unexplained phenomena to describe, theoretical work proposing dark matter candidates and modifications to gravity, and support for experimental searches for them, would fade to a slight simmer.

Dark matter candidates are also the strongest observational motivation for particle physics searches for beyond the Standard Model fundamental particles, and that motivation would disappear as well.

Deur's work isn't the only reason, however, for theorists who propose wild new beyond-the-Standard-Model particles or forces, which have been spewing out in arXiv preprints in a steady flow for years, to be disheartened.

After more than a decade of operation, the Large Hadron Collider still hasn't found any meaningful hints of new fundamental particles or forces other than the long predicted Standard Model Higgs boson, despite exploring energy scales much higher than any previous experiment. 

Lepton universality violations are the only serious anomalies that remain outstanding at this point, and for reasons that I have expressed in previous posts, I think it is likely that these are due to look elsewhere effects, to unrecognized systematic error, or to incorrectly modeled theoretical predictions (with this last option being most likely).

Furthermore, I noted in an April 7, 2021 blog post, reporting the latest muon g-2 measurement:

[A] Lattice QCD collaboration known as BMW released a new paper in Nature that concludes . . . that the leading order hadron vacuum polarization calculation which is the dominant source of theoretical error in the Standard Model prediction should be calculated in a different manner that turns out to be consistent to within 1.6 sigma of the combined muon g-2 measurement (and to within 1.1 sigma of the Fermilab measurement) and suggests that the Standard Model is complete and requires no new physics. Meanwhile another preprint announced an improved calculation of the hadronic light by light contribution to the Standard Model prediction that also moves the prediction closer to the experimental value[.]

Another theoretical group, using different methods, came up with a different prediction for the value of muon g-2 that is in significant tension with the new Fermilab measurement. But I personally have almost no doubt that the BMW calculation, and the other improved calculation announced the same day, are the correct ones. This leaves very little room for new physics at energy scales that can be experimentally probed in the foreseeable future.

Most consequentially, these developments, taken together, essentially put nails in the coffin of supersymmetry theories and with them, string theory, which needs to have a supersymmetry theory as its low energy effective theory. Scientists looked long and hard in all the right places for evidence of supersymmetry and came up empty handed. This dominant paradigm in the theoretical physics community is breathing its dying gasps.

Deur's work, and the latest muon g-2 and LHC results, leave "core theory" reigning supreme. It may take another generation for theorists to stop proposing new physics to explain phenomena that these developments leave us with no need to explain. But the writing is already on the wall.

This doesn't entirely leave physicists chasing the ultimate fundamentals with nothing to do (and, of course, there are plenty of non-fundamental physics questions like those raised by condensed matter physics, fluid dynamics, hadron physics, nuclear physics, and the star, planet and galaxy formation process, that are left to answer). 

But these developments should refocus the fundamental physics sub-discipline seeking the ultimately complete laws of Nature on: (1) the source of the fundamental constants of the Standard Model, which seem to have a pattern to them (I suspect an extension of Koide's rule and the LP&C relationship, in the context of electroweak unification theory, are key elements); (2) hammering out the details of neutrino physics, and especially the mechanism by which neutrinos acquire their mass (which I strongly suspect is not Majorana in nature); (3) figuring out what is behind the apparent observations of lepton universality violations (which I suspect will disappear with further analysis and experimental work); and (4) matter creation (i.e. baryogenesis and leptogenesis), which I also think has a relatively straightforward explanation that isn't really contrary to Standard Model physics or General Relativity (i.e. an anti-matter dominated universe expanding outward in the opposite direction in time from our own post-Big Bang universe).

Inflation Considered

Pretty much the only gravitational element of modern cosmology upon which Deur's approach remains agnostic is cosmological inflation (which isn't a consensus view even now in cosmology, and which comes in hundreds of different flavors, at least dozens of which are still potentially consistent with observations).

In physical cosmology, cosmic inflation, cosmological inflation, or just inflation, is a theory of exponential expansion of space in the early universe. The inflationary epoch lasted from 10⁻³⁶ seconds after the conjectured Big Bang singularity to some time between 10⁻³³ and 10⁻³² seconds after the singularity. Following the inflationary period, the universe continued to expand, but at a slower rate. The acceleration of this expansion due to dark energy began after the universe was already over 7.7 billion years old (5.4 billion years ago). . . .

It was developed further in the early 1980s. It explains the origin of the large-scale structure of the cosmos. Quantum fluctuations in the microscopic inflationary region, magnified to cosmic size, become the seeds for the growth of structure in the Universe. Many physicists also believe that inflation explains why the universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the universe is flat, and why no magnetic monopoles have been observed.
The absence of magnetic monopoles is a non-problem; it merely rules out a rubbish class of theories that had no observational support in the first place. The other issues need to be revisited afresh.

Deur's current paper traces cosmology from roughly 370,000 years after the Big Bang, the era known as "recombination", while inflation pertains only to the first tiny fraction of a second after the Big Bang, so it isn't obvious whether Deur's approach would change the inflation analysis to date. But, as this paper shows, it does already move the time frame at which dark energy phenomena begin to emerge and become relevant back to about 1 billion years after the Big Bang, which is much earlier than in the current paradigm.

I wouldn't be surprised, however, if once the dust settled and all existing data was reinterpreted in light of Deur's work, that cosmological inflation turns out to be unnecessary to explain our observations (although this is merely my own rank conjecture).

Tuesday, August 10, 2021

Australia Once Had A Huge Flying Reptile

[Image: silhouette of the newly described Australian pterosaur (labeled "A"), with a human form for scale; it did not co-exist with humans.]

Australia's largest flying reptile has been uncovered, a pterosaur with an estimated seven-metre wingspan that soared like a dragon above the ancient, vast inland sea once covering much of outback Queensland. . . .

"It's the closest thing we have to a real life dragon," Mr Richards said.

"The new pterosaur, which we named Thapunngaka shawi, would have been a fearsome beast, with a spear-like mouth and a wingspan around seven metres. . . . 

"This thing would have been quite savage.

"It would have cast a great shadow over some quivering little dinosaur that wouldn't have heard it until it was too late."

Mr Richards said the skull alone would have been just over one metre long, containing around 40 teeth, perfectly suited to grasping the many fishes known to inhabit Queensland's no-longer-existent Eromanga Sea.


[Image: Australia 100 million years ago.]

From here via Science Daily.

Friday, August 6, 2021

The Grave Of An Intersex Warrior In Finland

A single individual's tomb in Finland, in the municipality of Hattula, radiocarbon dated to between 1040 and 1174 CE, contained remains buried with prestige weapons usually associated with men, but also jewelry and other grave goods usually associated with women (via Bernard's Blog in French).

Investigators considered the possibility that it was the grave of a man and a woman, but there was only one set of remains, and the grave was too small for two. They also considered the possibility that it could have been a woman who held a leadership position, for whom the weapons and male-associated grave goods were symbols of authority rather than tools that the decedent used in life.

The bones themselves were too decomposed to determine the decedent's sex, but two femurs yielded some ancient DNA evidence. As Bernard relates:
there was very little preserved DNA and the authors were only able to test the sex of the individual. The results showed that this individual had an aneuploid karyotype: XXY (Klinefelter syndrome).
This syndrome is found in one out of 1,000 to 2,000 births, so while it is rare, it is not an exceptional genetic condition; it is observed now and then.

Bernard identifies three other cases in the last three years in which ancient DNA indicated either a chromosomal abnormality (aneuploidy), or a gender identity suggested by grave goods that is contrary to what the person's genes would suggest.

Klinefelter syndrome somewhat increases the risk of premature death, but it is hardly a death sentence; people who have the condition usually live to adulthood and often do not discover that they have it until then. According to Wikipedia (linked above):
The individual is then of a masculine character, but infertile. . . .

Under the name of Klinefelter syndrome are grouped all or part of the following symptoms, a variability of expression often being observed (and not all of life's problems can be linked to this syndrome): height on average greater than that of siblings, possible delay in puberty, possible learning disabilities in language or reading during childhood, smaller testicles from puberty, possibly low body hair in adolescence if there is a lack of testosterone, lack of muscle tone, development of the mammary glands (gynecomastia), brittle tooth enamel, and osteoporosis in adulthood.

The atypical expression of this syndrome therefore explains the frequent delay in its diagnosis, which often occurs only as part of an infertility workup.
In other words, people with this genetic type not infrequently present as intersex individuals who don't fit nearly as neatly into categorization as male or female as the vast majority of people do.

It seems clear that this medieval Finnish warrior presented in that way, and nonetheless lived a life ending with recognition as a high status member of this community.

Wednesday, August 4, 2021

ATLAS Sees Mild Irregularities In Higgs Related Processes

Lubos Motl calls attention to two new pre-prints from the ATLAS experiment at the Large Hadron Collider (LHC) (here and here).

One paper considers the possibility that a Higgs boson decaying to four leptons sometimes does so via an intermediate pair of spin-0 or spin-1 bosons (or a mixed pair) with masses in the 1-60 GeV range. The abstract of the paper states: "The data are found to be consistent with Standard Model expectations." But Lubos notes that the body text shows a 2.5 sigma local excess at 28 GeV. This is a mass at which an earlier report from the CMS experiment at the LHC also found weak evidence for a resonance, in dimuons produced from a decaying Higgs boson.

The other paper notes a 1000 GeV mass resonance with 3 sigma local significance, but only 2 sigma global significance, in a search for Higgs boson pair production in events with two b-jets and two τ-leptons with the LHC at full power. The conclusion of the paper states:

The data are found to be compatible with the background-only hypothesis, with the largest deviation being found in the search for resonant H H production at mass of 1 TeV, which corresponds to a local (global) significance of 3.0 σ (2.0 +0.4 −0.2 σ).

In my view, these are both probably nothing more than statistical flukes. The LHC does an immense number of experiments looking at Higgs boson formation and decay and some of them, inevitably, are going to be further from the mean predicted result than expected simply due to random chance. This is called the "look elsewhere effect" and is the reason that there is a difference between the local significance of an experimental result and its global significance.
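For a toy illustration of that distinction (my sketch, not the ATLAS statistical procedure, and the bin counts are made up for illustration): treat the search as N independent places to look, and ask how often at least one of them fluctuates up to a given local significance.

```python
# Toy look-elsewhere effect: the chance that at least one of N
# independent mass bins fluctuates to a given local significance.
from scipy.stats import norm

z_local = 3.0                        # local significance in sigma
p_local = norm.sf(z_local)           # one-sided local p-value
for n_bins in (1, 10, 100):          # hypothetical number of places "looked"
    p_global = 1 - (1 - p_local) ** n_bins
    print(f"N={n_bins:3d}: global significance ~ {norm.isf(p_global):.1f} sigma")
```

With 100 effective bins, a 3 sigma local excess deflates to roughly 1 sigma globally, which is why the global numbers quoted in these papers are always smaller than the local ones.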

I am also not impressed that either resonance is particularly well motivated theoretically, although there are certainly possible beyond the Standard Model theoretical explanations for these resonances out there.

But, if there is additional stronger evidence of these resonances as more data is collected, these papers could turn out to be the first hints of beyond the Standard Model physics.

Tuesday, August 3, 2021

Co-Inventor Of CKM Matrix Passes

One week ago, Toshihide Maskawa (81) died of cancer; see Physics World. He devised the CKM matrix with Makoto Kobayashi and shared the 2008 Nobel Prize in Physics with Yoichiro Nambu (with symmetry breaking as the common theme).

Via Lubos Motl.

The CKM matrix is a key component of the electroweak part of the Standard Model. Four of the experimentally determined constants of the Standard Model of Particle Physics are the parameters of the CKM matrix. It sets forth the probability of one kind of quark transforming into another kind of quark via a W boson interaction. It is also the only source of charge parity (CP) violation in the Standard Model (apart from possible CP violation in the parallel PMNS matrix, which describes neutrino oscillation).
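As a concrete sketch of how few knobs that is (mine, with approximate published values used purely for illustration), the standard parameterization builds the whole 3×3 matrix from three mixing angles and one CP-violating phase, and |V_ij|² then gives the relative probability of an i → j quark transition at a W vertex:

```python
# Standard parameterization of the CKM matrix from three mixing
# angles and one CP phase (values are approximate, for illustration).
import numpy as np

s12, s13, s23 = 0.2250, 0.0037, 0.0418    # sines of the mixing angles
c12, c13, c23 = (np.sqrt(1 - s**2) for s in (s12, s13, s23))
e = np.exp(-1j * 1.14)                     # CP phase factor exp(-i*delta)

V = np.array([
    [ c12*c13,                   s12*c13,                  s13*e  ],
    [-s12*c23 - c12*s23*s13/e,   c12*c23 - s12*s23*s13/e,  s23*c13],
    [ s12*s23 - c12*c23*s13/e,  -c12*s23 - s12*c23*s13/e,  c23*c13],
])

print(np.round(np.abs(V)**2, 4))               # transition probabilities
print(np.allclose(V.conj().T @ V, np.eye(3)))  # unitary by construction
```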

Friday, July 30, 2021

Serious Progress On The Langlands Program And What This Means

Woit's Not Even Wrong blog calls attention to an article in Quanta magazine about an area of research in mathematics called the Langlands program. It is probably the clearest explanation of this area of mathematical research that I've ever seen, relating recent dramatic theoretical developments that may make it possible to solve several of the leading unsolved problems in mathematics and physics.

These issues might have been resolved a century and a half earlier, but Évariste Galois, the mathematical genius whose discoveries started the ball rolling in this sprawling area of mathematical research, died in a duel at the age of twenty, before he could continue developing his ideas. It then took more than a century for other great, but not quite so epically brilliant, mathematicians to make the next key developments in his work.

Basic Background

One common strategy in advanced mathematics is to take a problem that is too hard to solve and find a parallel problem: one that is exactly analogous to the original in the ways that matter, but easier to solve. You solve the parallel problem, and then you reverse the process, so that the answer to the easier parallel problem gives you the answer to the hard original problem.

For example, the Fourier Transform takes certain formulas defined in space and time that are hard to solve or work with, and converts them into parallel formulas in "frequency space" that are easier to solve and work with. And, once you solve or simplify a differential equation in "frequency space", you can convert that solution or simplification back to the original formula you were studying.
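A tiny worked example of this pattern (my illustration, not from the Quanta article): differentiation is awkward in position space but becomes simple multiplication by ik in frequency space, so you transform, multiply, and transform back.

```python
# Differentiate sin(3x) by hopping through frequency space.
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(3 * x)

k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])    # angular wavenumbers
df = np.fft.ifft(1j * k * np.fft.fft(f)).real       # transform, multiply, invert

print(np.max(np.abs(df - 3 * np.cos(3 * x))))       # ~1e-13: matches d/dx sin(3x)
```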

In a more elementary example, most equations can be displayed as a graph of all of the possible solutions of the equation, and sometimes it is easier to identify key properties of the equation from the graph than it is to calculate them from the formula.

The Langlands Program

The Langlands program is an effort to do a similar kind of transformation to solve problems in number theory and other areas of mathematics. 

The block quotes below, before my quotation of Woit's blog, are from the Quanta article, with the bold emphasis and the italic text in brackets provided by me.

The Langlands program is a sprawling research vision that begins with a simple concern: finding solutions to polynomial equations like x^2 − 2 = 0 and x^4 − 10x^2 + 22 = 0. Solving them means finding the “roots” of the polynomial — the values of x that make the polynomial equal zero (x = ± the square root of 2 for the first example, and x = ± the square root of the sum of 5 and ± the square root of 3, for the second).
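To check the quoted roots numerically (my aside, not part of the Quanta article):

```python
import numpy as np

print(np.roots([1, 0, -2]))                  # x² - 2 = 0: ±1.4142...
print(np.roots([1, 0, -10, 0, 22]))          # x⁴ - 10x² + 22 = 0
print([np.sqrt(5 + s * np.sqrt(3)) for s in (1, -1)])   # ±√(5 ± √3) match
```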

By the 1500s mathematicians had discovered tidy formulas for calculating the roots of polynomials whose highest powers are 2, 3 or 4. [Ed. you probably learned the formula for the case of powers of two, called the quadratic formula, in high school.] They then searched for ways to identify the roots of polynomials with variables raised to the power of 5 and beyond. 
But in 1832 the young mathematician Évariste Galois discovered the search was fruitless, proving that there are no general methods for calculating the roots of higher-power polynomials.

Galois didn’t stop there, though. In the months before his death in a duel in 1832 at age 20, Galois laid out a new theory of polynomial solutions. Rather than calculating roots exactly — which can’t be done in most cases — he proposed studying the symmetries between roots, which he encoded in a new mathematical object eventually called a Galois group.

In the example x^2 − 2, instead of making the roots explicit, the Galois group emphasizes that the two roots (whatever they are) are mirror images of each other as far as the laws of algebra are concerned.

“Mathematicians had to step away from formulas because usually there were no formulas,” said Brian Conrad of Stanford University. “Computing a Galois group is some measure of computing the relations among the roots.”

Throughout the 20th century mathematicians devised new ways of studying Galois groups. One main strategy involved creating a dictionary translating between the groups and other objects — often functions coming from calculus — and investigating those as a proxy for working with Galois groups directly. This is the basic premise of the Langlands program, which is a broad vision for investigating Galois groups — and really polynomials — through these types of translations.

The Langlands program began in 1967, when its namesake, Robert Langlands, wrote a letter to a famed mathematician named André Weil. Langlands proposed that there should be a way of matching every Galois group with an object called an automorphic form. While Galois groups arise in algebra (reflecting the way you use algebra to solve equations), automorphic forms come from a very different branch of mathematics called analysis, which is an enhanced form of calculus. Mathematical advances from the first half of the 20th century had identified enough similarities between the two to make Langlands suspect a more thorough link. . . .
If mathematicians could prove what came to be called the Langlands correspondence, they could confidently investigate all polynomials using the powerful tools of calculus. The conjectured relationship is so fundamental that its solution may also touch on many of the biggest open problems in number theory, including three of the million-dollar Millennium Prize problems: the Riemann hypothesis, the BSD conjecture and the Hodge conjecture. [Ed. Many other unsolved problems in mathematics have solutions that work if these conjectures can be shown to be accurate.] . . . 
Beginning in the early 1980s Vladimir Drinfeld and later Alexander Beilinson proposed that there should be a way to interpret Langlands’ conjectures in geometric terms. The translation between numbers and geometry is often difficult, but when it works it can crack problems wide open.

To take just one example, a basic question about a number is whether it has a repeated prime factor. The number 12 does: It factors into 2 × 2 × 3, with the 2 occurring twice. The number 15 does not (it factors into 3 × 5).

In general, there’s no quick way of knowing whether a number has a repeated factor. But there is an analogous geometric problem which is much easier.

Polynomials have many of the same properties as numbers: You can add, subtract, multiply and divide them. There’s even a notion of what it means for a polynomial to be “prime.” But unlike numbers, polynomials have a clear geometric guise. You can graph their solutions and study the graphs to gain insights about them.

For instance, if the graph is tangent to the x-axis at any point, you can deduce that the polynomial has a repeated factor (indicated at exactly the point of tangency). It’s just one example of how a murky arithmetic question acquires a visual meaning once converted into its analogue for polynomials.

“You can graph polynomials. You can’t graph a number. And when you graph a [polynomial] it gives you ideas,” said Conrad. “With a number you just have the number.”
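To make the quoted analogy concrete (my sketch, not from the article): a polynomial has a repeated factor exactly when it shares a common factor with its own derivative, which is the algebraic shadow of the graph being tangent to the x-axis, so a single gcd computation settles the question.

```python
# Detect repeated factors via gcd(p, p') with SymPy.
import sympy as sp

x = sp.symbols('x')
for p in (x**2 * (x - 3),          # repeated factor x: graph tangent at x = 0
          (x - 1) * (x - 2)):      # squarefree: graph crosses the axis
    g = sp.gcd(p, sp.diff(p, x))
    print(sp.expand(p), '-> repeated factor?', sp.degree(g, x) > 0)
```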

The “geometric” Langlands program, as it came to be called, aimed to find geometric objects with properties that could stand in for the Galois groups and automorphic forms in Langlands’ conjectures. Proving an analogous correspondence in this new setting by using geometric tools could give mathematicians more confidence in the original Langlands conjectures and perhaps suggest useful ways of thinking about them.

The New Developments 

A series of papers by Laurent Fargues of the Institute of Mathematics of Jussieu in Paris and Peter Scholze of the University of Bonn, who initially developed their ideas independently and then collaborated, has now filled in a lot of the gaps in the Langlands program. The series culminated in a 350 page paper released in February of 2021, which painstakingly resolves a host of technical issues identified in the last three pages of a much shorter 2017 paper that set forth the basic ideas of their collaboration.

In 2012, at the age of 24, Scholze invented a new kind of mathematically defined geometric object called a perfectoid space, which he expanded upon in 2014 while teaching an advanced math course for graduate students in which he invented the mathematical content as the semester went along.
Scholze’s theory was based on special number systems called the p-adics. The “p” in p-adic stands for “prime,” as in prime numbers. For each prime, there is a unique p-adic number system: the 2-adics, the 3-adics, the 5-adics and so on. P-adic numbers have been a central tool in mathematics for over a century. They’re useful as more manageable number systems in which to investigate questions that occur back in the rational numbers (numbers that can be written as a ratio of positive or negative whole numbers), which are unwieldy by comparison.

The virtue of p-adic numbers is that they’re each based on just one single prime. This makes them more straightforward, with more obvious structure, than the rationals, which have an infinitude of primes with no obvious pattern among them. Mathematicians often try to understand basic questions about numbers in the p-adics first, and then take those lessons back to their investigation of the rationals. . . . 
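Two small functions capture the basic mechanics (my sketch, just to ground the terminology): the p-adic valuation counts how many times p divides a number, and the p-adic expansion writes the number base p starting from the small end.

```python
def padic_valuation(n: int, p: int) -> int:
    """How many times the prime p divides n (for nonzero n)."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def padic_digits(n: int, p: int, k: int) -> list:
    """First k base-p digits of n, least significant first."""
    digits = []
    for _ in range(k):
        digits.append(n % p)
        n //= p
    return digits

print(padic_valuation(12, 2))     # 2, since 12 = 2 * 2 * 3
print(padic_digits(12, 2, 5))     # [0, 0, 1, 1, 0], i.e. 12 = 4 + 8
```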

All number systems have a geometric form — the real numbers, for instance, take the form of a line. Scholze’s perfectoid spaces gave a new and more useful geometric form to the p-adic numbers. This enhanced geometry made the p-adics, as seen through his perfectoid spaces, an even more effective way to probe basic number-theoretic phenomena, like questions about the solutions of polynomial equations. . . . 
Fargues had worked with "Jean-Marc Fontaine in an area of math called p-adic Hodge theory, which focuses on basic arithmetic questions about these numbers," and together they invented "a curve — the Fargues-Fontaine curve — whose points each represented a version of an important object called a p-adic ring."

Fargues and Scholze were both in Berkeley for a semester-long session of the Mathematical Sciences Research Institute, where they learned about each other's work and started to collaborate. They generalized the concept of the Fargues-Fontaine curve so it could be used to describe the kind of p-adic number system problems that Scholze was using his perfectoid spaces to deal with, and called these generalized structures "diamonds", which represent p-adic groups.

"Diamonds" make it possible to link Galois groups and specific mathematical structures that are easier to work with, and the collaborates proved with these new structures that there is always a specific Galois group that can be associated with a specific p-adic group. This proves half of a special case of the Langlands correspondence known as the "local Langlands correspondence". The other half of the local Langlands correspondence would show to transform a specific Galois group into a p-adic group, and progress in proving this half of the local Langlands correspondence seems likely. What is a "diamond"?
Imagine that you start with an unorganized collection of points — a “cloud of dust,” in Scholze’s words — that you want to glue together in just the right way to assemble the object you’re looking for. The theory Fargues and Scholze developed provides exact mathematical directions for performing that gluing and certifies that, in the end, you will get the Fargues-Fontaine curve. And this time, it’s defined in just the right way for the task at hand — addressing the local Langlands correspondence.
Progress in proving this large class of special cases of the local Langlands correspondence, in turn, improves the prospects of generalizing the local Langlands correspondence to the global Langlands correspondence, which applies to all rational numbers rather than just to p-adic groups.

The local and global Langlands correspondences relate to their corresponding local and global geometric Langlands correspondences through geometric objects known as "sheaves", the theory of which was brought to its current level by Alexander Grothendieck in the 1950s.

The part of the local Langlands correspondence which has been proved by Fargues and Scholze has also been extended to the geometric Langlands correspondence, by finding a way to translate their findings made with "diamonds" into descriptions using "sheaves."

So basically, over the last decade Fargues and Scholze have taken the mush of speculation and stray thoughts and concepts that had swirled around the promise of the Langlands program for half a century, creating lots of heat but little light, and have solved the first of four main components of the ultimate goal. They have also provided a much more focused roadmap for solving the remaining three pieces of the problem.

Implications

Recall, as noted above in bold, that the prize at the end of the day for solving all four pieces is a solution to a large share of the most important unsolved problems in mathematics, many of which have practical applications in physics as well.

Woit's commentary on it is as follows:

Quanta magazine has a good article about the dramatic Fargues-Scholze result linking geometry and number theory....

[These are] extremely interesting topics indicating a deep unity of number theory, geometry and physics. They’re also not topics easy to say much about in a blog posting. In the Fargues-Scholze case that’s partly because the new ideas they have come up with relating arithmetic and geometry are ones I don’t understand very well at all (although I hope to learn more about them in the future). The connections they have found between representation theory, arithmetic geometry, and geometric Langlands are very new and it will likely be quite a few years before they are well understood and their implications well-developed. . . .

There is a fairly short path now potentially connecting fundamental unifying ideas in number theory and geometry to our best fundamental theories in physics (and seminars on arithmetic geometry and QFT are now a thing). 
The Fargues-Scholze work relates arithmetic and the central objects in geometric Langlands involving categories of bundles over curves. These categories in turn are related (in work of Witten and collaborators) to 4d TQFTs based on twistings of N=4 super Yang-Mills. This sort of 4d QFT involves much the same ingredients as 4d QFTs describing the Standard Model and gravity. For some better indication of the relation of number theory to this sort of QFT, a good source is David Ben-Zvi’s lectures this past semester (see here and here). 
I’m hopeful that the ideas about twistors and QFT in Euclidean signature discussed here will provide a close connection of such 4d QFTs to the Standard Model and gravity (more to come on this topic in the near future).

Of course, until the Langlands correspondence can be proven, this is all just a status report on a large scale, long term mathematical research effort that doesn't have much to show for it. 

But this research is a potential sleeper wildcard that could lead to a rapid rush of major theoretical discoveries in mathematics and physics if it can be worked out, something that could easily happen in the next several years to a decade.

String Theory Still Broken

String theorists continue to say absurd things in support of their theoretical program; witness the two absurd statements from leading string theorists noted below.

The Lex Fridman podcast has an interview with Cumrun Vafa. Going to the section (1:19:48) – Skepticism regarding string theory) where Vafa answers the skeptics, he has just one argument for string theory as a predictive theory: it predicts that the number of spacetime dimensions is between 1 and 11. 
A second edition of Gordon Kane’s String Theory and the Real World has just appeared. One learns there (page 1-19) that
There is good reason, based on theory, to think discovery of the superpartners of Standard Model particles should occur at the CERN LHC in the next few years.

From Not Even Wrong (emphasis mine).

The first "prediction" is, of course, profoundly unimpressive.

The second prediction is profoundly unlikely, given how far along in its planned experimental run the LHC is already, and given the complete absence of experimental hints of the existence of superpartners of Standard Model particles so far. The excluded parameter space for these particles can be found here (as of October 2020) and grows larger almost every month with new papers from the LHC. 

Gordon Kane has a long track record of making unsubstantiated predictions (see, e.g., this 2018 post at this blog).

Monday, July 26, 2021

Steven Weinberg Has Passed

Physicist Steven Weinberg, who was born in 1933, died on July 23, 2021.
He was arguably the dominant figure in theoretical particle physics during its period of great success from the late sixties to the early eighties. In particular, his 1967 work on unification of the weak and electromagnetic interactions was a huge breakthrough, and remains to this day at the center of the Standard Model, our best understanding of fundamental physics.

Science News has another nice obituary for him.

The Common Cold Is Old

The common cold virus is much older than modern humans. 

The origins of viral pathogens and the age of their association with humans remains largely elusive. To date, there is no direct evidence about the diversity of viral infections in early modern humans pre-dating the Holocene. We recovered two near-complete genomes (5.2X and 0.7X) of human adenovirus C (HAdV-C), as well as low-coverage genomes from four distinct species of human herpesvirus obtained from two 31,630-year-old milk teeth excavated at Yana, in northeastern Siberia. 
Phylogenetic analysis of the two HAdV-C genomes suggests an evolutionary origin around 700,000 years ago consistent with a common evolutionary history with hominin hosts. 
Our findings push back the earliest direct molecular evidence for human viral infections by ∼25,000 years, and demonstrate that viral species causing common childhood viral infections today have been in circulation in humans at least since the Pleistocene.
From Sofie Holtsmark Nielsen, et al., "31,600-year-old human virus genomes support a Pleistocene origin for common childhood infections" bioRxiv (June 28, 2021).

The Testimony Of The Mandarin

Mandarin citrus fruits were first domesticated in the mountainous regions of Southern China, and spread widely from there. 

Hybridization of these mainland Chinese fruits with a wild species native to Japan's southern Ryukyu Islands accounts for the most important modern varieties.

Hunan Province in southern China is the center of wild mandarin diversity and the genetic source of most well-known mandarins. When the scientists re-analyzed previously published genomic data, they unexpectedly found that wild mandarins of this mountainous region are split into two subspecies.

"We found that one of these mandarin subspecies can produce offspring that are genetically identical to the mother," said Dr. Guohong Albert Wu, a research collaborator at the Lawrence Berkeley National Laboratory in California. "Like many other plants, wild citrus typically reproduces when the pollen of the father combines with the egg of the mother, mixing the genes from both parents in the seed. 
But we found a subspecies of wild mandarins from Mangshan, in southern China, where the seed contains an identical copy of the mother's DNA without any input from a father. So, the seed grows to be a clone of the mother tree."

From Science Daily.

The body text of the source paper explains that:

We find that the complexity of mandarin relationships is considerably simplified by the discovery of three ancestral lineages which, together with pummelo, gave rise to all extant mandarin diversity by hybridization and introgression. One of these groups is a previously unknown wild species currently found in the Ryukyu islands; the other two are previously unrecognized sister subspecies of mainland Asian mandarin. 
Our analysis leads to a comprehensive revision of the origin and diversification of east Asian citrus, including the elucidation of the origins of apomixis in mandarin and its spread to related citrus including oranges, grapefruits and lemons.

The paper and its abstract are:

The origin and dispersal of cultivated and wild mandarin and related citrus are poorly understood. Here, comparative genome analysis of 69 new east Asian genomes and other mainland Asian citrus reveals a previously unrecognized wild sexual species native to the Ryukyu Islands: C. ryukyuensis sp. nov. 
The taxonomic complexity of east Asian mandarins then collapses to a satisfying simplicity, accounting for tachibana, shiikuwasha, and other traditional Ryukyuan mandarin types as homoploid hybrid species formed by combining C. ryukyuensis with various mainland mandarins. These hybrid species reproduce clonally by apomictic seed, a trait shared with oranges, grapefruits, lemons and many cultivated mandarins. 
We trace the origin of apomixis alleles in citrus to mangshanyeju wild mandarins, which played a central role in citrus domestication via adaptive wild introgression. Our results provide a coherent biogeographic framework for understanding the diversity and domestication of mandarin-type citrus through speciation, admixture, and rapid diffusion of apomictic reproduction.
Guohong Albert Wu, et al., "Diversification of mandarin citrus by hybrid speciation and apomixis." 12(1) Nature Communications (July 26, 2021) (open access). 

Friday, July 16, 2021

Medieval Astronomy

[Image: the Syriac text.]
[Image: the reconstructed sky at the place where the Syriac text was written, on May 25, 760 at 2:40 a.m.]

Scientists have analyzed ancient historical accounts from Syria, China, the Mediterranean and West Asia (most critically, the detailed accounts in the handwritten Syriac Chronicle of Zuqnīn, part of which ended up in the Vatican Library and part of which ended up in the British Museum) to confirm that all of these accounts record key parts of the appearance of a particular comet in their skies in late May and early June of the year 760 CE.

The scientists matched these observations with calculations of where the comet 1P/Halley would have been in the sky at that time, based upon its currently observed trajectory, with key dates pinned down to a margin of error of one to two days for particular events.

This 760 CE fly-by was the comet's last return before a close encounter with Earth in 837 CE. The 760 CE perihelion of the comet is particularly important for extrapolating its orbit further back in time. This study provides one of the longest time frames of confirmed continued observations of the same celestial object. It helps to confirm the accuracy and robustness of astronomy's current gravitational calculations of solar system orbits, and reminds us just how long quite accurate scientific astronomical observations have been collected and recorded.

Historical Context

This was near the end of the period known as the "dark ages" in the former western Roman Empire in Europe, during the life of Charlemagne, eight years before he began his reign as the King of the Franks in what is now France, and forty years after the remarkably wet summer of 720 CE in Europe.

Decisive battles on land and at sea with the Byzantine Empire ended the expansion of the Umayyad Caliphate into the territory of the former Roman Empire fourteen years earlier (746 CE). The Eastern Orthodox Christian Byzantine Empire was the last remnant of the Roman Empire, in what is now most of Turkey, Greece and Italy, and would persist in a gradually diminished form over about three more centuries. The West Asian accounts were written by Byzantine subjects.

As the body text of the new paper explains:
The author of the chronicle was probably the stylite monk Joshua; a stylite is an early Byzantine or Syrian Christian ascetic living and preaching on a pillar in the open air, so that many celestial observations can be expected in his work. The author of the Chronicle of Zuqnīn may have lived on a pillar for some time. During the time of writing of the Chronicle of Zuqnīn [ed. completed in 775/776 CE], the area was outside the border of the Byzantine empire and already under ʿAbbasid rule.
Thus, the Syriac chronicle entries were written by a Christian monk under the jurisdiction of the Islamic Abbasid Caliphate (750-1517 CE), in areas recently reclaimed by the Caliphate after a brief Byzantine expansion into the territory of the Umayyad Caliphate which preceded it. The Abbasid Caliphate had been formed ten years earlier in the Abbasid Revolution, which replaced the Umayyad Caliphate (661-750 CE). The Umayyads had been led by an ethnically Arab elite that treated even non-Arab Muslim converts as second class citizens, while the Abbasids were a multi-ethnic, mostly non-Arab, eastward-oriented dynasty that ruled in a more inclusive manner, and whose Caliphs claimed descent from an uncle of the Prophet Muhammad (who had died roughly thirteen decades before the comet appeared).

The provenance of the Chronicle was somewhat involved. As the body text explains:
The Chronicle of Zuqnīn is not known to be copied and disseminated; sometime during the 9th century it was transferred to the Monastery of the Syrians in the Egyptian desert . . . Shortly after the manuscript was found and bought for the Vatican, it was considered to be written by the West Syrian patriarch Dionysius I of Tell-Maḥre, so that this chronicle was long known as Chronicle of Dionysius of Tell-Maḥre. Dionysius did write an otherwise lost world chronicle, but lived later (died AD ca. 845). Since this mistake was noticed, the chronicle has been called the Chronicle of Pseudo-Dionysius of Tell-Maḥre or, better, the Chronicle of Zuqnīn, because the text mentions the monastery of Zuqnīn as the living place of the author; Zuqnīn was located near Amida, now Diyarbakır in Turkey near the border to Syria. 
The Chronicle of Zuqnīn is made of four parts: Part I runs from the creation to Emperor Constantine (AD 272-337), Part II from Constantine to Emperor Theodosius II (AD 401-450) plus a copy of the so-called Chronicle of Pseudo-Joshua the Stylite (AD 497 to 506/7), Part III from Theodosius to Emperor Justinian (AD 481-565), and Part IV to the time of writing, AD 775/776. The Chronicler used a variety of sources, some of them otherwise lost. The author knew that some of his sources did not provide a perfect chronology; for him, it is more important to convey his message (to learn from history) than to give perfect datings. 
The events reported in the text are dated using the Seleucid calendar; the Seleucid Era (SE) started on October 7, BC 312 (= Dios 1). There are several versions of the Seleucid calendar, including the Babylonian (Jewish), Macedonian, and West Syrian (Christian) ones. The author of our chronicle systematically used the latter version for reports during his lifetime – a solar calendar, in which the year ran from Tishri/October 1 to Elul/September 30, applied since at least the fifth century AD.
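The arithmetic of that calendar is simple enough to mechanize; here is a toy converter (my sketch, using the West Syrian year boundaries quoted above, with the usual no-year-zero convention):

```python
def seleucid_to_julian_year(se_year: int, month: int) -> int:
    """Julian calendar year (negative = BC) for a West Syrian
    Seleucid year, whose year ran October 1 - September 30."""
    year = se_year - 312 if month >= 10 else se_year - 311
    return year if year > 0 else year - 1   # skip the nonexistent year 0

print(seleucid_to_julian_year(1071, 5))     # May of SE 1071 -> AD 760
```

May of Seleucid year 1071 lands in AD 760, the month and year of the comet observations.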
This was also two years before the city of Baghdad was founded within the Abbasid Caliphate, near the ancient city of Seleucia, which had been the seat of the Nestorian Christian Church of the East from 410 CE until it had to be abandoned to the desert sands when the Tigris River that made it possible to live there shifted, a few decades after Baghdad was founded.

In China, this comet's appearance coincided with the unsuccessful seven year long An Lushan Rebellion against the Tang Dynasty. The rebellion ended with a pair of stunning betrayals: first, An Lushan, the leader of the rebellion, was killed by one of his own eunuchs in 757; then his successor as leader of the rebellion, Shi Siming, was killed by his own son in 763, which ended the rebellion.

Citizen Science In Astronomy

The actual goal of this project, to find gravitational lensing evidence of a small dark matter halo from a database with tens of thousands of observations, is pretty ordinary as astrophysics goes, and it doesn't yet have any definitive results. Still, it is a worthwhile project that adds incrementally to what we know in a well focused way to expand the margins of our existing knowledge.

It illustrates the reality that the many modern telescopes now observing at multiple frequencies collect vast amounts of information. But this has made a lot of questions in astrophysics and astronomy "big data" problems.

With a smaller database, a single skilled researcher could personally review each observation. This painstaking poring over data by a single highly trained scientist with a PhD in the relevant subfield of astronomy is how this kind of research got started. But it is impossible for a single astronomer to conduct the fairly detailed analysis of each observation required for this kind of study, for such a large collection of data, in a reasonable amount of time. And timely analysis is necessary because the amount of data to review gets larger every month.

The firehose of incoming data is only getting stronger. For example, a new European Space Agency project targeted for the year 2045 will collect information on 10 to 12 billion new sources of light in the sky that are too faint to discern now.

The citizen science methodology used in this study is remarkable and exciting. It presents an alternative to statistical, machine learning, and supercomputing approaches to sorting through masses of data. Unlike these automated alternatives, the citizen science approach doesn't sacrifice the human judgment element of the process that is present when a single scientist analyzes a large but tractable body of data.

In this case, twenty people (about a quarter of them scientists, about a quarter graduate students, and about half undergraduates, mostly at the University of Crete) worked together to tackle the large dataset, identifying 40 strong candidates out of 13,828 sources (many of which have multiple images at different wavelengths that had to be considered), including two particularly promising needles in the haystack.
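For a sense of the logistics (my illustration; the post's sources don't describe the team's actual workflow), even a simple assignment scheme that gives every source two independent reviewers leaves each volunteer with over a thousand sources to inspect:

```python
# Assign each of 13,828 sources to two distinct reviewers at random.
import random

random.seed(0)
N_SOURCES, N_REVIEWERS = 13828, 20
assignments = {r: [] for r in range(N_REVIEWERS)}
for src in range(N_SOURCES):
    for r in random.sample(range(N_REVIEWERS), 2):   # two distinct reviewers
        assignments[r].append(src)

loads = sorted(len(v) for v in assignments.values())
print(loads[0], loads[-1])    # ~1,383 sources per reviewer on average
```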

It is a kind of project I am familiar with from my day job as an attorney, where, for example, I've had to mobilize similar numbers of people with similar skill levels, to review an entire room full of banker's boxes of not very well organized hard copy business records to locate a handful of key documents in complex securities fraud litigation.

The way this project managed to mobilize so many people to volunteer their time for this somewhat esoteric goal hearteningly democratized this scientific endeavor and made the task possible to complete.

The paper and its abstract are as follows:

Dark Matter (DM) halos with masses below 10⁸ M⊙, which would help to discriminate between DM models, may be detected through their gravitational effect on distant sources. The same applies to primordial black holes, considered as an alternative scenario to DM particle models. However, there is still no evidence for the existence of such objects. 
With the aim of finding compact objects in the mass range 10⁶ -- 10⁹ M⊙, we search for strong gravitational lenses on milli-arcsecond (mas) scales (< 150 mas). For our search, we used the Astrogeo VLBI FITS image database -- the largest publicly available database, containing multi-frequency VLBI data of 13828 individual sources.
We used the citizen science approach to visually inspect all sources in all available frequencies in search for images with multiple compact components on mas-scales. At the final stage, sources were excluded based on the surface brightness preservation criterion. We obtained a sample of 40 sources that passed all steps and therefore are judged to be milli-arcsecond lens candidates. 
These sources are currently followed-up with on-going European VLBI Network (EVN) observations at 5 and 22 GHz. Based on spectral index measurements, we suggest that two of our candidates have a higher probability to be associated with gravitational lenses.

C. Casadio, et al., "SMILE: Search for MIlli Lenses" arXiv: 2107.06896 (July 14, 2021) (accepted for publication).