Friday, July 30, 2021

Serious Progress On The Langlands Program And What This Means

Woit's Not Even Wrong blog calls attention to an article in Quanta magazine about an area of mathematical research called the Langlands program. It is probably the clearest explanation of this area of research that I've ever seen, and it relates recent dramatic theoretical developments that may make it possible to solve several of the leading unsolved problems in mathematics and physics.

These issues might have been resolved a century and a half sooner, but Évariste Galois, the genius whose discoveries started the ball rolling in this sprawling area of mathematical research, died in a duel at the age of twenty, before he could continue developing his ideas. It then took more than a century for other great, but not quite so epically brilliant, mathematicians to make the next key advances in his work.

Basic Background

One common strategy in advanced mathematics is to take a problem that is too hard to solve and find a parallel problem that is exactly analogous in the ways that matter, but easier to solve. You solve the parallel problem, and then reverse the process so that the answer to the easier parallel problem gives you the answer to the hard original problem.

For example, the Fourier transform takes certain formulas defined in space or time that are hard to solve or work with, and converts them into parallel formulas in "frequency space" that are easier to handle; differential equations, in particular, often become simple algebraic manipulations there. Once you solve or simplify the problem in "frequency space," you can convert that solution or simplification back into the original setting you were studying.
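To make this concrete, here is a minimal numerical sketch in Python (my illustration, not from the article): differentiation, a calculus operation in ordinary space, becomes simple multiplication by i·k in frequency space.

```python
import numpy as np

# Differentiate sin(x) by hopping into frequency space and back:
# the Fourier transform turns d/dx into multiplication by i*k.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(x)

k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])  # angular wave numbers
f_hat = np.fft.fft(f)                  # hard problem -> frequency space
df = np.fft.ifft(1j * k * f_hat).real  # easy step there, then back again

# The round trip reproduces the known derivative cos(x).
assert np.allclose(df, np.cos(x), atol=1e-8)
```

The same round trip, transform, do the now-easy operation, transform back, is exactly the pattern described above.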

In a more elementary example, most equations can be displayed as a graph of all of their possible solutions, and it is sometimes easier to read key properties of an equation off its graph than to calculate them from the formula.

The Langlands Program

The Langlands program is an effort to do a similar kind of transformation to solve problems in number theory and other areas of mathematics. 

The block quotes below, up until my quotation of Woit's post, are from the Quanta article, with the bold emphasis and the bracketed italic text provided by me.

The Langlands program is a sprawling research vision that begins with a simple concern: finding solutions to polynomial equations like x^2 − 2 = 0 and x^4 − 10x^2 + 22 = 0. Solving them means finding the “roots” of the polynomial — the values of x that make the polynomial equal zero (x = ± the square root of 2 for the first example, and x = ± the square root of the sum of 5 and ± the square root of 3, for the second).
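Those closed-form roots are easy to check numerically; for the quartic, substituting y = x^2 reduces it to a quadratic in y. A quick verification sketch in Python (my addition, not from the article):

```python
import numpy as np

# x^2 - 2 = 0 has roots +/- sqrt(2):
r1 = np.roots([1, 0, -2])
assert np.allclose(np.sort(r1.real), [-np.sqrt(2), np.sqrt(2)])

# x^4 - 10x^2 + 22 = 0: setting y = x^2 gives y^2 - 10y + 22 = 0,
# so y = 5 +/- sqrt(3) and hence x = +/- sqrt(5 +/- sqrt(3)).
r2 = np.roots([1, 0, -10, 0, 22])
expected = sorted(s * np.sqrt(5 + t * np.sqrt(3))
                  for s in (1, -1) for t in (1, -1))
assert np.allclose(np.sort(r2.real), expected)
```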

By the 1500s mathematicians had discovered tidy formulas for calculating the roots of polynomials whose highest powers are 2, 3 or 4. [Ed. you probably learned the formula for the case of powers of two, called the quadratic formula, in high school.] They then searched for ways to identify the roots of polynomials with variables raised to the power of 5 and beyond. 
But in 1832 the young mathematician Évariste Galois discovered the search was fruitless, proving that there are no general methods for calculating the roots of higher-power polynomials.

Galois didn’t stop there, though. In the months before his death in a duel in 1832 at age 20, Galois laid out a new theory of polynomial solutions. Rather than calculating roots exactly — which can’t be done in most cases — he proposed studying the symmetries between roots, which he encoded in a new mathematical object eventually called a Galois group.

In the example x^2 − 2, instead of making the roots explicit, the Galois group emphasizes that the two roots (whatever they are) are mirror images of each other as far as the laws of algebra are concerned.

“Mathematicians had to step away from formulas because usually there were no formulas,” said Brian Conrad of Stanford University. “Computing a Galois group is some measure of computing the relations among the roots.”

Throughout the 20th century mathematicians devised new ways of studying Galois groups. One main strategy involved creating a dictionary translating between the groups and other objects — often functions coming from calculus — and investigating those as a proxy for working with Galois groups directly. This is the basic premise of the Langlands program, which is a broad vision for investigating Galois groups — and really polynomials — through these types of translations.

The Langlands program began in 1967, when its namesake, Robert Langlands, wrote a letter to a famed mathematician named André Weil. Langlands proposed that there should be a way of matching every Galois group with an object called an automorphic form. While Galois groups arise in algebra (reflecting the way you use algebra to solve equations), automorphic forms come from a very different branch of mathematics called analysis, which is an enhanced form of calculus. Mathematical advances from the first half of the 20th century had identified enough similarities between the two to make Langlands suspect a more thorough link. . . .
If mathematicians could prove what came to be called the Langlands correspondence, they could confidently investigate all polynomials using the powerful tools of calculus. The conjectured relationship is so fundamental that its solution may also touch on many of the biggest open problems in number theory, including three of the million-dollar Millennium Prize problems: the Riemann hypothesis, the BSD conjecture and the Hodge conjecture. [Ed. Many other unsolved problems in mathematics have solutions that work if these conjectures can be shown to be accurate.] . . . 
Beginning in the early 1980s Vladimir Drinfeld and later Alexander Beilinson proposed that there should be a way to interpret Langlands’ conjectures in geometric terms. The translation between numbers and geometry is often difficult, but when it works it can crack problems wide open.

To take just one example, a basic question about a number is whether it has a repeated prime factor. The number 12 does: It factors into 2 × 2 × 3, with the 2 occurring twice. The number 15 does not (it factors into 3 × 5).

In general, there’s no quick way of knowing whether a number has a repeated factor. But there is an analogous geometric problem which is much easier.

Polynomials have many of the same properties as numbers: You can add, subtract, multiply and divide them. There’s even a notion of what it means for a polynomial to be “prime.” But unlike numbers, polynomials have a clear geometric guise. You can graph their solutions and study the graphs to gain insights about them.

For instance, if the graph is tangent to the x-axis at any point, you can deduce that the polynomial has a repeated factor (indicated at exactly the point of tangency). It’s just one example of how a murky arithmetic question acquires a visual meaning once converted into its analogue for polynomials.

“You can graph polynomials. You can’t graph a number. And when you graph a [polynomial] it gives you ideas,” said Conrad. “With a number you just have the number.”

The “geometric” Langlands program, as it came to be called, aimed to find geometric objects with properties that could stand in for the Galois groups and automorphic forms in Langlands’ conjectures. Proving an analogous correspondence in this new setting by using geometric tools could give mathematicians more confidence in the original Langlands conjectures and perhaps suggest useful ways of thinking about them.

The New Developments 

A series of papers by Laurent Fargues of the Institute of Mathematics of Jussieu in Paris and Peter Scholze of the University of Bonn, who initially developed their ideas independently and then collaborated, has now finally filled in a lot of the gaps in the Langlands program. The series culminated in a 350-page paper released in February 2021 that painstakingly resolves a lot of the technical issues identified in the last three pages of a much shorter 2017 paper setting forth the basic ideas of their collaboration.

In 2012, at the age of 24, Scholze invented a new kind of mathematically defined geometric object called a perfectoid space, which he expanded upon in 2014 while teaching an advanced math course for graduate students, inventing the mathematical content he was teaching as the semester went along.
Scholze’s theory was based on special number systems called the p-adics. The “p” in p-adic stands for “prime,” as in prime numbers. For each prime, there is a unique p-adic number system: the 2-adics, the 3-adics, the 5-adics and so on. P-adic numbers have been a central tool in mathematics for over a century. They’re useful as more manageable number systems in which to investigate questions that occur back in the rational numbers (numbers that can be written as a ratio of positive or negative whole numbers), which are unwieldy by comparison.

The virtue of p-adic numbers is that they’re each based on just one single prime. This makes them more straightforward, with more obvious structure, than the rationals, which have an infinitude of primes with no obvious pattern among them. Mathematicians often try to understand basic questions about numbers in the p-adics first, and then take those lessons back to their investigation of the rationals. . . . 
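To give a concrete taste of what a p-adic number system measures (a minimal Python sketch of my own, not from the article): each prime p defines an absolute value |n|_p = p^(-v), where p^v is the largest power of p dividing n, so a number is "small" p-adically when it is divisible by a high power of p.

```python
def padic_valuation(n, p):
    """Largest exponent v such that p**v divides the nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def padic_abs(n, p):
    """The p-adic absolute value |n|_p = p**(-v) for nonzero n."""
    return p ** -padic_valuation(n, p)

assert padic_valuation(12, 2) == 2   # 12 = 2^2 * 3
assert padic_abs(12, 2) == 0.25      # divisible by 4, so 2-adically small
assert padic_abs(12, 3) == 1 / 3
assert padic_abs(7, 5) == 1          # 5 does not divide 7
```

Each prime thus gives its own self-contained notion of distance, which is what makes these systems "more manageable" than the rationals, where all primes interact at once.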

All number systems have a geometric form — the real numbers, for instance, take the form of a line. Scholze’s perfectoid spaces gave a new and more useful geometric form to the p-adic numbers. This enhanced geometry made the p-adics, as seen through his perfectoid spaces, an even more effective way to probe basic number-theoretic phenomena, like questions about the solutions of polynomial equations. . . . 
Fargues had worked together with "Jean-Marc Fontaine in an area of math called p-adic Hodge theory, which focuses on basic arithmetic questions about these numbers," and the two invented "a curve — the Fargues-Fontaine curve — whose points each represented a version of an important object called a p-adic ring."

Fargues and Scholze were both in Berkeley for a semester-long session at the Mathematical Sciences Research Institute, where they learned about each other's work and began to collaborate. They generalized the concept of the Fargues-Fontaine curve so that it could describe the kinds of p-adic number system problems that Scholze had been using his perfectoid spaces to attack, and called these generalized structures "diamonds," which represent p-adic groups.

"Diamonds" make it possible to link Galois groups and specific mathematical structures that are easier to work with, and the collaborates proved with these new structures that there is always a specific Galois group that can be associated with a specific p-adic group. This proves half of a special case of the Langlands correspondence known as the "local Langlands correspondence". The other half of the local Langlands correspondence would show to transform a specific Galois group into a p-adic group, and progress in proving this half of the local Langlands correspondence seems likely. What is a "diamond"?
Imagine that you start with an unorganized collection of points — a “cloud of dust,” in Scholze’s words — that you want to glue together in just the right way to assemble the object you’re looking for. The theory Fargues and Scholze developed provides exact mathematical directions for performing that gluing and certifies that, in the end, you will get the Fargues-Fontaine curve. And this time, it’s defined in just the right way for the task at hand — addressing the local Langlands correspondence.
Progress in proving the existence of this large class of special cases of the local Langlands correspondence, in turn, improves the prospects of generalizing the local Langlands correspondence to the global Langlands correspondence, which applies to all rational numbers rather than just to p-adic number systems.

The local and global Langlands correspondences are related to their corresponding local and global geometric Langlands correspondences through geometric objects known as "sheaves," the theory of which was brought to its current level by Alexander Grothendieck in the 1950s. 

The part of the local Langlands correspondence which has been proved by Fargues and Scholze has also been extended to the geometric Langlands correspondence, by finding a way to translate their findings made with "diamonds" into descriptions using "sheaves."

So basically, over the last decade, Fargues and Scholze have taken the mush of speculation and stray concepts that has swirled around the promise of the Langlands program for half a century, generating lots of heat but little light, and have solved the first of the four main components of the ultimate goal. They have also provided a much more focused roadmap for solving the remaining three pieces of the problem.


Recall, as noted above in bold, that the prize at the end of the day for solving all four pieces is a solution to a large share of the most important unsolved problems in mathematics, many of which have practical applications in physics as well.

Woit's commentary on it is as follows:

Quanta magazine has a good article about the dramatic Fargues-Scholze result linking geometry and number theory....

[These are] extremely interesting topics indicating a deep unity of number theory, geometry and physics. They’re also not topics easy to say much about in a blog posting. In the Fargues-Scholze case that’s partly because the new ideas they have come up with relating arithmetic and geometry are ones I don’t understand very well at all (although I hope to learn more about them in the future). The connections they have found between representation theory, arithmetic geometry, and geometric Langlands are very new and it will likely be quite a few years before they are well understood and their implications well-developed. . . .

There is a fairly short path now potentially connecting fundamental unifying ideas in number theory and geometry to our best fundamental theories in physics (and seminars on arithmetic geometry and QFT are now a thing). 
The Fargues-Scholze work relates arithmetic and the central objects in geometric Langlands involving categories of bundles over curves. These categories in turn are related (in work of Witten and collaborators) to 4d TQFTs based on twistings of N=4 super Yang-Mills. This sort of 4d QFT involves much the same ingredients as 4d QFTs describing the Standard Model and gravity. For some better indication of the relation of number theory to this sort of QFT, a good source is David Ben-Zvi’s lectures this past semester (see here and here). 
I’m hopeful that the ideas about twistors and QFT in Euclidean signature discussed here will provide a close connection of such 4d QFTs to the Standard Model and gravity (more to come on this topic in the near future).

Of course, until the Langlands correspondence can be proven, this is all just a status report on a large scale, long term mathematical research effort that doesn't have much to show for it. 

But, this research is a potential sleeper wildcard that could lead to a rapid rush of major theoretical discoveries in mathematics and physics if it can be worked out, which is something that could easily happen in the next several years to a decade.

String Theory Still Broken

String theorists continue to say absurd things in support of their theoretical program, most recently the two absurd statements, noted below, from leading string theorists.

The Lex Fridman podcast has an interview with Cumrun Vafa. Going to the section ((1:19:48) – Skepticism regarding string theory), where Vafa answers the skeptics, he has just one argument for string theory as a predictive theory: it predicts that the number of spacetime dimensions is between 1 and 11. 
A second edition of Gordon Kane’s String Theory and the Real World has just appeared. One learns there (page 1-19) that
There is good reason, based on theory, to think discovery of the superpartners of Standard Model particles should occur at the CERN LHC in the next few years.

From Not Even Wrong (emphasis mine).

The first "prediction" is, of course, profoundly unimpressive.

The second prediction is profoundly unlikely, given how far along in its planned experimental run the LHC is already, and given the complete absence of experimental hints of the existence of superpartners of Standard Model particles so far. The excluded parameter space for these particles can be found here (as of October 2020) and grows larger almost every month with new papers from the LHC. 

Gordon Kane has a long track record of making unsubstantiated predictions (see, e.g., this 2018 post at this blog).

Monday, July 26, 2021

Steven Weinberg Has Passed

Physicist Steven Weinberg, who was born in 1933, died on July 23, 2021.
He was arguably the dominant figure in theoretical particle physics during its period of great success from the late sixties to the early eighties. In particular, his 1967 work on unification of the weak and electromagnetic interactions was a huge breakthrough, and remains to this day at the center of the Standard Model, our best understanding of fundamental physics.

Science News has another nice obituary for him.

The Common Cold Is Old

The common cold virus is much older than modern humans. 

The origins of viral pathogens and the age of their association with humans remains largely elusive. To date, there is no direct evidence about the diversity of viral infections in early modern humans pre-dating the Holocene. We recovered two near-complete genomes (5.2X and 0.7X) of human adenovirus C (HAdV-C), as well as low-coverage genomes from four distinct species of human herpesvirus obtained from two 31,630-year-old milk teeth excavated at Yana, in northeastern Siberia. 
Phylogenetic analysis of the two HAdV-C genomes suggests an evolutionary origin around 700,000 years ago consistent with a common evolutionary history with hominin hosts. 
Our findings push back the earliest direct molecular evidence for human viral infections by ∼25,000 years, and demonstrate that viral species causing common childhood viral infections today have been in circulation in humans at least since the Pleistocene.
From Sofie Holtsmark Nielsen, et al., "31,600-year-old human virus genomes support a Pleistocene origin for common childhood infections" bioRxiv (June 28, 2021).

The Testimony Of The Mandarin

Mandarin citrus fruits were first domesticated in the mountainous regions of Southern China, and spread widely from there. 

Hybridization of these mainland Chinese fruits and some wild species native to Japan's Southern Ryukyu Islands accounts for most important modern varieties of them.

The mountainous region of Hunan Province in southern China is the center of wild mandarin diversity and the genetic source of most well-known mandarins. When the scientists re-analyzed previously published genomic data, they unexpectedly found that wild mandarins of this mountainous region are split into two subspecies.

"We found that one of these mandarin subspecies can produce offspring that are genetically identical to the mother," said Dr. Guohong Albert Wu, a research collaborator at the Lawrence Berkeley National Laboratory in California. "Like many other plants, wild citrus typically reproduces when the pollen of the father combines with the egg of the mother, mixing the genes from both parents in the seed. 
But we found a subspecies of wild mandarins from Mangshan, in southern China, where the seed contains an identical copy of the mother's DNA without any input from a father. So, the seed grows to be a clone of the mother tree."

From Science Daily.

The body text of the source paper explains that:

We find that the complexity of mandarin relationships is considerably simplified by the discovery of three ancestral lineages which, together with pummelo, gave rise to all extant mandarin diversity by hybridization and introgression. One of these groups is a previously unknown wild species currently found in the Ryukyu islands; the other two are previously unrecognized sister subspecies of mainland Asian mandarin. 
Our analysis leads to a comprehensive revision of the origin and diversification of east Asian citrus, including the elucidation of the origins of apomixis in mandarin and its spread to related citrus including oranges, grapefruits and lemons.

The paper and its abstract are:

The origin and dispersal of cultivated and wild mandarin and related citrus are poorly understood. Here, comparative genome analysis of 69 new east Asian genomes and other mainland Asian citrus reveals a previously unrecognized wild sexual species native to the Ryukyu Islands: C. ryukyuensis sp. nov. 
The taxonomic complexity of east Asian mandarins then collapses to a satisfying simplicity, accounting for tachibana, shiikuwasha, and other traditional Ryukyuan mandarin types as homoploid hybrid species formed by combining C. ryukyuensis with various mainland mandarins. These hybrid species reproduce clonally by apomictic seed, a trait shared with oranges, grapefruits, lemons and many cultivated mandarins. 
We trace the origin of apomixis alleles in citrus to mangshanyeju wild mandarins, which played a central role in citrus domestication via adaptive wild introgression. Our results provide a coherent biogeographic framework for understanding the diversity and domestication of mandarin-type citrus through speciation, admixture, and rapid diffusion of apomictic reproduction.
Guohong Albert Wu, et al., "Diversification of mandarin citrus by hybrid speciation and apomixis." 12(1) Nature Communications (July 26, 2021) (open access). 

Friday, July 16, 2021

Medieval Astronomy

The Syriac Text
The reconstructed sky in the place where the Syriac text was written on May 25, 760 at 2:40 a.m.

Scientists have analyzed ancient historical accounts from Syria, China, the Mediterranean and West Asia (most critically, the detailed accounts in the handwritten Syriac Chronicle of Zuqnīn, part of which ended up in the Vatican Library and part of which ended up in the British Museum) to confirm that all of these accounts record key parts of the appearance of the same comet in their skies in late May and early June of the year 760 CE.  

The scientists matched these observations with calculations of where the comet 1P/Halley would have been in the sky at that time based upon its current observed trajectory with key dates pinned down to a margin of error of one to two days for particular events.

This 760 CE fly-by was the comet's last return before a close encounter with Earth in 837 CE. The observed 760 CE perihelion of the comet is particularly important for extrapolating its orbit further back in time. This study provides one of the longest time frames of confirmed continued observations of the same celestial object. It helps to confirm the accuracy and robustness of astronomy's current gravitational calculations of solar system orbits, and reminds us just how long quite accurate scientific astronomical observations have been collected and recorded by people.

Historical Context

This was near the end of the period known as the "dark ages" in the former western Roman Empire in Europe, during the life of Charlemagne, eight years before he began his reign as the King of the Franks in what is now France, and forty years after the remarkably wet summer of 720 CE in Europe.

Decisive battles on land and at sea with the Byzantine Empire ended the expansion of the Umayyad Caliphate into the territory of the former Roman Empire fourteen years earlier (746 CE). The Eastern Orthodox Christian Byzantine Empire was the last remnant of the Roman Empire, in what is now most of Turkey, Greece and Italy, and would persist in a gradually diminished form over about three more centuries. The West Asian accounts were written by Byzantine subjects.

As the body text of the new paper explains:
The author of the chronicle was probably the stylite monk Joshua; a stylite is an early Byzantine or Syrian Christian ascetic living and preaching on a pillar in the open air, so that many celestial observations can be expected in his work. The author of the Chronicle of Zuqnīn may have lived on a pillar for some time. During the time of writing of the Chronicle of Zuqnīn [ed. completed in 775/776 CE], the area was outside the border of the Byzantine empire and already under ʿAbbasid rule.
Thus, the Syrian chronicle entries were written by a Christian monk under the jurisdiction of the Islamic Abbasid Caliphate (750-1517 CE), in areas recently reclaimed by the Caliphate after a brief Byzantine expansion into the territory of the Umayyad Caliphate which preceded it. The Abbasid Caliphate had been formed ten years earlier in the Abbasid Revolution, replacing the Umayyad Caliphate (661-750 CE). The Umayyads had been led by an ethnically Arab elite that treated even non-Arab Muslim converts as second-class citizens, while the Abbasids were a multi-ethnic, mostly non-Arab, and eastern-oriented dynasty that ruled in a more inclusive manner, whose Caliphs claimed to have descended from an uncle of the Prophet Muhammad (who died about four decades before the comet appeared).

The provenance of the Chronicle was somewhat involved. As the body text explains:
The Chronicle of Zuqnīn is not known to be copied and disseminated; sometime during the 9th century it was transferred to the Monastery of the Syrians in the Egyptian desert . . .  Shortly after the manuscript was found and bought for the Vatican, it was considered to be written by the West Syrian patriarch Dionysius I of Tell-Maḥre, so that this chronicle was long known as Chronicle of Dionysius of Tell-Maḥre. Dionysius did write an otherwise lost world chronicle, but lived later (died ca. AD 845). Since this mistake was noticed, the chronicle has been called the Chronicle of Pseudo-Dionysius of Tell-Maḥre or, better, the Chronicle of Zuqnīn, because the text mentions the monastery of Zuqnīn as the living place of the author; Zuqnīn was located near Amida, now Diyarbakır in Turkey near the border to Syria. 
The Chronicle of Zuqnīn is made of four parts: Part I runs from the creation to Emperor Constantine (AD 272-337), Part II from Constantine to Emperor Theodosius II (AD 401-450) plus a copy of the so-called Chronicle of Pseudo-Joshua the Stylite (AD 497 to 506/7), Part III from Theodosius to Emperor Justinian (AD 481-565), and Part IV to the time of writing, AD 775/776. The Chronicler used a variety of sources, some of them otherwise lost. The author knew that some of his sources did not provide a perfect chronology; for him, it is more important to convey his message (to learn from history) than to give perfect datings. 
The events reported in the text are dated using the Seleucid calendar; the Seleucid Era (SE) started on October 7, BC 312 (= Dios 1). There are several versions of the Seleucid calendar, including the Babylonian (Jewish), Macedonian, and West Syrian (Christian) ones. The author of our chronicle systematically used the latter version for reports during his lifetime – a solar calendar, in which the year ran from Tishri/October 1 to Elul/September 30, applied since at least the fifth century AD.
This was also two years before the city of Baghdad was founded within the Abbasid Caliphate, near the ancient city of Seleucia, which had been the capital of the Nestorian Christian Church of the East from 410 CE, until it had to be abandoned to the desert sands when the Tigris River that made settlement there possible shifted course, a few decades after Baghdad was founded.

In China, this comet's appearance coincided with the unsuccessful seven-year-long An Lushan Rebellion against the Tang Dynasty. The rebellion ended after a pair of stunning betrayals: first, An Lushan, the leader of the rebellion, was killed by one of his own eunuchs in 757, and then his successor as leader of the rebellion, Shi Siming, was killed by his own son in 763, which ended the rebellion.

Citizen Science In Astronomy

The actual goal of this project, to find gravitational lensing evidence of a small dark matter halo from a database with tens of thousands of observations, is pretty ordinary as astrophysics goes, and it doesn't yet have any definitive results. Still, it is a worthwhile project that adds incrementally to what we know in a well focused way to expand the margins of our existing knowledge.

It illustrates the reality that the many modern telescopes now operating across multiple frequencies can collect vast amounts of information. But this has turned a lot of questions in astrophysics and astronomy into "big data" problems.

With a smaller database, a single skilled researcher could personally review each observation. This painstaking poring over of data by a single highly trained scientist with a PhD in the relevant subfield of astronomy is how this kind of research got started. But it is impossible for a single astronomer to conduct the fairly detailed analysis of each observation required for this kind of study, for such a large collection of data, in a reasonable amount of time. And timely analysis is necessary because the amount of data to review grows larger every month. 

The firehose of incoming data is only getting stronger. For example, a new European Space Agency project targeted for the year 2045 will collect information on 10 to 12 billion new sources of light in the sky that are too faint to discern now.

The citizen science methodology used in this study is remarkable and exciting. It presents an alternative to statistical, machine learning, and supercomputing approaches to sorting through masses of data. Unlike these automated alternatives, this citizen science approach doesn't sacrifice the human judgment element of the process present when a single scientist analyzes a large, but tractable body of data. 

In this case, twenty people, about a quarter of whom were scientists, about a quarter of whom were graduate students, and about half of whom were undergraduates, mostly at the University of Crete, worked together to tackle the large dataset. They identified 40 strong candidates out of 13,828 sources (many of which have multiple images at different wavelengths that had to be considered), including two particularly promising needles in the haystack.

It is a kind of project I am familiar with from my day job as an attorney, where, for example, I've had to mobilize similar numbers of people with similar skill levels, to review an entire room full of banker's boxes of not very well organized hard copy business records to locate a handful of key documents in complex securities fraud litigation.

The way this project managed to mobilize so many people to volunteer their time for this somewhat esoteric goal, hearteningly democratized this scientific endeavor and made this task possible to complete.

The paper and its abstract are as follows:

Dark Matter (DM) halos with masses below 10^8 M☉, which would help to discriminate between DM models, may be detected through their gravitational effect on distant sources. The same applies to primordial black holes, considered as an alternative scenario to DM particle models. However, there is still no evidence for the existence of such objects. 
With the aim of finding compact objects in the mass range 10^6 -- 10^9 M☉, we search for strong gravitational lenses on milliarcsecond (mas) scales (< 150 mas). For our search, we used the Astrogeo VLBI FITS image database -- the largest publicly available database, containing multi-frequency VLBI data of 13828 individual sources. 
We used the citizen science approach to visually inspect all sources in all available frequencies in search for images with multiple compact components on mas-scales. At the final stage, sources were excluded based on the surface brightness preservation criterion. We obtained a sample of 40 sources that passed all steps and therefore are judged to be milli-arcsecond lens candidates. 
These sources are currently followed-up with on-going European VLBI Network (EVN) observations at 5 and 22 GHz. Based on spectral index measurements, we suggest that two of our candidates have a higher probability to be associated with gravitational lenses.

C. Casadio, et al., "SMILE: Search for MIlli Lenses" arXiv:2107.06896 (July 14, 2021) (accepted for publication).
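The "surface brightness preservation criterion" mentioned in the abstract follows from a basic property of gravitational lensing: lensing conserves surface brightness, so multiple images of the same background source must share the same brightness temperature, and a candidate whose compact components differ sharply in surface brightness can be rejected. A minimal sketch of such a filter, where the component values and the tolerance factor are illustrative choices of mine, not numbers from the paper:

```python
def brightness_temp(flux_jy, size_mas, freq_ghz):
    """Brightness temperature (K) of a compact Gaussian VLBI component,
    using the standard approximation T_b ~ 1.22e12 * S[Jy] / (theta[mas]^2 * nu[GHz]^2)."""
    return 1.22e12 * flux_jy / (size_mas**2 * freq_ghz**2)

def passes_sb_criterion(comp_a, comp_b, freq_ghz, tolerance=10.0):
    """True lensed images preserve surface brightness; allow a factor of
    `tolerance` (an assumed value) for measurement scatter."""
    ta = brightness_temp(comp_a["flux"], comp_a["size"], freq_ghz)
    tb = brightness_temp(comp_b["flux"], comp_b["size"], freq_ghz)
    ratio = max(ta, tb) / min(ta, tb)
    return ratio <= tolerance

# Two compact components of a hypothetical candidate observed at 5 GHz:
a = {"flux": 0.120, "size": 0.8}   # flux in Jy, size in mas
b = {"flux": 0.030, "size": 0.4}
print(passes_sb_criterion(a, b, freq_ghz=5.0))  # equal surface brightness -> True
```

The fainter component here is also proportionally smaller, so both have the same brightness temperature and the pair survives the cut; a faint but large companion would fail it.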

Wednesday, July 14, 2021

Is Dark Energy Just Systematic Error?

The authors of this preprint argue that the apparent dimming of type Ia supernovae with redshift, which is the most important evidence for the acceleration of the universe's expansion attributed to "dark energy", may actually be a product of systematic error: the luminosity standardization does not take into account that the age of the stars that go supernova influences how bright they become.

If their very plausible analysis is correct, there should be no "dark energy" and the cosmological constant may be zero. 

Supernova (SN) cosmology is based on the assumption that the width-luminosity relation (WLR) and the color-luminosity relation (CLR) in the type Ia SN luminosity standardization would not vary with progenitor age. Unlike this expectation, recent age datings of stellar populations in host galaxies have shown significant correlations between progenitor age and Hubble residual (HR). It was not clear, however, how this correlation arises from the SN luminosity standardization process, and how this would impact the cosmological result. Here we show that this correlation originates from a strong progenitor age dependence of the WLR and the CLR, in the sense that SNe from younger progenitors are fainter each at given light-curve parameters x1 and c. This is reminiscent of Baade's discovery of two Cepheid period-luminosity relations, and, as such, causes a serious systematic bias with redshift in SN cosmology. Other host properties show substantially smaller and insignificant differences in the WLR and CLR for the same dataset. We illustrate that the differences between the high-z and low-z SNe in the WLR and CLR, and in HR after the standardization, are fully comparable to those between the correspondingly young and old SNe at intermediate redshift, indicating that the observed dimming of SNe with redshift is most likely an artifact of over-correction in the luminosity standardization. When this systematic bias with redshift is properly taken into account, there is no or little evidence left for an accelerating universe, posing a serious question to one of the cornerstones of the concordance model.
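The "luminosity standardization" the abstract refers to is conventionally the Tripp relation, which builds each supernova's distance modulus from its apparent magnitude plus linear corrections in the light-curve width x1 and color c against a single absolute magnitude. The abstract's argument is that if the true absolute magnitude drifts with progenitor age, and mean progenitor age drifts with redshift, a single global calibration over-corrects at high redshift. A sketch of that bias, where the coefficient values are typical fitted numbers assumed by me, not taken from the paper:

```python
def distance_modulus(m_B, x1, c, alpha=0.14, beta=3.1, M_B=-19.3):
    """Tripp standardization: mu = m_B - M_B + alpha*x1 - beta*c.
    alpha, beta, M_B are illustrative, typical fitted values."""
    return m_B - M_B + alpha * x1 - beta * c

# Suppose young progenitors are intrinsically 0.1 mag fainter at the same
# (x1, c). Using one global M_B for a young, high-z supernova then inflates
# its inferred distance modulus by 0.1 mag -- extra apparent "dimming".
mu_naive = distance_modulus(m_B=24.0, x1=0.5, c=0.0)
mu_age_aware = distance_modulus(m_B=24.0, x1=0.5, c=0.0, M_B=-19.3 + 0.1)
print(round(mu_naive - mu_age_aware, 3))  # the spurious 0.1 mag offset
```

A redshift-dependent offset of roughly this size is what the authors argue has been mistaken for the acceleration signal.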

Monday, July 12, 2021

A Principal Component Analysis Of Major Branches Of Eurasian Population Genetic Diversity

This is a two-dimensional chart of the relative genetic similarities and relationships of many modern Eurasian populations and one ancient one, from a paper analyzing the genetics of the Saami people of Northern Scandinavia (especially Finland, with which they have the strongest linguistic ties). The Saami are now herders and were historically one of the last hunter-gatherer populations of Europe.
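Charts like this are produced by running principal component analysis on a genotype matrix (individuals by SNPs) and plotting each individual on the first two principal components, which typically track the main geographic axes of genetic variation. A minimal sketch on synthetic data (real analyses use tools like smartpca on hundreds of thousands of SNPs, with additional normalization and outlier handling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic genotype matrix: 60 individuals x 500 SNPs, coded 0/1/2
# (copies of the alternate allele), drawn from three toy populations
# with independently chosen allele frequencies.
freqs = rng.uniform(0.05, 0.95, size=(3, 500))
genotypes = np.vstack([
    rng.binomial(2, freqs[i], size=(20, 500)) for i in range(3)
])

# Center each SNP column, then take the top two principal components
# via SVD -- the core of the PCA step.
X = genotypes - genotypes.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc_coords = U[:, :2] * S[:2]   # each row: (PC1, PC2) for one individual

print(pc_coords.shape)  # (60, 2)
```

Plotting `pc_coords` would show the three toy populations as separate clusters, just as the Eurasian chart shows its major branches.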

Inferring Ancient Arabian Genetics From Modern Populations

Researchers in a preprint earlier this year use modern Arabian and Iranian whole genomes to infer the makeup of ancient Arabian population genetics. 

Their results suggest that the hypothetical ghost population of "basal Eurasians" (who did not experience Neanderthal admixture) may have a real world counterpart in ancient East Arabians and in the ancient Iberomaurusian hunter-gatherers of what is now Morocco, Algeria, Tunisia and Libya, in the period between the Last Glacial Maximum and the Holocene. As Wikipedia describes this ancient archaeological culture:
The name of the Iberomaurusian means "of Iberia and Mauretania", the latter being a Latin name for Northwest Africa. Pallary (1909) coined this term to describe assemblages from the site of La Mouillah in the belief that the industry extended over the strait of Gibraltar into the Iberian peninsula. This theory is now generally discounted (Garrod 1938), but the name has stuck.

In Algeria, Tunisia, and Libya, but not in Morocco, the industry is succeeded by the Capsian industry, whose origins are unclear. The Capsian is believed either to have spread into North Africa from the Near East, or to have evolved from the Iberomaurusian. In Morocco and Western Algeria, the Iberomaurusian is succeeded by the Cardial culture after a long hiatus.

We know a fair amount about these people genetically from ancient DNA and my own remote Y-DNA E-V13 ancestor is probably from this archaeological culture.

In 2013, Iberomaurusian skeletons from the prehistoric sites of Taforalt and Afalou were analyzed for ancient DNA. All of the specimens belonged to maternal clades associated with either North Africa or the northern and southern Mediterranean littoral, indicating gene flow between these areas since the Epipaleolithic. The ancient Taforalt individuals carried the mtDNA Haplogroup N subclades like U6 and M which points to population continuity in the region dating from the Iberomaurusian period.

Loosdrecht et al. (2018) analysed genome-wide data from seven ancient individuals from the Iberomaurusian Grotte des Pigeons site near Taforalt in north-eastern Morocco. The fossils were directly dated to between 15,100 and 13,900 calibrated years before present. 
The scientists found that all males belonged to haplogroup E1b1b, common among Afroasiatic males. The male specimens with sufficient nuclear DNA preservation belonged to the paternal haplogroup E1b1b1a1 (M78), with one skeleton bearing the E1b1b1a1b1 parent lineage to E-V13; one male specimen belonged to E1b1b (M215*). These Y-DNA clades had a common ancestor 24,000 years BP with the Berbers and with the E1b1b1b (M123) subhaplogroup that has been observed in skeletal remains belonging to the Epipaleolithic Natufian and Pre-Pottery Neolithic cultures of the Levant.
Maternally, the Taforalt remains bore the U6a and M1b mtDNA haplogroups, which are common among modern Afroasiatic-speaking populations in Africa. 
A two-way admixture scenario using Natufian and modern sub-Saharan samples (including West Africans and the Tanzanian Hadza) as reference populations inferred that the seven Taforalt individuals are best modeled genetically as of 63.5% Natufian-related and 36.5% sub-Saharan ancestry (with the latter having both West African-like and Hadza-like affinities), with no apparent gene flow from the Epigravettian culture of Paleolithic southern Europe. The scientists indicated that further ancient DNA testing at other Iberomaurusian archaeological sites would be necessary to determine whether the Taforalt samples were representative of the broader Iberomaurusian gene pool.
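The "two-way admixture scenario" above can be illustrated with a much simpler calculation than the f-statistics such papers actually use: if a target population's allele frequencies are modeled as a mixture p_target ≈ w·p_A + (1−w)·p_B of two reference populations, the mixing weight w has a closed-form least-squares solution. A toy sketch on synthetic frequencies, with the two references standing in for the "Natufian-related" and "sub-Saharan" sources (real analyses use qpAdm on genome-wide f4-statistics):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic allele frequencies at 1000 SNPs for two reference populations.
p_a = rng.uniform(0.05, 0.95, 1000)
p_b = rng.uniform(0.05, 0.95, 1000)

# Build a target that is a 65/35 mixture plus small sampling noise.
true_w = 0.65
p_target = true_w * p_a + (1 - true_w) * p_b + rng.normal(0, 0.01, 1000)

# Least squares: minimize ||p_target - (w*p_a + (1-w)*p_b)||^2 over w.
# Rearranged as (p_target - p_b) ~ w * (p_a - p_b), so w is a simple
# projection coefficient.
d_ref = p_a - p_b
w_hat = float(np.dot(p_target - p_b, d_ref) / np.dot(d_ref, d_ref))
print(round(w_hat, 2))  # recovers roughly 0.65
```

The Taforalt result (63.5% Natufian-related, 36.5% sub-Saharan) is the genome-wide analogue of this fit, with formal standard errors attached.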
The paper and its abstract are as follows (emphasis mine):
Arabian Peninsula is strategic for investigations centred on the structuring of the modern human population in the three main groups, in the wake of the out-of-Africa migration. Despite the poor climatic conditions for recovery of ancient DNA human evidence in Arabia, the availability of genomic data from neighbouring ancient specimens and of informative statistical tools allow better modelling the ancestry of these populations.
We applied this approach to a dataset of 741,000 variants screened in 291 Arabians and 78 Iranians, and obtained insightful evidence. 
The west-east axis was a strong forcer of population structure in the Peninsula, and, more importantly, there were clear continuums throughout time linking west Arabia with Levant, and east Arabia with Iran and Caucasus. East Arabians also displayed the highest levels of the basal Eurasian lineage of all tested modern-day populations, a signal that was maintained even after correcting for possible bias due to recent sub-Saharan African input in their genomes. Not surprisingly, east Arabians were also the ones with higher similarity with Iberomaurusians, who were so far the best proxy for the basal Eurasians amongst the known ancient specimens. The basal Eurasian lineage is the signature of ancient non-Africans that diverged from the common European-East Asian pool before 50 thousand years ago, and before the latter interbred with Neanderthals. Our results are strong evidence to include the exposed basin of the Arabo-Persian Gulf as possible home of basal Eurasians, to be investigated further on namely by searching ancient Arabian human specimens.

Saturday, July 10, 2021

Add This To The List Of Problems With Cold Dark Matter Models

Dark matter particle theories have a long string of documented problems. Here is another one. 

Cold dark matter (CDM) has faced a number of challenges mainly at small scales, such as the too-big-to-fail problem, and core-cusp density profile of dwarf galaxies. Such problems were argued to have a solution either in the baryonic physics sector or in modifying the nature of dark matter to be self-interacting, or self-annihilating, or ultra-light. Here we present a new challenge for CDM by showing that some of Milky Way's satellites are too dense, requiring the formation masses and redshifts of halos in CDM not compatible with being a satellite. These too-dense-to-be-satellite systems are dominated by dark matter and exhibit a surface density above the mean dark matter cosmic surface density Ω_dm ρ_c c/H₀ ≈ 200 M⊙/pc². This value corresponds to a dark matter pressure of 10⁻¹⁰ erg/cm³. This problem, unlike other issues facing CDM, has no solution in the baryonic sector and none of the current alternatives of dark matter can account for it. The too-dense-to-be-satellite problem presented in this work provides a new clue for the nature of dark matter, never accounted for before.
Mohammadtaher Safarzadeh and Abraham Loeb, "A New Challenge for Dark Matter Models" arXiv:2107.03478 (July 7, 2021).
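The surface density threshold quoted in the abstract, Ω_dm ρ_c c/H₀ of order 200 M⊙/pc², can be checked directly: substituting the critical density ρ_c = 3H₀²/(8πG) gives Σ = 3 Ω_dm H₀ c/(8πG). A quick check with Planck-like parameter values assumed by me (not quoted from the paper), which lands at the same order of magnitude:

```python
import math

# Physical constants (SI)
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
PC = 3.086e16        # m

# Assumed cosmological parameters (not from the paper)
H0 = 70 * 1000 / (3.086e22)   # 70 km/s/Mpc expressed in s^-1
omega_dm = 0.26

# Sigma = Omega_dm * rho_c * c / H0 = 3 * Omega_dm * H0 * c / (8 pi G)
sigma_si = 3 * omega_dm * H0 * c / (8 * math.pi * G)   # kg/m^2
sigma = sigma_si * PC**2 / M_SUN                        # M_sun/pc^2
print(round(sigma))  # ~150 M_sun/pc^2 for these parameters
```

The exact number depends on the parameter values adopted, but any reasonable choice gives a threshold of order 10² M⊙/pc², consistent with the ~200 M⊙/pc² the authors cite.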