
Friday, December 31, 2021

Probably Worth Watching

Honestly, I hate watching podcasts and video-recorded lectures, and this one is a little more than an hour long. But I've had my eye on conformal symmetry papers for a while now as a potentially promising direction for new breakthroughs, and this one was definitely eye-catching.

So, even if I don't watch it, I may chase down some of Shaposhnikov's papers on the topic, at least to see what progress he's made and to better grasp the gist of his arguments.

UPDATE January 3, 2022: Upon reading several of the papers, I am very impressed that Shaposhnikov and his colleagues really are onto groundbreaking, Copernican-revolution-class insights, and I eagerly await what more there is to come. The first two papers below are very impressive and complementary.

I very much recommend taking a look at the talk from earlier this year by Mikhail Shaposhnikov, Conformal symmetry: towards the link between the Fermi and the Planck scales. Shaposhnikov has done a lot of fascinating work over the years, developing in detail a point of view which hasn’t got a lot of attention, but that seems to me very compelling. He argues that the SM and GR make a perfectly consistent theory up to the Planck scale, with the “naturalness problem” disappearing when you don’t assume something like a GUT scale with new heavy particles. Watching the discussion after the talk, one sees how many people find it hard to envision such a possibility, even though all experimental evidence shows no signs of such particles. For more about what he has in mind, see the talk or some of the many papers he’s been writing about this.

From Woit at Not Even Wrong. Some of the recent papers developing these ideas are:

The standard way to do computations in Quantum Field Theory (QFT) often results in the requirement of dramatic cancellations between contributions induced by a "heavy" sector into the physical observables of the "light" (or low energy) sector - the phenomenon known as "hierarchy problem". This procedure uses divergent multi-loop Feynman diagrams, their regularisation to handle the UV divergences, and then renormalisation to remove them. At the same time, the ultimate outcome of the renormalisation is the mapping of several finite parameters defining the renormalisable field theory into different observables (e.g. all kinds of particle cross-sections). 
In this paper, we first demonstrate how to relate the parameters of the theory to observables without running into intermediate UV divergences. Then we go one step further: we show how in theories with different mass scales, all physics of the "light" sector can be computed in a way which does not require dramatic cancellations induced by physics of the "heavy" sector. The existence of such a technique suggests that the "hierarchy problem" in renormalisable theories is not really physical, but rather an artefact of the conventional procedure to compute correlation functions. If the QFT is defined by the "divergencies-free" method all fine-tunings in theories with well separated energy scales may be avoided.
Sander Mooij, Mikhail Shaposhnikov, "QFT without infinities and hierarchy problem" arXiv:2110.05175 (October 11, 2021).
We show how Einstein-Cartan gravity can accommodate both global scale and local scale (Weyl) invariance. To this end, we construct a wide class of models with nonpropagating torsion and a nonminimally coupled scalar field. In phenomenological applications the scalar field is associated with the Higgs boson. For global scale invariance, an additional field -- dilaton -- is needed to make the theory phenomenologically viable. In the case of the Weyl symmetry, the dilaton is spurious and the theory reduces to a sub-class of one-field models. In both scenarios of scale invariance, we derive an equivalent metric theory and discuss possible implications for phenomenology.
Georgios K. Karananas, Mikhail Shaposhnikov, Andrey Shkerin, Sebastian Zell, "Scale and Weyl Invariance in Einstein-Cartan Gravity" arXiv:2108.05897 (December 7, 2021) (Phys. Rev. D 104, 124014).
We study scalar, fermionic and gauge fields coupled nonminimally to gravity in the Einstein-Cartan formulation. We construct a wide class of models with nondynamical torsion whose gravitational spectra comprise only the massless graviton. Eliminating non-propagating degrees of freedom, we derive an equivalent theory in the metric formulation of gravity. It features contact interactions of a certain form between and among the matter and gauge currents. We also discuss briefly the inclusion of curvature-squared terms.
Georgios K. Karananas, Mikhail Shaposhnikov, Andrey Shkerin, Sebastian Zell, "Matter matters in Einstein-Cartan gravity" arXiv:2106.13811 (September 21, 2021) (Phys. Rev. D 104, 064036).
Quantum field theories with exact but spontaneously broken conformal invariance have an intriguing feature: their vacuum energy (cosmological constant) is equal to zero. Up to now, the only known ultraviolet complete theories where conformal symmetry can be spontaneously broken were associated with supersymmetry (SUSY), with the most prominent example being the N=4 SUSY Yang-Mills. In this Letter we show that the recently proposed conformal "fishnet" theory supports at the classical level a rich set of flat directions (moduli) along which conformal symmetry is spontaneously broken. We demonstrate that, at least perturbatively, some of these vacua survive in the full quantum theory (in the planar limit, at the leading order of 1/N(c) expansion) without any fine tuning. The vacuum energy is equal to zero along these flat directions, providing the first non-SUSY example of a four-dimensional quantum field theory with "natural" breaking of conformal symmetry.
Georgios K. Karananas, Vladimir Kazakov, Mikhail Shaposhnikov, "Spontaneous Conformal Symmetry Breaking in Fishnet CFT" arXiv:1908.04302 (November 22, 2020) (Phys. Lett. B 811 (2020) 135922).

Thursday, December 30, 2021

Back Migration To Africa

There is genetic evidence of significant back migration from the Middle East to Africa at quite a great time depth, accounting for 30%-40% of modern sub-Saharan African ancestry in most populations and 20% in African hunter-gatherers. The data isn't completely consistent, however.

Razib Khan mentioned a couple of the relevant papers in a quick post and another relevant paper is mentioned in the comments.

I put off blogging about it in order to write a longer post discussing previous analyses of uniparental genetics, the historical context, the methodological issues, the linguistic questions this poses, and the archaeology.

But the best was becoming the enemy of the good, so I am making this quick post in order not to let these papers fall off my radar screen entirely without mention.

Tuesday, December 28, 2021

The Story Of Celtic


Image via Wikipedia.

Davidski discusses the genetic evidence relevant to the origins of the Celtic languages at his Eurogenes blog:
A new paper at Nature by Patterson et al. argues that Celtic languages spread into Britain during the Bronze Age rather than the Iron Age [LINK]. This argument is based on the observation that there was a large-scale shift in deep ancestry proportions in Britain during the Bronze Age.

In particular, the ratio of Early European Farmer (EEF) ancestry increased significantly in what is now England during the Late Bronze Age (LBA). On the other hand, the English Iron Age was a much more stable period in this context.

I don't have any strong opinions about the spread of Celtic languages into Britain, and Patterson et al. might well be correct, but their argument is potentially flawed . . . . Indeed, when I plot some of the key ancient samples from the paper in my ultra fine scale Principal Component Analyses (PCA) of Northern and Western Europe, it appears that it's only the Early Iron Age (EIA) population from England that overlaps significantly with a roughly contemporaneous group from nearby Celtic-speaking continental Europe.

Archaeology and linguistics have generally favored an early Iron Age date for Celtic language expansion into Western Europe including Britain and Ireland. 

But, we've known for some time now that there was massive population replacement in Britain and Ireland by Bell Beaker derived people with lots of steppe ancestry (and lighter skin) over a roughly 300-400 year time period (or less) in the mid-2000s BCE. This almost surely resulted in the replacement of the first farmer languages of Britain and Ireland with whatever language the Bell Beaker people (radiating from a Dutch Bell Beaker hub) shared. Those first farmer languages were presumably derived from the Iberian first farmer language, in the same family as that of the Cardial Pottery Neolithic which took the Mediterranean route, and were quite possibly Vasconic; the first farmers, in turn, had largely replaced the Mesolithic hunter-gatherers there, whose language is no doubt almost completely lost.

This transition was aided by the fact that farming as a way of life had collapsed in Britain and Ireland, probably because the first Neolithic farming methods were unsustainable in some way, such as by depleting topsoil nutrients, resulting in a reversion to hunting and gathering, and probably to herding as well.

The new paper argues that this language was proto-Celtic of some kind. And, there is a lot to like about this hypothesis, since the geographic extent of the Bell Beaker people and the geographic extent of the Celtic languages coincide to a great extent, and because it was a time of language replacement. Also, most of the early Iron Age genetic impact was in Southern England, not in peripheral places like Scotland, where Celtic persisted longer.

On the other hand, seeing a continental European genetic influx in the early Iron Age from Celtic areas, at just the time that linguists (based upon the magnitude of linguistic diversification) and anthropologists (based mostly upon material culture) say Celtic languages should have arrived in the region, has the virtue of multidisciplinary convergence of evidence. It is a hard signal to see genetically because Celtic areas of continental Europe and the British Isles were quite similar genetically immediately prior to the Iron Age. 

Iron Age technology is also just the kind of elite dominance generating factor that would have allowed for language shift with a fairly modest migration in population genetic terms, much like the Norman invasion that led to the transition from Old English to French-influenced Middle English (on essentially the same geographic route) about two thousand years later.

One downside of this model, however, is that it leaves us in the dark about the nature of the Bell Beaker language. This is particularly challenging because while the genetically similar Corded Ware people almost surely spoke an Indo-European language, the Bell Beaker people adopted a lot culturally from pre-steppe Southern Iberians where this archaeological culture began, and could have been heavily influenced by Iberians linguistically as well (in which case the Bell Beaker people might have been Vasconic rather than Indo-European linguistically, as a result of a culturally driven language shift).

One could also imagine a slightly more sophisticated model in which the shared Bell Beaker language family substrate of Celtic regions facilitated the distinctive development of the Celtic languages, perhaps in a manner distinct from that of, for example, the Italic languages, which may have been part of a shared mid-tier language family with Celtic. This could have happened in connection with the expansion of the Urnfield culture of the 12th century BC (Late Bronze Age), which immediately preceded the expansion of the Hallstatt archaeological culture commonly associated with the proto-Celtic languages and people (and which was followed in much of its area by the La Tène culture). In this model, the divergence of the Italic and Celtic languages arises from differing linguistic substrates.

A lack of consensus among linguists about the relationships of the extant and extinct Celtic languages to each other (driven by scant source material) doesn't help the process of resolving the different possibilities:
Irish, Scottish and Manx form the Goidelic languages, while Welsh, Cornish and Breton are Brittonic. All of these are Insular Celtic languages, since Breton, the only living Celtic language spoken in continental Europe, is descended from the language of settlers from Britain. There are a number of extinct but attested continental Celtic languages, such as Celtiberian, Galatian and Gaulish. Beyond that there is no agreement on the subdivisions of the Celtic language family. They may be divided into P-Celtic and Q-Celtic.

The different hypotheses (quoted from Wikipedia) are lined up below: 

Eska (2010)

Eska (2010) evaluates the evidence as supporting the following tree, based on shared innovations, though it is not always clear that the innovations are not areal features. It seems likely that Celtiberian split off before Cisalpine Celtic, but the evidence for this is not robust. On the other hand, the unity of Gaulish, Goidelic, and Brittonic is reasonably secure. Schumacher (2004, p. 86) had already cautiously considered this grouping to be likely genetic, based, among others, on the shared reformation of the sentence-initial, fully inflecting relative pronoun *i̯os, *i̯ā, *i̯od into an uninflected enclitic particle. Eska sees Cisalpine Gaulish as more akin to Lepontic than to Transalpine Gaulish.

Eska considers a division of Transalpine–Goidelic–Brittonic into Transalpine and Insular Celtic to be most probable because of the greater number of innovations in Insular Celtic than in P-Celtic, and because the Insular Celtic languages were probably not in great enough contact for those innovations to spread as part of a sprachbund. However, if they have another explanation (such as an SOV substratum language), then it is possible that P-Celtic is a valid clade, and the top branching would be:

Italo-Celtic

Within the Indo-European family, the Celtic languages have sometimes been placed with the Italic languages in a common Italo-Celtic subfamily. This hypothesis fell somewhat out of favour following reexamination by American linguist Calvert Watkins in 1966. Irrespective, some scholars such as Ringe, Warnow and Taylor have argued in favour of an Italo-Celtic grouping in 21st century theses.

The Willow In Slavic Folklore

My birth surname, "Willeke", is a derivative of "Willow Tree" in some early modern dialect of German. 

Also, while my ancestry on my father's side is German (well, Prussian, anyway), my Y-DNA patriline probably derives from the Balkans, from which my ancestors integrated themselves into migrating populations of steppe men in the early Bronze Age or late Copper Age and made their way to the northwest.

Naturally, a blog post at the Old European Culture blog about the Willow in Slavic Folklore caught my eye (the source is full of wonderful images and links to related articles, so please, click through and check it out).

The Willow is associated with Slavic spring fertility rituals and summer magical rituals. 

Saturday before Palm Sunday is in Serbia known as Vrbica (Willow day). On that day kids and young women make and wear wreaths made of willow twigs and flowers. On that day, willow twigs with young leaves and flowers, like these, known in English as Pussy Willow are brought to the church where they are blessed the next day, Palm Sunday, which is in Serbia known as Cveti (Flowers day). According to the church, this whole willow business is the consequence of the fact that there are no palms in Serbia, and people replaced palm branches with willow branches...Move on, nothing to see here...

There are a few problems with this explanation...Vrbopuc (Willow burst) is an expression which in Serbia means "part of spring during which willow starts growing new green shoots"... In Serbia in the past people believed that during this time of the year women become very horny and very fertile...So this was the best time to make babies...🙂 Vrbopuc is also a term used in Serbia for the period of sudden surge of sexual hormones in teenage boys and girls...

Basically willow was directly linked with human fertility...Which is why in the past in Serbia, girls used to make belts from willow twigs (wrap willow twigs around their bellies), and wear them going to the rivers to perform ritual baths... This ritual bathing was performed on Cveti (Flowers day) but also on Djurdjevdan (St George's day), the old Yarilo day, the old day of the young sun, the old celebration of the beginning of summer...Known also as Beltane... At the same time on St Georges day, while girls were wearing willow belts and bathing, boys and men were blowing into willow horns to "scare the witches away"... It is interesting that this ritual bathing was done before sunrise and willow belt had to be taken off as soon as the sun rose...And that blowing into the willow horns was also done during the night.... I didn't pay much attention to this until I remembered that willow was directly linked with water, water divination....Dodole, young women which took part in rain bringing magic rituals performed during hot summers also wore willow twig belts...

Mother Earth = Yin = Winter, Cold, Wet, Night, Down = Female fertility

Father Sun (Sky) = Yang = Summer, Hot, Dry, Day, Up = Male fertility

Which is why rain, water magic is female magic... And which is why willow, the tree which grows next to water, is associated with rain, water and female fertility, female sexuality...Hence ritual whipping of teenage girls by teenage boys using willow whips, performed in the Czech Republic, Slovakia on Easter Sunday... If men arrive at women's houses after 12 o'clock, women throw a bucket of cold water on them. In some regions the men also douse girls with water... The man first sings a ritual song about spring, bountifulness and fertility, and the young woman then turns around and gets a few whacks on her backside with the willow whip...This was done "so girl would be healthy, beautiful and fertile throughout the following year"...

In Serbia willow was also linked with female coming of age rituals, also performed on Willow day...On that day, young unmarried girls, wearing willow twig belts and willow and flower wreaths walked around the village land and blessed the nature... They would first go to a spring where they would sing and dance and would then wish good morning to the spring water. Spring water is in Serbia called "živa voda" (live water, water of life) and is believed to have magic properties...

Spring is seen as a place where fertile Mother Earth releases her "water of life" in the same way that a fertile woman releases her menstrual blood, female "water of life". In this way the spring water is magically linked with the menstrual blood... So no wonder that the spring is the first stop of the Lazarice group, the group of girls whose "water of life has started to run" (who got their period). BTW they are called Lazarice "because Willow day is by Christians also known as Lazarus day"... After this ritual, the girls would go to meadows to pick wild flowers. They would use these flowers to make wreaths which they would wear on their heads during their procession through the village land and the village... They would then walk through the fields, forests, meadows belonging to the village, and would sing fertility songs wishing nature to be fertile and bountiful.... Young girls, Spring Earth, Female fertility, Earth fertility, water...Willow...

That the belief in the link between willow and fertility was once probably Europe wide, can be seen from this English belief: "Striking an animal or a child with a willow twig will stunt their growth!" This is a great example of Christianity at work...

In Serbia it is actually the opposite...On Willow day, children and animals are whipped with willow twigs so they grow like willow and are healthy and fertile...Which is what you would expect after everything I have presented so far... Willow twigs, particularly the ones cut around St George's day are considered to be most potent when it comes to growing magic...Willow was in the mind of the Slavs definitely linked to growth and fertility... 
So imagine my surprise when I came across this information: Willow contains two very interesting chemicals: indolebutyric acid (IBA) and salicylic acid (SA)... Indolebutyric acid (IBA) is a plant hormone that stimulates root growth. It is present in high concentrations in the growing tips of willow branches... Salicylic acid (SA) is a plant hormone which is involved in the process of “systemic acquired resistance” (SAR) – where an attack on one part of the plant induces a resistance response to pathogens (triggers the plant’s internal defences) in other parts of the plant...
Soooo...The first hormone makes the plant grow...The second hormone makes the plant healthy...And these hormones can be extracted from the willow shoots and used to help your plants grow and be healthy...
Salicylic acid is also an acne treatment, something young men and women often need to keep themselves looking beautiful.

Monday, December 27, 2021

An English Neolithic Tomb From 3700 BCE

The first farmers of Britain had a polygamous, patrilocal society, in which children from previous relationships of wives of clan members and some other people were sometimes adopted into multigenerational clans.

To explore kinship practices at chambered tombs in Early Neolithic Britain, here we combined archaeological and genetic analyses of 35 individuals who lived about 5,700 years ago and were entombed at Hazleton North long cairn. Twenty-seven individuals are part of the first extended pedigree reconstructed from ancient DNA, a five-generation family whose many interrelationships provide statistical power to document kinship practices that were invisible without direct genetic data. Patrilineal descent was key in determining who was buried in the tomb, as all 15 intergenerational transmissions were through men. The presence of women who had reproduced with lineage men and the absence of adult lineage daughters suggest virilocal burial [ed. women were buried with their husbands] and female exogamy. We demonstrate that one male progenitor reproduced with four women: the descendants of two of those women were buried in the same half of the tomb over all generations. This suggests that maternal sub-lineages were grouped into branches whose distinctiveness was recognized during the construction of the tomb. Four men descended from non-lineage fathers and mothers who also reproduced with lineage male individuals, suggesting that some men adopted the children of their reproductive partners by other men into their patriline. Eight individuals were not close biological relatives of the main lineage, raising the possibility that kinship also encompassed social bonds independent of biological relatedness.
Chris Fowler, et al., "A high-resolution picture of kinship practices in an Early Neolithic tomb" Nature (December 22, 2021). Hat tip: Bernard's blog.

Sunday, December 19, 2021

A Muon g-2 Recap

The introduction of an otherwise unimpressive new muon g-2 paper provides a nice recap of where the physics world is in terms of experimental measurements and theoretical calculations of the anomalous magnetic moment of the muon, muon g-2.

In particular, it notes the not widely known fact that the data underlying the "data driven" Standard Model predicted value showing a 4.2 sigma discrepancy with the experimental value itself shows large discrepancies that belie the claimed low uncertainty associated with this method.

The experimental value of the muon anomalous magnetic moment measured recently by the FNAL Muon g-2 experiment confirms the old BNL result and adds significance to the long standing discrepancy between the measured value and the standard model (SM) prediction, now raised to 4.2 σ. Currently, the world average for this discrepancy is 

Δa_μ ≡ a_μ^exp − a_μ^SM = (2.51 ± 0.59) · 10^−9. (1.1) 

The SM estimate recommended by the Muon g-2 Theory Initiative relies on a data-driven approach that makes use of experimental measurements of the σ_had = σ(e⁺e⁻ → hadrons) cross section to determine the hadronic vacuum polarization contribution a_μ^HVP. This is the most uncertain input in the prediction for a_μ and, due to its non-perturbative nature, improving in precision is a difficult task. Apart for the uncertainty, one can also wonder to which level the adopted value can be considered reliable, since determinations of a_μ^HVP using data from different experiments exhibit a certain disagreement. In particular, KLOE and BaBar disagree at the level of 3σ, especially in the π⁺π⁻ channel that accounts for more than 70% of the value of a_μ^HVP, and while BaBar data favour smaller values of Δa_μ, KLOE data pull to increase the discrepancy. 

[1] Due to relatively larger errors, there is instead agreement within 1.5 σ between KLOE and CMD-2, SND, BES-III. 

The HVP contribution can also be determined from first principles by means of lattice QCD techniques. However, until recently, the uncertainties in lattice results were too large to allow for useful comparisons with the data-driven results. A first lattice QCD determination of a_μ^HVP with subpercent precision was recently obtained by the BMW collaboration, a_μ^HVP = 707.5(5.5) × 10^−10. It differs from the world average obtained from the data-driven dispersive approach by 2.1σ and, in particular, it would yield a theoretical prediction for a_μ only 1.3σ below the measurement.[2] 

[2] Other lattice determinations also tend to give larger a_μ^HVP central values although with considerably larger errors. 

. . . [N]ew high statistics measurements of σ_had, and in particular in the π⁺π⁻ channel, that might be soon provided by the CMD-3 collaboration, as well as new high precision lattice evaluations, which might confirm or correct the BMW result, will be of crucial importance not only to strengthen or resize the evidences for a (g − 2)_μ anomaly, but also to assess the status of the related additional discrepancies. 

From arXiv:2112.09139.
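As a quick sanity check on the headline number, the 4.2 sigma figure follows directly from the world average quoted above. Here is a back-of-the-envelope Python sketch (my own arithmetic, not the paper's error analysis):

```python
# Significance of the muon g-2 discrepancy using the numbers in eq. (1.1) above.
delta_a_mu = 2.51e-9   # central value of a_mu(exp) - a_mu(SM)
sigma = 0.59e-9        # combined uncertainty

print(delta_a_mu / sigma)   # ~4.25, i.e. the quoted "4.2 sigma" discrepancy
```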

Personally, I am quite confident that the BMW result, with minor modifications, will turn out to be the correct Standard Model prediction. But it will take time for the scientific consensus to catch up.

If this is true, there is no muon g-2 anomaly and hence, no hint from it of new physics by this very global measure of discrepancies from the Standard Model. This basically rules out most kinds of new physics at the electro-weak scale or anywhere close to it at higher energies.

Superdeterminism and More

Sabine Hossenfelder is an advocate of superdeterminism in quantum mechanics (although not necessarily a very dogmatic one) and has published papers on it. She has a new blog post on the topic.

Some of the weirdest aspects of quantum mechanics are its seeming non-locality, particularly but not only when there is entanglement, and the fact that measurement changes how particles behave, with what constitutes measurement not defined in a very satisfactory manner. Superdeterminism is a theory that seeks to explain these weird aspects of quantum mechanics in a way that seems less weird.

Basically, superdeterminism is a hidden variables theory (with properties, such as a lack of statistical independence, that allow it to escape Bell's Inequality) which argues that the non-local effects in quantum mechanics are really due to individual quantum mechanical particles having pre-determined non-linear properties that are measured when a measurement happens.

So, for example, the slit that a photon goes through in a two slit experiment is, in a superdeterminism framework, already determined when it is emitted.

Superdeterminists assert that the behavior of quanta is too mathematically chaotic to be measured or predicted otherwise, leaving us with average outcomes of chaotic processes that are functionally random from the point of view of an observer, even though they are actually deterministic at the level of the individual particle.
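To make the statistical independence assumption mentioned above concrete, here is a toy Python sketch (my own illustration, not something from Hossenfelder's post) of a deterministic hidden variable model that respects statistical independence. Models of this kind can never exceed the Bell-CHSH bound of 2, while quantum mechanics predicts, and experiments confirm, about 2.83; superdeterminism evades the bound by dropping the assumption that the hidden variable is independent of the detector settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def outcome(setting, lam):
    # Deterministic +/-1 outcome fixed in advance by the hidden variable lam (an angle).
    return np.where(np.cos(lam - setting) >= 0, 1, -1)

def correlation(a, b, n=200_000):
    # Statistical independence: lam is drawn without any reference to the
    # detector settings a and b. Superdeterminism drops exactly this step.
    lam = rng.uniform(0, 2 * np.pi, n)
    return np.mean(outcome(a, lam) * outcome(b, lam))

a1, a2 = 0.0, np.pi / 2          # Alice's two possible settings
b1, b2 = np.pi / 4, -np.pi / 4   # Bob's two possible settings

S = (correlation(a1, b1) + correlation(a1, b2)
     + correlation(a2, b1) - correlation(a2, b2))

print(S)               # ~2.0: this kind of model cannot exceed the classical bound of 2
print(2 * np.sqrt(2))  # ~2.83: the quantum mechanical prediction
```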

She also makes the important point that the colloquial understanding of free will is no more consistent with the leading, purely stochastic, theory of quantum mechanics than it is with determinism, since we have no choice regarding how the pure randomness of quantum mechanics manifests itself. 

The way that the term "free will" is used in quantum mechanics, which involves statistical independence, is a technical meaning that is a false friend and does not imply what "free will" means in colloquial discussion.

I am not convinced that superdeterminism is correct. And, she acknowledges that we lack the instrumentation to tell at this point, while bemoaning the scientific establishment's failure to invest in what we would need to get closer to finding out. 

But, her points on free will, and on the sloppy way that Bell's Inequality is assumed to rule out more hidden variables theories than it actually does, are well taken.

Friday, December 17, 2021

Émilie du Châtelet

She was featured in today's Google doodle.

Gabrielle Émilie Le Tonnelier de Breteuil, Marquise du Châtelet (French pronunciation: [emili dy ʃɑtlɛ] (listen); 17 December 1706–10 September 1749) was a French natural philosopher and mathematician from the early 1730s until her death due to complications during childbirth in 1749. Her most recognized achievement is her translation of and commentary on Isaac Newton's 1687 book Principia containing basic laws of physics. The translation, published posthumously in 1756, is still considered the standard French translation today. Her commentary includes a contribution to Newtonian mechanics—the postulate of an additional conservation law for total energy, of which kinetic energy of motion is one element. This led to her conceptualization of energy as such, and to her derivation of its quantitative relationships to the mass and velocity of an object.

Her philosophical magnum opus, Institutions de Physique (Paris, 1740, first edition), or Foundations of Physics, circulated widely, generated heated debates, and was republished and translated into several other languages within two years of its original publication. She participated in the famous vis viva debate, concerning the best way to measure the force of a body and the best means of thinking about conservation principles. Posthumously, her ideas were heavily represented in the most famous text of the French Enlightenment, the Encyclopédie of Denis Diderot and Jean le Rond d'Alembert, first published shortly after du Châtelet's death. Numerous biographies, books and plays have been written about her life and work in the two centuries since her death. In the early 21st century, her life and ideas have generated renewed interest.

Émilie du Châtelet had, over many years, a relationship with the writer and philosopher Voltaire.

From Wikipedia

Go read the biography portion of the linked entry. Her life was quite remarkable. For example, "by the age of twelve she was fluent in Latin, Italian, Greek and German; she was later to publish translations into French of Greek and Latin plays and philosophy. She received education in mathematics, literature, and science. Du Châtelet also liked to dance, was a passable performer on the harpsichord, sang opera, and was an amateur actress. As a teenager, short of money for books, she used her mathematical skills to devise highly successful strategies for gambling."

Her life is also a reminder of the reality that in the 18th century even the scientifically minded nobility (her parents were minor nobles and part of the French King's court, her husband was a high ranking noble, and later in life she had affairs first with Voltaire, and then with a leading French poet) was not free of widespread infant and child and maternal mortality. Her experience in this regard was the norm, and not unusually tragic (see also, e.g., Euler).

Two of her five brothers died in childhood and two more died young. 

She gave birth four times. Her third and fourth children died as infants and she died giving birth to her fourth child at age forty-two.

Also:

On 12 June 1725, she married the Marquis Florent-Claude du Chastellet-Lomont. Her marriage conferred the title of Marquise du Chastellet. Like many marriages among the nobility, theirs was arranged. As a wedding gift, the husband was made governor of Semur-en-Auxois in Burgundy by his father; the recently married couple moved there at the end of September 1725. Du Châtelet was eighteen at the time, her husband thirty-four.

Wednesday, December 15, 2021

Quantum Physics Needs Imaginary Numbers

Multiple experiments have shown that the most straightforward versions of quantum physics equations that don't use imaginary numbers (i.e. multiples of the quantity called "i", the square root of negative one) don't work. This confirms the theoretical expectation.

Another way of putting this is that quantum physics requires some possible ways for events to happen to have negative probabilities of happening, although the ultimate observable result, which is calculated as the sum of all possible ways that something can happen, is still always between zero and one hundred percent.
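To see how complex numbers enter in practice, here is a toy two-path interference calculation in Python (my own illustration, not drawn from the papers below): each path contributes a complex amplitude, the amplitudes are added, and only the squared magnitude of the sum is observed as a probability, which always lands between zero and one.

```python
import numpy as np

# Two-path interference: the relative phase between the paths is encoded in a
# complex amplitude and shifts probability between the two detectors.
for phi in np.linspace(0, 2 * np.pi, 9):
    amp_d1 = (1 + np.exp(1j * phi)) / 2   # amplitude to reach detector 1
    amp_d2 = (1 - np.exp(1j * phi)) / 2   # amplitude to reach detector 2
    p1, p2 = abs(amp_d1) ** 2, abs(amp_d2) ** 2
    print(f"phase {phi:.2f} rad: P1 = {p1:.3f}, P2 = {p2:.3f}, total = {p1 + p2:.3f}")
# The intermediate amplitudes are complex, but every printed probability lies
# between 0 and 1, and the two detector probabilities always sum to 1.
```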

Science News explains the limits of these findings as well in its article published today:

[T]he results don’t rule out all theories that eschew imaginary numbers, notes theoretical physicist Jerry Finkelstein of Lawrence Berkeley National Laboratory in California, who was not involved with the new studies. The study eliminated certain theories based on real numbers, namely those that still follow the conventions of quantum mechanics. It’s still possible to explain the results without imaginary numbers by using a theory that breaks standard quantum rules. But those theories run into other conceptual issues, making them “ugly,” he says. But “if you’re willing to put up with the ugliness, then you can have a real quantum theory.”
The article recaps conclusions from three recent physics papers, specifically:
Although complex numbers are essential in mathematics, they are not needed to describe physical experiments, as those are expressed in terms of probabilities, hence real numbers. Physics, however, aims to explain, rather than describe, experiments through theories. Although most theories of physics are based on real numbers, quantum theory was the first to be formulated in terms of operators acting on complex Hilbert spaces. This has puzzled countless physicists, including the fathers of the theory, for whom a real version of quantum theory, in terms of real operators, seemed much more natural. In fact, previous studies have shown that such a ‘real quantum theory’ can reproduce the outcomes of any multipartite experiment, as long as the parts share arbitrary real quantum states. Here we investigate whether complex numbers are actually needed in the quantum formalism. We show this to be the case by proving that real and complex Hilbert-space formulations of quantum theory make different predictions in network scenarios comprising independent states and measurements. This allows us to devise a Bell-like experiment, the successful realization of which would disprove real quantum theory, in the same way as standard Bell experiments disproved local physics.
M.-O. Renou et al. Quantum theory based on real numbers can be experimentally falsified. Nature. Published online December 15, 2021. doi: 10.1038/s41586-021-04160-4.
Standard quantum theory was formulated with complex-valued Schrödinger equations, wave functions, operators, and Hilbert spaces. Previous work attempted to simulate quantum systems using only real numbers by exploiting an enlarged Hilbert space. A fundamental question arises: are the complex numbers really necessary in the standard formalism of quantum theory? 
To answer this question, a quantum game has been developed to distinguish standard quantum theory from its real-number analogue, by revealing a contradiction between a high-fidelity multi-qubit quantum experiment and players using only real-number quantum theory. Here, using superconducting qubits, we faithfully realize the quantum game based on deterministic entanglement swapping with a state-of-the-art fidelity of 0.952. Our experimental results violate the real-number bound of 7.66 by 43 standard deviations. Our results disprove the real-number formulation and establish the indispensable role of complex numbers in the standard quantum theory.
M.-C. Chen et al. Ruling out real-valued standard formalism of quantum theory. Physical Review Letters. In press, 2021.
Quantum theory is commonly formulated in complex Hilbert spaces. However, the question of whether complex numbers need to be given a fundamental role in the theory has been debated since its pioneering days. Recently it has been shown that tests in the spirit of a Bell inequality can reveal quantum predictions in entanglement swapping scenarios that cannot be modelled by the natural real-number analog of standard quantum theory. Here, we tailor such tests for implementation in state-of-the-art photonic systems. We experimentally demonstrate quantum correlations in a network of three parties and two independent EPR sources that violate the constraints of real quantum theory by over 4.5 standard deviations, hence disproving real quantum theory as a universal physical theory.
Z.-D. Li et al. Testing real quantum theory in an optical quantum network. Physical Review Letters. In press, 2021.

Tuesday, December 14, 2021

What Do Non-Convergent Series In The Standard Model Tell Us?

Most calculations in the Standard Model are made by adding up infinite series of Feynman diagrams to get a (complex valued) square root of a probability, called an amplitude. Each term in the series carries an integer power of the coupling constant for the force whose effects are being evaluated; that power corresponds to what we call the number of "loops" in that diagram. 

These calculations represent every possible way that a given outcome could happen, assign a (complex valued)  amplitude to each such possibility, and add up the amplitudes of all possible ways that something could happen. This, in turn, gives us the overall probability that it will happen. 

Since we can't actually calculate the entire series, we truncate it, usually at a certain number of "loops" which have the same coupling constant factor and represent the number of steps in virtual interactions that are involved in subsets of possible ways that the event we are considering could happen.

The trouble is that by truncating the series, we are implicitly assuming that the infinite number of remaining terms of the infinite series will collectively be smaller than the finite number of terms that we have already calculated. But, from a rigorous mathematical perspective, that isn't proper, because naively, these particular infinite series aren't convergent.

[T]he problem of the slow convergence of the infinite series path integrals we use to do QCD calculations isn't simply a case of not being able to throw enough computing power at it. The deeper problem is that the infinite series whose sum we truncate to quantitatively evaluate path integrals isn't convergent. After about five loops, using current methods, your relative error starts increasing rather than decreasing.


When you can get to parts per 61 billion accuracy at five loops as you can in QED, or even to the roughly one part per 10 million accuracy you can in weak force calculations to five loops, this is a tolerable inconvenience since our theoretical calculations still exceed our capacity to measure the phenomena that precisely [and the point of non-convergence also kicks in at more loops in these theories]. But when you can only get to parts per thousand accuracy at five loops as is the case in perturbative QCD calculations, an inability to get great precision by considering more loops is a huge problem when it comes to making progress.

Of course, despite the fact that mathematically, methods using non-convergent infinite series shouldn't work, in practice, when we truncate it, we do actually reproduce experimental results up to the expected precision we would get if it really were a convergent infinite series.

Nature keeps causing particles to behave with probabilities just in line with these calculations up to their calculated uncertainties.

So, for some reason, reality isn't being perfectly represented by the assumption that we have an infinite series with terms in powers of coupling constants, whose value at any given loop level is a probability with a magnitude of not more than 1 (quantum mechanics employs negative probabilities in intermediate steps of its calculations).
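The flavor of this situation, a series that does not converge but whose truncations nonetheless track reality, can be seen in a classic textbook example of a divergent asymptotic series (a toy model of my own choosing, not an actual QFT calculation): the truncated series approximates the true answer beautifully up to an optimal number of terms, and then blows up.

```python
import numpy as np
from math import factorial

# f(x) = integral_0^infinity exp(-t) / (1 + x*t) dt has the asymptotic
# expansion sum_n (-1)^n * n! * x^n, which diverges for every x > 0.
x = 0.1

t = np.linspace(0.0, 60.0, 600_001)
integrand = np.exp(-t) / (1 + x * t)
exact = np.sum((integrand[:-1] + integrand[1:]) / 2) * (t[1] - t[0])  # trapezoid rule

partial_sum = 0.0
for n in range(21):
    partial_sum += (-1) ** n * factorial(n) * x ** n
    print(f"{n:2d} terms: error = {abs(partial_sum - exact):.2e}")
# The error shrinks until roughly n ~ 1/x = 10 terms ("loops"), then grows
# without bound: truncating at the right order works even though the full
# series never converges.
```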

There are at least two plausible reasons why this theoretically untenable approach works; they aren't mutually exclusive.

One is that the number of possible loops isn't actually infinite. Each step of an interaction represented by a Feynman diagram takes non-zero time, and the probability being calculated generally involves something happening over a finite distance. At some point, one has to discount the probability of having more than a given number of loops within that distance, because there is a probability distribution of how long each step takes. In the finite amount of time necessary to cover that finite distance, there needs to be an additional discounting factor at each subsequent loop, in addition to the coupling constant exponents themselves, to reflect the probability that this many loops can be squeezed into a finite period of time. 

There is a largely equivalent way of expressing this idea using an effective minimum length scale (perhaps the Planck length or perhaps even some larger value), rather than a probabilistic finite bound on the maximum number of loops that can impact a process that arises from related time and length limitations (either in isolation or when taken together).

A second is that there may be a "soft bound" on the contribution to the overall probability of an event that gets smaller at each subsequent loop, separate and apart from the coupling constant. This discount in the estimation of the error arising from the truncated terms would reflect the fact that at each Feynman diagram loop number, there are more and more individual Feynman diagrams that are aggregated, and that the amplitudes (the complex valued square roots of probability associated with each diagram) are pseudo-random. Thus, the more diagrams that go into a given loop level calculation, the more likely they are to cancel out against each other, making the probability distribution of the magnitude of the amplitude contributed to the total at each subsequent loop have a strictly decreasing mean value at each successively greater number of loops. This means that the successive terms of the infinite series get smaller at a rate faster than we could rely upon by considering only the exponent of the coupling constant at each loop. 
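The cancellation intuition behind this second possibility is easy to illustrate numerically (again, a toy sketch of my own, not an actual diagram count): the coherent sum of N pseudo-random unit amplitudes typically has a magnitude of order the square root of N, not N.

```python
import numpy as np

rng = np.random.default_rng(1)

for n_diagrams in (10, 100, 1_000, 10_000):
    phases = rng.uniform(0, 2 * np.pi, n_diagrams)
    total = np.sum(np.exp(1j * phases))   # coherent sum of unit-magnitude amplitudes
    print(f"{n_diagrams:6d} diagrams: |sum| = {abs(total):8.2f}, sqrt(N) = {np.sqrt(n_diagrams):7.2f}")
# Random phases mostly cancel, so the summed contribution grows far more
# slowly than the worst case in which every diagram adds coherently.
```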

In each of these cases, because reality seems to be convergent even though the series naively isn't, this additional rate of diminution causes the additional terms in the series to get smaller at a rate fast enough to make the series actually turn out to be a convergent infinite series once we understand its structure better. It could be that either of these considerations in isolation, which are largely independent of each other, is sufficient to produce this result, or it could be that both of these adjustments are needed to reach the convergence threshold.

I'm sure that I haven't exhausted the entire realm of possibilities. It may be that one can limit oneself to on-shell terms only, or find some other justification for ignoring certain terms, and beat the convergence problem. Or, as I noted previously:

It might be possible to use math tricks to do better. For example, mathematically speaking, one can resort to techniques along the lines of a Borel transformation to convert a divergent series into a convergent one. And, path integral formulations have limitations, associated with perturbative QCD calculations, that can be overcome by using non-perturbative QCD methods like lattice QCD. 

Nonetheless, it seems likely that one or both of these factors, which make the terms of the infinite series get smaller faster than we can mathematically assume they must when we make only weak assumptions about the values of the terms multiplied by the coupling constant power scale factor at each loop, could be part of the solution. 

It likewise must surely be the case that a "correct" description of reality does not depend upon a quantity that is truly correctly calculated by adding up the terms of an unmodified non-convergent infinite series. Either these solutions, or something along the lines of these considerations, must be present in a "correct" formula that can perfectly reproduce reality (at least in principle).

Monday, December 13, 2021

Four Waves Of Denisovans And Humans In Tibet

A new review article recaps the four waves of settlement of the Tibetan Plateau by modern humans and archaic hominins. 

The first wave, made up of archaic hominins called Denisovans, reached Tibet in the Middle Stone Age. Denisovans are a sister clade of Neanderthals, and are closer to both Neanderthals and modern humans than the dominant hominin species outside of Africa that preceded all three of these hominin species, Homo erectus, which emerged from Africa about 1.8 million years ago after evolving in Africa about 2 million years ago. 

Some Denisovans evolved genetic adaptations to the low oxygen levels at high altitudes that were passed to modern human populations in Asia in the Upper Paleolithic era, through admixture (i.e. via mixed Denisovan and modern human children) around 47,000 years ago, probably somewhere in the lowland of East Asia near Tibet.

The second wave was made up of Upper Paleolithic modern human hunter-gatherers around 40,000 years ago.

The third wave was made up of Mesolithic modern human hunter-gatherers around 16,000 years ago.

The languages of these two waves of hunter-gatherer people in Tibet (let alone any Denisovan language) probably died out and were lost forever around the time that herders and farmers arrived in Tibet.

The last wave was made up of Neolithic modern human herders and farmers around 8,000 years ago who probably brought the Sino-Tibetan languages to the Tibetan Plateau from a source somewhere in Northern China. We know that they continuously occupied the Plateau, but don't know if the previous hominin inhabitants of the Tibetan Plateau settled there year round or occupied it continuously.

Science Daily has the press release:
Denisovans were archaic hominins once dispersed throughout Asia. After several instances of interbreeding with early modern humans in the region, one of their hybridizations benefited Tibetans' survival and settlement at high altitudes. . . . 
Peiqi Zhang, a UC Davis doctoral student who has participated in excavations of an archaeological site above 15,000 feet (4,600 meters) . . . review[ed] . . . evidence of human dispersal and settlement in the Tibetan Plateau, integrating the archaeological and genetic discoveries so far. . . .  
Archaeological investigations suggest four major periods of occupation, beginning with Denisovans about 160,000 years ago and followed by three periods of humans who arrived starting around 40,000 years ago, 16,000 years ago and 8,000 years ago.

"Based on archaeological evidence, we know that there are gaps between these occupation periods," Peiqi Zhang said. "But the archaeological work on the Tibetan Plateau is very limited. There's still a possibility of continuous human occupation since the late ice age, but we haven't found enough data to confirm it." . . . 

"From the genetic studies, we can detect that all East Asians, including the Tibetans, interbred with two distinct Denisovan groups, with one of such events unique to East Asians (and the other shared with other South Asians). . . . Since all East Asians show the same patterns, we have reason to believe that this interbreeding event (the one that's unique to East Asians) happened somewhere in the lowland instead of on the plateau."

Zhang and Zhang propose two models of human occupation of the Tibetan Plateau as a framework for scholars that can be tested by future discoveries: 
* Intermittent visits before settling there year-round about the end of the ice age, about 9,000 years ago. 
* Continuous occupation beginning 30,000 to 40,000 years ago.

In either model, Denisovans could have passed the EPAS1 haplotype to modern humans about 46,000 to 48,000 years ago.

"The main question is whether they're staying there all year-round, which would mean that they were adapted biologically to hypoxia. . . . Or did they just end up there by accident, and then retreated back to the lowlands or just disappeared?"

It's unclear when Denisovans went extinct, but some studies suggest it may have been as late as 20,000 years ago.

Neanderthals probably went extinct about 29,000 years ago. 

The paper and its abstract are as follows:
The peopling of the Tibetan Plateau is a spectacular example of human adaptation to high altitudes as Tibetan populations have thrived for generations under strong selective pressures of the hypoxic environment. Recent discoveries are leading to paradigmatic changes in our understanding of the population history of the Tibetan Plateau, involving H. sapiens and the archaic hominin known as Denisovan. 
Archaeological and genetic studies provide essential insights into behavioral and biological human adaptations to high elevations but there is a lack of models integrating data from the two fields. Here, we propose two testable models for the peopling process on the plateau leveraging evidence from archaeology and genetics. 
Recent archaeological discoveries suggest that both archaic Denisovans and Homo sapiens occupied the Tibetan Plateau earlier than expected. Genetic studies show that a pulse of Denisovan introgression was involved in the adaptation of Tibetan populations to high-altitude hypoxia. These findings challenge the traditional view that the plateau was one of the last places on earth colonized by H. sapiens and warrant a reappraisal of the population history of this highland. Here, we integrate archaeological and genomic evidence relevant to human dispersal, settlement, and adaptation in the region. We propose two testable models to address the peopling of the plateau in the broader context of H. sapiens dispersal and their encounters with Denisovans in Asia.
Peiqi Zhang, et al., "Denisovans and Homo sapiens on the Tibetan Plateau: dispersals and adaptations" Trends in Ecology and Evolution (December 1, 2021). DOI: https://doi.org/10.1016/j.tree.2021.11.004

Strong Force Coupling Constant Measured In Tau Decays

This measurement of the strong force coupling constant of 0.1171(10), when converted to the Z boson mass scale, is just 0.6 sigma away from the Particle Data Group value of 0.1179(9) at the Z boson mass scale (combining the uncertainties in quadrature). About two-thirds of the uncertainty in the latest measurement is statistical, so there is room to improve the precision of this measurement with mere brute force additional data collection.
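The 0.6 sigma figure is just the two central values compared with their uncertainties combined in quadrature; a quick Python check of the arithmetic (my own, using the numbers quoted here):

```python
from math import sqrt

new_value, new_err = 0.1171, 0.0010   # this extraction, run up to the Z mass scale
pdg_value, pdg_err = 0.1179, 0.0009   # Particle Data Group world average

tension = abs(new_value - pdg_value) / sqrt(new_err**2 + pdg_err**2)
print(round(tension, 2))   # ~0.59 standard deviations
```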

It is also solid evidence of the soundness of the Standard Model prediction for the running of the strong force coupling constant with energy scale, which is established in the Standard Model by a beta function that is determined exactly from theory.

And, this is a good reminder that the strong force coupling constant is much larger, 0.3077(75), at the tau lepton mass of about 1,776.86(12) MeV, which is much closer to the typical hadronic energy scale, than it is at the usually quoted value at the Z boson mass of about 91,188 MeV.
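To give a feel for how the value at the tau mass and the value at the Z mass are related, here is a rough one-loop running sketch in Python (my own simplification; the actual extraction uses higher-order beta functions and careful matching at the quark flavor thresholds):

```python
import numpy as np

def run_one_loop(alpha_start, q_start, q_end, n_flavors):
    # One-loop QCD running:
    # 1/alpha_s(q_end) = 1/alpha_s(q_start) + b0/(4*pi) * ln(q_end^2 / q_start^2)
    b0 = 11 - 2 * n_flavors / 3
    inv = 1 / alpha_start + b0 / (4 * np.pi) * np.log(q_end**2 / q_start**2)
    return 1 / inv

m_z, m_b, m_tau = 91.188, 4.18, 1.777   # GeV (approximate masses)

alpha_mb = run_one_loop(0.1179, m_z, m_b, n_flavors=5)        # Z mass down to the b quark
alpha_mtau = run_one_loop(alpha_mb, m_b, m_tau, n_flavors=4)  # b quark down to the tau

print(alpha_mb, alpha_mtau)   # ~0.21 and ~0.28 at one loop; the full higher-order
                              # treatment brings the tau-scale value up to ~0.31
```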

The strong force coupling constant gets even larger at smaller masses. The peak value of the strong force coupling constant, of almost 1.0000, is at an energy scale at or below the proton mass, after which it declines, although there is dispute over whether it declines to zero or to a fixed value greater than zero.

The high precision of this physical constant extraction from experimental data was made possible largely by improvements in the formulas used to do so. Using the decays of a fundamental lepton, whose mass and properties are known very precisely, into familiar and well understood pions and kaons, also helps the precision of the measurement.

We perform a precise extraction of the QCD coupling at the τ-mass scale, α_s(m_τ), from a new vector isovector spectral function which combines ALEPH and OPAL distributions for the dominant channels, τ → ππ⁰ν_τ, τ → 3ππ⁰ν_τ and τ → π3π⁰ν_τ, with estimates of sub-leading contributions obtained from electroproduction cross-sections using CVC, as well as BaBar results for τ → K⁻K⁰ν_τ. The fully inclusive spectral function thus obtained is entirely based on experimental data, without Monte Carlo input. 
From this new data set, we obtain α_s(m_τ) = 0.3077 ± 0.0075, which corresponds to α_s(m_Z) = 0.1171 ± 0.0010. 
This analysis can be improved on the experimental side with new measurements of the dominant ππ⁰, π3π⁰, and 3ππ⁰ τ decay modes.
Diogo Boito, et al., "Strong coupling at the τ-mass scale from an improved vector isovector spectral function" arXiv:2112.05413 (December 10, 2021) (Contribution to the Proceedings of "The 16th International Workshop on Tau Lepton Physics (TAU2021) (Virtual Edition)", from September 27th to October 1st 2021, Indiana University).

Friday, December 10, 2021

The Hubble Tension May Be Real

A new preprint making an extraordinarily high precision measurement of the Hubble constant at low z (i.e. in recent time periods) continues to show the discrepancy, known as the "Hubble tension", with the early Universe value, which was also measured to high precision from Planck cosmic microwave background observations. In these measurements the late time measured values of the Hubble constant are consistently larger than those derived from the CMB in the early universe.

The increased accuracy of these late time results (i.e. low z results) strongly implies that the Hubble tension may be a real difference, in what should be a physical constant in LambdaCDM cosmology, and not just a measurement error driven discrepancy. In other words, this strengthens the case that the Hubble tension is an indication of beyond LambdaCDM physics, presumably in the cosmological constant/dark energy sector (i.e. the Lambda sector) of the "Standard Model of Cosmology". There is already a considerable cottage industry, which will surely only surge after this new result, devising alternative theories that can explain the discrepancy.
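For a sense of scale, the roughly five sigma tension quoted in the abstract below follows from comparing the new late time value with the commonly cited Planck 2018 early universe value of 67.4 ± 0.5 km/s/Mpc (a number I am supplying here, not one taken from the quoted abstract); a quick Python check:

```python
from math import sqrt

h0_late, err_late = 73.04, 1.04   # SH0ES late time value (quoted below), km/s/Mpc
h0_early, err_early = 67.4, 0.5   # Planck 2018 early universe value, km/s/Mpc

tension = (h0_late - h0_early) / sqrt(err_late**2 + err_early**2)
print(round(tension, 1))   # ~4.9 standard deviations
```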

Prior to this result, discrepancy due to measurement errors was a much more plausible hypothesis, since there were reasons to believe that the uncertainty in the most accurate previous late time Hubble constant measurements was understated. See Edvard Mortsell, et al., "The Hubble Tension Bites the Dust: Sensitivity of the Hubble Constant Determination to Cepheid Color Calibration" arXiv (May 24, 2021). See also S.L. Parnovsky, "Bias of the Hubble constant value caused by errors in galactic distance indicators" arXiv:2109.09645 (September 20, 2021) (accepted for publication at Ukr. J. Phys.).
We report observations from HST of Cepheids in the hosts of 42 SNe Ia used to calibrate the Hubble constant (H0). These include all suitable SNe Ia in the last 40 years at z<0.01, measured with >1000 orbits, more than doubling the sample whose size limits the precision of H0. The Cepheids are calibrated geometrically from Gaia EDR3 parallaxes, masers in N4258 (here tripling that Cepheid sample), and DEBs in the LMC. The Cepheids were measured with the same WFC3 instrument and filters (F555W, F814W, F160W) to negate zeropoint errors. 
We present multiple verifications of Cepheid photometry and tests of background determinations that show measurements are accurate in the presence of crowding. The SNe calibrate the mag-z relation from the new Pantheon+ compilation, accounting here for covariance between all SN data, with host properties and SN surveys matched to negate differences. We decrease the uncertainty in H0 to 1 km/s/Mpc with systematics. We present a comprehensive set of ~70 analysis variants to explore the sensitivity of H0 to selections of anchors, SN surveys, z range, variations in the analysis of dust, metallicity, form of the P-L relation, SN color, flows, sample bifurcations, and simultaneous measurement of H(z). 
Our baseline result from the Cepheid-SN sample is H0=73.04+-1.04 km/s/Mpc, which includes systematics and lies near the median of all analysis variants. We demonstrate consistency with measures from HST of the TRGB between SN hosts and NGC 4258 with Cepheids and together these yield 72.53+-0.99. Including high-z SN Ia we find H0=73.30+-1.04 with q0=-0.51+-0.024. We find a 5-sigma difference with H0 predicted by Planck+LCDM, with no indication this arises from measurement errors or analysis variations considered to date. The source of this now long-standing discrepancy between direct and cosmological routes to determining the Hubble constant remains unknown.
Adam G. Riess, et al., "A Comprehensive Measurement of the Local Value of the Hubble Constant with 1 km/s/Mpc Uncertainty from the Hubble Space Telescope and the SH0ES Team" arXiv:2112.04510 (December 8, 2021).
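For a sense of where the 5-sigma figure in the abstract comes from, a back-of-the-envelope combination of the SH0ES value with the Planck 2018 value (67.4 ± 0.5 km/s/Mpc, which is not quoted in the abstract and is my own input here) gives roughly five standard deviations:

# Naive significance of the Hubble tension, treating the two uncertainties
# as independent and Gaussian. The Planck number is an assumed input.
import math

h0_shoes, err_shoes = 73.04, 1.04    # Riess et al. (2021), quoted above
h0_planck, err_planck = 67.4, 0.5    # Planck 2018 + LambdaCDM (assumed)

tension = (h0_shoes - h0_planck) / math.sqrt(err_shoes**2 + err_planck**2)
print(f"tension ~ {tension:.1f} sigma")   # ~4.9 sigma, consistent with the quoted 5 sigma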

Thursday, December 9, 2021

Conformal Gravity As A MOND Alternative

Conformal gravity is a quite subtle modification of General Relativity, but it is sufficient to reproduce the MOND/Radial Acceleration Relation/Tully-Fisher phenomenology in galaxies. Like MOND, it has a physical constant corresponding to the MOND a(0) constant. (Incidentally, another paper reaches a similar outcome by inserting torsion into General Relativity.)
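As an aside (not discussed in the paper), the MOND acceleration constant a(0) ≈ 1.2 × 10^-10 m/s^2 is famously close to cH(0)/2π, a numerical coincidence that keeps cosmological and conformal explanations of the galactic-scale phenomenology attractive. A quick check, assuming H(0) ≈ 70 km/s/Mpc purely for illustration:

# The well-known numerical coincidence a0 ~ c * H0 / (2*pi).
# H0 = 70 km/s/Mpc is assumed here purely for illustration.
import math

c = 2.998e8                    # speed of light, m/s
Mpc = 3.0857e22                # meters per megaparsec
H0 = 70e3 / Mpc                # Hubble constant in 1/s
a0_mond = 1.2e-10              # empirical MOND acceleration scale, m/s^2

a0_cosmo = c * H0 / (2 * math.pi)
print(f"c*H0/(2*pi) = {a0_cosmo:.2e} m/s^2")           # ~1.1e-10 m/s^2
print(f"ratio to MOND a0 = {a0_cosmo / a0_mond:.2f}")  # ~0.9, i.e. within ~10%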

Also, unlike MOND, it is more than a toy model theory: it is relativistic from the start, so it doesn't require a relativistic generalization. The introduction to a new paper on this modified gravity theory explains that:
Despite the enormous successes of Einstein’s theory of gravity, the latter appears to be about “twenty five percent wrong”. So far, scientists have proposed two possible solutions to the problem, known under the names of “dark matter” and “dark gravity”, and both are extensions of Einstein’s field equations. The first proposal consists of modifying the right side of Einstein’s equations, while according to the second proposal the left hand side is modified. Indeed, in order to account for all the observational evidence: galactic rotation curves, structure formation in the universe, the CMB spectrum, the bullet cluster, and gravitational lensing, it seems necessary to somehow modify Einstein’s field equations. However, in this paper we propose a different approach, namely: “understand gravity instead of modifying it”. 

In this document we do not pretend to provide a definitive answer to the “mystery of missing mass” or “missing gravity in the universe”, but focus only on the galactic rotation curves. Nevertheless, we believe our result to be quite astonishing on both the theoretical and the observational side. 

The analysis reported here, which follows the previous paper*, is universal and applies to any conformally invariant theory, nonlocal or local, that has the Schwarzschild metric as an exact and stable solution. 

* The previous seminal paper did not address the issue of conformally coupled matter, which completely changes the geometrical interpretation of our proposal, underlining the crucial role of the asymptotic but harmless spacetime singularity. Notice that in the previous paper the massive particles explicitly break the conformal invariance, even if slightly, making the solution no longer exact. Moreover, we will show in this paper that in the presence of conformally coupled matter we do not need to resort to the global structure of space-time and to invoke the small inhomogeneities on the cosmological scale or the presence of the cosmological constant, which will turn out to be too small to affect the rotation curves on a galactic scale: “everything will be limited to the single galaxies”.

However, for the sake of simplicity we will focus on Einstein’s conformal gravity, whose generally covariant action functional reads:

[the action (1) is not reproduced in this excerpt]

which is defined on a pseudo-Riemannian spacetime manifold M equipped with a metric tensor field ĝ(µν) and a scalar field φ (the dilaton), and which is invariant under the following Weyl conformal transformation:

[transformation (2) is not reproduced in this excerpt; see the note after the quoted passage]

where Ω(x) is a general local function. In (1), h is a dimensionless constant that has to be selected extremely small in order to have a cosmological constant compatible with the observed value. However, here we assume h = 0 because the presence of a tiny cosmological constant will not affect our result. For completeness, and in order to show the exactness of the solutions that we will expand on later, we recall here the equations of motion for the theory (1) for h = 0 [equations (3), not reproduced in this excerpt].
The Einstein-Hilbert action for gravity is recovered when the Weyl conformal invariance is broken spontaneously, in exact analogy with the Higgs mechanism in the standard model of particle physics. One possible vacuum of the theory (1) (an exact solution of the equations of motion (3)) is φ = const. = 1/k(4) = 1/√(16πG), together with the metric satisfying R(µν) ∝ ĝ(µν). Therefore, replacing φ = (1/√(16πG)) + ϕ in the action (1) and using the conformal invariance to eliminate the gauge dependent Goldstone degree of freedom ϕ, we finally end up with the Einstein-Hilbert action in the presence of the cosmological constant [equation (4), not reproduced in this excerpt], 
where Λ is consistent with the observed value for a proper choice of the dimensionless parameter h in the action (1). Ergo, Einstein’s gravity is simply the theory (1) in the spontaneously broken phase of Weyl conformal invariance. 

Let us now expand on the exact solutions in conformal gravity. Given the conformal invariance (2), any rescaling of the metric ĝ(µν), accompanied by a nontrivial profile for the dilaton field φ, is also an exact solution; namely, the rescaled metric and dilaton [equation (5), not reproduced in this excerpt] solve the EoM obtained by varying the action (1) with respect to ĝ(µν) and φ. 

So far the rescaling (5) has been used to show how the singularity issue disappears in conformal gravity. However, and contrary to the previous papers, we here focus on a non-asymptotically-flat rescaling of the Schwarzschild metric as a way to obtain the non-Newtonian galactic rotation curves. Moreover, the logic in this project is the opposite of the one implemented in past works and is somewhat counterintuitive. In fact, here, instead of removing the spacetime’s singularities, we deliberately introduce an apparent but unreachable asymptotic singularity. However, as will be proved later on, the spacetime stays geodesically complete. Indeed, the proper time to reach the singularity at the edge of the Universe will turn out to be infinite. 

Notice that in order to give a physical meaning to the metric (5), the conformal symmetry has to be broken spontaneously to a particular vacuum specified by the function Q(x). The uniqueness of such a rescaling will be discussed in section III. In the spontaneously broken phase of conformal symmetry, observables are still invariant under diffeomorphisms. 
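The display equations in the excerpt above did not survive reproduction. For orientation only, and with conventions that should be checked against the original paper, the standard Weyl transformation of a metric together with a conformally coupled scalar, which is presumably what the paper's equations (2) and (5) express, takes the form:

\[
\hat{g}_{\mu\nu}(x) \to \Omega^{2}(x)\,\hat{g}_{\mu\nu}(x), \qquad \phi(x) \to \Omega^{-1}(x)\,\phi(x),
\]
so that from one exact solution \((\hat{g}_{\mu\nu}, \phi)\) one obtains another,
\[
\hat{g}^{*}_{\mu\nu} = Q^{2}(x)\,\hat{g}_{\mu\nu}, \qquad \phi^{*} = Q^{-1}(x)\,\phi ,
\]
for any rescaling function \(Q(x)\); choosing the constant vacuum \(\phi = 1/\sqrt{16\pi G}\) turns the \(\phi^{2}\hat{R}\) term of the conformal action into the Einstein-Hilbert term \(R/16\pi G\). As a toy illustration (not the paper's specific rescaling) of why an asymptotic singularity can be harmless: a conformal factor that blows up like \(Q(r) = (1 - r/\ell)^{-1}\) pushes the surface \(r = \ell\) to infinite proper distance, since \(\int_{0}^{\ell} dr\,(1 - r/\ell)^{-1}\) diverges.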
The paper and its abstract are as follows:
We show that Einstein's conformal gravity is able to explain the galactic rotation curves on purely geometric grounds, without the need to introduce any modification in either the gravitational or the matter sector of the theory. 
The geometry of each galaxy is described by a metric obtained by making a singular rescaling of the Schwarzschild spacetime. The new exact solution, which is asymptotically Anti-de Sitter, manifests an unattainable singularity at infinity that cannot be reached in finite proper time; namely, the spacetime is geodesically complete. It deserves to be noticed that here we think differently from the usual approach. Indeed, instead of making the metric singularity-free, we make it apparently, but harmlessly, even more singular than the Schwarzschild one. 
Finally, it is crucial to point out that the Weyl conformal symmetry is spontaneously broken to the new singular vacuum rather than to the asymptotically flat Schwarzschild one. The metric is unique according to: the null energy condition, zero acceleration for photons in the Newtonian regime, and the homogeneity of the Universe at large scales. 
Once matter is conformally coupled to gravity, the orbital velocity of a probe star in the galaxy turns out to be asymptotically constant, consistent with the observations and the Tully-Fisher relation. Therefore, we compare our model with a sample of 175 galaxies and show that our velocity profile interpolates the galactic rotation curves very well for a proper choice of the only free parameter in the metric and of the mass-to-luminosity ratios, which turn out to be close to 1, consistent with the absence of dark matter.
Leonardo Modesto, Tian Zhou, Qiang Li, "Geometric origin of the galaxies' dark side" arXiv:2112.04116 (December 8, 2021).
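For a rough sense of the velocity scale these fits involve, MOND-like phenomenology (which the abstract says this model reproduces) predicts the baryonic Tully-Fisher scaling v^4 ≈ G M(b) a(0) for the asymptotic rotation velocity. A back-of-the-envelope estimate, not taken from the paper, for a Milky-Way-like baryonic mass:

# Baryonic Tully-Fisher estimate of the flat rotation velocity: v^4 ~ G * M_b * a0.
# The 6e10 solar-mass baryonic mass is an assumed, Milky-Way-like illustration value.
G = 6.674e-11            # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
a0 = 1.2e-10             # m/s^2
M_baryon = 6e10 * M_sun  # kg

v_flat = (G * M_baryon * a0) ** 0.25
print(f"v_flat ~ {v_flat / 1e3:.0f} km/s")   # ~180 km/s, the right ballpark for a large spiral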