Pages

Wednesday, June 30, 2021

Cosmology Bounds On Sum Of Neutrino Masses Narrowed To 87 meV

A new study, using a novel way of combining multiple sources of data, sets a bound on the sum of the three neutrino masses from cosmology data of 87 meV/c^2 or less at 95% confidence. 

This rules out the inverted neutrino mass hierarchy (for which the sum of the three neutrino masses must exceed about 100 meV). It also reduces the uncertainties in the absolute neutrino masses: the sum has a minimum value of about 60 meV, determined from the mass differences measured in neutrino oscillations under the assumption that the lightest neutrino mass is almost zero. Thus, most of the remaining uncertainty in the sum of the neutrino masses comes from the roughly 0-9 meV range of uncertainty in the lightest neutrino mass, which is shared by all three mass eigenstates.
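
A back-of-the-envelope check of the roughly 60 meV and 100 meV floors mentioned above, using approximate oscillation mass splittings (the rounded input values below are my own, not taken from the paper):

    from math import sqrt

    dm21_sq = 7.4e-5   # eV^2, solar mass splitting (approximate)
    dm31_sq = 2.5e-3   # eV^2, atmospheric mass splitting (approximate)

    # Normal ordering, assuming the lightest mass is essentially zero:
    m1, m2, m3 = 0.0, sqrt(dm21_sq), sqrt(dm31_sq)
    print("normal ordering minimum sum   ~ %.0f meV" % (1000 * (m1 + m2 + m3)))   # ~59 meV

    # Inverted ordering, lightest mass ~ 0 (the two heavy states are nearly degenerate):
    m_heavy = sqrt(dm31_sq)
    print("inverted ordering minimum sum ~ %.0f meV" % (1000 * 2 * m_heavy))      # ~100 meV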

The best fit point of the data (i.e. the center of the one sigma range), when constrained by the minimum sum of the three neutrino masses from neutrino oscillation data, is very close to that minimum value, which implies a lightest neutrino mass on the order of 1 meV or less. 

Without the neutrino oscillation data constraint, the best fit point from cosmology data is actually slightly below the minimum sum of neutrino masses from neutrino oscillation data, although the preference for a value below 60 meV is not statistically significant.

The paper and its abstract are as follows:

We present here up-to-date neutrino mass limits exploiting the most recent cosmological data sets. By making use of the Cosmic Microwave Background temperature fluctuation and polarization measurements, Supernovae Ia luminosity distances, Baryon Acoustic Oscillation observations and determinations of the growth rate parameter, we are able to set the most constraining bound to date, ∑mν < 0.09 eV at 95% CL. This very tight limit is obtained without the assumption of any prior on the value of the Hubble constant and highly compromises the viability of the inverted mass ordering as the underlying neutrino mass pattern in nature. The results obtained here further strengthen the case for very large multitracer spectroscopic surveys as unique laboratories for cosmological relics, such as neutrinos: that would be the case of the Dark Energy Spectroscopic Instrument (DESI) survey and of the Euclid mission.
Eleonora Di Valentino, Stefano Gariazzo, Olga Mena "On the most constraining cosmological neutrino mass bounds" arXiv:2106.16267 (June 29, 2021).

Tuesday, June 29, 2021

Conjectures Regarding Unsolved Mysteries Of The Standard Model

This post recaps some of my observations, conjectures and working hypotheses about the source of the Standard Model physical constants and the deeper workings of the model. 

It is notable that all fundamental SM particles with non-zero rest mass have weak force interactions, and that all fundamental SM particles with zero rest mass do not have tree level weak force interactions.

Neutrinos, which lack electromagnetic charge, have negligible mass, while charged leptons, which do have electromagnetic charge, have much larger masses (by a factor on the order of one million).

The magnitude of the strong force QCD color charge of all three colors of all six types of quarks (and their antiparticles) is identical; the magnitude of the strong force coupling of all eight kinds of gluons is likewise identical. Yet, quarks have very different rest masses from each other and gluons have no rest mass.

In the SM, the Higgs mechanism belongs to the electroweak part of the theory and has no real interaction with the strong force (QCD) sector of the model. 

The Higgs vev is a function of the W boson mass and the weak force coupling constant. 

The Yukawas of the SM, which are the coupling constants of the SM Higgs boson to the fundamental fermions, are the particle rest masses normalized by the Higgs vev (up to a factor of the square root of two).

In the SM electroweak theory, the mass of the Z boson relative to the W boson is a function of the electromagnetic and weak force coupling constants.
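
A minimal numerical sketch of the three relations just mentioned; the coupling values below are my own rough inputs, and these are tree-level relations only, so small deviations from measured values are expected:

    from math import sqrt

    g  = 0.65    # SU(2) weak coupling (rough value)
    gp = 0.36    # U(1) hypercharge coupling (rough value)
    m_W, m_Z, m_top = 80.38, 91.19, 172.8   # GeV, approximate

    v = 2 * m_W / g                    # Higgs vev from the W mass and weak coupling
    y_top = sqrt(2) * m_top / v        # a Yukawa is a rest mass normalized by the vev
    cos_theta_W = g / sqrt(g**2 + gp**2)

    print("v ~ %.0f GeV (measured value is about 246 GeV)" % v)
    print("top quark Yukawa ~ %.2f" % y_top)
    print("m_W/m_Z: tree level %.3f vs. measured %.3f" % (cos_theta_W, m_W / m_Z))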

To a high degree of accuracy, the probability of a CKM matrix transition from the first generation to the third generation is equal to the probability of a CKM matrix transition from the first generation to the second generation, times the probability of a CKM matrix transition from the second generation to the third generation.

Taken together, this presents a strongly suggestive case that the fundamental particle rest masses in the SM are entirely a product of the electroweak sector of the SM (apart from possible higher order loop corrections of negligible magnitude), as the Higgs mechanism of the SM itself illustrates, rather than having anything meaningful to do with the QCD sector of the SM.

The principle that a particle's coupling to gravity is a universal function of its mass-energy is also well established. Since no particle is treated differently from any other under this coupling relationship, gravity provides no independent source of particle mass, which is consistent with the electroweak sector of the SM being the sole meaningful source of the fundamental particle rest masses in the SM.

So, the fundamental masses of the SM flow from an SU(2) x U(1) group alone.

The reasonably close empirical fit of the LP & C relationship to the known rest masses (i.e. that the sum of the square of the fundamental particle masses is equal to the square of the Higgs vev) is suggestive of the hypothesis that the overall mass scale of the SM fundamental particle rest masses is strongly connected to a mass scale established by the weak force, since the Higgs vev is a function of the W boson mass and weak force coupling constant, both of which are exclusively part of the SU(2) weak force sector of the SM.

The large magnitude of the Higgs boson mass and top quark mass relative to the other fundamental SM particle masses can be understood as necessary and natural: the heaviest particle in each category (boson and fermion) must be large enough to fill out the LP & C totals.

(The fact that the sum of the square of the fundamental boson masses exceeds half of the square of the Higgs vev, while the sum of the square of the fundamental fermion masses is less than half of the square of the Higgs vev, to high statistical significance, also suggests that any SM symmetry between fermions and bosons is not exact, as it is in SUSY theories, but instead, is broken or only approximate.)
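
A rough numerical check of the preceding two points, using approximate pole masses and counting each distinct particle type once (one common convention, and an assumption here):

    v = 246.22   # GeV
    bosons   = {"H": 125.25, "Z": 91.19, "W": 80.38}                      # GeV
    fermions = {"t": 172.8, "b": 4.18, "tau": 1.777, "c": 1.27,
                "mu": 0.106, "s": 0.093, "d": 0.005, "u": 0.002,
                "e": 0.000511}                                            # GeV

    boson_sq   = sum(m**2 for m in bosons.values())
    fermion_sq = sum(m**2 for m in fermions.values())

    print("sum of squared masses / v^2 = %.3f" % ((boson_sq + fermion_sq) / v**2))  # ~0.995
    print("boson share   = %.3f (vs. 0.500)" % (boson_sq / v**2))   # slightly more than half
    print("fermion share = %.3f (vs. 0.500)" % (fermion_sq / v**2)) # slightly less than half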

Hadronic masses, in contrast, have very significant contributions from QCD. Gluons and virtual sea quarks provide the predominant source of rest mass for light hadrons, and are a significant source of rest mass for heavy hadrons. Because hadron masses (which can, in principle, be calculated from fundamental SM constants) arise from a different mechanism than fundamental particle masses in the SM, comparisons between hadron masses and fundamental particle masses may not provide much insight.

The fundamental quark and charged lepton masses have a "normal hierarchy" (i.e. higher generation particles of a given electromagnetic charge are more massive than lower generation particles of the same type). 

The structure of the CKM matrix also shows that this normal hierarchy for quarks is not just a product of arranging quarks in order of mass. Generation assignments correspond to similar CKM matrix elements. 

All observational evidence also favors a "normal hierarchy" of the fundamental neutrino mass eigenstates, although the preference isn't terribly strong simply because the masses are so tiny and the precision of the available measurements is limited.

The CKM matrix appears to be "more fundamental" than the quark rest masses. CKM matrix elements at a given generation are similar despite the up and down quarks of a given generation having very different rest masses.

It is possible to "obtai[n] the PMNS matrix [which governs neutrino oscillation] without having to ever talk about mass diagonalization and mismatches between flavor and mass basis." Gustavo F. S. Alves, Enrico Bertuzzo, Gabriel M. Salla, "An on-shell perspective on neutrino oscillations and non-standard interactions" arXiv (March 30, 2021). While this is largely a matter of semantics, with equivalent observational outcomes, much like the different quantum mechanics interpretations, an "on-shell perspective" can be more helpful in conceptualizing neutrino mass and neutrino oscillation in the context of an overall understanding of the neutrino oscillation process. The on-shell perspective conceptualizes neutrino oscillation in the context of a virtual W boson mediated process, essentially identical to the virtual W boson mechanism for flavor changing in the CKM matrix governed quark side of the SM.

The Standard Model forces that do not experience CP violation (i.e. the electromagnetic force and the strong force) are mediated by zero rest mass carrier bosons that, in special relativity, do not experience the passage of time in their own reference frame. This is also true of the hypothetical massless graviton. So, it makes sense that only the weak force, which is mediated by massive carrier bosons, would exhibit CP violation. This is an argument against the "strong CP problem" being a problem.

The Higgs boson and Z boson are also massive, but unlike the W boson, they lack electromagnetic charge, lack color charge, and have even parity, so a CP reversal of a Higgs boson or Z boson is not something that can be observed. So, it makes sense that the sole cause of CP violation in the Standard Model, as manifested through the CKM matrix (which is basically a property of the W boson), is the W boson. In an on-shell interpretation of neutrino oscillation, that process is also W boson mediated, so the PMNS matrix is also basically a property of the W boson.

Koide's Rule And Its Extensions

The original Koide tuple has a couple of things going for it that are plausibly the source of the quality of the match: (1) charged lepton universality, and (2) the negligible masses of the neutrinos relative to the charged leptons. In contrast, quark universality is not present in the SM, and the masses of the up-type quarks are of the same order of magnitude as those of the down-type quarks.
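
For reference, the original charged lepton Koide relation can be verified in a couple of lines (approximate masses in MeV):

    from math import sqrt

    m_e, m_mu, m_tau = 0.511, 105.658, 1776.86   # MeV, approximate

    Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2
    print("Q = %.6f (Koide's rule predicts 2/3 = %.6f)" % (Q, 2 / 3))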

Extended Koide sum rules produce a reasonable first order approximation of all of the SM quark and charged lepton rest masses (at pole mass values) in all three generations from the electron and muon rest masses. It makes an essentially perfect prediction of the tau lepton mass that has withstood the test of time. It isn't a "correct" rule for predicting the quark masses, but the predicted masses and the actual masses have the same order of magnitude. The Extended Koide sum rule uses quark triples on the decay chain t-b-c-s-u-d. Notably, at first order, the predicted up quark mass is very nearly zero.

The error in the first order approximations of the quark masses from the Extended Koide sum rules can be significantly reduced with an adjustment.

For a given up type quark (the "target quark") there are three down type quarks it could transform into via W boson interactions. Two of those down type quarks are included in the Koide triple. Adjust the first order Extended Koide sum rule estimate by adding the transition probability from that up type quark to the third down type quark (the one not included in the Extended Koide quark triple used to establish its mass), which is the square of the magnitude of the relevant CKM matrix element, times the Extended Koide sum rule mass estimate for that third down type quark. (The analogous rule is used for the masses of down type quarks.)
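
A minimal structural sketch of that adjustment as I read it; the function name and the variables in the usage comment are my own placeholders rather than values from any actual Extended Koide fit:

    def adjusted_quark_mass(koide_estimate, ckm_to_third_quark, third_quark_koide_estimate):
        """First order Extended Koide estimate for a target quark, plus the transition
        probability to the 'third' quark left out of its Koide triple (|V|^2) times that
        third quark's own Extended Koide estimate."""
        return koide_estimate + abs(ckm_to_third_quark)**2 * third_quark_koide_estimate

    # Usage pattern for the up quark (inputs would come from the Extended Koide fit
    # and the CKM matrix; none are hard-coded here):
    #   m_up_adjusted = adjusted_quark_mass(m_up_koide, V_ub, m_bottom_koide)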

This adjustment is motivated by the fact that the discrepancy between the Extended Koide rule first order estimate and the measured rest mass is greatest in the cases where the CKM matrix element between a target quark and the third quark not included in its Extended Koide rule first order estimate is largest.

Notably, the mass of the up quark comes almost entirely from this third-quark adjustment (i.e. from the otherwise unaccounted-for up quark to bottom quark transition).

The intuition of this adjustment (which is not done in a perfectly mathematically elegant and rigorous way and would need to be iterated or simultaneously solved to really be done right) is that the relative magnitudes of the fundamental quark masses are the product of dynamical balancing between them via W boson interactions, and the overall mass scale of the SM fundamental particles is also a function of the weak force.

Some of the geometric interpretations of the original Koide's rule are consistent with this kind of conceptualization.

In this analysis, the W boson interaction is the driving force behind the fundamental particle masses, and the Higgs boson Yukawas are really just the tail wagged by the W boson dog, a corollary that flows from the W boson interactions, instead of the usual conceptualization that the Higgs boson interactions are driving the bus, with the source of those couplings attributed to some unknown deeper physics.

A conceptualization of fundamental fermion mass generation as driven primarily by W boson interactions, rather than an explanation centered on Higgs boson driven parity oscillations of SM fundamental fermions, also provides a path to a source of Dirac mass for neutrinos, arising from their W boson interactions, even though neutrinos can't have the Higgs boson driven parity oscillations of charged leptons and quarks, since there are no right handed neutrinos and no left handed antineutrinos. The fact that only the W boson portion, rather than both the W boson and Higgs boson portions, is at work for neutrinos could also help to explain their vastly different rest mass scale.

The reason that there are no right handed neutrinos and no left handed antineutrinos is that the weak force does not interact with particles of that parity, and neutrinos have no electromagnetic or strong force interactions. The lack of parity balance, in turn, is the reason that Higgs boson parity oscillation doesn't give rise to neutrino mass, only W boson interactions and self-interactions. No see-saw mechanism is necessary to explain their smallness and they do not need to be Majorana particles to acquire mass in this way.

Why Three Generations?

The reason that there are exactly three generations of SM fermions could be basically "accidental".

There are theoretical reasons in the deep math of the SM (especially its electroweak part) which have been known since the 1970s or 1980s why any given generation of SM fermions must have four members (one up type quark, one down type quark, one charged lepton, and one neutrino).

The mechanism by which higher generation fermions decay to lower generation fermions is through W boson interactions. So, no fermion can have a mean lifetime shorter than that of the W boson. The top quark mean lifetime is only marginally longer than the mean lifetime of the W boson. If there were a hypothetical fourth generation of SM fermions, their mean lifetimes would have to be shorter than the W boson mean lifetime. But, because this would be a contradiction, there are no fourth generation SM fermions.
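
A quick conversion from decay widths to mean lifetimes (using my own approximate width values of roughly 2.09 GeV for the W and 1.4 GeV for the top quark) illustrates how little headroom remains below the top quark:

    hbar = 6.582e-25   # GeV * s

    gamma_W, gamma_top = 2.09, 1.4   # total decay widths in GeV, approximate

    tau_W   = hbar / gamma_W
    tau_top = hbar / gamma_top
    print("W boson mean lifetime   ~ %.1e s" % tau_W)    # ~3.1e-25 s
    print("top quark mean lifetime ~ %.1e s" % tau_top)  # ~4.7e-25 s, only modestly longer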

Combining These Conjectures

If a correct generalization of Koide's rule for all SM fermions were determined and the LP & C relationship is correct, you would need only the electron mass, the muon mass, the W boson mass, the SU(2) coupling constant, and the U(1) coupling constant to determine all of the rest masses of the fundamental SM particles. Indeed, it might even be possible to dispense with the muon mass, using the Higgs vev to set the overall mass scale. So, there would be one fundamental fermion mass and one fundamental boson mass in this fundamental mass generation model.

The electron mass is well approximated by a mass established by its overall self-interaction, as is the lightest neutrino mass eigenstate (which differs from the electron mass by a ratio on the order of the electromagnetic coupling constant to the weak force coupling constant). It may be that this approach could also be applied to one or both of the first generation quark rest masses. If so, there is a path to a set of fundamental SM particle rest masses with only one non-derived, experimentally determined mass dimensioned physical constant: the W boson mass. This leaves four CKM matrix parameters, four PMNS matrix parameters, and three SM force coupling constants, in addition to the W boson mass. The other fourteen fundamental SM particle masses could be derived.

* The baryon matter-antimatter asymmetry of the Universe is best explained by a Big Bang that has a matter dominated universe extending from it in one direction of time and an antimatter dominated universe extending from it in the other direction of time. This is important because a resolution of this asymmetry removes the pressure on theorists to find additional sources of CP violation, and additional sources of baryon number and/or lepton number violation in the Standard Model.

* The fact that quarks come in three colors, have a 3-1 ratio to leptons in W and Z boson decays, and have electromagnetic charges that are 1/3rd multiples of the charged lepton electromagnetic charge magnitude, is probably not a coincidence and says something about the deeper structure of quarks.

* It could be that charged leptons are color charge complete, like a baryon, rather than actually lacking color charge. Similarly, it could be that W bosons are color charge complete, rather than actually lacking color charge. 

* Could Z bosons and Higgs bosons be different sides of the same coin, that are color charge complete in the manner of quarkonia (or tetraquarks or hexaquarks), with Higgs bosons and Z bosons differing by spin alignments?

* A color charge-electromagnetic charge connection would then make sense, with the color charge component merely cancelling out in charged leptons and W bosons, and perhaps also in Higgs bosons and Z bosons.

* Since both color charge and electromagnetic charge have antiparticle opposites, the notion of neutrinos having either that cancel out makes less sense. Are neutrinos "empty nets"?

* Would gluons be color charge pairs bound without "the nets"?

* Is there a better way than the three color charge approach (such as a topological one), to think about color charge that addresses the fact that there are eight rather than nine kinds of gluons? I think that the answer must be yes.

Monday, June 28, 2021

Homo Longi Were Probably Denisovans

Homo longi ("Dragon men"), the new species identification temporarily assigned to a Chinese specimen with a mix of archaic and modern features, was probably one of a fairly heterogeneous species of hominin known genetically as Denisovans, after the cave in Siberia where their DNA was first characterized. They were the predominant post-Homo erectus hominin of Asia, and modern Papuans and Australian aborigines have a particularly high level of Denisovan admixture, which is present at trace levels in other Asians and in the indigenous peoples of the Americas. 

Their facial structure, although not necessarily their stature or build, resembles the typical artistic depiction of J.R.R. Tolkien's dwarves. An artist's impression of this archaic hominin previously known only from teeth and DNA is as follows:
 
Background and Analysis


The images above (via John Hawks) capture the big picture of archaic hominins. As Razib Khan explains, setting the background:
In 2010, genomes recovered from ancient remains of “archaic hominins” in Eurasia turned out to have genetic matches in many modern humans. . . . we had to get used to the new reality that a solid 2-3% of the ancestry of all humans outside Africa is Neanderthal. About 5% of the ancestry of Melanesian groups, like the Papuans of New Guinea, actually comes from a previously unimagined new human lineage discovered in Denisova cave, in Siberia of all places. . . . Trace, but detectable (0.2% or so), levels of “Denisovan” ancestry are found across South, Southeast, and East Asia (as well as among indigenous people of the Americas). Similarly, trace but detectable levels of Neanderthal ancestry actually appear in most African populations. And, though we have no ancient genomes to make the triumphant ID, a great deal of circumstantial DNA evidence indicates that many African groups harbor silent “archaic” lineages equivalent to Neanderthals and Denisovans. We call them “ghost” populations. We know they’re there in the genomes, but we have no fossils to identify them with. . . . an Israeli group has a paper out in Science on a human population discovered there which seems to resemble Neanderthals and dates to 120,000 to 140,000 years ago. . . .
Neanderthals, Denisovans, and modern humans are just the main actors in the plotline of our species’ recent origins. Today on our planet there is just one human species, but this is an exceptional moment. For most of the past few million years there were many human species. Up until 50,000 years ago in the Southeast Asian islands of Flores and Luzon, we see strong evidence of very specialized species of small humans, the pygmy Hobbits and Homo luzonensis. They are different not only from each other, but from modern humans, Denisovans and Neanderthals. In Africa, there were almost certainly very different human populations which over time were absorbed, just as the Denisovans and Neanderthals were. Homo naledi in South Africa almost certainly persisted down to the period of the rise of modern humans on the continent, 200,000 years ago.

Finally, a great deal of circumstantial archaeological and genetic evidence is accumulating that some earlier African lineages related to modern humans expanded out into eastern Eurasia before our own expansion. Artifacts in China and Sumatra dating to before 60,000 years ago seem suspiciously modern, and genetic analysis of Siberian Neanderthals dating to 120,000 years ago suggests admixture from populations related to modern humans. It is still possible that Homo longi descends from one of these early populations. Only DNA can establish this for a fact, but most older fossil remains do not yield genetic material, and this skull is old enough that only perfect conditions would have yielded DNA.
Ancient hominins from China ca. 200,000 to 100,000 years ago with a mix of archaic and modern features were probably Denisovans. John Hawks calls them "H. antecessor groups with the Jinniushan-Dali-Harbin-Xiahe clade." 

A newly published journal article describes a specimen skull from these archaic hominins and puts them in context. Morphologically, it is closer to modern humans than to Neanderthals, although Denisovan DNA shares a clade closer to Neanderthals than to modern humans (within the clade that all three share), and a Denisovan is likely what this specimen is (although we don't have DNA to confirm it). It is akin to several other roughly contemporaneous sets of remains from Asia:


Razib Khan has this commentary:
Some researchers want to call “Dragon Man” Homo longi (龙, pronounced lóng, being Chinese for dragon), a new human species, and assert its features mean it is more closely related to modern humans than Neanderthals. This is particularly true of the Chinese researchers, in whom I can’t help but sense a drive to establish precedence for China as one of the major hearths of modern humans.

Paleoanthropologists outside of China seem more inclined to believe that “Dragon Man” is actually the paradigm-busting species we have only known definitively from genomics: Denisovans. This faction points out that “Dragon Man” had massive teeth, just like a confirmed Denisovan jaw discovered in Tibet in 2019 (ancient-protein analysis indicated it was Denisovan). So why do others disagree? Because the skull is so intact they performed an evolutionary analysis of its relationships, using a full suite of characteristics (unfortunately the find did not yield DNA). On that inferred family tree, Homo longi lies closer to modern humans. In contrast, we know from genomics that Denisovans are more closely related to Neanderthals than they are to modern humans.

My bet is that Homo longi and Denisovans are one and the same. Or, more precisely, Homo longi is one of the many Denisovan lineages. 
. . .
The best genetic work indicates that Denisovans were not one homogeneous lineage, as seems to have been the case with Neanderthals, but a diverse group that were strikingly differentiated. The Denisovan ancestry in modern populations varies considerably in relatedness to the genome sequences from Denisova cave. It is clear that the Denisovan ancestry in Papuans is very different from the Siberian Denisovan sequences. The most geographically distant Denisovan groups, those in Siberia and those from on the far edge of Southeast Asia into Wallacea, were likely far more genetically different from each other than Khoisan are from the rest of humanity. Depending on the assumptions you set your “molecular clock” with, the most distant Denisovan lineages probably separated into distinct populations from each other 200,000 to 400,000 years before their extinction.

John Hawks states in a Tweet:

Now, we have learned a few things from DNA and ancient proteins. H. antecessor is a sister of the Neandertal-Denisovan-modern clade. Neandertals, today's humans, and Denisovans share common ancestors around 700,000 years ago. Neandertals and Denisovans were related.

I agree that the Homo longi remains are probably Denisovans. 

The main new article is Xijun Ni, "Massive cranium from Harbin in northeastern China establishes a new Middle Pleistocene human lineage" The Innovation (June 25, 2021) (open access).

Ancient Southern Chinese DNA

Ancient Mesolithic and Neolithic DNA from Southern China reveals that it once had three distinct diverged ancestries, none of which exist in unadmixed form today. The results are largely paradigm confirming. 

More commentary and context is available in a comment from "Matt" here.

Highlights

• Guangxi region in southern China had distinct East Asian ancestry 11 kya not found today 
• At least three distinct ancestries were in southern China and SE Asia prior to 10 kya 
• Three admixed ancestries were present in pre-agricultural Guangxi 9–6 kya 
• Tai-Kadai- and Hmong-Mien-related ancestry present in Guangxi by 1.5–0.5 kya 
Summary 
Past human genetic diversity and migration between southern China and Southeast Asia have not been well characterized, in part due to poor preservation of ancient DNA in hot and humid regions. 
We sequenced 31 ancient genomes from southern China (Guangxi and Fujian), including two ∼12,000- to 10,000-year-old individuals representing the oldest humans sequenced from southern China. 
We discovered a deeply diverged East Asian ancestry in the Guangxi region that persisted until at least 6,000 years ago. 
We found that ∼9,000- to 6,000-year-old Guangxi populations were a mixture of local ancestry, southern ancestry previously sampled in Fujian, and deep Asian ancestry related to Southeast Asian Hòabìnhian hunter-gatherers, showing broad admixture in the region predating the appearance of farming. 
Historical Guangxi populations dating to ∼1,500 to 500 years ago are closely related to Tai-Kadai and Hmong-Mien speakers. 
Our results show heavy interactions among three distinct ancestries at the crossroads of East and Southeast Asia. 

Wednesday, June 23, 2021

Austronesians In Antarctica

Oral histories that are likely to be accurate indicate that Austronesian Maori mariners from New Zealand made regular trips to Antarctica starting in the 600s CE.

The paper is:

Priscilla M. Wehi, et al., "A short scan of Māori journeys to Antarctica." Journal of the Royal Society of New Zealand 1 (2021) DOI: 10.1080/03036758.2021.1917633 (open access).

For what it is worth, the abstract is poorly written, but the body text has some decent nuggets of insight.

Ancient Yellow River mtDNA

A large sample of ancient mtDNA from a Neolithic Yellow River site (in Northern China) reveals that this population or close sister populations to it are probably ancestral to the modern Northern Han people.

More Precise Neutron Lifetime Measurements Are Consistent With Corrected Standard Model Predictions

Experimental indications of new physics from a lack of unitarity in the first row of the CKM matrix and from disparities in the neutron mean lifetime are both starting to resolve due to a combination of improved Standard Model calculations and better measurements. The exact cause of the disparity between the beam based and storage based measurements of the neutron lifetime still isn't clearly understood, however. Thus, another potential indicator of new physics has gone away.

As I noted in a February 22, 2021 blog post:

There are two methods that have been used historically to measure the mean lifetime of a free neutron: the beam method and the storage method.

The beam method measures the neutron lifetime by counting the injected neutrons and their decay products in the beam. 
The storage method measures the neutron lifetime by storing ultracold neutrons in a bottle and counting the number of surviving neutrons S(1) and S(2) after distinct storage times t(1) and t(2).

The global average mean lifetime of a free neutron by the beam method is

888.0 ± 2.0 seconds.

The global average mean lifetime of a free neutron by the storage method is

879.4 ± 0.6 seconds

There is an 8.6 second (4.1 standard deviation) discrepancy between results from the two measurement methods, which is huge for fundamental physics in both absolute terms and relative to the amount of uncertainty in the respective measurements.
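
The quoted tension follows from combining the two uncertainties in quadrature, which is the standard (if simplistic) treatment for independent measurements:

    from math import sqrt

    beam, beam_err       = 888.0, 2.0   # seconds
    storage, storage_err = 879.4, 0.6   # seconds

    diff  = beam - storage
    sigma = sqrt(beam_err**2 + storage_err**2)
    print("difference = %.1f s, tension = %.1f sigma" % (diff, diff / sigma))  # 8.6 s, 4.1 sigma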

A 2014 review of the literature summed up the experimental measurements to date as follows:


The narrow uncertainty in a single measurement published in 2013 drives the seemingly low uncertainty of the beam estimate. The Particle Data Group has based its current world average (currently 879.4 ± 0.6 seconds) on "bottle" measurements. It supports this decision, in part, with a published analysis from 2018 (open access pre-print here).

The disparity is particularly striking because it involves a very common particle whose other properties have been measured to exquisite detail with significant practical applications.  It is also important, for example, in Big Bang Nucleosynthesis calculations.

More than eight seconds of disparity is hardly a precision measurement (and even the newly reported measurement by the more precise method is still less precise than the measurement precision of horserace outcomes). 

The free neutron is the longest lived hadron or fundamental particle that isn't completely stable, by a factor of roughly a billion. 

Recent theoretical work has helped resolve a disparity in determinations of the CKM matrix element for weak force mediated up to down quark transitions (which is extracted in part from measurements of the neutron mean lifetime), a value that had previously been in a roughly three sigma tension with unitarity. Using the corrected theoretical approach, "the extracted |Vud|mirror=0.9739(10) now is in excellent agreement with both neutron and superallowed 0+→0+ Fermi determinations."

The universal radiative corrections common to neutron and superallowed nuclear beta decays (also known as “inner” corrections) are revisited in light of a recent dispersion relation study that found +2.467(22)%, i.e., about 2.4σ larger than the previous evaluation. For comparison, we consider several alternative computational methods. All employ an updated perturbative QCD four-loop Bjorken sum rule defined QCD coupling supplemented with a nucleon form factor based Born amplitude to estimate axial-vector induced hadronic contributions. In addition, we now include hadronic contributions from low Q2 loop effects based on duality considerations and vector meson resonance interpolators. Our primary result, 2.426(32)%, corresponds to an average of a light-front holographic QCD approach and a three-resonance interpolator fit. It reduces the dispersion relation discrepancy to approximately 1.1σ and thereby provides a consistency check. 
Consequences of our new radiative correction estimate, along with that of the dispersion relation result, for Cabibbo-Kobayashi-Maskawa unitarity are discussed. The neutron lifetime-gA connection is updated and shown to suggest a shorter neutron lifetime less than 879 s. We also find an improved bound on exotic, non–Standard Model, neutron decays or oscillations of the type conjectured as solutions to the neutron lifetime problem, BR(n→exotics)<0.16%.
The "shift reduces the predicted neutron lifetime from 879.5(1.3) s to τn = 878.7(0.6)." This results is consistent with the new measurement to less than two sigma, also very strongly favoring the storage method over the beam method determination. The adjusted result from this paper produces a new value of the CKM matrix element Vud of "0.97414(28), where the increased error is due to an additional nuclear quenching uncertainty. Using it together with Vus=0.2243(9), one finds |VQud|2+|Vus|2+|Vub|2−1=−0.00074(68), so the first CKM row sum is consistent with unity at close to the 1σ level."

The Wikipedia values of the CKM matrix first row which don't reflect this new theoretical calculation are:

|Vud| = 0.97370(14)    |Vus| = 0.2245(8)    |Vub| = 0.00382(24)

These give a first row sum of squares of 0.99850, i.e., a deviation from unity of −0.00150(50).
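
That arithmetic can be reproduced directly (the error propagation below is naive quadrature, ignoring correlations):

    from math import sqrt

    Vud, dVud = 0.97370, 0.00014
    Vus, dVus = 0.2245, 0.0008
    Vub, dVub = 0.00382, 0.00024

    s  = Vud**2 + Vus**2 + Vub**2
    ds = sqrt((2 * Vud * dVud)**2 + (2 * Vus * dVus)**2 + (2 * Vub * dVub)**2)
    print("first row sum of squares - 1 = %.5f +/- %.5f" % (s - 1, ds))  # about -0.0015 +/- 0.0005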

The likely cause of the discrepancy is unrecognized systematic error in the beam method measurements (such as the large error in the recent J-PARC beam measurement discussed in the linked previous blog post) that causes the beam measurement to miss about 1% of decays that actually occur (or some other modeling error, such as beam neutrons not truly qualifying as "free" or relativistic adjustments being applied incorrectly). As of October 2020, experiments were in the works to identify the source of the discrepancy.

A new measurement from the paper below is in a 2.5 sigma tension with the previous global average of storage based measurements, in the direction away from the beam measurements, making the disparity between beam and storage based measurements even greater (a 5.1 sigma discrepancy). But the new measurement is only 1.4 sigma from the Standard Model prediction based upon the improved calculations and the improved measurements of the CKM matrix elements.

The new measurement's combined error of roughly ± 0.3 seconds (± 0.28 statistical, +0.22/−0.16 systematic) cuts the uncertainty of the previous storage based world average roughly in half, and will probably significantly shift the new world average storage based value in the direction of this new measurement (since measurements with less uncertainty are weighted more heavily).
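
To illustrate the weighting point, a naive inverse-variance combination of the PDG storage average with the new result is sketched below; it ignores correlations (the new result supersedes earlier UCNτ data already in the average) and any PDG scale factor, so it is only indicative:

    from math import sqrt

    old_avg, old_err = 879.4, 0.6     # PDG storage-based average, seconds
    new,     new_err = 877.75, 0.34   # new UCNtau result, stat and syst roughly combined

    w_old, w_new = 1 / old_err**2, 1 / new_err**2
    mean = (old_avg * w_old + new * w_new) / (w_old + w_new)
    err  = 1 / sqrt(w_old + w_new)
    print("illustrative combined value: %.2f +/- %.2f s" % (mean, err))  # roughly 878.2 +/- 0.3 s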

We report an improved measurement of the free neutron lifetime τn using the UCNτ apparatus at the Los Alamos Neutron Science Center. We counted a total of approximately 38 ×10^6 surviving ultracold neutrons (UCN) after storing in UCNτ ’s magneto-gravitational trap over two data acquisition campaigns in 2017 and 2018. We extract τn from three blinded, independent analyses by both pairing long and short storage-time runs to find a set of replicate τn measurements and by performing a global likelihood fit to all data while self-consistently incorporating the β-decay lifetime. Both techniques achieve consistent results and find a value τn = 877.75±0.28 stat+0.22/−0.16 syst s. With this sensitivity, neutron lifetime experiments now directly address the impact of recent refinements in our understanding of the standard model for neutron decay.

From a new preprint.

More Galaxy Dynamics Data Test Dark Matter and Modified Gravity Theories (And More)

Observations v. Modified Gravity and DM Models

The confirmation of modified gravity theories over a large range of scales generalizes to any modified gravity theory that can reproduce the core conclusions of MOND, such as MOG and Deur's approach, even though just two (MOND and Verlinde's emergent gravity) were specifically tested in this paper.
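
For concreteness, one commonly used form of the RAR/MOND interpolation function (the fitting function of McGaugh and collaborators, with the usual acceleration scale of about 1.2 × 10^-10 m/s^2) is sketched below; it illustrates the kind of fixed, galaxy-type-independent prediction discussed here, and is not necessarily the exact function used in the paper:

    from math import exp, sqrt

    g_dagger = 1.2e-10   # m/s^2, approximate characteristic acceleration scale

    def g_obs(g_bar):
        """Observed radial acceleration predicted from the baryonic acceleration alone."""
        return g_bar / (1 - exp(-sqrt(g_bar / g_dagger)))

    for g_bar in (1e-9, 1e-10, 1e-11, 1e-12):
        print("g_bar = %.0e -> g_obs = %.2e m/s^2" % (g_bar, g_obs(g_bar)))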

I would like to know if the early- and late-type galaxies have different shapes, although a difference in interstellar gas concentrations does provide a plausible way to reconcile a disparity between the two galaxy types. This could provide another Deur motivated distinction between the two groups of galaxies. 

The paper does suggest that there is a shape difference between the two types of galaxies:




The early type, bulge dominated galaxies have larger inferred halo masses relative to ordinary mass, which is the opposite of what I had expected (as the paper also notes, spiral galaxies tend to have more inferred dark matter than elliptical galaxies, and bulge dominated galaxies would seem to be closer to elliptical galaxies). But this could be, as the paper suggests, a function of unobserved circumgalactic gas being present to a larger degree in these early type galaxies than in late type galaxies of the same stellar mass. The paper states:
The higher values of gobs for red and bulge-dominated galaxies that we find in Fig. 8 are in qualitative agreement with earlier GGL studies. A recent KiDS-1000 lensing study by Taylor et al. (2020) found that, within a narrow stellar mass range near the knee of the SHMR (M* ∼ 2−5 × 10^10 h_70^−2 M⊙), galaxy halo mass varied with galaxy colour, specific star formation rate (SSFR), effective radius Re and Sérsic index n. Although not explicitly mentioned, their figures 1 and 6 reveal that their early-type (red, low-SSFR) galaxies have larger halo masses than their late-type (blue, low-n, high-SSFR) galaxies of the same stellar mass. Sérsic parameter coupling between n and Re, for a fixed galaxy luminosity, may also contribute towards the trends seen among the early-type galaxies in their Mhalo–n and Mhalo–Re diagrams. Much earlier Hoekstra et al. (2005) measured the GGL signal of a sample of ‘isolated’ Red-sequence Cluster Survey galaxies as a function of their rest-frame B-, V-, and R-band luminosity, and found that early-type galaxies have lower stellar mass fractions. In contrast, Mandelbaum et al. (2006) found no dependence of the halo mass on morphology for a given stellar mass below M* < 10^11 M⊙, although they did find a factor of two difference in halo mass between ellipticals and spirals at fixed luminosity.

Finding a significantly different RAR at equal M* would have interesting implications for galaxy formation models in the ΛCDM framework. In the ΛCDM framework it is expected that the galaxy-to-halo-mass relation, and therefore the RAR, can be different for different galaxy types through their galaxy formation history (Dutton et al. 2010; Matthee et al. 2017; Posti et al. 2019; Marasco et al. 2020). Two parameters that correlate heavily with galaxy formation history are Sérsic index and colour.

Current MG theories do not predict any effect of galaxy morphological type on the RAR, at least on large scales [ed. not true for Deur's approach]. The MOND paradigm gives a fixed prediction for the relation between gbar and gobs given by Eq. 11. Since the RAR is the observation of exactly this relation, in principle MOND gives a fixed prediction, independent of any galaxy characteristic. As discussed in Section 2.3, the main exception is the EFE that could be caused by neighbouring mass distributions. However, Fig. 4 shows that an increase in the EFE only predicts an increase in steepness of the downward RAR slope at low accelerations (gbar < 10^−12 m s^−2 ), while the observed RAR of both early- and late-type galaxies follow approximately the same slope across all measured accelerations. It is therefore unlikely that their amplitude difference can be explained through the EFE.

. . .

In conclusion, unless early-type galaxies have significant circumgalactic gaseous haloes while late types (of the same stellar mass) do not, the difference we find in the RARs of different galaxy types might prove difficult to explain within MG frameworks. In MOND, gbar and gobs should be directly linked through Eq. 11 without any dependence on galaxy type. 
In EG the effect might be a consequence of yet unexplored aspects of the theory, such as a non-symmetric mass distribution or the effect of large-scale dynamics. To explore whether this is the case, however, more theoretical work is needed. Through the derivative in Eq. 14, EG does include a dependence on the slope of the baryonic density distribution. A shallower slope of Mbar(r) increases MADM and thus gobs, which might solve the current tension if early-type galaxies have significantly shallower baryonic mass distributions that extend far beyond 30 h_70^−1 kpc, such as gaseous haloes (although Brouwer et al. 2017 did not find evidence for a significant effect of the baryonic mass distribution on the EG prediction; see their section 4.3). In addition, EG is currently only formulated for spherically symmetric systems. It would be interesting to investigate whether discs and spheroidal galaxies yield different predictions, and whether these differences would extend beyond 30 h_70^−1 kpc. 
In a ΛCDM context, our findings would point to a difference in the SHMR for different galaxy types. Recently Correa & Schaye (2020) used SDSS data with morphological classifications from Galaxy Zoo to find that, at fixed halo mass (in the range 10^11.7−10^12.9 M⊙), the median stellar mass of SDSS disc galaxies was a factor of 1.4 higher than that of ellipticals. They found this to be in agreement with the EAGLE simulations, where haloes hosting disc galaxies are assembled earlier than those hosting ellipticals, therefore having more time for gas accretion and star formation.

I would also suggest that, in the absence of a clear reason to prefer a ΛCDM model, modified gravity explanations should be preferred as more economical and as not facing so many challenges in other areas. The ΛCDM explanation also comes across as much more ad hoc and hasn't made ex ante predictions.

The new paper and its abstract are as follows:

We present measurements of the radial gravitational acceleration around isolated galaxies, comparing the expected gravitational acceleration given the baryonic matter with the observed gravitational acceleration, using weak lensing measurements from the fourth data release of the Kilo-Degree Survey. 
These measurements extend the radial acceleration relation (RAR) by 2 decades into the low-acceleration regime beyond the outskirts of the observable galaxy. We compare our RAR measurements to the predictions of two modified gravity (MG) theories: MOND and Verlinde's emergent gravity. We find that the measured RAR agrees well with the MG predictions. In addition, we find a difference of at least 6σ between the RARs of early- and late-type galaxies (split by Sérsic index and u−r colour) with the same stellar mass. Current MG theories involve a gravity modification that is independent of other galaxy properties, which would be unable to explain this behaviour. The difference might be explained if only the early-type galaxies have significant (Mgas≈M∗) circumgalactic gaseous haloes. 
The observed behaviour is also expected in ΛCDM models where the galaxy-to-halo mass relation depends on the galaxy formation history. We find that MICE, a ΛCDM simulation with hybrid halo occupation distribution modelling and abundance matching, reproduces the observed RAR but significantly differs from BAHAMAS, a hydrodynamical cosmological galaxy formation simulation. Our results are sensitive to the amount of circumgalactic gas; current observational constraints indicate that the resulting corrections are likely moderate. 
Measurements of the lensing RAR with future cosmological surveys will be able to further distinguish between MG and ΛCDM models if systematic uncertainties in the baryonic mass distribution around galaxies are reduced.
Margot M. Brouwer, et al., "The Weak Lensing Radial Acceleration Relation: Constraining Modified Gravity and Cold Dark Matter theories with KiDS-1000" (June 22, 2021), 650 Astronomy & Astrophysics A113 (2021).

More Primordial Black Hole Exclusions

Meanwhile, primordial black hole dark matter theories suffer another blow, although in a size range much greater than the asteroid sized PBHs most commonly considered as a dark matter candidate.
The possibility that primordial black holes (PBHs) form a part of dark matter has been considered over a wide mass range from the Planck mass (10^−5 g) to the level of the supermassive black hole in the center of the galaxy. Primordial origin might be one of the most important formation channel of massive black holes. We propose the lensing effect of very long baseline interferometer observations of compact radio sources with extremely high angular resolution as a promising probe for the presence of intergalactic PBHs in the mass range ∼10^2-10^9 M⊙. 
For a sample of well-measured 543 compact radio sources, no millilensing multiple images are found with angular separations between 0.2 milliarcsecond and 50 milliarcseconds. From this null search result, we derive that the fraction of dark matter made up of PBHs in the mass range ∼10^4-10^8 M⊙ is ≲0.56% at 68% confidence level.

Disintegrating Open Galaxy Clusters

Finally, evidence that "open" galaxy clusters sometimes fall apart.

Thursday, June 17, 2021

Evidence Of Agriculture In The Amazon Ca. 1500 BCE

Some of the least well understood prehistoric cultures of the Americas are the sedentary food producing cultures of the Amazon river basin in South America, much of which is marsh and jungle now, and some of which was transformed into a savanna landscape when European cattle ranchers cleared the forest for this purpose. 

This farming and fish farming culture existed from at least 1500 BCE to 300 CE in the region.

The University of Central Florida's press release regarding a new paper notes that:
[P]re-Columbian people of a culturally diverse but not well-documented area of the Amazon in South America significantly altered their landscape thousands of years earlier than previously thought. . . . [There is] evidence of people using fire and improving their landscape for farming and fishing more than 3,500 years ago. This counters the often-held notion of a pristine Amazon during pre-Columbian times before the arrival of Europeans in the late 1400s. The study . . . also provides more clues to the past of the diverse, but not well-documented, cultures that live in the area known as the Llanos de Mojos in northeastern Bolivia.

"This region has one the highest diversity of languages in the world, which reflects distinct ways of life and cultural heritage," says study co-author John Walker, an associate professor in UCF's Department of Anthropology. "We know something about the last 3,000 to 4,000 years of, say Europe or the Mediterranean, but we don't have some of that same information for the people here. That makes this an incredible story waiting to be written.". . . 

The flat, wetland landscape of the Llanos de Mojos is used for cattle ranching today, but archaeologists have noted for years the evidence from remnants of pre-Columbian raised fields and fish weirs for aquaculture. These remnants indicated the land was once used instead for farming and fishing. The archaeologists just didn't know when or how far back in time these activities started -- until now.

Previous research pointed to a date of about 300 C.E., or about 1,700 years ago. However, the new study combined expertise from multiple disciplines, such as anthropology, paleoethnobotany and paleoecology, to indicate that intensive land management started much earlier, at about 1,500 B.C.E, or about 3,500 years ago.

"This finding is important because it provides evidence that the Amazon is not a pristine wilderness but has been shaped and designed by indigenous people thousands of years before the Spanish arrived," Walker says.

This is new information for both the history of the cultures of the Amazon, which have not been studied as much as other cases, like the Mayas or Incas, and for the area, which is often thought of as an untouched world before the arrival of the Spanish.

Neil Duncan, the study's lead author . . . extracted two, five-foot long cores of earth from two locations about 13 miles apart in the Llanos de Mojos. By examining these cores, Duncan found corn and squash phytoliths dating as early as 1380 B.C.E and 650 B.C.E, or about 3,000 years ago. Phytoliths are microscopic silica particles from plant tissue, and the findings suggest these were crops grown in the numerous raised fields that dot the area. . . .

Both cores showed similar trends of initial dry conditions in the oldest layers of earth, followed by increased wet conditions and increased use of wood burning, as evidenced by the presence of high diatom concentrations and charcoal concentrations, respectively. The researchers say wood burning could be for cooking, pottery, warmth and more. . . .

"The intensification of plant, fire and water management occurred at the same time, which emphasizes how farming or fishing were equally important to the people of the region," . . .

Also of note is that the shifts in the two cores to more intensive land management happened at different periods, the researchers say.

One core, known as the Mercedes core, showed the shift to wetter conditions and increased fire use starting at 1,500 B.C.E, or about 3,500 years ago. The other, extracted from a location about 13 miles farther south and known as the Quinato-Miraflores core, showed the shift occurring at about 70 B.C.E., or about 2,100 years ago.

Since broadscale climate changes would have affected both areas at the same time, the time difference between the two cores suggests humans were purposefully engineering the land, including draining water in some areas, retaining it in others, and using trees for fuel.

"So, what's happening in the landscape is that that it's becoming wetter, and we think that some of those trees are being flooded out and so they're not as well represented," Duncan says. "And if things are getting wetter then we shouldn't see more charcoal. So, the interpretation is that we would only see these high amounts of charcoal if it's humans doing some very intentional and intensive burning."

The paper and its abstracts are as follows:

Significance

The Chavín, Moche, Tiwanaku, and Inka are well-known pre-Columbian cultures, but during the same time, in the southwestern Amazon, people were transforming a 100,000-km2 landscape over thousands of years. The extent of earthworks in the Llanos de Mojos has become clear since the 1960s, but dating these features has been difficult. We show that pre-Columbian people used hydrological engineering and fire to maximize aquatic and terrestrial resources beginning at least 3,500 years ago. 
In the 17th century CE, cattle and new technologies brought by Jesuit missions altered the form and function of these landscapes. 
The scale and antiquity of these Amazonian earthworks demand comparison with domesticated landscapes and civilizations from around the world.

Abstract

In landscapes that support economic and cultural activities, human communities actively manage environments and environmental change at a variety of spatial scales that complicate the effects of continental-scale climate. 
Here, we demonstrate how hydrological conditions were modified by humans against the backdrop of Holocene climate change in southwestern Amazonia. 
Paleoecological investigations (phytoliths, charcoal, pollen, diatoms) of two sediment cores extracted from within the same permanent wetland, ∼22 km apart, show a 1,500-y difference in when the intensification of land use and management occurred, including raised field agriculture, fire regime, and agroforestry. Although rising precipitation is well known during the mid to late Holocene, human actions manipulated climate-driven hydrological changes on the landscape, revealing differing histories of human landscape domestication. 
Environmental factors are unable to account for local differences without the mediation of human communities that transformed the region to its current savanna/forest/wetland mosaic beginning at least 3,500 y ago. Regional environmental variables did not drive the choices made by farmers and fishers, who shaped these local contexts to better manage resource extraction. 
The savannas we observe today were created in the post-European period, where their fire regime and structural diversity were shaped by cattle ranching.

Thursday, June 10, 2021

A Stunning New Neutral D Meson Oscillation Anomaly

Shorter Summary 

For the first time ever, an experiment at the Large Hadron Collider (LHC) has seen a highly statistically significant (albeit slight) difference in mass between a particle and its antiparticle, contrary to the Standard Model. This has suddenly become the most statistically significant anomaly in all of high energy physics.

Analysis

It isn't clear why this is the case, or whether there is any good reason to suspect underestimated systematic error (other than the fact that it contradicts the Standard Model and wasn't strongly predicted by any of the front runner beyond the Standard Model physics theories currently in circulation as viable proposals in light of other HEP experimental data).

If this result is independently replicated by another experiment (since LHCb operates at lower energies than the ATLAS and CMS experiments, there are other colliders in the world that can do so), this will be a very big deal, probably implying "new physics" of some undetermined nature. 

But, despite the high statistical significance of the result, because the discrepancy is so small in both absolute and relative terms, it is easy to imagine an overlooked source of systematic error in the measurement that could resolve the anomaly on highly technical grounds (although I have no specific candidates in mind).

Is This A Discovery Yet?

While "5 sigma" statistical significance is the standard for making a new discovery in high energy physics, this isn't the only requirement. 

The preprint results still need to be peer reviewed, the 5 sigma observation needs to be independently replicated, and the proponents of new physics based upon the observation need to propose some theory to explain that result that is consistent with the rest of the laws of physics supported by empirical observations in contexts where anomalies aren't seen. 

This long road is a bit like the Roman Catholic church's arduous process for declaring someone to be a "saint." But this result is enough to get the ball rolling towards a widely recognized beyond the Standard Model physics result that all credible high energy physicists would have to accept and reckon with somehow.

The New Results In Context

A Dº meson has two valence quarks, a charm quark and an anti-up quark, bound by gluons. The Dº mass is about 1865 MeV. Its antiparticle, anti-Dº meson also has two valence quarks, an anti-charm quark and an up quark. Since it is neutral, the electromagnetic charge of the particle and the antiparticle is the same. Both the particle and its antiparticle are pseudoscalar (i.e. spin-0 odd parity) mesons.

The particle and its antiparticle can't oscillate in "tree level" (i.e. single step Feynman diagram) processes. But, they can oscillate via a number of two step processes.

For example, the charm quark can decay to a strange quark which can decay to an up quark, while the anti-up quark can become an anti-strange quark which can become an anti-charm quark, in each case, via two rounds of simultaneous W boson transitions, one W- and one W+ each, and those four weak force bosons can be virtual ones that cancel each other out.

Because it can happen, the Standard Model predicts that it will happen, so Dº-anti-Dº meson oscillation, while nice to observe in order to confirm that prediction, is no big deal.

The probability of going from a Dº to an anti-Dº meson and the probability of going from an anti-Dº meson to a Dº meson are not identical due to the CP violating phase in the CKM matrix. So, the observed oscillating Dº-anti-Dº pair isn't exactly a 50-50 mix of each of them, although the difference between what was observed and a 50-50 mix wasn't statistically significant (as predicted by the Standard Model given the precision of the measurements done).

But, as a PDG review paper explains, in the Standard Model, the mass and decay width of the Dº meson and the mass and decay width of the anti-Dº meson should be identical.

A preprint of a June 7, 2021 letter from the LHCb experiment, however, concludes that there is a small, but still 7.3 sigma, mass difference (without considering look elsewhere effects) between a Dº meson and an anti-Dº meson, the first time ever that a particle and its antiparticle of any kind have been observed to have statistically significantly different masses, along with a statistically significant 3.1 sigma difference in decay widths.

Normalized by the decay width, the mass difference is 0.397% of the decay width, and the decay width difference is 0.459% of the average decay width.  The average decay width is less than 2.1 MeV. So, the observed discrepancy between the masses of the Dº and anti-Dº meson is on the order of 0.008 MeV or less, and the observed discrepancy between the decay widths of the Dº and anti-Dº meson is on the order of 0.009 MeV or less. 

In both absolute terms (less than 2% of the electron mass) and relative to the mass of the Dº meson (on the order of a few parts per million), these differences are very small. 
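
For concreteness, the arithmetic above can be reproduced in a few lines (a quick sketch using only the figures quoted in this post, not the LHCb analysis itself):

```python
# Back-of-the-envelope check of the figures quoted above. The 2.1 MeV bound
# on the average decay width and the fractional differences are taken from
# the text of this post, not from the LHCb paper's own tables.
width_bound = 2.1       # MeV, upper bound on the average decay width used above
m_D0 = 1865.0           # MeV, approximate Dº mass
m_e = 0.511             # MeV, electron mass

delta_m = 0.00397 * width_bound        # mass difference, ~0.008 MeV
delta_gamma = 0.00459 * width_bound    # decay width difference, ~0.01 MeV

print(f"mass difference        ~ {delta_m:.4f} MeV")
print(f"decay width difference ~ {delta_gamma:.4f} MeV")
print(f"mass difference / electron mass: {delta_m / m_e:.1%}")        # ~1.6%
print(f"mass difference / Dº mass: {delta_m / m_D0 * 1e6:.1f} ppm")   # a few ppm
```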

But, as noted above, the differences are reported to be statistically significant despite the fact that in the Standard Model they shouldn't occur at all. 

Look elsewhere effects probably reduce the decay width difference to a non-statistically significant level of 2 sigma or less, because over the years probably something like a thousand or more HEP matter-antimatter comparisons have been done, all producing null results. But, even with look elsewhere effects, the locally 7.3 sigma mass difference should remain at least 4-5 sigma. It is also worth noting that the mean percentage mass difference reported in this preprint does replicate the prior world average for this measurement (which previously was "marginally compatible with" no mass difference due to a much larger experimental uncertainty in previous measurements).
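
As a rough sketch of this trials-factor reasoning, assuming (as speculated above) on the order of 1,000 comparable past measurements and a simple Bonferroni-style correction (the experiments' actual look elsewhere treatments are more involved and depend on how correlated those measurements are):

```python
# Rough look elsewhere correction: inflate the local p-value by an assumed
# trials factor and convert back to a significance. The figure of ~1,000
# prior matter-antimatter comparisons is this post's guess, not a known count.
from scipy.stats import norm

TRIALS = 1000  # assumed number of comparable past measurements

def global_significance(local_sigma, trials=TRIALS):
    """Convert a local significance (in sigma) into a rough global one."""
    p_local = norm.sf(local_sigma)          # one-sided local p-value
    p_global = min(1.0, trials * p_local)   # Bonferroni bound on the global p-value
    return norm.isf(p_global)               # back to units of sigma

for label, sigma in [("mass difference", 7.3), ("decay width difference", 3.1)]:
    g = global_significance(sigma)
    note = f"~{g:.1f} sigma globally" if g > 0 else "not significant globally"
    print(f"{label}: {sigma} sigma locally -> {note}")
```

With these assumptions the 7.3 sigma mass difference survives at roughly 6 sigma, while the 3.1 sigma decay width difference does not survive at all, in line with the estimates above.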

The Letter contains no meaningful or insightful commentary regarding what could be leading to this bombshell conclusion.

Wednesday, June 9, 2021

Are Baryon Number And Lepton Number Ever Violated?


I am not confident that even sphaleron interactions actually happen. But whether or not they do at very high energies (which are far beyond what can be reproduced at the current LHC and were predominantly confined to the first microsecond after the Big Bang) is largely irrelevant to the larger conclusions about the source of the baryon asymmetry of the universe, because the effect size is too small. 

A sphaleron requires roughly 9 TeV of energy to be concentrated within a radius of about 8.4 * 10^-17 meters (roughly a tenth of the proton's radius, i.e., a volume about 1,000 times smaller than a proton's). The Schwarzschild radius of a 9 TeV sphaleron is about 2.4 * 10^-50 meters. The Planck length is about 1.6 * 10^-35 meters. A Planck length black hole, arguably the least massive possible black hole, has a mass of about 1.2 * 10^16 TeV (22 micrograms).
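
Those figures can be checked with a few lines of arithmetic from standard physical constants (a quick sketch, independent of any cited source):

```python
# Reproduce the figures quoted above from standard physical constants.
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
eV   = 1.602e-19   # joules per electron-volt

E_sphaleron = 9e12 * eV              # ~9 TeV in joules
m_sphaleron = E_sphaleron / c**2     # equivalent mass, kg
r_s = 2 * G * m_sphaleron / c**2     # Schwarzschild radius, m

l_planck = (hbar * G / c**3) ** 0.5  # Planck length, m
m_planck = (hbar * c / G) ** 0.5     # Planck mass, kg

print(f"Schwarzschild radius of 9 TeV: {r_s:.1e} m")            # ~2.4e-50 m
print(f"Planck length: {l_planck:.1e} m")                        # ~1.6e-35 m
print(f"Planck mass: {m_planck * 1e9:.0f} micrograms")           # ~22 micrograms
print(f"Planck mass: {m_planck * c**2 / (1e12 * eV):.1e} TeV")   # ~1.2e16 TeV
```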

Creating a sphaleron would probably require a collider at least 100 times as powerful as the LHC, if it is possible at all. This is probably something that the next generation of more powerful particle colliders will not achieve. 

Creating a sphaleron would require a mass-energy density more than nine million times greater than that of a neutron star (on the order of 10^17 kg/m^3) or of a minimum sized stellar collapse black hole. A mass-energy density this great has never been observed. 
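
A similar back-of-the-envelope sketch reproduces the density comparison, assuming a proton mass-energy of about 0.938 GeV, a proton radius of about 0.84 femtometers, the sphaleron radius quoted above, and treating nuclear density as comparable to a neutron star's:

```python
import math

# Sphaleron energy density relative to a proton, using the radii quoted above.
E_sphaleron_GeV = 9000.0    # ~9 TeV
E_proton_GeV = 0.938        # proton rest mass-energy
r_proton = 0.84e-15         # m, approximate proton charge radius
r_sphaleron = 8.4e-17       # m, the sphaleron radius used in this post

# Energy density scales as energy / radius^3.
ratio = (E_sphaleron_GeV / E_proton_GeV) * (r_proton / r_sphaleron) ** 3
print(f"sphaleron density / proton density ~ {ratio:.1e}")   # ~1e7, i.e. roughly ten million

# Proton density itself, for reference; nuclear and neutron star core
# densities are of the same rough order (a few times 10^17 kg/m^3).
m_proton = 1.673e-27   # kg
rho_proton = m_proton / (4 / 3 * math.pi * r_proton**3)
print(f"proton density ~ {rho_proton:.1e} kg/m^3")                      # ~7e17
print(f"implied sphaleron density ~ {ratio * rho_proton:.1e} kg/m^3")   # ~6e24
```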

Concentrating mass-energy to this degree may not be possible. It is possible, and plausible, that there is instead a maximum mass-energy density in the universe, something that might provide a fixed point that makes an asymptotically safe theory of quantum gravity possible. If this threshold existed, it would prevent the non-conservation of baryon number and lepton number from occurring at all. It would also make primordial black holes not merely impossible to produce in practice in the modern era, but theoretically impossible.

This isn't necessarily inconsistent with the observed baryon asymmetry of the universe. The baryon asymmetry could arise from a scenario in which the universe is born as one of a pair of universes, one the CPT image of the other, living in the pre- and post-Big Bang epochs. CPT invariance strictly constrains the vacuum states of the quantized fields. Thus, before the Big Bang there is a predominantly antimatter universe in which time runs backwards relative to our own. See also here and here.

Tuesday, June 8, 2021

Another Observational Problem With ΛCDM

The problems with the ΛCDM model, which is the "Standard Model of Cosmology," aren't limited to the widely discussed galaxy scale problems. Increasingly, there are also problems with it at the time depths and scales reserved for cosmology.

The latest problem is that the model assumes that at a large enough scale the universe is isotropic and homogeneous (i.e., basically the same in all directions and at all locations). But compelling observational evidence contradicts this assumption. 

The evidence for dark energy also isn't as strong as commonly believed in the first place. As the body text of a new preprint explains, contrary to a late 1990s study of 93 Type Ia supernovae that showed clear evidence of an accelerated expansion of the universe, a principled statistical analysis of the larger 740 supernova 2014 Joint Lightcurve Analysis catalogue data set, done in 2016, "demonstrated that the evidence for acceleration is rather marginal i.e. < 3σ[.]" 

Furthermore, it observes that: "The observed Universe is not quite isotropic. It is also manifestly inhomogeneous."

This doesn't necessarily mean that the concept of dark energy is fundamentally wrong, or that dark energy phenomena aren't real. But if these fundamental assumptions are flawed and the evidence is fairly equivocal to start with, then it follows that our estimates of the magnitude of dark energy phenomena are far less reliable and precise than currently widely believed.
In the late 1990's, observations of 93 Type Ia supernovae were analysed in the framework of the FLRW cosmology assuming these to be `standard(isable) candles'. It was thus inferred that the Hubble expansion rate is accelerating as if driven by a positive Cosmological Constant Λ. This is still the only direct evidence for the `dark energy' that is the dominant component of the standard ΛCDM cosmological model. Other data such as BAO, CMB anisotropies, stellar ages, the rate of structure growth, etc are all `concordant' with this model but do not provide independent evidence for accelerated expansion. 
Analysis of a larger sample of 740 SNe Ia shows that these are not quite standard candles, and highlights the "corrections" applied to analyse the data in the FLRW framework. The latter holds in the reference frame in which the CMB is isotropic, whereas observations are made in our heliocentric frame in which the CMB has a large dipole anisotropy. This is assumed to be of kinematic origin i.e. due to our non-Hubble motion driven by local inhomogeneity in the matter distribution. 
The ΛCDM model predicts how this peculiar velocity should fall off as the averaging scale is raised and the universe becomes sensibly homogeneous. However observations of the local `bulk flow' are inconsistent with this expectation and convergence to the CMB frame is not seen. Moreover the kinematic interpretation implies a corresponding dipole in the sky distribution of high redshift quasars, which is rejected by observations at 4.9σ. The acceleration of the Hubble expansion rate is also anisotropic at 3.9σ and aligned with the bulk flow. Thus dark energy may be an artefact of analysing data assuming that we are idealised observers in an FLRW universe, when in fact the real universe is inhomogeneous and anisotropic out to distances large enough to impact on cosmological analyses.
Roya Mohayaee, Mohamed Rameez, Subir Sarkar, "Do supernovae indicate an accelerating universe?" arXiv:2106.03119 (June 6, 2021).