Monday, October 31, 2022

Ethnic Turkish Expansion

Razib Khan has been publishing a three-part series on the genetics and human population history of Anatolia on his Unsupervised Learning platform. His latest installment, on how Anatolia became Turkish, is here and opens with the useful summary map below. 

The site is worth subscribing to, if you have the time for it, and the quality is, as usual, first rate: artfully written and informed by Razib's wide historical, linguistic and genetic knowledge. I agree pretty much 100% with the narrative told by the map below.

Turkish invaders arriving ca. 1100 CE (a quite accurate estimate; more specifically, the influx is historically attested to 1071 CE) led to language shift in Anatolia, even though genetic estimates are that these invaders account for only perhaps 8%-20% of modern Turkish genetic ancestry (probably closer to the low end of that range). Their expansion was a classic case of expanding herder tribes confronting sedentary farmers.

Turkish expansion predates the Mongol Empire and postdates the Bronze Age by centuries. It was arguably the last of a series of sweeps, east to west and west to east, across the broad region comprising the Pontic-Caspian steppe, Central Asia, and Siberia. The Mongol Empire left only a weak trace after it collapsed, but one could arguably call Russian expansion to the east the last such sweep.

A Novel Method Of Measuring Time

A new paper describes a novel way of measuring time. It involves no new scientific discoveries and is simply a clever engineering application of existing science. But it has the potential to be useful in a wide variety of scientific and engineering applications.

[A]ccording to researchers from Uppsala University in Sweden. . . . experiments on the wave-like nature of something called a Rydberg state have revealed a novel way to measure time that doesn't require a precise starting point.

Rydberg atoms are the over-inflated balloons of the particle kingdom. Puffed-up with lasers instead of air, these atoms contain electrons in extremely high energy states, orbiting far from the nucleus. . . . In some applications, a second laser can be used to monitor the changes in the electron's position, including the passing of time. These 'pump-probe' techniques can be used to measure the speed of certain ultrafast electronics, for instance.
From here, citing Marta Berholts, et al., "Quantum watch and its intrinsic proof of accuracy" 4 Phys. Rev. Research 043041 (October 18, 2022). The abstract of the paper is as follows:
We have investigated the rich dynamics of complex wave packets composed of multiple high-lying Rydberg states in He. A quantitative agreement is found between theory and time-resolved photoelectron spectroscopy experiments. We show that the intricate time dependence of such wave packets can be used for investigating quantum defects and performing artifact-free timekeeping. The latter relies on the unique fingerprint that is created by the time-dependent photoionization of these complex wave packets. These fingerprints determine how much time has passed since the wave packet was formed and provide an assurance that the measured time is correct. Unlike any other clock, this quantum watch does not utilize a counter and is fully quantum mechanical in its nature. The quantum watch has the potential to become an invaluable tool in pump-probe spectroscopy due to its simplicity, assurance of accuracy, and ability to provide an absolute timestamp, i.e., there is no need to find time zero.
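The timekeeping trick is easy to illustrate numerically: a wave packet built from several Rydberg levels produces a beat pattern that does not repeat over the relevant time window, so a short snapshot of the signal can be matched against a precomputed template to read off the elapsed time with no time-zero reference. Below is a minimal toy sketch of that idea, assuming hydrogen-like energy levels and equal populations (the real experiment used helium Rydberg states and time-resolved photoelectron spectra; none of the numbers below are from the paper):

```python
import numpy as np

# Toy model of the "quantum watch": a wave packet of high-n Rydberg
# states produces a photoionization-like beat signal whose pattern is a
# fingerprint of elapsed time. Illustrative sketch only.

RYDBERG = 13.605693   # eV
HBAR = 6.582119e-16   # eV*s

ns = np.arange(20, 26)                 # principal quantum numbers in the packet
energies = -RYDBERG / ns**2            # hydrogen-like Rydberg energies (eV)
amps = np.ones_like(ns, dtype=float)   # equal populations, for simplicity

def signal(t):
    """Beat signal |sum_n c_n exp(-i E_n t / hbar)|^2 at times t (seconds)."""
    phases = np.exp(-1j * np.outer(t, energies) / HBAR)
    return np.abs(phases @ amps) ** 2

# Precompute the fingerprint template on a fine grid (0 to 500 ps).
t_grid = np.linspace(0, 500e-12, 50_001)
template = signal(t_grid)

# "Measure" a short snippet starting at an unknown elapsed time t0 ...
t0 = 137.4e-12
snippet = signal(t0 + np.linspace(0, 5e-12, 501))

# ... and recover t0 by sliding the snippet along the template.
window = len(snippet)
errs = [np.sum((template[i:i + window] - snippet) ** 2)
        for i in range(len(template) - window)]
t_recovered = t_grid[int(np.argmin(errs))]
print(f"true t0 = {t0 * 1e12:.1f} ps, recovered = {t_recovered * 1e12:.1f} ps")
```

The least-squares match recovers the 137.4 ps offset in this toy, which is the sense in which the fingerprint provides an "absolute timestamp" with no need to find time zero.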

Thursday, October 27, 2022

Theoretical Calculations Are Converging On The Measured Muon g-2 Result

Background

Two high precision efforts have been made to measure a property of muons (which are identical to electrons except for their greater mass) called the anomalous magnetic moment of the muon, or muon g-2, and the two experimental results largely confirm each other. A few years from now, a third precision measurement of muon g-2 will be made, and it is widely expected to confirm the two prior high precision measurements.

When the new muon g-2 measurement results were announced on April 7, 2021, there were two leading calculations of what muon g-2 should be in the Standard Model of Particle Physics, which differed in the value they assigned to the indirect QCD (i.e. strong force) contribution to this observable, which is measured to extreme precision.

One calculation, from the Theory Initiative and supported by the group that conducted the latest measurement, shows a highly significant deviation from the experimentally measured value. It is based upon a mix of different experimental measurements and actual QCD calculations.

The other calculation, by the BMW group, was consistent with the experimental measurement and used more or less pure lattice QCD calculations from first principles, without substituting experimental results for parts of the QCD calculation. The BMW group asserts that the Theory Initiative's substitution of experimental data wasn't done properly.

It also needs to be said that both predictions of the value of, and both precision experimental measurements of, the muon anomalous magnetic moment (without the -2 modification or the customary division by two) are identical when rounded to the first nine digits (basically, up to the parts per billion level). The range of values is: 

2.002 331 84 + 0.000 000 002 - 0.000 000 004. 

It is only the extreme precision of both the measurements and the ability to calculate a predicted value of this quantity in the Standard Model that makes it possible to notice that any of these four values differ significantly from any of the others. This open question in physics is being explored against a background of wide agreement on most points. The differences boil down to modest differences in how to conduct the QCD part of the calculation, which makes only a small contribution to the total value of the muon anomalous magnetic moment.

So, even if the Standard Model is missing some new physics that is reflected in the observed value of muon g-2, the magnitude of that new physics must be quite small.

All of this background can be summed up in the following chart:


Where Are We Eighteen Months Later?

Increasingly, more than a year and a half later: "Our new lattice calculations are making it more apparent that the theoretical prediction [for the value of the muon g-2] is likely to move closer to the measured result."

This statement is based largely on multiple papers looking at the HVP window (a subpart of the HVP calculation) this summer.

In other words, the BMW calculation is increasingly looking more correct than the Theory Initiative calculation.

What's At Stake?

The stakes are high because the measured value of muon g-2 is a powerful global measure of the extent to which all components of the Standard Model combined work together to reproduce a highly precisely measurable observable quantity.

A significant experimental deviation from the correct Standard Model calculation of muon g-2 implies that beyond the Standard Model (BSM) physics must exist, and approximates the magnitude of those BSM physics effects.

But, if the correct Standard Model calculation of muon g-2 is consistent with the experimental measurement, then a huge swath of possible deviations from the Standard Model are ruled out. 

If the measured value of muon g-2 is consistent with a correctly calculated value of muon g-2 in the Standard Model, then only BSM effects that cancel out in the muon g-2 calculation, or BSM effects that have a negligible impact on the muon g-2 calculation (essentially only extremely high energy effects), would be consistent with this experimental result. 

A muon g-2 measurement that is consistent with a correct Standard Model calculation of muon g-2 is particularly disappointing for anyone hoping to see BSM physics at a next generation particle collider, because muon g-2 is much more heavily influenced by new physics that is just out of reach than by new physics that only manifests significantly at energies far beyond the energy scales of even a next generation particle collider. 

Even if the muon g-2 measurement is not perfectly consistent with the correct Standard Model calculation of muon g-2, the smaller the difference is, the more subtle or remote in energy scale the new physics giving rise to the discrepancy must be. So, every time the theoretical calculation of the Standard Model prediction for muon g-2 gets closer to the experimentally measured result, the expected magnitude of any new physics gets smaller and harder to observe directly.

The Large Hadron Collider, meanwhile, has so far failed to detect any really significant new physics or anomalies, other than inconclusive hints that a property of the three charged leptons (electrons, muons, and tau leptons) called "lepton universality" may not hold. 

But this weak force driven effect, by itself, even if it is real, would be smaller than what the Theory Initiative claims for the muon g-2 anomaly, because the weak force contribution to the value of muon g-2 is small and very precisely calculated with validating measurements in other contexts.

The absence at the LHC of any observed BSM physics that would affect muon g-2, combined with an experimental value of muon g-2 consistent with the correctly calculated value in the Standard Model, would make the prospects of a new physics discovery at a next generation particle collider weak indeed, which undermines the scientific justification for building one right away.

The lack of new physics other than the Higgs boson at the LHC, and the lack of indications of new particle physics from other sources like the muon g-2 measurement, is what physicists sometimes call the "nightmare scenario," because it means that they have no new physics to discover and no Nobel Prizes to win for discovering it.

The Implied Magnitude Of The Hadronic Component

Muon g-2 is the sum of a pure QED component (which is predominant and ultra-precisely calculated), an electroweak force component (which is the smallest contribution, but is precisely calculated), and a hadronic (strong force) component.

The combined result of the experimental measurements of muon g-2 (all of the numbers that follow are in the conventional (g-2)/2 form, in units of 10^-11) is:

116,592,061.00(4100).

The QED + EW predicted value is:

116,584,872.53(101) 

About 99% of the combined uncertainty in this value is from the EW component.

The difference, which is the experimentally implied hadronic component value, is:

7,188.47(4101)

This has a plus or minus two sigma range of:

7,106.45 to 7,270.49
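These figures are simple to verify; a minimal sketch of the arithmetic, using the values quoted above:

```python
import math

# Reproduce the arithmetic above (all values in units of 1e-11,
# in the conventional (g-2)/2 form).
measured, sigma_meas = 116_592_061.00, 41.00   # combined Fermilab + E821
qed_ew, sigma_qed_ew = 116_584_872.53, 1.01    # QED + electroweak prediction

hadronic = measured - qed_ew
sigma_had = math.hypot(sigma_meas, sigma_qed_ew)  # uncertainties in quadrature

print(f"implied hadronic component: {hadronic:.2f} +/- {sigma_had:.2f}")
print(f"2-sigma range: {hadronic - 2*sigma_had:.2f} to {hadronic + 2*sigma_had:.2f}")
# implied hadronic component: 7188.47 +/- 41.01
# 2-sigma range: 7106.45 to 7270.49
```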

The hadronic QCD component is the sum of two parts: the hadronic vacuum polarization (HVP) component and the hadronic light-by-light (HLbL) component.

In the Theory Initiative analysis, the QCD amount is 6937(44), which is broken out as HVP = 6845(40) (a 0.6% relative error) and HLbL = 98(18) (a 20% relative error). But the stated uncertainty in the Theory Initiative prediction is almost surely greatly understated.

Fermilab (2021): 116,592,040(54)
Brookhaven's E821 (2006): 116,592,089(63)
Combined measurement: 116,592,061(41)
Theory Initiative calculation: 116,591,810(43)
BMW calculation: 116,591,954(55)
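These values can be cross-checked directly: converting each one to g itself confirms the nine-digit agreement noted above, and combining uncertainties in quadrature (assuming they are uncorrelated, as in the widely reported significance figures) quantifies the tension. A minimal sketch:

```python
import math

# Values listed above, in units of 1e-11, in the (g-2)/2 form.
values = {
    "Fermilab (2021)":      (116_592_040, 54),
    "Brookhaven E821":      (116_592_089, 63),
    "Combined measurement": (116_592_061, 41),
    "Theory Initiative":    (116_591_810, 43),
    "BMW":                  (116_591_954, 55),
}

for name, (a_mu, _) in values.items():
    g = 2 * (1 + a_mu * 1e-11)        # undo the "-2" and the division by two
    print(f"{name:22s} g = {g:.8f}")  # every entry prints as g = 2.00233184

exp, sigma_exp = values["Combined measurement"]
for name in ("Theory Initiative", "BMW"):
    pred, sigma_pred = values[name]
    tension = (exp - pred) / math.hypot(sigma_exp, sigma_pred)
    print(f"{name} vs experiment: {tension:.1f} sigma")
# Theory Initiative vs experiment: 4.2 sigma
# BMW vs experiment: 1.6 sigma
```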

This implies that the BMW total hadronic value is about 7082(55), a difference from the Theory Initiative calculation that is entirely or predominantly due to differences in the HVP portion of the calculation ("here we use ab initio quantum chromodynamics (QCD) and quantum electrodynamics simulations to compute the LO-HVP contribution."). The open access version of the BMW paper here states that the BMW leading order HVP value is 7075(55). The methodology of their result and approaches to improving it were discussed here in light of the new experimental muon g-2 measurement.

I'm not certain how the incorporation of the non-leading order HVP contributions and the HLbL portions of the QCD contribution gives rise to the BMW combined muon g-2 cited above. The following chart is from the BMW paper:


Recall also that on the day that the new muon g-2 experimental results were released, a "new calculation of the hadronic light by light contribution to the muon g-2 calculation was also released on arXiv . . . (and doesn't seem to be part of the BMW calculation). This increases the contribution from that component from 92(18) x 10^-11 . . . to 106.8(14.7) x 10^-11."

This boost of 14.8 in the overall QCD component isn't as big as the BMW HVP calculation's impact on it, but the two combined narrow the gap even more.

Does An Anomaly In A Proton's Reaction To Electric Fields Reveal BSM Physics?

Overview

When a new experimental anomaly that seems to contradict the Standard Model of Particle Physics fails to attract even ambulance chasing paper writers to propose new physics to explain it, it probably isn't really new physics. 

This is the situation in the case of an anomaly described in a new article in the journal Nature regarding how quarks within protons act in electromagnetic fields of particular strengths in an overall low energy system (the anomalous effect peaks at a four-momentum transfer squared of about 0.35 GeV², i.e. a momentum transfer of roughly 590 MeV).

The money chart of the paper, which shows the anomaly, is Figure 4 below:

But, for a variety of reasons, there is good reason to think that the anomaly observed, whatever its source, isn't beyond the Standard Model (BSM) physics.

The New Anomaly

New experiments seem to show that the quarks respond more than expected to an electric field pulling on them, physicist Nikolaos Sparveris and colleagues report October 19 in Nature. The result suggests that the strong force isn’t quite as strong as theory predicts.

It’s a finding at odds with the standard model of particle physics, which describes the particles and forces that combine to make up us and everything around us. The result has some physicists stumped about how to explain it — or whether to even try.

At the Thomas Jefferson National Accelerator Facility in Newport News, Va., the team probed protons by firing electrons at a target of ultracold liquid hydrogen. Electrons scattering off protons in the hydrogen revealed how the protons’ quarks respond to electric fields (SN: 9/13/22). The higher the electron energy, the deeper the researchers could see into the protons, and the more the electrons revealed about how the strong force works inside protons.

For the most part, the quarks moved as expected when electric interactions pulled the particles in opposite directions. But at one point, as the electron energy was ramped up, the quarks appeared to respond more strongly to an electric field than theory predicted they would.

But it only happened for a small range of electron energies, leading to a bump in a plot of the proton’s stretch.

“Usually, behaviors of these things are quite, let’s say, smooth and there are no bumps,” says physicist Vladimir Pascalutsa of the Johannes Gutenberg University Mainz in Germany.

Pascalutsa says he’s often eager to dive into puzzling problems, but the odd stretchiness of protons is too sketchy for him to put pencil to paper at this time. “You need to be very, very inventive to come up with a whole framework which somehow finds you a new effect” to explain the bump, he says. “I don’t want to kill the buzz, but yeah, I’m quite skeptical as a theorist that this thing is going to stay.”

It will take more experiments to get theorists like him excited about unusually stretchy protons, Pascalutsa says. He could get his wish if Sparveris’ hopes are fulfilled to try the experiment again with positrons, the antimatter version of electrons, scattered from protons instead.

From Science News, discussing primarily the following paper, whose abstract and citation are set forth below. 

The Paper and Its Abstract

The abstract of the new article in Nature states:

The visible world is founded on the proton, the only composite building block of matter that is stable in nature. Consequently, understanding the formation of matter relies on explaining the dynamics and the properties of the proton’s bound state. A fundamental property of the proton involves the response of the system to an external electromagnetic field. It is characterized by the electromagnetic polarizabilities that describe how easily the charge and magnetization distributions inside the system are distorted by the electromagnetic field. Moreover, the generalized polarizabilities map out the resulting deformation of the densities in a proton subject to an electromagnetic field. They disclose essential information about the underlying system dynamics and provide a key for decoding the proton structure in terms of the theory of the strong interaction that binds its elementary quark and gluon constituents. Of particular interest is a puzzle in the electric generalized polarizability of the proton that remains unresolved for two decades. Here we report measurements of the proton’s electromagnetic generalized polarizabilities at low four-momentum transfer squared. We show evidence of an anomaly to the behaviour of the proton’s electric generalized polarizability that contradicts the predictions of nuclear theory and derive its signature in the spatial distribution of the induced polarization in the proton. The reported measurements suggest the presence of a new, not-yet-understood dynamical mechanism in the proton and present notable challenges to the nuclear theory.
R. Li et al. Measured proton electromagnetic structure deviates from theoretical predictions. Nature (October 19, 2022). doi: 10.1038/s41586-022-05248-1.

Background Regarding QCD

The paper also provides a general background introduction to Quantum Chromodynamics (QCD) to give the paper context before it launches into its body text. 

Explaining how the nucleons—protons and neutrons—emerge from the dynamics of their quark and gluon constituents is a central goal of modern nuclear physics. The importance of the question arises from the fact that the nucleons account for 99% of the visible matter in the universe. Moreover, the proton holds a unique role of being nature’s only stable composite building block. 
The dynamics of quarks and gluons is governed by quantum chromodynamics (QCD), the theory of the strong interaction. The application of perturbation methods renders aspects of QCD calculable at large energies and momenta—namely at high four-momentum transfer squared (Q²)—and offers a reasonable understanding of the nucleon structure at that scale. Nevertheless, to explain the emergence of the fundamental properties of nucleons from the interactions of its constituents, the dynamics of the system have to be understood at long distances (or low Q²), where the QCD coupling constant α_s becomes large and the application of perturbative QCD is not possible. The challenge arises from the fact that QCD is a highly nonlinear theory, because the gluons—the carriers of the strong force—couple directly to other gluons. Here theoretical calculations can rely on lattice QCD, a space-time discretization of the theory based on the fundamental quark and gluon degrees of freedom, starting from the original QCD Lagrangian. 
An alternative path is offered by effective field theories, such as the chiral effective field theory, which use hadronic degrees of freedom and are based on the approximate and spontaneously broken chiral symmetry of QCD. Although steady progress has been made in recent years, we have yet to achieve a good understanding of how the nucleon properties emerge from the underlying dynamics of the strong interaction. To do this, the theoretical calculations require experimental guidance and confrontation with precise measurements of the system’s fundamental properties.
The conclusion states in the body text:
In conclusion, we have studied the proton’s response to an external electromagnetic field and its dependence on the distance scale within the system. We show evidence of a local enhancement in the proton’s electric generalized polarizability that the nuclear theory cannot explain. We provide a definitive answer to the existence of an anomaly in this fundamental property and we have measured with high precision the magnitude and the dynamical signature of this effect. The reported data suggest the presence of a dynamical mechanism in the system that is currently not accounted for in the theory. They pose a challenge to the chiral effective field theory, the prevalent effective theory for the strong interaction, and they serve as high-precision benchmark data for the upcoming lattice quantum chromodynamics calculations. 
The measurements of the proton’s electromagnetic generalized polarizabilities complement the counterpart of the spin-dependent generalized polarizabilities of the nucleon. Together, the two components of the generalized polarizabilities provide a puzzling picture of the nucleon’s dynamics that emerge at long-distance scales. 
The proton has the unique role of being nature’s only stable composite building block. Consequently, the observed anomaly in a fundamental system property comes with a unique scientific interest. It calls for further measurements so that the underlying dynamics can be mapped with precision and highlights the need for an improved theory so that a fundamental property of the proton can be reliably described.

Why Aren't New Physics Explanations Popular For This Kind Of Effect

Neither plain vanilla quantum electrodynamics (QED), which is the Standard Model theory of electromagnetism, nor quantum chromodynamics, which is the Standard Model theory of the strong force (the two principal Standard Model theories implicated in this experiment), is a popular part of fundamental physics in which to suggest beyond the Standard Model modifications. 

This is for opposite reasons. 

QED is validated at such extreme precision (often at parts per billion levels or better), with reasonably moderate amounts of calculation that have been exhaustively tested, that there is no wiggle room for significant new physics in the context of something as ordinary as a proton in an electric field.

QCD, meanwhile, is so hard to calculate with that one of perhaps half a dozen main workable operationalizations of it must be used in practice, and the precision of those calculations is so low (generally at the 10% to parts per thousand level) and so inconsistent between methods, that doing QCD calculations that reproduce real world observations reasonably well is almost more art than science. As a result, theoretical calculation uncertainties frequently swamp any proposed new physics effects, in a theory with few moving parts that a physicist could easily manipulate to explain observational anomalies.

So, even if there is an anomalous effect that is real, and not just due to systematic error or statistical variation, it is hard to say that it is because of new physics, as opposed to an oversimplification of true QCD dynamics made in the interest of actually being able to do QCD calculations.

Instead, theorists proposing new theories are far more fond of introducing new particles and forces beyond the three Standard Model forces, with their dozen kinds of fundamental fermions, three massive weak force bosons, the Higgs boson, photons, and gluons. 

But the proton is the most carefully studied composite particle made of quarks that exists. Introducing a new particle into its dynamics, when that particle has never been observed in any other context over many decades of ultra-precise experimental measurements of the proton's properties, is very hard to do without contradicting other experiments that should also have been affected by the same new particle but saw nothing.

Also, if theorists tweak any Standard Model force, the weak force, whose interactions require ten of the Standard Model's parameters to describe, is more attractive than the electromagnetic or strong forces, which have fundamentally very simple structures, even when the application of those forces to physical situations is complicated. 

But this experiment is a poor candidate to demonstrate a weak force effect, since it doesn't involve any kind of particle decay, and because the anomaly isn't small enough for the weak force to be a good candidate to explain it.

Tweaks to General Relativity, which is the state of the art theory of gravity, are also popular with theorists, but it isn't part of the Standard Model, and gravitational effects are negligible at the scale of a single proton.

Tuesday, October 25, 2022

Quote Of The Day

I have been Chair of the CWRU Department of Astronomy for over seven years now. Prof. Mihos served in this capacity for six years before that. No sane faculty member wants to be Chair; it is a service obligation we take on because there are tasks that need doing to serve our students and enable our research.
- Stacy McGaugh posting at Triton Station.

Academia is an area where the urge to "move up" into the direct management level position of department chair is not strong.

This fact is widely known by those in academia (incidentally, it also applies to the position of chief judge in most courts), and little known outside it.

More Evidence For MOND Or Something Like It

The paradigm in astrophysics and cosmology is that, in weak fields, general relativity can be approximated with minimal loss of accuracy by Newtonian gravity plus dark matter. 

The trouble is that, in circumstances where this paradigm is distinguishable from MOND, a simple phenomenological gravitational modification, what we see can be expressed entirely as a function of the distribution of ordinary matter. MOND has made many ex ante predictions that have been supported by later observations, while dark matter particle theories have generally failed to do so. And, where there is a distinction to be made, MOND tends to fit observations better than dark matter subject to Newtonian gravity at galactic scales. MOND underestimates the magnitude of "dark matter phenomena" in some galaxy cluster contexts, but a new paper published in MNRAS shows MOND outperforming the paradigm even in open clusters.
After their birth a significant fraction of all stars pass through the tidal threshold (prah) of their cluster of origin into the classical tidal tails. The asymmetry between the number of stars in the leading and trailing tails tests gravitational theory. All five open clusters with tail data (Hyades, Praesepe, Coma Berenices, COIN-Gaia 13, NGC 752) have visibly more stars within dcl = 50 pc of their centre in their leading than their trailing tail. Using the Jerabkova-compact-convergent-point (CCP) method, the extended tails have been mapped out for four nearby 600-2000 Myr old open clusters to dcl>50 pc. These are on near-circular Galactocentric orbits, a formula for estimating the orbital eccentricity of an open cluster being derived. 
Applying the Phantom of Ramses code to this problem, in Newtonian gravitation the tails are near-symmetrical. In Milgromian dynamics (MOND) the asymmetry reaches the observed values for 50 < dcl/pc < 200, being maximal near peri-galacticon, and can slightly invert near apo-galacticon, and the Küpper epicyclic overdensities are asymmetrically spaced. Clusters on circular orbits develop orbital eccentricity due to the asymmetrical spill-out, therewith spinning up opposite to their orbital angular momentum. 
This positive dynamical feedback suggests Milgromian open clusters to demise rapidly as their orbital eccentricity keeps increasing. Future work is necessary to better delineate the tidal tails around open clusters of different ages and to develop a Milgromian direct n-body code.
Pavel Kroupa, et al. (17 authors), "Asymmetrical tidal tails of open star clusters: stars crossing their cluster's prah challenge Newtonian gravitation" arXiv:2210.13472 (October 24, 2022) (published in MNRAS).

Progress is also being made on MOND in galaxy clusters. It suggests that some of the deficit in the magnitude of dark matter phenomena predicted by MOND in clusters is due to poor modeling of the distribution of the ordinary matter in the clusters.
A specific modification of Newtonian dynamics known as MOND has been shown to reproduce the dynamics of most astrophysical systems at different scales without invoking non-baryonic dark matter (DM). There is, however, a long-standing unsolved problem when MOND is applied to rich clusters of galaxies in the form of a deficit (by a factor around two) of predicted dynamical mass derived from the virial theorem with respect to observations. 
In this article we approach the virial theorem using the velocity dispersion of cluster members along the line of sight rather than using the cluster temperature from X-ray data and hydrostatic equilibrium. Analytical calculations of the virial theorem in clusters for Newtonian gravity+DM and MOND are developed, applying pressure (surface) corrections for non-closed systems. Recent calibrations of DM profiles, baryonic ratio and baryonic (β model or others) profiles are used, while allowing free parameters to range within the observational constraints. It is shown that solutions exist for MOND in clusters that give similar results to Newton+DM -- particularly in the case of an isothermal β model for β=0.55−0.70 and core radii rc between 0.1 and 0.3 times r(500) (in agreement with the known data). 
The disagreements found in previous studies seem to be due to the lack of pressure corrections (based on inappropriate hydrostatic equilibrium assumptions) and/or inappropriate parameters for the baryonic matter profiles.
M. Lopez-Corredoira, et al., "Virial theorem in clusters of galaxies with MOND" arXiv:2210.13961 (October 25, 2022) (accepted for publication in MNRAS).

European Genomes Show Evidence Of Black Death Driven Selection

Genes that increased someone's likelihood of surviving the Black Death in Europe were strongly selected for, and have been identified. The downside of the selection in favor of these genes is that traits which are good at preventing the Black Death also enhance one's risk for auto-immune diseases.
Infectious diseases are among the strongest selective pressures driving human evolution. This includes the single greatest mortality event in recorded history, the first outbreak of the second pandemic of plague, commonly called the Black Death, which was caused by the bacterium Yersinia pestis. This pandemic devastated Afro-Eurasia, killing up to 30–50% of the population.
To identify loci that may have been under selection during the Black Death, we characterized genetic variation around immune-related genes from 206 ancient DNA extracts, stemming from two different European populations before, during and after the Black Death. Immune loci are strongly enriched for highly differentiated sites relative to a set of non-immune loci, suggesting positive selection. 
We identify 245 variants that are highly differentiated within the London dataset, four of which were replicated in an independent cohort from Denmark, and represent the strongest candidates for positive selection. The selected allele for one of these variants, rs2549794, is associated with the production of a full-length (versus truncated) ERAP2 transcript, variation in cytokine response to Y. pestis and increased ability to control intracellular Y. pestis in macrophages. 
Finally, we show that protective variants overlap with alleles that are today associated with increased susceptibility to autoimmune diseases, providing empirical evidence for the role played by past pandemics in shaping present-day susceptibility to disease.
Jennifer Klunk et al., "Evolution of immune genes is associated with the Black Death" Nature (October 19, 2022) DOI: https://doi.org/10.1038/s41586-022-05349-x Hat tip to Razib Khan.
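For a rough sense of how strong this selection had to be, a standard back-of-the-envelope estimate converts an allele frequency shift across a single catastrophic generation into a selection coefficient. The sketch below uses the simple haploid/genic logit approximation with placeholder frequencies (illustrative only; the measured frequencies for rs2549794 are in the paper):

```python
import math

# Toy estimate of a selection coefficient from a one-generation allele
# frequency shift across the Black Death. The frequencies below are
# illustrative placeholders, not the paper's measured values.
p_before = 0.40   # hypothetical pre-plague frequency of the protective allele
p_after = 0.45    # hypothetical post-plague frequency

def logit(p):
    return math.log(p / (1 - p))

# Under genic selection, logit(p) increases by ln(1+s) (~ s when small)
# per generation. The Black Death was essentially a one-generation event.
s = logit(p_after) - logit(p_before)
print(f"implied selection coefficient s ~ {s:.2f}")
# s ~ 0.2: enormous by evolutionary standards, where sustained positive
# selection is typically s << 0.01.
```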

The Meson That Wasn't There

Spoiler alert: A paper describing a predicted kind of meson resonance that isn't observed doesn't provide any solid hypotheses to explain why this is the case, so don't expect a feel good conclusion at the end of this post.

Quantum chromodynamics, the Standard Model theory of the strong force, in at least some ways of operationalizing the theory to a level where one can do calculations with it, predicts that mesons with a quark and an antiquark as valence quarks generally have related higher-mass states whose properties can be predicted theoretically. 

But efforts to operationalize QCD struggle to understand scalar mesons (spin-0, even parity), axial vector mesons (spin-1, even parity), a.k.a. pseudo-vector mesons, and certain higher spin mesons, which are observed but don't flow naturally and obviously from a simple-minded valence quark model.

The charged rho meson, which is a vector (i.e. spin-1) meson with an anti-up quark and a down quark as valence quarks and a measured mass of 775.11 ± 0.34 MeV, has several predicted excited states. 

One of these predicted related states is a spin-2 (i.e. tensor) meson with negative charge and odd parity, but a higher mass than the 775 MeV ground state, known as the ρ2 meson. The authors of this paper calculate that it should have a mass of about 1663 MeV. 

But, this predicted rho meson state hasn't been observed, even though its predicted mass is at an energy scale that has been studied by many experiments. 

Indeed, there is a crowd of well established meson resonances in the right mass range that have been observed, not all of which are well understood, but none of the observed resonances is an obvious match to the predicted ρ2 meson that the researchers are trying to reconcile with the experimental evidence. 

Two candidate resonances with the right quantum numbers have been identified, but they are significantly more massive than the predicted values.

As the paper below explains in its introduction:
The PDG contains various mesons denoted with the letter ρ. These are the isovector resonances with quantum numbers of isospin (I = 1), parity (P = −1), and charge conjugation (C = −1). For instance: the vector meson ρ(770) with quantum numbers JPC = 1−−, the excited vector mesons ρ(1450) and ρ(1700), and the tensor meson ρ3(1690) with quantum numbers JPC = 3−−. 

Despite the prediction of the ρ2 in the Relativistic Quark model, it is still missing experimentally. We only have the following data, which were observed by different experimental groups and listed as “further states” in the PDG: ρ2(1940) and ρ2(2225), with the total decay widths Γtot[ρ2(1940)] ≃ 155 ± 40 MeV and Γtot[ρ2(2225)] ≃ 335 +100/−50 MeV accordingly. 

Axial-tensor mesons are studied in recent LQCD [Lattice QCD] simulations, where the authors consider the mass of the ρ2 to be about 1.7 GeV, like the ρ3(1690). We present the results about the missing ρ2 within a chiral effective model, the so-called extended Linear Sigma Model (eLSM).
Part of the problem is that different ways of operationalizing QCD, so that QCD calculations can be done, produce significantly different predicted decay products for the ρ2 meson, as the paper explains with this chart:


Still, despite these discrepancies, all of the predicted decays of the ρ2 meson are very different from the predicted and observed decays of the well established ρ3(1690) meson, so confusion between the two isn't likely.

The conclusion of the paper states:
We have studied the ρ2 axial-tensor meson, chiral partner of the tensor meson a2(1320), in the framework of a chiral model for low-energy QCD. We predict its mass to be around 1.663 GeV, similar to the Relativistic Quark model prediction. Because of the chiral symmetry, the parameter determined in the tensor sector allows us to make predictions for the unknown ground-state axial-tensor resonance. The effective model fitting to the LQCD results is also presented.
The paper and its abstract are:
The ρ2 meson is the missing isovector member of the meson nonet with the quantum numbers JPC=2−−. It belongs to the class of ρ-mesons such as the vector meson ρ(770), the excited vector ρ(1700) and the tensor ρ3(1690). Yet, despite the rich experimental and theoretical studies for other ρ-meson states, no resonance that could be assigned to the ρ2 meson has been measured. In this note, we present the results for the mass and dominant decay channels of the ρ2 meson within the extended Linear Sigma Model.
Shahriyar Jafarzade, "A phenomenological note on the missing ρ2 meson" arXiv:2210.12421 (October 22, 2022) (Contribution to the 41st International Conference on High Energy Physics - ICHEP2022, 6-13 July, 2022, Bologna, Italy).

Particle Dark Matter Parameter Fits

A major collaboration in a presentation at a dark matter conference rules out Dirac fermion dark matter for all dark matter particle masses below 100 GeV, even after considering only some of the relevant constraints on that parameter space. 

It doesn't rule out Dirac fermion dark matter at higher masses, but this parameter space is much more heavily constrained than the paper suggests due to an absence of quantum effects present in warm dark matter theories, fuzzy dark matter theories, and axion-like particle theories, due to galaxy scale dynamics, and due to microlensing limitations, among other considerations.

This said, their analysis isn't terribly convincing, because it seems to actually be limited to thermal freeze-out dark matter, and because it doesn't adequately account for dark matter that has no non-gravitational cross-section with ordinary matter and also doesn't annihilate. A dark matter candidate of this kind evades all six of the constraints considered and is basically the paradigmatic case of a LambdaCDM dark matter particle.

The chart below compares the new physics scale Lambda and the dark matter particle mass m(χ).
In this proceeding, we present results from a global fit of Dirac fermion dark matter (DM) effective field theory (EFT) based on arXiv:2106.02056 using the GAMBIT framework. Here we show results only for the dimension-6 operators that describe the interactions between a gauge-singlet Dirac fermion and Standard Model quarks. Our global fit combines the latest constraints from Planck, direct and indirect DM detection, and the LHC. For DM mass below 100 GeV, it is impossible to simultaneously satisfy all constraints while maintaining the EFT validity at high energies. For higher masses, however, large regions of parameter space remain viable where the EFT is valid and saturates the observed DM abundance.
Ankit Beniwal (on behalf of the GAMBIT Collaboration), "Global Fits of Dirac Dark Matter Effective Field Theories" arXiv:2210.12172 (October 21, 2022) (Contribution to the proceeding of 14th International Conference on Identification of Dark Matter, Vienna, Austria (18-22 July, 2022). Submitted to SciPost Physics Proceedings).

The constraints considered are as follows:
1. Direct detection: The Wilson coefficients are evaluated at µ = 2 GeV using DirectDM and matched onto a set of non-relativistic EFT operators. These are used in DDCalc v2.2.0 to compute predicted event rates and corresponding likelihoods for XENON1T, LUX (2016), PandaX (2016) and (2017), CDMSlite, CRESST-II and CRESST-III, PICO-60 (2017) and (2019), and DarkSide-50 experiments. 

2. Relic density: Using CalcHEP v3.6.27, GUM and DarkSUSY v6.2.2, we compute the DM relic density via a thermal freeze-out scenario. Both cases where χ makes up all (fχ ≡ Ωχ/0.12 ≈ 1) or a sub-component (fχ ≤ 1) of the total DM abundance are studied. 

3. Fermi-LAT searches for gamma rays: Observations of dwarf spheroidal galaxies of the Milky Way place strong constraints on the DM annihilation rate. Using Fermi-LAT searches for gamma rays from DM annihilation in dwarfs, we use the gamLike v1.0.1 package within DarkBit to compute the resulting likelihood function. 

4. Solar capture: Neutrinos from DM annihilation in the Sun can be detected at the IceCube experiment. Using Capt’n General, we compute the DM capture rate in the Sun and utilise the nulike package to obtain an event-by-event level likelihood for the 79-string IceCube data.

5. Energy injection bounds: Using the CosmoBit  module of GAMBIT, we compute bounds on our model based on predicted rates of DM annihilation in the early universe. These annihilations lead to energy injection and observable effects in the cosmic microwave background. 

6. ATLAS and CMS monojet searches: By combining the ColliderBit module of GAMBIT with FeynRules v2.0, MadGraph_aMC@NLO v2.6.6, Pythia v8.1 and Delphes v3.4.2, we compute a likelihood based on monojet searches performed at the ATLAS and CMS experiments.
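For context on the relic density constraint (item 2 above), the standard thermal freeze-out rule of thumb ties the relic abundance to the annihilation cross section. A minimal sketch of that approximation (the collaboration actually computes this numerically with CalcHEP and DarkSUSY; the rule of thumb below is only a rough stand-in):

```python
# Standard freeze-out rule of thumb: Omega_chi h^2 ~ 3e-27 cm^3/s / <sigma v>.
# A rough approximation only; GAMBIT's pipeline computes this numerically.
OMEGA_H2_OBSERVED = 0.12  # Planck dark matter abundance

def relic_abundance(sigma_v_cm3_per_s):
    return 3e-27 / sigma_v_cm3_per_s

# The canonical "WIMP miracle" cross section roughly saturates the
# observed abundance; larger <sigma v> leaves chi as a sub-component.
sv = 3e-26  # cm^3/s, the canonical thermal cross section
omega = relic_abundance(sv)
print(f"Omega h^2 ~ {omega:.2f}, fraction f_chi ~ {omega / OMEGA_H2_OBSERVED:.2f}")
# Omega h^2 ~ 0.10, fraction f_chi ~ 0.83
```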

Friday, October 21, 2022

Another Supposed Hint Of BSM Physics That Isn't One

Like every arguable experimental anomaly these days, the large number of high energy photons from GRB 221009A observed by LHAASO has given rise to many ambulance chasing papers purporting to explain these observations with "new physics" (which I do not dignify with citations at this blog). 

But, as this paper explains, these astronomical observations can be explained without physics beyond the core theory of the Standard Model and General Relativity. These results are, at most, a slightly more than two sigma statistical fluke, in circumstances where the reduction for look elsewhere effects needs to be significant.

Papers like this one, which work hard to find a plausible explanation for seeming anomalies with well-established and proven "old physics," should be encouraged, and should weigh far more heavily in hiring and promotions than ambulance chasing papers (although the poor English grammar in the title, written by non-native English speakers, should be remedied prior to publication).
It is reported that the Large High Altitude Air Shower Observatory (LHAASO) observed thousands of very-high-energy photons up to ∼18 TeV from GRB 221009A. We study the survival rate of these photons by considering the fact that they are absorbed by the extragalactic background light. 
By performing a set of 10^6 Monte-Carlo simulations, we explore the parameter space allowed by current observations and find that the probability of predicting that LHAASO observes at least one photon of 18 TeV from GRB 221009A within 2000 seconds is 4-5%. 
Hence, it is still possible for the standard physics to interpret LHAASO's observation in the energy range of several TeV. Our method can be straightforwardly generalized to study more data sets of LHAASO and other experiments in the future.
Zhi-Chao Zhao, Yong Zhou, Sai Wang, "Standard physics is still capable to interpret ∼18 TeV photons from GRB~221009A" arXiv:2210.10778 (October 16, 2022).
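The statistical logic here is ordinary Poisson counting: given an expected number μ of roughly 18 TeV photons surviving extragalactic background light absorption, the chance of detecting at least one follows immediately. A minimal sketch (the values of μ below are illustrative, not taken from the paper, whose 4-5% figure comes from Monte Carlo exploration of the allowed parameter space):

```python
import math

# Poisson probability of catching at least one ~18 TeV photon, given an
# expected number mu of photons surviving extragalactic background light
# (EBL) absorption. The mu values are illustrative placeholders.
def p_at_least_one(mu):
    return 1 - math.exp(-mu)

for mu in (0.01, 0.05, 0.1):
    print(f"mu = {mu:.2f} -> P(>=1 photon) = {p_at_least_one(mu):.1%}")
# An expected count of mu ~ 0.05 corresponds to roughly the 4-5%
# probability the paper reports.
```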

Thursday, October 20, 2022

Genetic Inheritance Patterns Vary Mostly In Just Two Ways

The genetic inheritance patterns of complex traits can be summarized with just two numbers per complex trait. 

Genome-wide association studies have revealed that the genetic architectures of complex traits vary widely, including in terms of the numbers, effect sizes, and allele frequencies of significant hits. However, at present we lack a principled way of understanding the similarities and differences among traits. Here, we describe a probabilistic model that combines mutation, drift, and stabilizing selection at individual sites with a genome-scale model of phenotypic variation. In this model, the architecture of a trait arises from the distribution of selection coefficients of mutations and from two scaling parameters. We fit this model for 95 diverse, highly polygenic quantitative traits from the UK Biobank. Notably, we infer similar distributions of selection coefficients across all these traits. This shared distribution implies that differences in architectures of highly polygenic traits arise mainly from the two scaling parameters: the mutational target size and heritability per site, which vary by orders of magnitude across traits. When these two scale factors are accounted for, the architectures of all 95 traits are nearly identical.
Yuval B. Simons, et al., "Simple scaling laws control the genetic architectures of human complex traits" bioRxiv 2022.10.04.509926 (October 7, 2022).

Another Experiment Disfavors Sterile Neutrinos

As attractive as sterile neutrinos are as an easy explanation for any anomaly that crops up in a neutrino oscillation measurement, the idea doesn't hold water on closer examination.

Most of the LSND anomaly parameter space for a sterile neutrino is ruled out (except for roughly 0.2 to 2 eV sterile neutrinos with sine squared two theta parameters of 0.01 to 0.001, in scenarios that are not "appearance only"). Some of the gallium anomaly sterile neutrino parameter space (almost all of it in a "disappearance only" scenario), and some of the already tiny remaining Neutrino-4 parameter space (which has to be an almost exactly 2.8 eV sterile neutrino with a sine squared two theta mixing parameter of no more than about 0.3), are also excluded.

The sterile neutrino parameter space suggested by the reactor anomaly does not overlap with the sterile neutrino parameter space suggested by the Neutrino-4 anomaly, disfavoring both of these hints of new physics.

The conclusion of the paper notes that: "the MicroBooNE BNB Run 1–3 data show no evidence of sterile neutrino oscillations and are found to be consistent with the 3ν hypothesis within 1σ significance."

The introduction to the paper explains the state of the research to date related to the anomalies at other experiments that have given rise to sterile neutrino explanations:
The discoveries of solar and atmospheric neutrino oscillations have motivated a broad experimental program dedicated to studying neutrino mixing. While most measurements are consistent with three-flavor (3ν) neutrino oscillations as described by the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) formalism, several experimental anomalies hint at the existence of a sterile neutrino with a mass at the eV scale. 
The SAGE and GALLEX experiments, and more recently, the BEST experiment, have observed lower than expected ν(e) rates from radioactive sources, which is known as the gallium anomaly. 
Reactor neutrino experiments have measured lower anti-ν(e) rates than the expectation based on reactor anti-neutrino flux calculations. This observation is referred to as the reactor anomaly. 
An oscillation signal in the reactor anti-ν(e) energy spectrum over distances of a few meters was reported by the Neutrino-4 collaboration. 
In addition to these observed anti-ν(e) deficits, excesses of anti-ν(e)-like events were also observed in some anti-ν(µ) dominated accelerator neutrino experiments. The LSND collaboration observed an anomalous excess of anti-ν(e)-like events, and the MiniBooNE collaboration observed an excess of low-energy electron-like events. 

These anomalies are in strong tension with other experimental results within the 3(active) + 1(sterile) oscillation framework as seen in a global fit of the data. 
In addition, recent experimental measurements and improvements of the reactor anti-neutrino flux calculation lead to a plausible resolution of the reactor anti-neutrino anomaly. 
The Neutrino-4 anomaly is largely excluded by the results from other very short baseline reactor neutrino experiments, for example, PROSPECT, STEREO, DANSS, NEOS, although it is consistent with the gallium anomaly.

The parameter space to the right of the red lines is excluded at a 95% confidence level by the latest MicroBooNE data reported in the paper below. The shaded areas are parameters for a sterile neutrino in a 3+1 model that are not ruled out by the referenced prior experiments due to anomalies in their data.

We present a search for eV-scale sterile neutrino oscillations in the MicroBooNE liquid argon detector, simultaneously considering all possible appearance and disappearance effects within the 3+1 active-to-sterile neutrino oscillation framework. We analyze the neutrino candidate events for the recent measurements of charged-current νe and νμ interactions in the MicroBooNE detector, using data corresponding to an exposure of 6.37×10^20 protons on target from the Fermilab booster neutrino beam. We observe no evidence of light sterile neutrino oscillations and derive exclusion contours at the 95% confidence level in the plane of the mass-squared splitting Δm²41 and the sterile neutrino mixing angles θμe and θee, excluding part of the parameter space allowed by experimental anomalies. Cancellation of νe appearance and νe disappearance effects due to the full 3+1 treatment of the analysis leads to a degeneracy when determining the oscillation parameters, which is discussed in this paper and will be addressed by future analyses.
MicroBooNE collaboration, "First constraints on light sterile neutrino oscillations from combined appearance and disappearance searches with the MicroBooNE detector" arXiv:2210.10216 (October 18, 2022).
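For reference, the appearance and disappearance probabilities being fit here take a simple two-flavor-like form at short baselines in the 3+1 framework. A minimal sketch using the standard formulas (the oscillation parameter values are illustrative, chosen near the Neutrino-4-style region discussed above, with MicroBooNE's approximate 0.47 km baseline and 0.8 GeV mean beam energy):

```python
import math

# Two-flavor-like 3+1 short-baseline oscillation probabilities
# (standard formulas: L in km, E in GeV, dm2 = Delta m^2_41 in eV^2;
# sin^2(2 theta_mue) = 4 |U_e4|^2 |U_mu4|^2).
def p_appear(sin2_2theta_mue, dm2, L_km, E_GeV):
    return sin2_2theta_mue * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2

def p_disappear(sin2_2theta_ee, dm2, L_km, E_GeV):
    return 1 - sin2_2theta_ee * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2

# Illustrative parameters only: a ~2.8 eV sterile neutrino of the kind
# Neutrino-4 favors, i.e. dm2 ~ (2.8)^2 ~ 7.8 eV^2.
dm2, L, E = 7.8, 0.47, 0.8
print(f"P(numu -> nue) = {p_appear(0.01, dm2, L, E):.4f}")
print(f"P(nue -> nue)  = {p_disappear(0.3, dm2, L, E):.3f}")
```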

Monday, October 17, 2022

Wide Binary Star Tests Of Modified Gravity

Wide binary systems (i.e. stars in gravitationally bound binary star systems with a large separation) should behave in a non-Newtonian fashion in modified gravity theories. But, in LambdaCDM and other dark matter particle halo theories, the dark matter halos should be too large to materially change the dynamics of wide binary systems. 

So, wide binaries are a good data set for distinguishing the two approaches to phenomena normally attributed to dark matter, and there are at least 9,000 such binary systems that have been detected by GAIA. But the data aren't yet sufficiently precise, because they can't rule out that what appear to be binary systems are really three or four star systems in which not all of the stars have been seen. Fortunately, progress is being made in identifying false positive binary systems in order to allow the dynamics of wide binaries to be rigorously studied with a large data set.
Several recent studies have shown that velocity differences of very wide binary stars, measured to high precision with GAIA, can potentially provide an interesting test for modified-gravity theories which attempt to emulate dark matter; in essence, MOND-like theories (with external field effect included) predict that wide binaries (wider than ∼7 kAU) should orbit ∼15% faster than Newtonian for similar orbit parameters; such a shift is readily detectable in principle in the sample of 9,000 candidate systems selected from GAIA EDR3 by Pittordis and Sutherland (2022). However, the main obstacle at present is the observed "fat tail" of candidate wide-binary systems with velocity differences at ∼1.5−6× circular velocity; this tail population cannot be bound pure binary systems, but is likely to be dominated by triple or quadruple systems with unresolved or undetected additional star(s).

While this tail can be modelled and subtracted, obtaining an accurate model for the triple population is crucial to obtain a robust test for modified gravity. Here we explore prospects for observationally constraining the triple population: we simulate a population of hierarchical triples "observed" as in PS22 at random epochs and viewing angles; then evaluate various possible methods for detecting the third star, including GAIA astrometry, RV drift, and several imaging methods from direct Rubin images, speckle imaging and coronagraphic imaging. Results are encouraging, typically 90 percent of the triple systems in the key regions of parameter space are detectable; there is a moderate "dead zone" of cool brown-dwarf companions at ∼25−100 AU separation which are not detectable with any of our baseline methods. A large but feasible observing campaign can clarify the triple/quadruple population and make the gravity test decisive.
Dhruv Manchanda, Will Sutherland, Charalambos Pittordis, "Wide Binaries as a Modified Gravity test: prospects for detecting triple-system contamination" arXiv:2210.07781 (October 14, 2022) (submitted to Open Journal of Astrophysics).

The introduction to the paper explains:
A number of recent studies have shown that velocity differences of wide stellar binaries offer an interesting test for modified-gravity theories similar to MoND, which attempt to eliminate the need for dark matter (see e.g. Hernandez et al. (2012a), Hernandez et al. (2012b), Hernandez et al. (2014), Matvienko & Orlov (2015), Scarpa et al. (2017) and Hernandez (2019)). Such theories require a substantial modification of standard GR below a characteristic acceleration threshold a0 ∼ 1.2×10^−10 m s^−2 (see review by Famaey & McGaugh (2012)). A key advantage of wide binaries is that at separations > 7 kAU, the relative accelerations are below this threshold, so MoND-like theories predict significant deviations from GR; while wide binaries should contain negligible dark matter, so DM theories predict no change from GR/Newtonian gravity. Thus in principle the predictions of DM vs modified gravity in wide binaries are unambiguously different, unlike the case for galaxy-scale systems where the DM distribution is uncertain. 
Wide binaries in general have been studied since the 1980s (Weinberg et al. 1987; Close et al. 1990), but until recently the precision of ground-based proper motion measurements was a serious limiting factor: wide binaries could be reliably selected based on similarity of proper motions, see e.g. Yoo et al. (2004), Lépine & Bongiorno (2007), Kouwenhoven et al. (2010), Jiang & Tremaine (2010), Dhital et al. (2013), Coronado et al. (2018). However, the typical proper motion precision ∼ 1 mas yr^−1 from ground-based or Hipparcos measurements was usually not good enough to actually measure the internal velocity differences, except for a limited number of nearby systems. 
The launch of the GAIA spacecraft (Gaia Collaboration 2016) in 2014 offers a spectacular improvement in precision; the proper motion precision of order 30 µas yr^−1 corresponds to transverse velocity precision 0.0284 km s^−1 at distance 200 parsecs, around one order of magnitude below wide-binary orbital velocities, so velocity differences can be measured to good precision over a substantial volume; and this will steadily improve with future GAIA data extending eventually to a 10-year baseline. Recent studies of WBs from GAIA include e.g. El-Badry et al. (2021) and Hernandez et al. (2022). 
In earlier papers in this series, Pittordis & Sutherland (2018) (hereafter PS18) compared simulated WB orbits in MoND versus GR, to investigate prospects for the test in advance of GAIA DR2. This was applied to a sample of candidate WBs selected from GAIA DR2 data by Pittordis & Sutherland (2019) (hereafter PS19), and an expanded sample from GAIA EDR3 by Pittordis & Sutherland (2022) (hereafter PS22). To summarise results, simulations show that (with MoND external field effect included), wide binaries at ≳ 10 kAU show orbital velocities typically 15 to 20 percent faster in MOND than GR, at equal separations and masses. This leads to a substantially larger fraction of “faster” binaries with observed velocity differences between 1.0 to 1.5 times the Newtonian circular-orbit value. In Newtonian gravity, changing the eccentricity distribution changes the shape of the distribution mainly at lower velocities, but has little effect on the distribution at the high end from 1.0 to 1.5 times circular velocity. Therefore, the predicted shift from MOND is distinctly different from changing the eccentricity distribution within Newtonian gravity; so given a large and pure sample of several thousand WBs with precise 2D velocity difference measurements, we could decisively distinguish between GR and MOND predictions. 
The main limitation at present is that PS19 and PS22 showed the presence of a “fat tail” of candidate binaries with velocity differences ∼ 1.5 to 6× the circular-orbit velocity; these systems are too fast to be pure bound binaries in either GR or MOND, and a likely explanation (Clarke 2020) is higher-order multiples e.g. triples where either one star in the observed “binary” is itself an unresolved closer binary, or the third star is at resolvable separation but is too faint to be detected by GAIA; the third star on a closer orbit thus substantially boosts the velocity difference of the two observed stars in the wide “binary”. 
In PS22 we made a simplified model of this triple population, then fitted the full distribution of velocity differences for WB candidates using a mix of binary, triple and flyby populations. These fits found that GR is significantly preferred over MOND if the rather crude PS22 triple model is correct, but we do not know this at present. Allowing much more freedom in the triple modelling is computationally expensive due to many degrees of freedom, and is likely to lead to significant degeneracy between gravity modifications and varying the triple population. Therefore, observationally constraining the triple population, or eliminating most of it by additional observations, is the next key step to make the WB gravity test more secure. 
In this paper we explore prospects for observationally constraining the triple population: we generate simulated triple systems “observed” at random epochs, inclinations and viewing angles, and then test whether the presence of the third star is detectable by any of various methods including direct, speckle or coronagraphic imaging; radial velocity drift; or astrometric non-linear motion in the future GAIA data; we see below that prospects are good, in that 80 to 95% of triple systems in the PS22 sample should be potentially detectable as such by at least one of the methods.
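
To put concrete numbers on the precision claims in the quoted passage, here is a minimal back-of-the-envelope Python sketch (my own check, not anything from the paper; the 2 solar mass, 10 kAU example binary is an illustrative assumption). It reproduces both the quoted transverse velocity precision and the rough size of wide-binary orbital velocities:

# Transverse velocity implied by a proper motion at a given distance:
# v [km/s] = 4.74 x mu [arcsec/yr] x d [pc].
AU_KM = 1.495978707e8               # astronomical unit in km
YEAR_S = 3.15576e7                  # Julian year in seconds
KMS_PER_ARCSEC_PC = AU_KM / YEAR_S  # ~4.74 km/s per (arcsec/yr) at 1 pc

def transverse_velocity_kms(pm_arcsec_per_yr: float, distance_pc: float) -> float:
    return KMS_PER_ARCSEC_PC * pm_arcsec_per_yr * distance_pc

def circular_velocity_kms(total_mass_msun: float, separation_au: float) -> float:
    # Newtonian circular-orbit velocity; 29.78 km/s is Earth's speed at 1 AU.
    return 29.78 * (total_mass_msun / separation_au) ** 0.5

# GAIA-era precision: 30 microarcsec/yr at 200 pc -> ~0.0284 km/s.
print(transverse_velocity_kms(30e-6, 200.0))
# A hypothetical 2 solar mass binary at 10 kAU orbits at ~0.42 km/s,
# roughly an order of magnitude above that precision, as the paper says.
print(circular_velocity_kms(2.0, 10_000.0))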

Should Wide Binaries Be Different In Deur's Analysis?

Quoting from the sidebar:

How strong is the gravitational self-interaction? 

This is a function, roughly speaking, of system mass and system size:

Near a proton GMₚ/rₚ = 4×10⁻³⁸ with Mₚ the proton mass and rₚ its radius. ==> Self-interaction effects are negligible. . . . 

For a typical galaxy: the magnitude of the gravity field is proportional to GM/size_system, which is approximately equal to 10⁻³.
A binary star system has about 10¹¹ times fewer stars than the Milky Way galaxy, and star count can serve as a proxy for mass.

There are about 63,241 astronomical units (AU) in a light year. A wide binary star system is defined here by a separation of 7 kAU (kilo-astronomical units) or more, which is about 0.11 light years. The Milky Way galaxy is about 87,400 ± 3,590 light years across, so the separation of wide binary stars is up to about 10⁶ times smaller than the size of the Milky Way galaxy.

So, the gravitational self-interaction in a binary star system at this minimum separation is about 10⁻⁸, compared to about 10⁻³ in a typical galaxy. 

Even allowing for stars ten times as massive as average, the figure at this separation would be about 10⁻⁷, which is 10,000 times weaker than in a typical galaxy, and the effect would be weaker still in binary star systems wider than 7 kAU. 
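
This scaling argument reduces to a few lines of arithmetic. The following Python sketch is my own restatement of it (the round-number ratios are the ones used above, not anything Deur published in this form):

# Gravitational self-interaction strength scales roughly as GM/size.
galaxy_strength = 1e-3  # GM/size for a typical galaxy, per the sidebar
mass_ratio = 1e-11      # a binary has ~1e11 times fewer stars (a mass proxy)
size_ratio = 1e-6       # a 7 kAU separation is ~1e6 times smaller than the galaxy

# Scale the galactic value down by mass and up by the smaller size.
binary_strength = galaxy_strength * mass_ratio / size_ratio
print(binary_strength)  # ~1e-8, versus ~1e-3 for a typical galaxy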

So Deur's approach would not predict a noticeable effect in a binary star system, unless the two-point geometry of such a system (which resembles the geometry that dominates in galaxy clusters and in the observable universe as a whole, as opposed to the disk geometry of spiral galaxies) had that effect.


Even if that geometry increased the self-interaction effect a hundredfold (more than the roughly 40× difference between the inferred dark matter proportions in galaxy clusters and in spiral galaxies), and even if the stars in question had masses ten times those of average Milky Way stars, the self-interaction of gravitational fields effect would still be only about one percent of the effect size in galaxies (about 0.1% to 0.15% in the quantity measured in the proposed analysis, which is much less than the measurement errors involved).  
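
Continuing the arithmetic sketch above for this deliberately generous scenario (again my own illustration; the hundredfold geometric boost is an extreme assumption, not a derived figure):

binary_strength = 1e-8                # baseline from the scaling above
boosted = binary_strength * 100 * 10  # hypothetical geometry x100, stellar mass x10
print(boosted / 1e-3)                 # ~0.01: still only ~1% of the galactic effect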

So, unlike MOND, which predicts a significant wide binary effect from gravity modification, Deur's approach does not appear to predict significant non-Newtonian behavior from the self-interaction of gravitational fields in wide binary star systems. Deur's model performs better than MOND against the tentative data so far.

The outcome of the study of wide binary stars should be a good way to distinguish between the predictions of MOND and the predictions of Deur's approach.

Paleo-Formosans

A new study has found human remains in Taiwan from 6,000 years ago that support the existence of a Paleo-Formosan Negrito people on the island, who were very similar to the Negrito people of the Northern Philippines and who may have persisted in relict populations in the island's remote mountains until the late 1800s. 

This doesn't precisely contradict the existing paradigm of Formosan prehistory, although it does expand and solidify it.

The study did not recover ancient DNA. This post provides some context to the find and then provides the abstract to the article and a citation to it.

Modern Taiwanese People

Most people in Taiwan (the island of Formosa) are ethnically Chinese. Many of the Chinese people who live there are Nationalists, politically to the right of the Communists, who migrated there when the Chinese Communist Party won the 1946-1949 civil war between the Nationalists and the Communists, somewhat like the old regime Cubans who migrated to Florida when Castro turned Cuba into a communist regime.

The ethnically Chinese people of Formosa marginalized and demographically overwhelmed the island's indigenous farmers and fishers, who include an ethnic grouping ancestral to the Austronesian people.

The Austronesian People

In historical linguistics and anthropology, the focus is on the "indigenous" people of Taiwan, who come from several different linguistic groups, one of which spoke the parent language of the Austronesian languages now spoken in Oceania, much of Southeast Asia, and Madagascar following an expansion that began around 2000 BCE. (The expansion had run its course, after many intermediate steps, by about 1000 CE; the linguistically Austronesian people of Madagascar, for example, derive from a specific narrow geographic area of the island of Borneo, in what is now Indonesia.) 

In island Southeast Asia and Oceania, as well as in Madagascar, the first wave of farmers were linguistically Austronesian, a language family that has been traced to a particular tribe of indigenous post-Negrito Formosans, although Austronesian languages are also spoken in some coastal areas of mainland Southeast Asia. The Austronesians encountered, sometimes replaced, and sometimes admixed with the previous Papuan and Negrito Southeast Asians.

These indigenous people of Taiwan were farming and fishing people who migrated there from what is now Southern China in the late Neolithic era or the Bronze Age. Thus, the proto-Austronesians in turn originate in the South Chinese Neolithic rice farming revolution, as do essentially all other pre-European colonial era waves of migration to Southeast Asia.

The Austroasiatic People And Subsequent Waves Of Migration To Southeast Asia

In Southern mainland China, mainland Southeast Asia, and eastern India (the Munda people), the first wave of farmers were linguistically Austroasiatic, a language family that likewise originates in the South Chinese rice farming Neolithic revolution. There were also later waves of farmer migration from South China, most recently the Thai people.

The Austroasiatic people (and possibly the Austronesian people as well) expanded away from South China in part because of southward migration from the original North Chinese millet farming Neolithic revolution, which is the source of the Han Chinese ethnicity that has since become predominant in most of mainland China, with substrate influences from Southern Chinese peoples who were mostly peacefully integrated into the Han Chinese political and linguistic sphere.

The Negrito People Of Asia

But before these indigenous people arrived, there was a Negrito population in Taiwan closely related to the Negrito hunter-gatherer populations of Southeast Asia. The Taiwanese Negrito people were particularly similar to the Negrito people who still exist in relict populations in the Northern Philippines. 

Negritos are dark skinned (hence the name) and short in stature, with at least ancestral roots in terrestrial hunter-gatherer populations, but they are fully modern human and are no more closely related to dark skinned Africans than any other Eurasian or New World population is. Their dark skin and small size are examples of convergent evolution, since those traits were fitness enhancing in the tropical Southeast Asian environments where they lived. In mainland Southeast Asia, the prehistoric peoples who gave rise to modern Negritos are called "Hoabinhian hunter-gatherers". 

Negritos have genetic affinities with the Onge hunter-gatherers of the Andaman Islands and with the pre-Neolithic people of Southern India, whose ancestry is an important component of the genetic makeup of the modern people of Southern India (with the proportion being largest among those of lower caste, "untouchable", or "tribal" ancestry, the latter two groups being socio-economically below lower caste Indians), although almost no one in Southern India today has this ancestry exclusively, due to Bronze Age admixture with "Ancestral North Indians".

In the Neolithic era and the metal ages, most Negritos in Formosa, mainland China, Southeast Asia, and India were replaced in the prime territory for human habitation by food producing farmers (a fate shared by hunter-gatherer populations almost everywhere), with relict populations persisting in marginal deep jungles and in mountains that were not as suitable for herding and farming; many transitioned from pure terrestrial hunting and gathering to herding.

The Negrito people are very distinct physically from the indigenous Jomon people of Japan, who were present there before mainland farmers and herders arrived from Korea (although people similar to the Jomon may have been present in Korea as well around that time, ca. 1000 BCE). But there is some reason to think that the two may have common genetic origins.

A TreeMix analysis places the Jomon as an offshoot of the Hoabinhian people (a Mesolithic wave of people in Southeast Asia and Southern China ca. 12,000 to 10,000 BCE), with the Kusunda people (hunter-gatherers in Western Nepal who historically spoke a language isolate and were religiously animistic) as an intermediate population.

Y-DNA haplogroup D has a cryptic distribution, found in isolated pockets across Asia including Siberia and Tibet, that tends to favor a Northern route origin.

The mtDNA haplogroups N9b and M7a also tell a story so deep in history (both are very basal in the Eurasian mtDNA tree and derive from African mtDNA haplogroup L3) that it is hard to reconstruct. Both mtDNA M and mtDNA N show distributions that tend to favor a Southeast Asian route to Japan, but perhaps this is because the northern bearers of these haplogroups went extinct and were then almost fully replaced during the Last Glacial Maximum.

On the other hand, there have continuously been modern humans in Japan since before the 12,000 BCE migration associated with the Hoabinhian people, so at least some Jomon ancestry probably precedes that wave of migration. But different selective pressures in Japan and Southeast Asia could also lead to selection for a different physical appearance.

But at least one linguist has proposed that the language of the Jomon people (the only extant survivor of this language family is the language of the Ainu people, who are now found only in Hokkaido, and Ainu's membership in this family or families is mildly disputed) is part of an "Austric" megafamily rooted in South China, rather than being a language isolate.

Negritos Are Modern Humans Not Archaic Hominins

Negritos are completely distinct from the archaic hominins who preceded modern humans in the region, ranging from Denisovans (a sister clade to Neanderthals in eastern Eurasia, less basal than Homo erectus), to Homo erectus (who arrived in what is now Java, Indonesia from Africa around 2 million years ago and were found throughout Southeast Asia and East Asia), to Homo floresiensis (a.k.a. "hobbits" due to their small physical size and build, whose remains are found on the island of Flores, Indonesia; a 2017 study concluded by phylogenetic analysis that H. floresiensis is an early species of Homo, a sister species of Homo habilis that is more basal than Homo erectus, although the time of their migration out of Africa is unclear), to the diminutive archaic hominins of the Philippines somewhat similar to H. floresiensis.

While all non-Africans have a small percentage of archaic Neanderthal admixture, modern humans have archaic Denisovan admixture only to the extent that they have Papuan and Australian Aboriginal ancestry. Among mainland Southeast Asians, East Asians, and island Southeast Asians from the west side of the Wallace Line, Denisovan ancestry is barely detectable relative to the proportion of Denisovan ancestry in Papuans and Australian Aborigines. 

This suggests either that Negritos were a second wave of modern human migrants to Southeast Asia and beyond, with only a small percentage of ancestry from a first wave of modern humans in these areas that had admixed with Denisovans, or that the Negrito wave of migration in Southeast Asia and beyond was so much more numerous than the first wave that the few instances of Denisovan admixture were heavily diluted.

The overall percentage of Denisovan ancestry in modern Tibetan people is negligible and essentially impossible to detect, but fitness enhancing genes in Tibetans associated with tolerance for very high altitude environments are Denisovan in origin, indicating that some of their remote ancestors in the region must have had local Denisovan admixture.

No Y-DNA or mtDNA in any modern human is believed to be Denisovan or Neanderthal in origin.

Negritos Were Not The First Wave Of Modern Humans

Negritos were an early wave of modern humans in a region that spanned from India to Taiwan including much of Southeast Asia, but probably not the first wave.

The first wave of modern humans in Asia were the ancestors of modern Australian Aborigines and indigenous Papuans, who admixed with Denisovan archaic hominins. Archaeological evidence suggests that they first arrived around the time of the Toba volcanic mega-eruption, around 70,000 years ago.

The Paper and Its Abstract
Taiwan is known as the homeland of the Austronesian-speaking groups, yet other populations already had lived here since the Pleistocene. Conventional notions have postulated that the Palaeolithic hunter-gatherers were replaced or absorbed into the Neolithic Austronesian farming communities. Yet, some evidence has indicated that sparse numbers of non-Austronesian individuals continued to live in the remote mountains as late as the 1800s. 
The cranial morphometric study of human skeletal remains unearthed from the Xiaoma Caves in eastern Taiwan, for the first time, validates the prior existence of small stature hunter-gatherers 6000 years ago in the preceramic phase. This female individual shared remarkable cranial affinities and small stature characteristics with the Indigenous Southeast Asians, particularly the Negritos in northern Luzon. 
This study solves the several-hundred-years-old mysteries of ‘little black people’ legends in Formosan Austronesian tribes and brings insights into the broader prehistory of Southeast Asia.
Hsiao-chun Hung, et al., "Negritos in Taiwan and the wider prehistory of Southeast Asia: new discovery from the Xiaoma Caves" World Archaeology (October 4, 2022) https://doi.org/10.1080/00438243.2022.2121315. Hat tip to DDeden in the comments.