Thursday, December 28, 2023

The Internal Structure Of A Scalar Meson Finally Understood

The structure of the f0(980) scalar boson has been a mystery for decades. 

A new study, however, appears to show that it is basically a quark-antiquark meson with a valence strange quark and anti-strange quark. In other words, it is strange quarkonium. The new study largely rules out the tetraquark, meson molecule, and quark-antiquark-gluon hybrid particle alternatives.

Tuesday, December 26, 2023

Newton's Reign

It is sometimes hard to realize how far Isaac Newton was ahead of his time. He was born 381 years ago. His scientific contributions were mostly in the time period from 1664, when he was 20 years old, until 1707, when he was 63 years old.

Only 13 of the chemical elements had been discovered at the time (one more would be discovered before his death), and the periodic table of the elements was about two centuries in the future. The germ theory of disease and modern genetics were centuries away. A theory of evolution, even Lamarckian evolution, was more than a century away. The Industrial Revolution was still more than a century away. Electromagnetism and thermodynamics hadn't been worked out in his lifetime. Telescopes (he invented the first practical reflecting telescope) and printing presses were relatively recent inventions. They were still burning witches. Gunpowder had been known in Europe for about four hundred years, but was just starting to become decisive militarily in his lifetime. He lived through the brief interregnum called the Commonwealth of England, Scotland and Ireland, during which the British Isles were a republic without a reigning monarch. 

Newtonian mechanics, Newtonian gravity, and Newton's observations about optics went unchallenged and unamended for about 250 years. Scientists and engineers still use them on a daily basis, despite knowing that general relativity, special relativity, and quantum mechanics limit the range of their applicability.

The laws of physics he invented are still taught in high school and freshman college level physics classes. He co-invented calculus, which is still taught in high school and freshman and sophomore level college classes (although the notation of the independent co-inventor of calculus, Leibniz, rather than his own clunky notation, is used today).

Not all parts of Newton's legacy were equally illustrious. He was also a Unitarian theologian and an alchemist, and devoted almost as much time in his life to those ultimately fruitless projects as he did to science and mathematics. He was also a member of Parliament, ran the Royal Mint for many years, was knighted, and led the Royal Society for twenty-four years. He never married and is not reputed to have had any children.

When Were Elements Discovered?

More than 14 chemical elements have been discovered in my lifetime. That's more than were discovered in the entire history of humanity through the year 1734 CE. The Standard Model of Particle Physics was also developed in my lifetime.


Ancient

Copper 9000 BCE - Older than agriculture
Lead 7000 BCE
Gold 6000 BCE
Iron 5000 BCE
Silver 5000 BCE
Carbon 3750 BCE
Tin 3500 BCE
Sulfur 2000 BCE - Drought leads to massive spread of Indo-Europeans
Mercury 1500 BCE
Zinc 1000 BCE - After Bronze Age collapse


Medieval

Antimony 815 CE
Arsenic 815 CE
Bismuth 1000 CE - Leif Erikson reaches North America

Early Modern and Early Industrial Revolution

Phosphorus 1669 CE
Cobalt 1735 CE
Platinum 1735 CE
Nickel 1751 CE
Magnesium 1755 CE
Hydrogen 1766 CE
Oxygen 1771 CE
Nitrogen 1772 CE
Barium 1772 CE
Chlorine 1774 CE
Manganese 1774 CE
Molybdenum 1782 CE
Tellurium 1782 CE
Tungsten 1783 CE
Strontium 1787 CE
Uranium 1789 CE - U.S. Constitution adopted; French Revolution
Zirconium 1789 CE
Titanium 1791 CE - U.S. Bill Of Rights adopted.
Yttrium 1794 CE - First element named after Ytterby, Sweden
Chromium 1794 CE
Beryllium 1798 CE

19th Century

Niobium 1801 CE
Vanadium 1801 CE
Palladium 1802 CE
Tantalum 1802 CE
Iridium 1803 CE
Osmium 1803 CE
Cerium 1803 CE
Rhodium 1804 CE
Potassium 1807 CE
Sodium 1807 CE
Calcium 1808 CE
Boron 1808 CE
Fluorine 1810 CE
Iodine 1811 CE
Selenium 1817 CE
Cadmium 1817 CE
Lithium 1817 CE
Silicon 1818 CE
Aluminum 1824 CE - a.k.a. Aluminium
Bromine 1825 CE
Thorium 1829 CE
Lanthanum 1838 CE
Terbium 1843 CE
Erbium 1843 CE - Second element named after Ytterby
Ruthenium 1844 CE
Cesium 1860 CE
Thallium 1861 CE - U.S. Civil War starts
Rubidium 1861 CE
Indium 1863 CE
Helium 1868 CE - 14th Amendment to U.S. Constitution adopted
Holmium 1878 CE
Ytterbium 1878 CE - Third element named after Ytterby
Samarium 1879 CE
Thulium 1879 CE
Scandium 1879 CE
Gadolinium 1880 CE
Neodymium 1885 CE
Praseodymium 1885 CE
Dysprosium 1886 CE
Germanium 1886 CE
Argon 1894 CE
Europium 1896 CE
Neon 1898 CE
Xenon 1898 CE
Krypton 1898 CE
Radium 1898 CE
Polonium 1898 CE
Radon 1899 CE

20th Century

Actinium 1901 CE
Lutetium 1906 CE
Protactinium 1913 CE
Hafnium 1922 CE
Rhenium 1925 CE
Technetium 1937 CE - First artificially produced element
Francium 1939 CE - World War II begins
Neptunium 1940 CE
Astatine 1940 CE
Plutonium 1940 CE
Curium 1944 CE
Americium 1944 CE
Promethium 1945 CE - World War II ends
Berkelium 1949 CE
Californium 1950 CE
Fermium 1952 CE
Einsteinium 1952 CE
Mendelevium 1955 CE
Lawrencium 1961 CE
Nobelium 1966 CE
Rutherfordium 1969 CE
Dubnium 1970 CE - I am born
Seaborgium 1974 CE
Bohrium 1981 CE
Meitnerium 1982 CE
Hassium 1984 CE
Roentgenium 1994 CE - I get married
Darmstadtium 1994 CE
Copernicium 1996 CE
Flerovium 1998 CE
Livermorium 2000 CE

21st Century

Oganesson 2002 CE
Moscovium 2003 CE
Nihonium 2003 CE
Tennessine 2009 CE


Thursday, December 21, 2023

Improving Our Understanding Of The Bantu Expansion

A new paper confirms and more specifically describes the paradigmatic understanding of Bantu expansion.

[Figure from the paper: a,b, putative migration routes of Bantu-speaking populations (BSP) inferred using pairwise FST values, with and without the Zambian Lozi population, for north-western (NW-BSP 2), west-western (WW-BSP), south-western (SW-BSP) and eastern (E-BSP) Bantu speakers; c, spatial visualization of effective migration rates (EEMS software) estimated with the masked Only-BSP dataset; d, GenGrad analysis using FST as the genetic distance for the admixture-masked BSP dataset.]
The expansion of people speaking Bantu languages is the most dramatic demographic event in Late Holocene Africa and fundamentally reshaped the linguistic, cultural and biological landscape of the continent. 
With a comprehensive genomic dataset, including newly generated data of modern-day and ancient DNA from previously unsampled regions in Africa, we contribute insights into this expansion that started 6,000–4,000 years ago in western Africa. We genotyped 1,763 participants, including 1,526 Bantu speakers from 147 populations across 14 African countries, and generated whole-genome sequences from 12 Late Iron Age individuals. 
We show that genetic diversity amongst Bantu-speaking populations declines with distance from western Africa, with current-day Zambia and the Democratic Republic of Congo as possible crossroads of interaction. 
Using spatially explicit methods and correlating genetic, linguistic and geographical data, we provide cross-disciplinary support for a serial-founder migration model. We further show that Bantu speakers received significant gene flow from local groups in regions they expanded into. 
Our genetic dataset provides an exhaustive modern-day African comparative dataset for ancient DNA studies and will be important to a wide range of disciplines from science and humanities, as well as to the medical sector studying human genetic variation and health in African and African-descendant populations.
Cesar A. Fortes-Lima, et al., "The genetic legacy of the expansion of Bantu-speaking peoples in Africa" Nature (November 29, 2023).

Background from the introductory part of the body text:
African populations speaking Bantu languages (Bantu-speaking populations (BSP)) constitute about 30% of Africa’s total population, of which about 350 million people across 9 million km2 speak more than 500 Bantu languages. Archaeological, linguistic, historical and anthropological sources attest to the complex history of the expansion of BSP across subequatorial Africa, which fundamentally reshaped the linguistic, cultural and biological landscape of the continent. There is a broad interdisciplinary consensus that the initial spread of Bantu languages was a demic expansion and ancestral BSP migrated first through the Congo rainforest and later to the savannas further east and south. However, debates persist on the pathways and modes of the expansion.

Whereas most recent human expansions involved latitudinal movements through regions with similar climatic conditions, the expansion of the BSP is notable for its primarily longitudinal trajectory, traversing regions with highly diverse climates and biomes, including the highlands of Cameroon, central African rainforests, African savannas and arid south-western Africa.

Tuesday, December 12, 2023

Notable New Papers About Gravity

One new paper finds a new form of significant tension between astronomy observations and the ΛCDM model.
We present the first measurement of the Weyl potential at four redshift bins using data from the first three years of observations of the Dark Energy Survey (DES). The Weyl potential, which is the sum of the spatial and temporal distortions of the Universe's geometry, provides a direct way of testing the theory of gravity and the validity of the ΛCDM model. We find that the measured Weyl potential is 2.3σ, respectively 3.1σ, below the ΛCDM predictions in the two lowest redshift bins. We show that these low values of the Weyl potential are at the origin of the σ8 tension between Cosmic Microwave Background (CMB) measurements and weak lensing measurements. Interestingly, we find that the tension remains if no information from the CMB is used. DES data on their own prefer a high value of the primordial fluctuations, followed by a slow evolution of the Weyl potential. A remarkable feature of our method is that the measurements of the Weyl potential are model-independent and can therefore be confronted with any theory of gravity, allowing efficient tests of models beyond General Relativity.
Isaac Tutusaus, Camille Bonvin, Nastassia Grimm, "First measurement of the Weyl potential evolution from the Year 3 Dark Energy Survey data: Localising the σ8 tension" arXiv:2312.06434 (December 11, 2023).

The other new paper proposes a new gravitational theory to explain dark matter phenomena, by emphasizing Mach's principle, which is the idea that inertia is due to the cumulative gravitational pull of everything in the universe on massive objects.
The general theory of relativity (GR) has excelled in explaining gravitational phenomena at the scale of the solar system with remarkable precision. However, when extended to the galactic or cosmological scale, it requires dark matter and dark energy to explain observations. In our previous article arXiv:2308.04503, we've formulated a gravity theory based on Mach's principle, known as Machian gravity. We demonstrated that the theory successfully explains galactic velocity profiles without requiring additional dark matter components. In previous studies, for a selected set of galaxy clusters, we also showed its ability to explain the velocity dispersion in the clusters without extra unseen matter components. This paper primarily explores the mass profiles of galaxy clusters. We test the Machian Gravity acceleration law on two distinct sets comprising approximately 150 galaxy clusters sourced from various studies. We fitted the dynamic mass profiles using the Machian gravity model. The outcomes of our study show exceptional agreement between the theory and observational results.
Santanu Das, "Aspects of Machian Gravity (III): Testing Theory against Galaxy Cluster mass" arXiv:2312.06312 (December 11, 2023).

Tuesday, December 5, 2023

The Pre-History Of California

A new study looks at genetics, archeology, and linguistics to explain linguistic diversity in pre-Columbian California. 

Before the colonial period, California harboured more language variation than all of Europe, and linguistic and archaeological analyses have led to many hypotheses to explain this diversity. 
We report genome-wide data from 79 ancient individuals from California and 40 ancient individuals from Northern Mexico dating to 7,400–200 years before present (BP). Our analyses document long-term genetic continuity between people living on the Northern Channel Islands of California and the adjacent Santa Barbara mainland coast from 7,400 years BP to modern Chumash groups represented by individuals who lived around 200 years BP. 
The distinctive genetic lineages that characterize present-day and ancient people from Northwest Mexico increased in frequency in Southern and Central California by 5,200 years BP, providing evidence for northward migrations that are candidates for spreading Uto-Aztecan languages before the dispersal of maize agriculture from Mexico. 
Individuals from Baja California share more alleles with the earliest individual from Central California in the dataset than with later individuals from Central California, potentially reflecting an earlier linguistic substrate, whose impact on local ancestry was diluted by later migrations from inland regions. 
After 1,600 years BP, ancient individuals from the Channel Islands lived in communities with effective sizes similar to those in pre-agricultural Caribbean and Patagonia, and smaller than those on the California mainland and in sampled regions of Mexico.

Two Quantum Gravity Papers For Future Reference

I haven't read either of them, but the papers seem to make pretty big claims. If I get a chance, I'll blog about them and write more.
The effort to discover a quantum theory of gravity is motivated by the need to reconcile the incompatibility between quantum theory and general relativity. 
Here, we present an alternative approach by constructing a consistent theory of classical gravity coupled to quantum field theory. The dynamics is linear in the density matrix, completely positive and trace preserving, and reduces to Einstein's theory of general relativity in the classical limit. Consequently, the dynamics doesn't suffer from the pathologies of the semiclassical theory based on expectation values. 
The assumption that general relativity is classical necessarily modifies the dynamical laws of quantum mechanics -- the theory must be fundamentally stochastic in both the metric degrees of freedom and in the quantum matter fields. This allows it to evade several no-go theorems purporting to forbid classical-quantum interactions. 
The measurement postulate of quantum mechanics is not needed -- the interaction of the quantum degrees of freedom with classical space-time necessarily causes decoherence in the quantum system. 
We first derive the general form of classical-quantum dynamics and consider realisations which have as its limit deterministic classical Hamiltonian evolution. The formalism is then applied to quantum field theory interacting with the classical space-time metric. 
One can view the classical-quantum theory as fundamental or as an effective theory useful for computing the back-reaction of quantum fields on geometry. We discuss a number of open questions from the perspective of both viewpoints.
Jonathan Oppenheim, "A postquantum theory of classical gravity?", Physical Review X (2023). … 584bc2567e68f9f76c1e. On arXiv: DOI: 10.48550/arxiv.1811.03116 (Comments: "It's very difficult to find a black cat in a dark room, especially if there is no cat.")
We consider two interacting systems when one is treated classically while the other system remains quantum. Consistent dynamics of this coupling has been shown to exist, and explored in the context of treating space-time classically. 
Here, we prove that any such hybrid dynamics necessarily results in decoherence of the quantum system, and a breakdown in predictability in the classical phase space. We further prove that a trade-off between the rate of this decoherence and the degree of diffusion induced in the classical system is a general feature of all classical quantum dynamics; long coherence times require strong diffusion in phase-space relative to the strength of the coupling. 
Applying the trade-off relation to gravity, we find a relationship between the strength of gravitationally-induced decoherence versus diffusion of the metric and its conjugate momenta. This provides an experimental signature of theories in which gravity is fundamentally classical. 
Bounds on decoherence rates arising from current interferometry experiments, combined with precision measurements of mass, place significant restrictions on theories where Einstein’s classical theory of gravity interacts with quantum matter. We find that part of the parameter space of such theories are already squeezed out, and provide figures of merit which can be used in future mass measurements and interference experiments.
Jonathan Oppenheim et al, "Gravitationally induced decoherence vs space-time diffusion: testing the quantum nature of gravity", Nature Communications (2023). DOI: 10.1038/s41467-023-43348-2 (open access).

Monday, December 4, 2023

The Imperial Chinese Harem System

The Imperial Chinese Harem System persisted with only brief interruptions over almost all of China's dynasties (when there were splits, most or all of the factions had them) from 220 BCE to 1908 CE, about 2128 years! 

This is a longer period of time, for example, than the entire history of Christianity, and the institution changed far more modestly during that time period than Christianity did.

During the last days of China's final Qing Dynasty, which formally ended in 1912 with the Emperor's abdication (apart from a brief, not widely recognized restoration later in the nineteen-teens), however, the system had already started to peter out:
The Kangxi Emperor (r. 1661–1722) holds the record for having the most imperial consorts [in the Qing Dynasty] with 79, while the Guangxu Emperor (r. 1875–1908) holds the record for having the fewest, with one empress and two consorts—a total of just three imperial consorts.

Functionally, this system, somewhat like the Saudi Arabian monarchy's succession system, ensured that the hereditary emperorship would not end for lack of an heir. It also provided a pool of potential heirs from which a worthy successor could be chosen, mitigating, although not entirely eliminating, the harm caused by the occasional "mad king". 

Graviton-Sized Dark Matter Particle Theories

Generally speaking, gravity modification theories are better explanations of dark matter phenomena than dark matter particle theories. In gravity modification theories (which sometimes have scalar and/or vector gravitons or massive gravitons in addition to massless tensor gravitons), gravitons are what give rise to dark matter phenomena.

The theories discussed in the pre-print below involve dark matter particles, nominally unrelated to gravity, that have wave-like behavior and per particle mass-energies reasonably close to those of gravitons in a vanilla quantum gravity realization of General Relativity or a modest modification of it. The convergence of dark matter particle theories on this class of dark matter particles, as astronomy observations increasingly rule out or disfavor the alternatives, is itself interesting.
The Scalar Field Dark Matter model has been known in various ways throughout its history; Fuzzy, BEC, Wave, Ultralight, Axion-like Dark Matter, etc. 
All of them consist in proposing that the dark matter of the universe is a spinless field Φ that follows the Klein-Gordon (KG) equation of motion ◻Φ−dV/dΦ=0, for a given scalar field potential V. The difference between different models is sometimes the choice of the scalar field potential V. 
In the literature we find that people usually work in the nonrelativistic, weak-field limit of the KG equation where it transforms into the Schrödinger equation and the Einstein equations into the Poisson equation, reducing the KG-Einstein system, to the Schrödinger-Poisson system. 
In this paper, we review some of the most interesting achievements of this model from the historical point of view and its comparison with observations, showing that this model could be the last answer to the question about the nature of dark matter in the universe.
Tonatiuh Matos, Luis A. Ureña-López, Jae-Weon Lee, "Short Review of the main achievements of the Scalar Field, Fuzzy, Ultralight, Wave, BEC Dark Matter model" arXiv:2312.00254 (November 30, 2023).
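For reference, the nonrelativistic, weak-field reduction the abstract describes takes the standard Schrödinger-Poisson form, with ψ the slowly varying envelope of the scalar field Φ and Φ_N the Newtonian potential sourced by the field's own density:

```latex
i\hbar \frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\nabla^2 \psi + m\,\Phi_N\,\psi,
\qquad
\nabla^2 \Phi_N = 4\pi G\, m\,|\psi|^2 .
```

In this limit the choice of scalar potential V enters only through self-interaction corrections, which is why the free (quadratic) case is the usual starting point in the literature.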

Thursday, November 30, 2023

Wide Binaries Are Basically Newtonian in Moffat's MOG Theory

Like Deur's approach, Moffat's modified gravity theory does not predict non-Newtonian behavior that is discernible in wide binary stars.
Wide binary stars are used to test the modified gravity called Scalar-Tensor-Vector Gravity or MOG. This theory is based on the additional gravitational degrees of freedom, the scalar field G=GN(1+α), where GN is Newton's constant, and the massive (spin-1 graviton) vector field ϕμ. The wide binaries have separations of 2-30 kAU. The MOG acceleration law, derived from the MOG field equations and equations of motion of a massive test particle for weak gravitational fields, depends on the enhanced gravitational constant G=GN(1+α) and the effective running mass μ. The magnitude of α depends on the physical length scale or averaging scale ℓ of the system. The modified MOG acceleration law for weak gravitational fields predicts that for the solar system and for the wide binary star systems gravitational dynamics follows Newton's law.
John W. Moffat, "Wide Binaries and Modified Gravity (MOG)" arXiv:2311.17130 (November 28, 2023).

Monday, November 20, 2023

A Newly Discovered Milky Way Satellite Star Cluster

Calling it a satellite galaxy may be something of an overstatement; it is really just a satellite of the Milky Way that is a gravitationally bound star cluster with a mass of 11 to 22 times that of the Sun.

Simon E. T. Smith, et al., "The discovery of the faintest known Milky Way satellite using UNIONS" arXiv:2311.10147 (November 16, 2023).

Wednesday, November 15, 2023

Are Sunspots Driven By The Gravitational Pull Of The Planets?

If correct, this hypothesis would be a major paradigm change in our understanding of how the Sun works, although the seeming lack of influence from any of the other planets with significant gravitational pulls on the Sun, relative to the Earth and Jupiter, is suspicious.

The average strength of the gravitational pull of the planets on the Sun, normalized so that Earth's pull on the Sun is equal to one, to three significant digits, is as follows:

* Jupiter 11.7
* Venus 1.56
* Saturn 1.04
* Mercury 0.369
* Mars 0.0463
* Uranus 0.0396
* Neptune 0.0188
* Pluto 0.00000140
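The normalized pulls above follow directly from Newton's inverse-square law: each planet's average pull on the Sun scales as its mass divided by the square of its mean orbital distance. A minimal sketch reproducing the table (the planet masses, in Earth masses, and mean distances, in AU, are rounded reference values I've supplied, so the last digit can differ slightly from the figures above):

```python
# Average gravitational pull of each planet on the Sun, normalized to Earth's
# pull. By Newton's law F = G * M * m / r^2, the pull relative to Earth's
# reduces to (M_planet / M_earth) / (d_planet / d_earth)^2, with d_earth = 1 AU.

# (mass in Earth masses, mean distance from the Sun in AU) -- rounded reference values
planets = {
    "Mercury": (0.0553, 0.387),
    "Venus":   (0.815,  0.723),
    "Mars":    (0.107,  1.524),
    "Jupiter": (317.8,  5.203),
    "Saturn":  (95.16,  9.583),
    "Uranus":  (14.54,  19.19),
    "Neptune": (17.15,  30.07),
    "Pluto":   (0.0022, 39.48),
}

pull = {name: m / d**2 for name, (m, d) in planets.items()}

# Print in descending order of pull, matching the list above
for name, p in sorted(pull.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s} {p:.3g}")
```

The exercise makes the point in the text concrete: Venus, Saturn, and Mercury all pull on the Sun with between a third and one and a half times Earth's strength.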

Given this, one would expect Venus, Saturn, and Mercury's orbits to have non-negligible effects as well. 

Venus has an almost perfectly circular orbit, so that might explain a lack of a sunspot cycle effect from its gravitational pull, but this is not true of Mercury or Saturn. 

Mercury's 88-day period and strongly elliptical orbit ought to be very measurable in the data in this hypothesis as well, even if its small size reduces the magnitude of its impact. 

Saturn's 29.4-year, moderately elliptical orbit also ought to be discernible, but is long enough that the small sample size of Saturn's orbits in a three-hundred-year-old data set, whose quality declines in the older data, could reduce the statistical significance of this signal.
The sunspot number record covers over three centuries. These numbers measure the activity of the Sun. This activity follows the solar cycle of about eleven years. 
In the dynamo-theory, the interaction between differential rotation and convection produces the solar magnetic field. On the surface of Sun, this field concentrates to the sunspots. The dynamo-theory predicts that the period, the amplitude and the phase of the solar cycle are stochastic. 
Here we show that the solar cycle is deterministic, and connected to the orbital motions of the Earth and Jupiter. This planetary-influence theory allows us to model the whole sunspot record, as well as the near past and the near future of sunspot numbers. We may never be able to predict the exact times of exceptionally strong solar flares, like the catastrophic Carrington event in September 1859, but we can estimate when such events are more probable. Our results also indicate that during the next decades the Sun will no longer help us to cope with the climate change. The inability to find predictability in some phenomenon does not prove that this phenomenon itself is stochastic.
Lauri Jetsu, "Sunspot cycles are connected to the Earth and Jupiter" arXiv:2311.08317 (November 14, 2023).

A 2022 paper includes Venus as well. There is also a 1975 paper purporting to rule out this relationship (and a 2022 paper as well) with a 2022 rebuttal (which is related to the 2022 paper).

Skimming the literature, it does seem that more accurate modeling of the shape of planetary orbits, the actual locations of planets on those orbits, and inclusion of more planets, does produce reasonably good fits to the Sun spot data, although it isn't a conclusive result.

Friday, November 10, 2023

A Nifty New Telescope

The latest Earth based telescope, the ILMT, is pretty cool.

Nestled in the mountains of Northern India, is a 4-metre rotating dish of liquid mercury. Over a 10-year period, the International Liquid Mirror Telescope (ILMT) will survey 117 square degrees of sky, to study the astrometric and photometric variability of all detected objects. . . .  
Baldeep Grewal, et al., "Survey of Variables with the ILMT" arXiv:2311.05620 (November 8, 2023). 

There is another description of the new telescope at arXiv:2311.05615 which explains the benefits of using a liquid mirror:
A perfect reflective paraboloid represents the ideal reference surface for an optical device to focus a beam of parallel light rays to a single point. This is how astronomical mirrors form images of distant stars in their focal plane. In this context, it is amazing that the surface of a liquid rotating around a vertical axis takes the shape of a paraboloid under the constant pull of gravity and centrifugal acceleration, the latter growing stronger at distances further from the central axis. The parabolic surface occurs because a liquid always sets its surface perpendicular to the net acceleration it experiences, which in this case is increasingly tilted and enhanced with distance from the central axis. The focal length F is proportional to the gravity acceleration g and inversely proportional to the square of the angular velocity ω. In the case of the ILMT, the angular velocity ω is about 8 turns per minute, resulting in a focal length of about 8 m. Given the action of the optical corrector, the effective focal length f of the D = 4 m telescope is about 9.44 m, resulting in the widely open ratio f/D ∼ 2.4. In the case of the ILMT, a thin rotating layer of mercury naturally focuses the light from a distant star at its focal point located at ∼8 m just above the mirror, with the natural constraint that such a telescope always observes at the zenith. 

Thanks to the rotation of the Earth, the telescope scans a strip of sky centred at a declination equal to the latitude of the observatory (+29°21′41.4″ for the ARIES Devasthal observatory). The angular width of the strip is about 22′, a size limited by that of the detector (4k×4k) used in the focal plane of the telescope. Since the ILMT observes the same region of the sky night after night, it is possible either to co-add the images taken on different nights in order to improve the limiting magnitude or to subtract images taken on different nights to make a variability study of the corresponding strip of sky. Consequently, the ILMT is very well-suited to perform variability studies of the strip of sky it observes. While the ILMT mirror is rotating, the linear speed at its rim is about 5.6 km/hr, i.e., the speed of a walking person.
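The focal-length relation described in the quote is F = g/(2ω²) for a rotating liquid surface. A quick sanity check of the quoted numbers (the "about 8 turns per minute" rotation rate is approximate, so this only roughly matches the quoted ~8 m focal length):

```python
import math

g = 9.81                          # gravitational acceleration (m/s^2)
rpm = 8                           # quoted rotation rate, ~8 turns per minute (approximate)
omega = 2 * math.pi * rpm / 60    # angular velocity (rad/s)

# Focal length of a rotating liquid paraboloid: F = g / (2 * omega^2)
F = g / (2 * omega**2)
print(f"mirror focal length ~ {F:.1f} m")   # close to the quoted ~8 m

# Effective focal ratio with the optical corrector, using the quoted values
f_eff, D = 9.44, 4.0
print(f"f/D ~ {f_eff / D:.1f}")             # ~2.4
```

Slowing the spin lengthens the focal length, and speeding it up shortens it, which is why the rotation rate must be held extremely steady during observations.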

A liquid mirror naturally flows to the precise paraboloid shape needed simply because it is a liquid under these conditions. And, since its surface is continuously readjusting itself to this shape, rather than being fixed in place just once as a solid mirror's would be, any slight imperfections in its surface don't stay in exactly the same place. Instead, distortions from slight imperfections in the liquid mirror's shape average out over multiple observations of the same part of the sky, to an average shape at any one location that is much closer to perfect than a solid mirror cast ultra-precisely just once. A liquid mirror thus reduces one subtle source of potential systematic error that arises, when a solid mirror is used, from slight imperfections in the mirror's shape at particular locations that recur every time a particular part of the sky is viewed.

The inauguration of the 4m International Liquid Mirror Telescope (ILMT) took place in Devasthal, Uttarakhand, India on March 21, 2023. The observatory is situated in the Kumaon Himalayas at an altitude of 2450 meters (8,038 feet). 

The coordinates are 29°21′42″N 79°41′06″E, which matter in astronomy, because the latitude impacts which part of the sky it can see.

A New Top Quark Pole Mass Analysis And A Few Wild Conjectures Considered

A new paper presents a re-analysis of existing data to determine the top quark pole mass. It comes up with:

which is consistent with, but at the low end of the range of the Particle Data Group's estimate based upon indirect cross-section measurements (bringing these into the same amount of precision as its direct measurements of the top quark mass):

Recent previous coverage at this blog regarding new top quark mass measurements can be found in a February 7, 2023 post, a November 4, 2022 post and a September 1, 2022 post.

Why Care?

Since this new paper produces cross-section based top quark pole mass estimates that closely confirm direct measurements of the top quark pole mass with comparable precision, and the two measurements are substantially independent of each other, our confidence in each of them is greater and the best fit measurements produced are more robust. It also makes combining results from the two methods statistically, to produce a single global best fit measurement with a lower combined uncertainty incorporating all measurement methods, appear more legitimate and appropriate.

In relative terms, the top quark pole mass is already one of the more precisely known physical constants in the Standard Model. It is more precisely determined in relative terms than the other five quark masses, the three neutrino masses, the four CKM matrix parameters, the four PMNS matrix parameters, and the strong force coupling constant. But, it is less precisely determined in relative terms than the three charged lepton masses, the three fundamental boson masses, the electromagnetic and weak force coupling constants, and Newton's constant.

But, because it is the largest mass that is a fundamental constant of the Standard Model, in absolute terms the magnitude of the uncertainty in the top quark pole mass is huge. At the Particle Data Group cross-section measurement uncertainty, it is about five times the mass of the pion, and greater than eight of the eleven other fundamental fermion masses in the Standard Model. Only the bottom quark mass, the charm quark mass, and the tau lepton mass are greater than the absolute magnitude of the uncertainty alone in the top quark mass from cross-section measurements.

Also, because the top quark mass is so large, precision in this measurement is important for higher loop adjustments to all sorts of calculations in the Standard Model. 

Relevance To Theory Building

Precision in the top quark mass is also critical for assessing global properties of the Standard Model like a proposed LC&P relationship between the Higgs field vacuum expectation value and the fundamental particle masses in the Standard Model:
The global LC&P relationship (i.e. that the sum of the squares of the fundamental SM particle masses is equal to the square of the Higgs vev, which is equivalent to saying that the sum of the SM particle Yukawas is exactly 1) comes up about 0.5% short of the predicted value (2 sigma in the top quark and Higgs boson masses, implying a theoretically expected top quark mass of 173,360 MeV and a theoretically expected Higgs boson mass of 125,590 MeV if the adjustments were apportioned in proportion to the experimental uncertainty in the LC&P relationship given the current global average measurements, in which about 70% of the uncertainty is from the top quark mass uncertainty and about 29% is from the Higgs boson mass uncertainty). . . . 

But, the LC&P relationship does not hold separately for fermions and bosons, which would be analogous in some ways to supersymmetry. This is only an approximate symmetry. The sum of the squares of the SM fundamental boson masses is more than half of the square of the Higgs vev by about 0.5% (3.5 sigma in the Higgs boson mass, implying a theoretically expected Higgs boson mass of about 124,650 MeV), while the sum of the squares of the SM fundamental fermion masses is less than half of the square of the Higgs vev by about 1.5% (about 2.8 sigma in the top quark mass, implying a theoretically expected top quark mass of about 173,610 MeV). The combined deviation from the LC&P relationship holding for fermions and bosons independently is 4.5 sigma, a very strong tension that nearly conclusively rules out the separate version of the relationship. One wonders if the slightly boson-leaning deviation from this symmetry between fundamental fermion masses and fundamental boson masses has some deeper meaning or source.
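As a rough numerical cross-check of the global relationship, the arithmetic can be sketched directly. This is a minimal illustration: the mass values below are approximate PDG central values that I am supplying, not figures taken from this post.

```python
# Approximate PDG central values in GeV (illustrative inputs, not exact).
higgs_vev = 246.22  # Higgs vacuum expectation value

masses = {
    "top": 172.69, "higgs": 125.25, "Z": 91.1876, "W": 80.377,
    "bottom": 4.18, "tau": 1.77686, "charm": 1.27,
    # Lighter fermions contribute negligibly to the sum of squares.
    "muon": 0.10566, "strange": 0.093, "down": 0.0047,
    "up": 0.0022, "electron": 0.000511,
}

# LC&P conjecture: sum of squared masses == vev^2
# (equivalently, the sum of the Yukawas == 1).
sum_sq = sum(m ** 2 for m in masses.values())
ratio = sum_sq / higgs_vev ** 2
print(f"sum(m^2)/vev^2 = {ratio:.4f}")  # close to, but about 0.5% below, 1
```

With these central values the ratio lands just below 1, reproducing the roughly 0.5% shortfall (dominated by the top quark and Higgs boson terms) discussed above.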

The result in this new paper, by largely corroborating direct measurements of the top quark mass at similar levels of precision, continues to favor a lower top quark mass than the one favored by LC&P expectations. 

Earlier, lower energy Tevatron measurements of the top quark mass (where the top quark was discovered) supported higher values for the top quark mass than the combined data from either of the Large Hadron Collider (LHC) experiments do, and were closely in line with the LC&P expectation. 

But there is no good reason to think that both of the LHC experiments measuring the top quark mass have greatly understated the systematic uncertainties in their measurements (which combine measurements from multiple channels with overlapping but not identical sources of systematic uncertainty). Certainly, the LHC experiments have much larger numbers of top quark events to work with than the two Tevatron experiments measuring the top quark mass did, so the relatively low statistical uncertainties of the LHC measurements of the top quark mass undeniably make them more precise than Tevatron's.

Could This Hint At Missing Fundamental Particles?

If I were a phenomenologist prone to proposing new particles, I'd say that this close but not quite right fit to the LC&P hypothesis is a theoretical hint that the Standard Model might be missing one or more fundamental particles, probably fermions (which deviate most strongly from the expected values).

I'll explore some of those possibilities, because readers might find them to be interesting. But, to be clear, I am not prone to proposing new particles; indeed, my inclinations are quite the opposite. I don't actually think that any of these proposals is very well motivated.

In part, this is because I think that dark matter phenomena and dark energy phenomena are very likely to be gravitational issues rather than due to dark matter particles or bosonic scalar dark energy fields, so I don't think we need new particles to serve that purpose. Dark matter phenomena would otherwise be the strongest motivator for a new fundamental particle beyond the Standard Model. 

I am being lenient in not pressing some of the more involved arguments from the data and theoretical structure of fundamental physics that argue against the existence of many of these proposed beyond the Standard Model fundamental particles. 

This is so even though the LC&P hypothesis is beautiful, plausible, and quite close to the experimental data, so it would be great to be able to salvage it somehow if the top quark mass and Higgs boson mass end up being close to or lower than their current best fit estimated values.

Beyond The Standard Model Fundamental Fermion Candidates

If the missing fermion were a singlet, the LC&P relationship would imply an upper bound on its mass of about 3 GeV. 

This could be a good fit for a singlet sterile neutrino that gets its mass via the Higgs mechanism and then transfers its own mass to the three active neutrinos via a see-saw mechanism. 

It could also be a good fit for a singlet spin-3/2 gravitino in a supersymmetry inspired model in which only the graviton, and not the ordinary Standard Model fermions and bosons, has a superpartner.

A largely sterile singlet gravitino and a singlet sterile neutrino have both been proposed as cold dark matter candidates, and at masses under 3 GeV the bounds on their interaction cross-sections (which matter because purely sterile dark matter that interacts only via gravity isn't a good fit for the astronomy data, so some self-interaction or weak to feeble interaction with ordinary matter is needed) aren't as tightly constrained as those for heavier WIMP candidates. And, the constraints on a WIMP dark matter particle's interaction cross-section from direct dark matter detection experiments get even weaker fairly rapidly between 3 GeV and 1 GeV, which is the range that the LC&P conjecture would hint at if either the current best fit value for the top quark mass or the current best fit value for the Higgs boson mass were a bit light.

The LC&P conjecture isn't a useful hint for (or against) a massive graviton, however, because the experimental bounds on the graviton mass are roughly 32 orders of magnitude or more too small to be discernible by that means.

If there were three generations of missing fermions, you'd expect them to have masses about two-thirds higher than the charged lepton masses, with the most massive one still close to 3 GeV, the second generation one at about 176 MeV, and the first generation one at about 0.8 MeV. But these masses could be smaller if the best fit values for the top quark mass and/or the Higgs boson mass ended up rising somewhat as the uncertainties in those measurements fall.
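The scaled-lepton pattern above is simple arithmetic, and can be sketched as follows. The 5/3 scale factor ("about two-thirds higher") and the PDG charged lepton masses are illustrative inputs of mine, not figures from this post.

```python
import math

# Hypothetical masses for a missing fermion triplet, scaled up from the
# charged lepton masses by two-thirds (an assumption of this illustration).
charged_leptons_mev = {"electron": 0.511, "muon": 105.66, "tau": 1776.86}
scale = 5.0 / 3.0
triplet_mev = {k: v * scale for k, v in charged_leptons_mev.items()}

# Quadrature sum: three such particles consume about the same sum-of-squares
# mass budget as a single ~3 GeV singlet would.
quad_mev = math.sqrt(sum(v ** 2 for v in triplet_mev.values()))

for name, m in triplet_mev.items():
    print(f"{name}-like: {m:.1f} MeV")
print(f"quadrature sum: {quad_mev:.0f} MeV")  # just under 3 GeV
```

The quadrature sum is dominated by the heaviest member of the triplet, which is why a third-generation particle near 3 GeV is compatible with the same budget as a singlet.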

These masses for a missing fermion triplet might fit a leptoquark model of the kind that has been proposed to explain B meson decay anomalies. The experimental motivation for leptoquarks was stronger before the experimental evidence for violations of the Standard Model principle of charged lepton universality (i.e. that the three charged leptons have precisely the same properties except for their masses) evaporated in the face of improved data analysis of the seemingly anomalous experimental results from B meson decays. But there are still some lesser and subtle anomalies in B meson decays that didn't go away with this improved data analysis and that could motivate similar leptoquark models.

If there were two missing fermions of similar masses (or two missing fermion triplets with their sums of square masses dominated by the third-generation particle of each triplet), this would suggest missing fundamental particle masses of up to about 2 GeV each.

A model with two missing fermion triplets might make sense if there were two columns of missing leptoquarks, instead of one, just as there are two columns of quarks (up type and down type) and two columns of leptons (charged leptons and neutrinos), in the Standard Model.

Beyond The Standard Model Fundamental Boson Candidates

If the Standard Model fermions are a complete set, and the LC&P conjecture is correct, then we'd be looking for one or more beyond the Standard Model massive fundamental bosons with masses of less than 3 GeV for a singlet, less than about 2 GeV for two similar mass missing bosons, and less than about 1.73 GeV (a tad less than the tau lepton mass) for three similar mass missing bosons.
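The singlet, pair, and triplet bounds quoted above follow from dividing a fixed sum-of-squares mass budget equally among N particles. A sketch, taking the post's roughly 3 GeV singlet bound as given:

```python
import math

# Splitting a fixed sum-of-squares mass budget among N equal-mass particles:
# N * m^2 = budget^2  =>  m = budget / sqrt(N).
# The ~3 GeV budget is the singlet bound quoted in the text.
budget_gev = 3.0
bounds = {n: budget_gev / math.sqrt(n) for n in (1, 2, 3)}

for n, m in bounds.items():
    print(f"{n} equal-mass particle(s): up to {m:.2f} GeV each")
# N=2 gives about 2.1 GeV; N=3 gives about 1.73 GeV, a tad under the tau mass.
```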

Probably the most plausible well-explored beyond the Standard Model proposal for missing fundamental bosons in this mass range would be an additional electromagnetically neutral Higgs boson in a two Higgs doublet model: either a light scalar Higgs boson "h" (as opposed to a heavier Standard Model-like scalar Higgs boson "H"), or a pseudoscalar Higgs boson "A", or both. The problem with two Higgs doublet models, though, is that they give us four new massive fundamental bosons, even though we only need one to respond to the LC&P hint.

At least one of the two new electromagnetically neutral extra Higgs bosons in a two Higgs doublet model could be short lived and give rise to the neutrino masses, in lieu of existing see-saw models or Majorana mass models for neutrino mass. A mass in the range of perhaps 0.1-3 GeV, more or less, could serve this purpose, and we might expect a boson that gives rise to neutrino masses to be significantly lighter than the Standard Model Higgs boson that gives rise to the orders of magnitude larger quark and charged lepton masses.

Another of these extra Higgs bosons could be stable, or could be created and destroyed at precisely the same rates, have only weak or feeble interactions with Standard Model particles, and provide a bosonic dark matter candidate, which, in the face of indications that dark matter seems to be more wave-like than particle-like, might be a better fit to the astronomy data than a WIMP. If the dark matter candidate extra Higgs boson were less massive than the least massive neutrino mass eigenstate (i.e. probably less than 1 meV and perhaps much less), its stability could be explained by the fact that it couldn't decay into anything else, since there would be no massive particles lighter than it.

The biggest problem with using particles from a two Higgs doublet paradigm to reconcile the shortfall of fundamental particle masses suggested by the LC&P conjecture, however, is that a fairly vanilla two Higgs doublet model would imply a positively charged and a negatively charged Higgs boson (H+ and H-) of identical masses, something which can be pretty definitively ruled out at the masses of up to 1.73 GeV to 2 GeV that the LC&P conjecture could support.

The Particle Data Group notes that charged Higgs bosons have been ruled out for masses of less than 80 GeV (more like 155 GeV if you look at the underlying studies referenced through the year 2015) and for masses between the top quark mass and about 1,103 GeV (looking at studies through 2018). And a number of new results from the LHC since the Particle Data Group values were last updated make that exclusion even stronger. 

There is really no plausible way that particle physicists could have missed an electromagnetically charged fundamental particle (either a fermion or a boson) in the 0.1 GeV (about the mass of the muon and strange quark) to 3 GeV (between the tau lepton and charm quark on one hand and the bottom quark on the other) mass range suggested by the LC&P conjecture and current data, even one produced very infrequently in very rare processes, so long as it interacts non-gravitationally with any Standard Model particles at all. Particle collider detectors are exquisitely sensitive to electromagnetically charged particles in that mass range, no matter how short-lived they may be (and stable charged fundamental particles in that mass window would be found, if not in colliders, by other means).

Of course, as a theoretical physicist proposing beyond the Standard Model physics, you can propose any new particles that you like and need not constrain yourself to well-explored proposals like a two Higgs doublet model if you don't want to do so.

Given those constraints, a singlet, electromagnetically neutral, neutrino-mass-imparting boson analogous to the Higgs boson for other fundamental Standard Model particles, outside of the two Higgs doublet model, might be a better candidate for a new fundamental boson that fills out the missing fundamental particle mass suggested by the LC&P conjecture. The source of neutrino mass is still an unsolved problem, so that provides at least some motivation for it. 

If this fundamental boson were stable, or produced and destroyed at strictly identical rates, it could also be a dark matter candidate, solving two issues with one BSM particle.

The LC&P conjecture provides no hint for or against a 17 MeV missing fundamental boson, which has been proposed by a single experiment to explain some subtle decay angle anomalies in nuclear physics, because 17 MeV squared is only about 0.1% of the uncertainty in the sum of the squares of the fundamental particle masses, so the existence or non-existence of such a particle would be impossible to determine for the foreseeable future using the LC&P conjecture even if it were true. 

Beyond The Standard Model Fundamental Fermion And Boson Set Candidates

Self-interacting dark matter (SIDM) models generally propose one predominant stable, electromagnetically neutral, fermionic dark matter candidate (possibly part of a three generation fermion triplet with more massive but unstable second and third generation counterparts), and one unstable, electromagnetically neutral, bosonic "dark photon" that carries a self-interaction force between dark matter fermions.

Since the mass of an unstable boson is functionally related to the range of the force it carries, which can be estimated from the inferred dynamics of fermionic dark matter particles in SIDM models, we can estimate that the sweet spot in terms of mass for a dark photon that carries the self-interaction force between dark matter fermion particles is about 100 MeV.
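The mass-range relation invoked here is the standard Compton-wavelength rule of thumb for a Yukawa-type force: range ≈ ħc / (mc²). A minimal sketch (the function name is mine; the 100 MeV input is the mass quoted in the text):

```python
# Range of a force carried by a massive mediator, from the Compton
# wavelength: range ≈ ħc / (m c²), with ħc ≈ 197.327 MeV·fm.
HBAR_C_MEV_FM = 197.327

def yukawa_range_fm(mediator_mass_mev: float) -> float:
    """Approximate force range in femtometers for a mediator of given mass (MeV)."""
    return HBAR_C_MEV_FM / mediator_mass_mev

print(f"{yukawa_range_fm(100.0):.2f} fm")  # about 2 fm for a 100 MeV mediator
```

The heavier the mediator, the shorter the range of the force it carries; the observable consequence in SIDM fits is the velocity dependence of the self-interaction cross-section rather than the tiny absolute range itself.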

If both the dark matter fermion and the dark photon receive their masses via a Standard Model Higgs mechanism, then the LC&P conjecture suggests a dark matter fermion in the 0.1 GeV to 3 GeV range, with the exact mass determined by whatever the ultimately measured masses of the top quark and Higgs boson demand. And, if one had a dark matter fermion triplet, the third-generation unstable dark matter fermion could be at the high end of this mass range, the second-generation unstable dark matter fermion could be in the low hundreds of MeV in mass, and the lightest and stable dark matter fermion could have a mass as small as necessary to fit the astronomy data (e.g. in the keV warm dark matter range, or even in the axion-like very low mass dark matter particle range).

These four new missing fundamental particles could fit a self-interacting warm dark matter model, fill the LC&P conjecture mass gap, would have no non-Higgs portal interactions with Standard Model particles, and, as fairly light, electromagnetically neutral particles with no weak or strong force interactions that decay to other electromagnetically neutral particles with no weak or strong force interactions, could have escaped detection as a Higgs boson decay channel so far at particle colliders, manifesting merely as missing transverse momentum in Higgs boson decays at levels that can't yet be ruled out.

While neither warm dark matter models nor self-interacting dark matter models have proven very satisfactory in matching the astronomy data (although each of them does at least marginally better than collisionless cold dark matter models with GeV mass particles), perhaps combining both of these cold dark matter particle model variants would work better than a model with one rather than both of these features.

Wednesday, November 8, 2023

More On Wide Binaries, MOND and Deur

A new study concludes very emphatically that wide binary stars behave in a manner more like Newtonian dynamics and less like MOND. This seems pretty definitive, but we've seen contradictory conclusions on the wide binary data before, so I don't consider this to be the final word.

More importantly, it doesn't rule out all gravity based explanations of dark matter phenomena. In particular, Deur's gravitational self-interaction driven approach explains dark matter phenomena with a gravitational solution (with no dark matter or dark energy) over a very wide range of applicability and does not predict significantly non-Newtonian behavior in wide binaries.

Deur's approach may or may not be consistent with consideration of non-perturbative standard general relativistic effects, as claimed. But whether Deur is applying general relativity in an unconventional way that rigorously considers a feature of general relativity that most other researchers neglect, or is actually a subtle gravity modification theory that isn't quite identical to standard general relativity, it works. Deur's approach also roots MOND-like galaxy behavior, and really all dark matter and dark energy phenomena, in a theory that has a fairly simple deep theoretical motivation, rather than just being a phenomenological fit to some key data points. And, unlike LambdaCDM and other standard dark energy theories, Deur's approach does so without globally violating conservation of mass-energy.

Deur's approach explains dark matter phenomena not just where MOND works, but also in many circumstances where MOND does not work. 

For example, Deur's approach works in galaxy clusters, in the Bullet cluster, with respect to wide binary star systems, as an explanation for different degrees of dark matter phenomena in differently shaped elliptical galaxies, as an explanation for the two-dimensional arrangement of satellite galaxies around spiral galaxies, and as an explanation for dark energy phenomena as well. It even provides a potential explanation for the Hubble constant tension and can reproduce the cosmic microwave background observed by the Planck collaboration and the early galaxy formation observed by the Webb telescope.

If the conclusion of this paper holds up, it somewhat decisively tips the balance in favor of Deur's approach over other gravitational explanations of dark matter phenomena.
We test Milgromian dynamics (MOND) using wide binary stars (WBs) with separations of 2−30 kAU. Locally, the WB orbital velocity in MOND should exceed the Newtonian prediction by ≈20% at asymptotically large separations given the Galactic external field effect (EFE). 
We investigate this with a detailed statistical analysis of Gaia DR3 data on 8611 WBs within 250 pc of the Sun. Orbits are integrated in a rigorously calculated gravitational field that directly includes the EFE. We also allow line of sight contamination and undetected close binary companions to the stars in each WB. We interpolate between the Newtonian and Milgromian predictions using the parameter αgrav, with 0 indicating Newtonian gravity and 1 indicating MOND. 
Directly comparing the best Newtonian and Milgromian models reveals that Newtonian dynamics is preferred at 19σ confidence. Using a complementary Markov Chain Monte Carlo analysis, we find that αgrav = −0.021 (+0.065/−0.045), which is fully consistent with Newtonian gravity but excludes MOND at 16σ confidence. This is in line with the similar result of Pittordis and Sutherland using a somewhat different sample selection and less thoroughly explored population model. 
We show that although our best-fitting model does not fully reproduce the observations, an overwhelmingly strong preference for Newtonian gravity remains in a considerable range of variations to our analysis. 
Adapting the MOND interpolating function to explain this result would cause tension with rotation curve constraints. We discuss the broader implications of our results in light of other works, concluding that MOND must be substantially modified on small scales to account for local WBs.
Indranil Banik, "Strong constraints on the gravitational law from Gaia DR3 wide binaries" arXiv:2311.03436 (November 6, 2023).

In other dark matter news, fuzzy dark matter theories (in which dark matter is a very light boson with wave-like behavior) are compared to cold dark matter theories (with which all sorts of empirical evidence is inconsistent).

As I've noted often, but articulated less often, models with very light dark matter particles (especially bosonic ones) look a lot like quantum gravity theories with a graviton that has zero mass but non-zero mass-energy and self-interaction. Many axion-like dark matter theories, like fuzzy dark matter theories, lean towards this description. So, we are gradually starting to see dark matter theories converge on a mechanism that bears great similarities to a gravity based explanation of dark matter phenomena.

Finally, this paper is interesting:
The immense diversity of the galaxy population in the universe is believed to stem from their disparate merging histories, stochastic star formations, and multi-scale influences of filamentary environments. Any single initial condition of the early universe was never expected to explain alone how the galaxies formed and evolved to end up possessing such various traits as they have at the present epoch. However, several observational studies have revealed that the key physical properties of the observed galaxies in the local universe appeared to be regulated by one single factor, the identity of which has been shrouded in mystery up to date. 
Here, we report on our success of identifying the single regulating factor as the degree of misalignments between the initial tidal field and protogalaxy inertia momentum tensors. The spin parameters, formation epochs, stellar-to-total mass ratios, stellar ages, sizes, colors, metallicities and specific heat energies of the galaxies from the IllustrisTNG suite of hydrodynamic simulations are all found to be almost linearly and strongly dependent on this initial condition, when the differences in galaxy total mass, environmental density and shear are controlled to vanish. The cosmological predispositions, if properly identified, turns out to be much more impactful on galaxy evolution than conventionally thought.
Jun-Sung Moon, Jounghun Lee, "Why Galaxies are Indeed Simpler than Expected" arXiv:2311.03632 (November 7, 2023).

Monday, October 30, 2023

Scientific Conference Shenanigans

The Pre-Columbian Pacific Coast

Here's a map of more than 6,000 contact-era Native American villages on the West Coast that were recorded from written accounts or oral traditions. 

A Global Map Of The Last Glacial Maximum (And Dingos)

The Last Glacial Maximum land bridge in Southeast Asia (ca. 18,000-20,000 years ago) was not the source of dingos in Australia, although this land bridge may have facilitated the migration of modern humans who led to the extinction of relict archaic hominins to the west of the Wallace Line in what is now island Southeast Asia.

The Sahul Shelf and the Sunda Shelf during the past 12,000 years: Tasmania separated from the mainland 12,000 ybp, and New Guinea separated from the mainland 6,500–8,500 ybp.

It also turns out that recent discoveries have shown that the arrival of the dingo (Australia's native dog) in Australia may have been much more recent than previously estimated, since the oldest dingo remains in Australia were previously misdated. The current dates are consistent with an arrival of dingos in Australia via Austronesian mariners. Wikipedia explains that:
The earliest known dingo remains, found in Western Australia, date to 3,450 years ago. Based on a comparison of modern dingoes with these early remains, dingo morphology has not changed over thousands of years. This suggests that no artificial selection has been applied over this period and that the dingo represents an early form of dog. They have lived, bred, and undergone natural selection in the wild, isolated from other dogs until the arrival of European settlers, resulting in a unique breed.

In 2020, an mtDNA study of ancient dog remains from the Yellow River and Yangtze River basins of southern China showed that most of the ancient dogs fell within haplogroup A1b, as do the Australian dingoes and the pre-colonial dogs of the Pacific, but in low frequency in China today. The specimen from the Tianluoshan archaeological site, Zhejiang province dates to 7,000 YBP (years before present) and is basal to the entire haplogroup A1b lineage. The dogs belonging to this haplogroup were once widely distributed in southern China, then dispersed through Southeast Asia into New Guinea and Oceania, but were replaced in China by dogs of other lineages 2,000 YBP.

The oldest reliable date for dog remains found in mainland Southeast Asia is from Vietnam at 4,000 YBP, and in Island Southeast Asia from Timor-Leste at 3,000 YBP. In New Guinea, the earliest dog remains date to 2,500–2,300 YBP from Caution Bay near Port Moresby, but no ancient New Guinea singing dog remains have been found. The earliest dingo remains in the Torres Straits date to 2,100 YBP. 

The earliest dingo skeletal remains in Australia are estimated at 3,450 YBP from the Madura Caves on the Nullarbor Plain, south-eastern Western Australia; 3,320 YBP from Woombah Midden near Woombah, New South Wales; and 3,170 YBP from Fromme's Landing on the Murray River near Mannum, South Australia.
Dingo bone fragments were found in a rock shelter located at Mount Burr, South Australia, in a layer that was originally dated 7,000-8,500 YBP. Excavations later indicated that the levels had been disturbed, and the dingo remains "probably moved to an earlier level." 
The dating of these early Australian dingo fossils led to the widely held belief that dingoes first arrived in Australia 4,000 YBP and then took 500 years to disperse around the continent. However, the timing of these skeletal remains was based on the dating of the sediments in which they were discovered, and not the specimens themselves.

In 2018, the oldest skeletal bones from the Madura Caves were directly carbon dated between 3,348 and 3,081 YBP, providing firm evidence of the earliest dingo and that dingoes arrived later than had previously been proposed. The next-most reliable timing is based on desiccated flesh dated 2,200 YBP from Thylacine Hole, 110 km west of Eucla on the Nullarbor Plain, southeastern Western Australia. When dingoes first arrived, they would have been taken up by indigenous Australians, who then provided a network for their swift transfer around the continent. Based on the recorded distribution time for dogs across Tasmania and cats across Australia once indigenous Australians had acquired them, the dispersal of dingoes from their point of landing until they occupied continental Australia is proposed to have taken only 70 years. The red fox is estimated to have dispersed across the continent in only 60–80 years.

At the end of the last glacial maximum and the associated rise in sea levels, Tasmania became separated from the Australian mainland 12,000 YBP, and New Guinea 6,500–8,500 YBP by the inundation of the Sahul Shelf. Fossil remains in Australia date to around 3,500 YBP and no dingo remains have been uncovered in Tasmania, so the dingo is estimated to have arrived in Australia at a time between 3,500 and 12,000 YBP. To reach Australia through Island Southeast Asia even at the lowest sea level of the last glacial maximum, a journey of at least 50 kilometres (31 mi) over open sea between ancient Sunda and Sahul was necessary, so they must have accompanied humans on boats.

Some best estimates of Austronesian migration are as follows:

Suggested early migration route of early Austronesians into and out of Taiwan based on ancient and modern mtDNA data. This hypothesis assumes the Sino-Austronesian grouping, a minority view among linguists. (Ko et al., 2014).

Map showing the migration of the Austronesians from Taiwan. Indonesia is reached ca. 3500 years BP, and Papua New Guinea is reached ca. 3300 years BP.

There is no direct evidence of the involvement of Austronesian mariners in bringing dingos to Australia. But they were the only seafaring people in the region at the time who could have made the trip beyond the line of sight over deep waters, they were engaged in seafaring in the region at just about the right time, they had ties to southern China where dingos probably originated, and dingos pretty much had to have arrived in Australia with people by boat, as opposed to without human intervention. 

The timing and location of the earliest Australian dingo remains also suggest an introduction from someplace in Indonesia to someplace west of Cape York in Australia, rather than from Papua New Guinea to Cape York, which would have involved the shortest overwater journey. This was a trip well within the maritime capabilities of the Austronesians of 3500 BP to 3300 BP.