Thursday, November 30, 2023

Wide Binaries Are Basically Newtonian in Moffat's MOG Theory

Like Deur's approach, Moffat's modified gravity theory does not predict discernible non-Newtonian behavior in wide binary stars.
Wide binary stars are used to test the modified gravity called Scalar-Tensor-Vector Gravity or MOG. This theory is based on the additional gravitational degrees of freedom, the scalar field G = G_N(1 + α), where G_N is Newton's constant, and the massive (spin-1 graviton) vector field φ_μ. The wide binaries have separations of 2-30 kAU. The MOG acceleration law, derived from the MOG field equations and equations of motion of a massive test particle for weak gravitational fields, depends on the enhanced gravitational constant G = G_N(1 + α) and the effective running mass μ. The magnitude of α depends on the physical length scale or averaging scale ℓ of the system. The modified MOG acceleration law for weak gravitational fields predicts that for the solar system and for the wide binary star systems gravitational dynamics follows Newton's law.
John W. Moffat, "Wide Binaries and Modified Gravity (MOG)" arXiv:2311.17130 (November 28, 2023).

Monday, November 20, 2023

A Newly Discovered Milky Way Satellite Star Cluster

Calling it a satellite galaxy may be something of an overstatement; it is really just a satellite of the Milky Way, a gravitationally bound star cluster with a mass of 11 to 22 times that of the Sun.

Simon E. T. Smith, et al., "The discovery of the faintest known Milky Way satellite using UNIONS" arXiv:2311.10147 (November 16, 2023).

Wednesday, November 15, 2023

Are Sunspots Driven By The Gravitational Pull Of The Planets?

If correct, this hypothesis would be a major paradigm change in our understanding of how the Sun works, although the seeming lack of influence from any of the other planets with significant gravitational pulls on the Sun, relative to the Earth and Jupiter, is suspicious.

The average strength of the gravitational pull of the planets on the Sun, normalized so that Earth's pull on the Sun is equal to one, to three significant digits, is as follows:

* Jupiter 11.7
* Venus 1.56
* Saturn 1.04
* Mercury 0.369
* Mars 0.0463
* Uranus 0.0396
* Neptune 0.0188
* Pluto 0.00000140
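As a sanity check, these ratios can be reproduced from the fact that the average pull scales as M/r². The masses (in Earth masses) and mean orbital radii (in AU) below are conventional reference values of my choosing, not from the post, so the last digit can differ slightly from the list above:

```python
# Relative average gravitational pull of each planet on the Sun,
# F ∝ M / r^2, normalized so that Earth's pull equals 1.
# (Mass in Earth masses, mean orbital radius in AU; reference values.)
planets = {
    "Jupiter": (317.8, 5.204),
    "Venus":   (0.815, 0.723),
    "Saturn":  (95.16, 9.583),
    "Mercury": (0.0553, 0.387),
    "Mars":    (0.107, 1.524),
    "Uranus":  (14.54, 19.19),
    "Neptune": (17.15, 30.07),
    "Pluto":   (0.00218, 39.48),
}

def relative_pull(mass_earths, radius_au):
    """Pull on the Sun relative to Earth's (M_Earth = 1, r_Earth = 1 AU)."""
    return mass_earths / radius_au**2

for name, (m, r) in planets.items():
    print(f"{name:8s} {relative_pull(m, r):.3g}")
```

Running this reproduces the ordering and magnitudes above (Jupiter ≈ 11.7, Venus ≈ 1.56, Saturn ≈ 1.04, and so on down to Pluto ≈ 1.4 × 10⁻⁶).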

Given this, one would expect Venus, Saturn, and Mercury's orbits to have non-negligible effects as well. 

Venus has an almost perfectly circular orbit, so that might explain a lack of a sunspot cycle effect from its gravitational pull, but this is not true of Mercury or Saturn. 

Mercury's 88 day period and strongly elliptical orbit ought to be quite measurable in the data under this hypothesis as well, even if its small mass reduces the magnitude of its impact. 

Saturn's 29.4 year, moderately elliptical orbit also ought to be discernible, but its period is long enough that the small number of Saturn orbits in a three-hundred-year-old data set, whose quality declines in the older data, could reduce the statistical significance of this signal.
The sunspot number record covers over three centuries. These numbers measure the activity of the Sun. This activity follows the solar cycle of about eleven years. 
In the dynamo-theory, the interaction between differential rotation and convection produces the solar magnetic field. On the surface of Sun, this field concentrates to the sunspots. The dynamo-theory predicts that the period, the amplitude and the phase of the solar cycle are stochastic. 
Here we show that the solar cycle is deterministic, and connected to the orbital motions of the Earth and Jupiter. This planetary-influence theory allows us to model the whole sunspot record, as well as the near past and the near future of sunspot numbers. We may never be able to predict the exact times of exceptionally strong solar flares, like the catastrophic Carrington event in September 1859, but we can estimate when such events are more probable. Our results also indicate that during the next decades the Sun will no longer help us to cope with the climate change. The inability to find predictability in some phenomenon does not prove that this phenomenon itself is stochastic.
Lauri Jetsu, "Sunspot cycles are connected to the Earth and Jupiter" arXiv:2311.08317 (November 14, 2023).

A 2022 paper includes Venus as well. There are also a 1975 paper and a 2022 paper purporting to rule out this relationship, and a 2022 rebuttal to the latter.

Skimming the literature, it does seem that more accurate modeling of the shape of planetary orbits, the actual locations of planets on those orbits, and the inclusion of more planets produces reasonably good fits to the sunspot data, although the result isn't conclusive.

Friday, November 10, 2023

A Nifty New Telescope

The latest Earth-based telescope, the ILMT, is pretty cool.

Nestled in the mountains of Northern India is a 4-metre rotating dish of liquid mercury. Over a 10-year period, the International Liquid Mirror Telescope (ILMT) will survey 117 square degrees of sky, to study the astrometric and photometric variability of all detected objects. . . .
Baldeep Grewal, et al., "Survey of Variables with the ILMT" arXiv:2311.05620 (November 8, 2023). 

There is another description of the new telescope at arXiv:2311.05615 which explains the benefits of using a liquid mirror:
A perfect reflective paraboloid represents the ideal reference surface for an optical device to focus a beam of parallel light rays to a single point. This is how astronomical mirrors form images of distant stars in their focal plane. In this context, it is amazing that the surface of a liquid rotating around a vertical axis takes the shape of a paraboloid under the constant pull of gravity and centrifugal acceleration, the latter growing stronger at distances further from the central axis. The parabolic surface occurs because a liquid always sets its surface perpendicular to the net acceleration it experiences, which in this case is increasingly tilted and enhanced with distance from the central axis. The focal length F is proportional to the gravity acceleration g and inversely proportional to the square of the angular velocity ω. In the case of the ILMT, the angular velocity ω is about 8 turns per minute, resulting in a focal length of about 8m. Given the action of the optical corrector, the effective focal length f of the D=4m telescope is about 9.44m, resulting in the widely open ratio f/D∼2.4. In the case of the ILMT, a thin rotating layer of mercury naturally focuses the light from a distant star at its focal point located at ∼8m just above the mirror, with the natural constraint that such a telescope always observes at the zenith. 

Thanks to the rotation of the Earth, the telescope scans a strip of sky centred at a declination equal to the latitude of the observatory (+29◦21′41.4′′ for the ARIES Devasthal observatory). The angular width of the strip is about 22′, a size limited by that of the detector (4k×4k) used in the focal plane of the telescope. Since the ILMT observes the same region of the sky night after night, it is possible either to co-add the images taken on different nights in order to improve the limiting magnitude or to subtract images taken on different nights to make a variability study of the corresponding strip of sky. Consequently, the ILMT is very well-suited to perform variability studies of the strip of sky it observes. While the ILMT mirror is rotating, the linear speed at its rim is about 5.6km/hr, i.e., the speed of a walking person.
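The focal-length relation described in the quote, F = g/(2ω²), is easy to check numerically. A minimal sketch (assuming g ≈ 9.81 m/s² and working backwards from the quoted 8 m focal length) recovers both the "about 8 turns per minute" rotation rate and the ~5.6 km/h rim speed:

```python
import math

g = 9.81   # m/s^2, gravitational acceleration (assumed value)
F = 8.0    # m, focal length of the mercury mirror quoted above
D = 4.0    # m, mirror diameter

# A liquid rotating at angular velocity w takes the paraboloid shape
# z = w^2 r^2 / (2 g), whose focal length is F = g / (2 w^2).
w = math.sqrt(g / (2 * F))           # rad/s needed for an 8 m focal length

rpm = w * 60 / (2 * math.pi)         # turns per minute
rim_speed_kmh = w * (D / 2) * 3.6    # linear speed at the mirror's rim

print(f"rotation: {rpm:.1f} rpm")              # 7.5 rpm, i.e. "about 8 turns per minute"
print(f"rim speed: {rim_speed_kmh:.1f} km/h")  # 5.6 km/h, the quoted walking pace
```

The two quoted figures are thus mutually consistent: a ~7.5 rpm spin gives both the 8 m focal length and the walking-pace rim speed.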

A liquid mirror naturally flows to the precise paraboloid shape needed because it is a liquid under these conditions. And, since its surface is continuously readjusting itself to this shape, rather than being fixed in place just once as a solid mirror's is, any slight imperfections in its surface don't stay in exactly the same place. Instead, distortions from slight imperfections in the shape of the liquid mirror average out over multiple observations of the same part of the sky, to an average shape at any one location that is much closer to perfect than a solid mirror cast ultra-precisely just once. Thus, a liquid mirror reduces a subtle source of potential systematic error that arises, with a solid mirror, from slight imperfections in the mirror's shape at particular locations that recur every time a particular part of the sky is viewed.


The inauguration of the 4m International Liquid Mirror Telescope (ILMT) took place in Devasthal, Uttarakhand, India on March 21, 2023. The observatory is situated in the Kumaon Himalayas at an altitude of 2450 meters (8,038 feet). 

The coordinates are 29°21′42″N 79°41′06″E, which matters in astronomy because the latitude determines which part of the sky the telescope can see.

A New Top Quark Pole Mass Analysis And A Few Wild Conjectures Considered

A new paper re-analyzes existing data to determine the top quark pole mass. It comes up with:

which is consistent with, but at the low end of, the range of the Particle Data Group's estimate based upon indirect cross-section measurements (bringing these to the same level of precision as its direct measurements of the top quark mass):


Recent previous coverage at this blog regarding new top quark mass measurements can be found in a February 7, 2023 post, a November 4, 2022 post and a September 1, 2022 post.

Why Care?

Since this new paper produces cross-section based estimates of the top quark pole mass that closely confirm direct measurements with comparable precision, and the two methods are substantially independent of each other, our confidence in each of these measurements is greater and the best fit values are more robust. It also makes it appear more legitimate and appropriate to statistically combine results from the two methods into a single global best fit measurement, incorporating all measurement methods, with a lower combined uncertainty.

In relative terms, the top quark pole mass is already one of the more precisely known physical constants in the Standard Model. It is more precisely determined in relative terms than the other five quark masses, the three neutrino masses, the four CKM matrix parameters, the four PMNS matrix parameters, and the strong force coupling constant. But, it is less precisely determined in relative terms than the three charged lepton masses, the three fundamental boson masses, the electromagnetic and weak force coupling constants, and Newton's constant.

But, because it is the largest mass that is a fundamental constant of the Standard Model, the magnitude of the uncertainty in the top quark pole mass is huge in absolute terms. At the Particle Data Group's cross-section measurement uncertainty, it is about five times the mass of the pion, and more than the masses of eight of the eleven other fundamental fermions in the Standard Model. Only the bottom quark, the charm quark, and the tau lepton have masses greater than the absolute magnitude of the uncertainty alone in the top quark mass from cross-section measurements.

Also, because the top quark mass is so large, precision in this measurement is important for higher loop adjustments to all sorts of calculations in the Standard Model. 

Relevance To Theory Building

Precision in the top quark mass is also critical for assessing global properties of the Standard Model like a proposed LC&P relationship between the Higgs field vacuum expectation value and the fundamental particle masses in the Standard Model:
The global LC&P relationship (i.e. that the sum of the squares of the fundamental SM particle masses is equal to the square of the Higgs vev, which is equivalent to saying that the sum of the SM particle Yukawas is exactly 1), which is about 0.5% less than the predicted value (2 sigma in top quark and Higgs boson mass, implying a theoretically expected top quark mass of 173,360 MeV and a theoretically expected Higgs boson mass of 125,590 MeV if the adjustments were proportioned to the experimental uncertainty in the LC&P relationship given the current global average measurements, in which about 70% of the uncertainty is from the top quark mass uncertainty and about 29% is from the Higgs boson mass uncertainty). . . . 

But, the LC&P relationship does not hold separately for fermions and bosons, which would be analogous in some ways to supersymmetry. This is only an approximate symmetry. The sum of the squares of the SM fundamental boson masses exceeds half of the square of the Higgs vev by about 0.5% (3.5 sigma of Higgs boson mass, implying a theoretically expected Higgs boson mass of about 124,650 MeV), while the sum of the squares of the SM fundamental fermion masses falls short of half of the square of the Higgs vev by about 1.5% (about 2.8 sigma of top quark mass, implying a theoretically expected top quark mass of about 173,610 MeV). The combined deviation from the LC&P relationship for fermions and bosons taken separately is 4.5 sigma, a very strong tension that nearly conclusively rules out the separate version of the relationship. One wonders if the slightly boson-leaning deviation from this symmetry between fundamental fermion masses and fundamental boson masses has some deeper meaning or source.
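The LC&P arithmetic is easy to verify. Using rough current world-average masses in GeV (my values, not the post's; neutrino masses are negligible at this precision):

```python
# Check the LC&P relationship numerically: is the sum of the squares of the
# Standard Model fundamental particle masses equal to the square of the
# Higgs vev (v ≈ 246.22 GeV)? Masses in GeV, rough world averages.
fermions = {
    "t": 172.69, "b": 4.18, "c": 1.27, "s": 0.0934, "u": 0.00216, "d": 0.00467,
    "tau": 1.77686, "mu": 0.10566, "e": 0.000511,
}
bosons = {"H": 125.25, "Z": 91.1876, "W": 80.377}
vev = 246.22

f2 = sum(m**2 for m in fermions.values())  # sum of squared fermion masses
b2 = sum(m**2 for m in bosons.values())    # sum of squared boson masses

print(f"total / v^2:          {(f2 + b2) / vev**2:.4f}")  # ~0.995, i.e. ~0.5% short
print(f"bosons / (v^2 / 2):   {b2 / (vev**2 / 2):.4f}")   # ~1.005, ~0.5% over
print(f"fermions / (v^2 / 2): {f2 / (vev**2 / 2):.4f}")   # ~0.985, ~1.5% short
```

This reproduces the three deviations quoted above: the global relationship falls about 0.5% short, the bosons alone run about 0.5% over half of v², and the fermions alone run about 1.5% under.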

The result in this new paper, by largely corroborating direct measurements of the top quark mass at similar levels of precision, continues to favor a lower top quark mass than the LC&P expectation. 

Earlier, lower energy Tevatron measurements of the top quark mass (where the top quark was discovered) supported higher values for the top quark mass than the combined data from either of the Large Hadron Collider (LHC) experiments do, and were closely in line with the LC&P expectation. 

But there is no good reason to think that both of the LHC experiments measuring the top quark mass have greatly understated the systematic uncertainties in their measurements (which combine measurements from multiple channels with overlapping but not identical sources of systematic uncertainty). Certainly, the LHC experiments have much larger numbers of top quark events to work with than the two Tevatron experiments measuring the top quark mass did, so the relatively low statistical uncertainties of the LHC measurements of the top quark mass undeniably make them more precise than the Tevatron measurements.

Could This Hint At Missing Fundamental Particles?

If I were a phenomenologist prone to proposing new particles, I'd say that this close, but not quite right, fit to the LC&P hypothesis was a theoretical hint that the Standard Model might be missing one or more fundamental particles, probably fermions (which deviate most strongly from the expected values).

I'll explore some of those possibilities, because readers might find them interesting. But, to be clear, I am not prone to proposing new particles; indeed, my inclinations are quite the opposite. I don't actually think that any of these proposals is very well motivated.

In part, this is because I think that dark matter phenomena and dark energy phenomena are very likely to be gravitational issues rather than due to dark matter particles or dark energy bosonic scalar fields, so I don't think we need new particles to serve that purpose. Dark matter phenomena would otherwise be the strongest motivator for a new fundamental particle beyond the Standard Model. 

I am being lenient in not pressing some of the more involved arguments from the data and theoretical structure of fundamental physics that argue against the existence of many of these proposed beyond the Standard Model fundamental particles. 

This is so even though the LC&P hypothesis is beautiful, plausible, and quite close to the experimental data, so it would be great to be able to salvage it somehow if the top quark mass and Higgs boson mass end up close to or lower than their current best fit values.

Beyond The Standard Model Fundamental Fermion Candidates

If the missing fermion were a singlet, the LC&P relationship would imply an upper bound on its mass of about 3 GeV. 

This could be a good fit for a singlet sterile neutrino that gets its mass via the Higgs mechanism and then transfers its own mass to the three active neutrinos via a see-saw mechanism. 

It could also be a good fit for a singlet spin-3/2 gravitino in a supersymmetry inspired model in which only the graviton, and not the ordinary Standard Model fermions and bosons, has a superpartner. 

A largely sterile singlet gravitino and a singlet sterile neutrino have both been proposed as cold dark matter candidates. At masses under 3 GeV, the bounds on their cross-sections of interaction (which matter because purely sterile dark matter that interacts only via gravity isn't a good fit to the astronomy data, so some self-interaction or weak to feeble interaction with ordinary matter is needed) aren't as tightly constrained as those for heavier WIMP candidates. And, the constraints on a WIMP dark matter particle's cross-section of interaction from direct dark matter detection experiments weaken fairly rapidly between 3 GeV and 1 GeV, which is where the LC&P conjecture would point if either the current best fit value for the top quark mass or for the Higgs boson mass were a bit light.

The LC&P conjecture isn't a useful hint towards (or against) a massive graviton, however, because the experimental bounds on the graviton's mass are some 32 or more orders of magnitude too small to be discernible by that means.

If there were three generations of missing fermions, you'd expect them to have masses about two-thirds higher than the charged lepton masses, with the most massive one still close to 3 GeV, the second generation one at about 176 MeV, and the first generation one at about 0.8 MeV. But these masses could be smaller if the best fit values for the top quark mass and/or Higgs boson mass end up rising somewhat as the uncertainties in those measurements fall.
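That "two-thirds higher" scaling can be checked directly against the charged lepton masses (standard values in MeV; the 5/3 scale factor is my reading of the text):

```python
# Hypothetical missing fermion triplet whose masses track the charged
# lepton masses scaled up by two-thirds (x 5/3), as described above.
charged_leptons_mev = {"e": 0.511, "mu": 105.66, "tau": 1776.86}

scale = 5 / 3  # "about two-thirds higher"
for name, m in charged_leptons_mev.items():
    print(f"{name}-analogue: {m * scale:.4g} MeV")
# tau-analogue ≈ 2961 MeV (close to 3 GeV), mu-analogue ≈ 176 MeV,
# e-analogue ≈ 0.85 MeV (the post rounds this to ~0.8 MeV)
```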

These masses for a missing fermion triplet might fit a leptoquark model of the kind that has been proposed to explain B meson decay anomalies. The experimental motivation for leptoquarks was stronger before the data supporting violations of the Standard Model principle of charged lepton universality (i.e. that the three charged leptons have precisely the same properties except for their masses) evaporated in the face of improved analysis of the seemingly anomalous B meson decay results. But there are still some lesser and subtle anomalies in B meson decays that did not go away with the improved analysis and that could motivate similar leptoquark models.

If there were two missing fermions of similar masses (or two missing fermion triplets with their sum of square masses dominated by the third-generation particle of each triplet), this would suggest a missing fundamental particle mass on the order of up to about 2 GeV each.

A model with two missing fermion triplets might make sense if there were two columns of missing leptoquarks, instead of one, just as there are two columns of quarks (up type and down type) and two columns of leptons (charged leptons and neutrinos), in the Standard Model.

Beyond The Standard Model Fundamental Boson Candidates

If the Standard Model fermions are a complete set, and the LC&P conjecture is correct, then we'd be looking for one or more beyond the Standard Model massive fundamental bosons with a mass of less than 3 GeV for a singlet, less than about 2 GeV each for two similar mass missing bosons, and less than about 1.73 GeV (a tad less than the tau lepton mass) each for three similar mass missing bosons.
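These per-particle bounds follow from splitting a fixed sum of squared masses among N degenerate particles, giving 3/√N GeV each. A quick sketch:

```python
import math

# If the LC&P mass deficit corresponds to a single ~3 GeV particle, then
# N degenerate missing particles sharing the same sum of squared masses
# would each weigh 3 / sqrt(N) GeV, matching the ranges quoted above.
budget_gev = 3.0  # upper bound on the singlet mass from the text

for n in (1, 2, 3):
    print(f"{n} similar-mass particle(s): up to {budget_gev / math.sqrt(n):.2f} GeV each")
# 1 -> 3.00 GeV, 2 -> 2.12 GeV, 3 -> 1.73 GeV (just under the tau mass)
```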

Probably the most plausible well-explored beyond the Standard Model proposal for missing fundamental bosons in this mass range would be an additional electromagnetically neutral Higgs boson in a two Higgs doublet model: either a scalar light Higgs boson "h" (as opposed to a heavier Standard Model-like scalar Higgs boson "H"), or a pseudoscalar Higgs boson "A", or both. The problem with a two Higgs doublet model, though, is that it gives us four new massive fundamental bosons, even though we only need one to respond to the LC&P hint.

At least one of the two new electromagnetically neutral extra Higgs bosons in a two Higgs doublet model could be short lived and give rise to the neutrino masses, in lieu of existing see-saw models or Majorana mass models for neutrino mass. A mass in the range of perhaps 0.1-3 GeV, more or less, could serve this purpose, and we might expect a boson that gives rise to neutrino masses to be significantly lighter than the Standard Model Higgs boson that gives rise to the quark and charged lepton masses, which are orders of magnitude larger.

Another of these extra Higgs bosons could be stable (or created and destroyed at precisely the same rates), have only weak or feeble interactions with Standard Model particles, and could provide a bosonic dark matter candidate, which, in the face of indications that dark matter seems to be more wave-like than particle-like, might be a better fit to the astronomy data than a WIMP. If this dark matter candidate extra Higgs boson were less massive than the least massive neutrino mass eigenstate (i.e. probably less than 1 meV, and perhaps much less), its stability could be explained by the fact that it couldn't decay into anything else, since there would be no massive particles less massive than it.

The biggest problem with using particles from a two Higgs doublet paradigm to fill the shortfall of fundamental particle masses suggested by the LC&P conjecture, however, is that a fairly vanilla two Higgs doublet model would also imply a positively charged and a negatively charged Higgs boson (H+ and H-) of identical masses, something which can be pretty definitively ruled out at the masses of up to 1.73 GeV to 2 GeV that the LC&P conjecture could support.

The Particle Data Group notes that charged Higgs bosons have been ruled out for masses of less than 80 GeV (more like 155 GeV if you look at the underlying studies referenced through the year 2015) and for masses between the top quark mass and about 1,103 GeV (looking at studies through 2018). And a number of new results from the LHC since the Particle Data Group values were last updated make that exclusion even stronger. 

There is really no plausible way that particle physicists could have missed an electromagnetically charged fundamental particle (either a fermion or a boson) in the 0.1 GeV (about the mass of the muon and strange quark) to 3 GeV (between the tau lepton and charm quark on one hand and the bottom quark on the other) mass range suggested by the LC&P conjecture and current data, even if it were produced very infrequently in very rare processes, so long as it interacts at all with any Standard Model particles non-gravitationally. Particle detectors at colliders are exquisitely sensitive to electromagnetically charged particles in that mass range, no matter how short-lived they may be (and stable charged fundamental particles in that mass window would be found, if not at colliders, by other means).

Of course, as a theoretical physicist proposing beyond the Standard Model physics, you can propose any new particles that you like, and you need not constrain yourself to well-explored proposals like the two Higgs doublet model if you don't want to do so.

Given those constraints, a singlet, electromagnetically neutral, neutrino mass imparting boson, analogous to the Higgs boson for other fundamental Standard Model particles but outside of the two Higgs doublet model, might be a better candidate for a new fundamental boson that fills out the missing fundamental particle mass suggested by the LC&P conjecture. The source of neutrino mass is still an unsolved problem, which provides at least some motivation for it. 

If this fundamental boson were stable, or produced and destroyed at strictly identical rates, it could also be a dark matter candidate, solving two issues with one BSM particle.

The LC&P conjecture provides no hint for or against a 17 MeV missing fundamental boson, which a single experiment has proposed to explain some subtle decay angle anomalies in nuclear physics, because 17 MeV squared is only about 0.1% of the uncertainty in the sum of the squares of the fundamental particle masses, so the existence or non-existence of such a particle would be impossible to determine for the foreseeable future using the LC&P conjecture even if it were true. 

Beyond The Standard Model Fundamental Fermion And Boson Set Candidates

Self-interacting dark matter (SIDM) models generally propose one predominant stable, electromagnetically neutral, fermionic dark matter candidate (possibly part of a three generation fermion triplet with more massive but unstable second and third generation counterparts), and one unstable, electromagnetically neutral, bosonic "dark photon" that carries a self-interaction force between dark matter fermions.

Since the mass of an unstable boson is functionally related to the range of the force it carries, which can be estimated from the inferred dynamics of fermionic dark matter particles in SIDM models, we can estimate that the sweet spot in terms of mass for a dark photon that carries the self-interaction force between dark matter fermion particles is about 100 MeV.

If both the dark matter fermion and the dark photon receive their masses via a Standard Model Higgs mechanism, then a dark matter fermion in the 0.1 GeV to 3 GeV range is suggested by the LC&P conjecture, with the exact mass ultimately determined by whatever the masses of the top quark and Higgs boson demand. And, if one had a dark matter fermion triplet, the third-generation unstable dark matter fermion could be at the high end of this mass range, the second-generation unstable dark matter fermion could be in the low hundreds of MeV in mass, and the lightest, stable dark matter fermion could have a mass as small as necessary to fit the astronomy data (e.g. in the keV warm dark matter range, or even in the axion-like very low mass dark matter particle range).

These four new missing fundamental particles could fit a self-interacting warm dark matter model, fill the LC&P conjecture's mass gap, would have no non-Higgs portal interactions with Standard Model particles, and, as fairly light, electromagnetically neutral particles with no weak or strong force interactions that decay to other electromagnetically neutral particles with no weak or strong force interactions, could have escaped detection as a Higgs boson decay channel so far at particle colliders, manifesting merely as missing transverse momentum in Higgs boson decays at levels that can't be ruled out yet.

While neither warm dark matter models nor self-interacting dark matter models have proven very satisfactory in matching the astronomy data (although each of them does at least marginally better than collisionless cold dark matter models with GeV mass particles), perhaps combining both of these cold dark matter particle model variants would work better than a model with just one of these features.

Wednesday, November 8, 2023

More On Wide Binaries, MOND and Deur

A new study concludes very emphatically that wide binary stars behave in a manner more like Newtonian dynamics and less like MOND. This seems pretty definitive, but we've seen contradictory conclusions on the wide binary data before, so I don't consider this to be the final word.

More importantly, it doesn't rule out all gravity based explanations of dark matter phenomena. In particular, Deur's gravitational self-interaction driven approach explains dark matter phenomena with a gravitational solution (with no dark matter or dark energy) over a very wide range of applicability and does not predict significantly non-Newtonian behavior in wide binaries.

Deur's approach may or may not be consistent, as claimed, with consideration of non-perturbative standard general relativistic effects. But whether Deur is applying general relativity in an unconventional way that rigorously considers a feature of general relativity that most other researchers neglect, or is actually advancing a subtle gravity modification that isn't quite identical to standard general relativity, it works. Deur's approach also roots MOND-like galaxy behavior, and really all dark matter and dark energy phenomena, in a theory with a fairly simple, deep theoretical motivation, rather than being just a phenomenological fit to some key data points. And, unlike LambdaCDM and other standard dark energy theories, Deur's approach does so without globally violating conservation of mass-energy.

Deur's approach explains dark matter phenomena not just where MOND works, but also in many circumstances where MOND does not work. 

For example, Deur's approach works in galaxy clusters, in the Bullet cluster, with respect to wide binary star systems, as an explanation for different degrees of dark matter phenomena in differently shaped elliptical galaxies, as an explanation for the two-dimensional arrangement of satellite galaxies around spiral galaxies, and as an explanation for dark energy phenomena as well. It even provides a potential explanation for the Hubble constant tension and can reproduce the cosmic microwave background observed by the Planck collaboration and the early galaxy formation observed by the Webb telescope.

If the conclusion of this paper holds up, it somewhat decisively tips the balance toward Deur's approach over other gravitational explanations of dark matter phenomena.
We test Milgromian dynamics (MOND) using wide binary stars (WBs) with separations of 2−30 kAU. Locally, the WB orbital velocity in MOND should exceed the Newtonian prediction by ≈20% at asymptotically large separations given the Galactic external field effect (EFE). 
We investigate this with a detailed statistical analysis of Gaia DR3 data on 8611 WBs within 250 pc of the Sun. Orbits are integrated in a rigorously calculated gravitational field that directly includes the EFE. We also allow line of sight contamination and undetected close binary companions to the stars in each WB. We interpolate between the Newtonian and Milgromian predictions using the parameter αgrav, with 0 indicating Newtonian gravity and 1 indicating MOND. 
Directly comparing the best Newtonian and Milgromian models reveals that Newtonian dynamics is preferred at 19σ confidence. Using a complementary Markov Chain Monte Carlo analysis, we find that αgrav = −0.021 (+0.065/−0.045), which is fully consistent with Newtonian gravity but excludes MOND at 16σ confidence. This is in line with the similar result of Pittordis and Sutherland using a somewhat different sample selection and less thoroughly explored population model. 
We show that although our best-fitting model does not fully reproduce the observations, an overwhelmingly strong preference for Newtonian gravity remains in a considerable range of variations to our analysis. 
Adapting the MOND interpolating function to explain this result would cause tension with rotation curve constraints. We discuss the broader implications of our results in light of other works, concluding that MOND must be substantially modified on small scales to account for local WBs.
Indranil Banik, "Strong constraints on the gravitational law from Gaia DR3 wide binaries" arXiv:2311.03436 (November 6, 2023).

In other dark matter news, fuzzy dark matter theories (a very light boson with wave-like behavior dark matter theory) are compared to cold dark matter theories (with which all sorts of empirical evidence is inconsistent).

As I've noted often, but articulated less often, models with very light dark matter particles (especially bosonic ones) look a lot like quantum gravity theories with a graviton that has zero mass but non-zero mass-energy and self-interaction. Many axion-like dark matter theories, like fuzzy dark matter theories, lean towards this description. So, we are gradually starting to see dark matter theories converge on a mechanism that bears great similarities to a gravity based explanation of dark matter phenomena.

Finally, this paper is interesting:
The immense diversity of the galaxy population in the universe is believed to stem from their disparate merging histories, stochastic star formations, and multi-scale influences of filamentary environments. Any single initial condition of the early universe was never expected to explain alone how the galaxies formed and evolved to end up possessing such various traits as they have at the present epoch. However, several observational studies have revealed that the key physical properties of the observed galaxies in the local universe appeared to be regulated by one single factor, the identity of which has been shrouded in mystery up to date. 
Here, we report on our success of identifying the single regulating factor as the degree of misalignments between the initial tidal field and protogalaxy inertia momentum tensors. The spin parameters, formation epochs, stellar-to-total mass ratios, stellar ages, sizes, colors, metallicities and specific heat energies of the galaxies from the IllustrisTNG suite of hydrodynamic simulations are all found to be almost linearly and strongly dependent on this initial condition, when the differences in galaxy total mass, environmental density and shear are controlled to vanish. The cosmological predispositions, if properly identified, turns out to be much more impactful on galaxy evolution than conventionally thought.
Jun-Sung Moon, Jounghun Lee, "Why Galaxies are Indeed Simpler than Expected" arXiv:2311.03632 (November 7, 2023).