Tuesday, August 29, 2023
Fuzzy Dark Matter Ruled Out
This paper essentially rules out the remainder of the viable fuzzy dark matter parameter space. FDM had been one of the more viable ultra-light dark matter theories.
Wednesday, August 23, 2023
What Would Dark Matter Have To Be Like To Fit Our Observations?
Stacey McGaugh at his Triton Station blog (with some typographical errors because he is dictating, having recently broken his wrist) engages with the question of what properties dark matter would have to have to fit our astronomy observations.
Cosmology considerations like the observed cosmic background radiation (after astronomy observations ruled out some of the baryonic matter contenders like brown dwarfs and black holes) suggest that dark matter should be nearly collisionless, lack interactions with ordinary matter other than gravity, and should be non-baryonic (i.e. not made up of Standard Model particles or composites of them).
But observations of galaxies show halos unlike those that dark matter with the cosmology-driven properties described above would form. Astronomy observations of galaxies show us that inferred dark matter distributions intimately track the distributions of ordinary matter in a galaxy, a relationship that gravity-like interactions with the ordinary matter can explain on their own.
As his post explains, after motivating his comments with the historical background of the dark matter theoretical paradigm, the problem is as follows (I have corrected his dictation-software-related errors without attribution; the bold and underlined emphasis is mine):
If we insist on dark matter, what this means is that we need, for each and every galaxy, the dark matter to precisely look like MOND.
I wrote the equation for the required effects of dark matter in all generality in McGaugh (2004). The improvements in the data over the subsequent decade enable this to be abbreviated to:

g(DM) = g(bar)/[exp(√(g(bar)/a(0))) − 1]
This is in McGaugh et al. (2016), which is a well known paper (being in the top percentile of citation rates).
So this should be well known, but the implication seems not to be, so let’s talk it through. g(DM) is the force per unit mass provided by the dark matter halo of a galaxy. This is related to the mass distribution of the dark matter – its radial density profile – through the Poisson equation. The dark matter distribution is entirely stipulated by the mass distribution of the baryons, represented here by g(bar). That’s the only variable on the right hand side, a(0) being Milgrom’s acceleration constant. So the distribution of what you see specifies the distribution of what you can’t.

This is not what we expect for dark matter. It’s not what naturally happens in any reasonable model, which is an NFW halo. That comes from dark matter-only simulations; it has literally nothing to do with g(bar). So there is a big chasm to bridge right from the start: theory and observation are speaking different languages. Many dark matter models don’t specify g(bar), let alone satisfy this constraint. Those that do only do so crudely – the baryons are hard to model. Still, dark matter is flexible; we have the freedom to make it work out to whatever distribution we need. But in the end, the best a dark matter model can hope to do is crudely mimic what MOND predicted in advance. If it doesn’t do that, it can be excluded. Even if it does do that, should we be impressed by the theory that only survives by mimicking its competitor?

The observed MONDian behavior makes no sense whatsoever in terms of the cosmological constraints in which the dark matter has to be non-baryonic and not interact directly with the baryons. The equation above implies that any dark matter must interact very closely with the baryons – a fact that is very much in the spirit of what earlier dynamicists had found, that the baryons and the dynamics are intimately connected. If you know the distribution of the baryons that you can see, you can predict what the distribution of the unseen stuff has to be.

And so that’s the property that galaxies require that is pretty much orthogonal to the cosmic requirements. There needs to be something about the nature of dark matter that always gives you MONDian behavior in galaxies. Being cold and non-interacting doesn’t do that.
Instead, galaxy phenomenology suggests that there is a direct connection – some sort of direct interaction – between dark matter and baryons. That direct interaction is anathema to most ideas about dark matter, because if there’s a direct interaction between dark matter and baryons, it should be really easy to detect dark matter. They’re out there interacting all the time.

There have been a lot of half solutions. These include things like warm dark matter and self-interacting dark matter and fuzzy dark matter. These are ideas that have been motivated by galaxy properties. But to my mind, they are the wrong properties. They are trying to create a central density core in the dark matter halo. That is at best a partial solution that ignores the detailed distribution that is written above. The inference of a core instead of a cusp in the dark matter profile is just a symptom. The underlying disease is that the data look like MOND.

MONDian phenomenology is a much higher standard to try to get a dark matter model to match than is a simple cored halo profile. We should be honest with ourselves that mimicking MOND is what we’re trying to achieve. Most workers do not acknowledge that, or even seem aware that this is the underlying issue.

There are some ideas to try to build in the required MONDian behavior while also satisfying the desires of cosmology. One is Blanchet’s dipolar dark matter. He imagined a polarizable dark medium that does react to the distribution of baryons so as to give the distribution of dark matter that gives MOND-like dynamics. Similarly, Khoury’s idea of superfluid dark matter does something related. It has a superfluid core in which you get MOND-like behavior. At larger scales it transitions to a non-superfluid mode, where it is just particle dark matter that reproduces the required behavior on cosmic scales.

I don’t find any of these models completely satisfactory. It’s clearly a hard thing to do. You’re trying to mash up two very different sets of requirements. With these exceptions, the galaxy-motivated requirement that there is some physical aspect of dark matter that somehow knows about the distribution of baryons and organizes itself appropriately is not being used to inform the construction of dark matter models. The people who do that work seem to be very knowledgeable about cosmological constraints, but their knowledge of galaxy dynamics seems to begin and end with the statement that rotation curves are flat and therefore we need dark matter. That sufficed 40 years ago, but we’ve learned a lot since then. It’s not good enough just to have extra mass. That doesn’t cut it.
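To make the quoted relation concrete, here is a minimal Python sketch (my own illustration, not McGaugh's code) of the abbreviated equation above, assuming a(0) ≈ 1.2×10^-10 m/s². It shows how the implied dark matter contribution is fixed entirely by the baryonic acceleration:

    import math

    # Dark matter force per unit mass g_DM implied by the baryonic
    # acceleration g_bar, per the equation quoted above; a0 is assumed.
    A0 = 1.2e-10  # m/s^2, Milgrom's acceleration constant (assumed value)

    def g_dm(g_bar):
        """Dark matter contribution implied by the baryons alone, in m/s^2."""
        return g_bar / math.expm1(math.sqrt(g_bar / A0))

    for g_bar in (1e-12, 1e-11, 1e-10, 1e-9):
        print(f"g_bar = {g_bar:.0e} -> g_DM = {g_dm(g_bar):.2e} m/s^2")
    # At low g_bar, g_DM ~ sqrt(g_bar * a0) (the deep-MOND regime); at high
    # g_bar, the implied dark matter contribution vanishes (Newtonian regime).

Note that there is no freedom in this sketch once the baryon distribution is specified, which is exactly McGaugh's point.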
This analysis is the main reason that I'm much more inclined to favor gravity based explanations for dark matter phenomena than particle based explanations.
Direct dark matter detection experiments pretty much rule out dark matter particles that interact with ordinary matter with sufficient strength with masses in the 1 GeV to 1000 GeV range (one GeV is 1,000,000,000 eV).
Collider experiments pretty much rule out dark matter particles that interact in any way with ordinary matter at sufficient strength with masses in the low single-digit thousands of GeV or less. These experiments are certainly valid down to something less than the mass scale of the electron (which has a mass of about 511,000 eV).
Astronomy observations used to rule out MACHOs, such as brown dwarfs and large primordial black holes (PBHs), pretty much rule out dark matter lumps of asteroid size or greater (from micro-lensing for larger lumps, and from solar system dynamics for asteroid-sized lumps), whether or not they interact non-gravitationally with ordinary matter.
This leaves a gap between about 1000 GeV and asteroid masses, but the wave-like nature of dark matter phenomena inferred from astronomy observations pretty much rules out dark matter particles with masses of more than 10,000 eV.
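A rough way to see the scale involved: a particle's de Broglie wavelength shrinks as its mass grows, and astronomical wave-like behavior requires wavelengths of astronomical size. The Python sketch below is my own back-of-envelope illustration, assuming a typical galactic velocity of about 200 km/s; the mass values are illustrative:

    # De Broglie wavelengths for dark matter candidates, assuming a typical
    # galactic velocity of ~200 km/s (an assumption for illustration).
    HBAR_C_EV_M = 1.973e-7    # hbar*c in eV*m
    V_OVER_C = 200e3 / 3.0e8  # 200 km/s as a fraction of c

    def de_broglie_meters(mass_ev):
        """Wavelength in meters for a particle of rest mass `mass_ev` at ~200 km/s."""
        return HBAR_C_EV_M / (mass_ev * V_OVER_C)

    for mass_ev in (1e-22, 1.0, 1e4):
        print(f"m = {mass_ev:.0e} eV -> lambda ~ {de_broglie_meters(mass_ev):.1e} m")
    # 1e-22 eV -> ~3e18 m (about 100 parsecs: galaxy-scale wave effects)
    # 1e4 eV   -> ~3e-8 m (tens of nanometers: no astronomical wave behavior)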
Direct dark matter detection experiments can't directly rule out these low mass dark matter candidates because they're not sensitive enough.
Colliders could conceivably miss particles that interact only feebly with ordinary matter and have very low masses themselves, although nuclear physics was able to detect the feebly interacting and very low mass neutrinos back in the 1950s with far more primitive equipment than we have now.
Even light dark matter candidates like axions, warm dark matter, and fuzzy dark matter, however, still can't reproduce the observed tight fit between ordinary matter distributions and dark matter distributions within dark matter halos if they have no non-gravitational interactions with ordinary matter.
All efforts to directly detect axions (which would have some interactions with ordinary matter that can be theoretically modeled) have had null results.
Furthermore, because the MOND equations that dark matter phenomena follow in galaxies are tied specifically to the amount of Newtonian-like gravitational acceleration that objects in the galaxy experience from the galaxy, envisioning these phenomena as arising from a modification of gravity makes more sense than envisioning them as an entirely novel fifth force, unrelated to gravity, between dark matter particles and ordinary matter.
If you take the dark matter particle candidates to explain dark matter phenomena off the field for these reasons, you can narrow down the plausible possible explanations for dark matter phenomena dramatically.
We also know that toy model MOND itself isn't quite the right solution.
The right solution needs to be embedded in a relativistic framework that addresses strong-field gravitational phenomena and solar-system-scale gravitational phenomena more or less exactly as Einstein's General Relativity does, up to the limitations of current observational precision and accuracy, which are considerable.
The right solution also needs to have a greater domain of applicability than toy-model MOND, by correctly dealing with galaxy cluster level phenomena (which display a different but similar scaling law to the Tully-Fisher relation that can be derived directly from MOND), the behavior of particles near spiral galaxies that are outside the main galactic disk, and the behavior of wide binary stars (which is still currently unknown), and it must be generalized to address cosmology phenomena like the cosmic background radiation and the timing of galaxy formation.
Fortunately, several attempts using MOND-variants, Moffat's MOG theory, and Deur's gravitational field self-interaction model have shown that this is possible in principle to achieve. All three approaches have reproduced the cosmic microwave background to high precision, and modified gravity theories generically produce more rapid galaxy formation than the LambdaCDM dark matter particle paradigm.
I wouldn't put money on Deur's approach being fully consistent with General Relativity, which a recent paper claimed to disprove, albeit without engaging with Deur's key insight that non-perturbative modeling of the non-Abelian aspects of gravity is necessary.
But Deur's approach, even if it is actually a modification of GR, remains the only one that achieves a complete range of applicability in a gravitational explanation of both dark matter and dark energy, and it does so from a set of theoretical assumptions very similar to those of general relativity that are generically assumed in quantum gravity theories, in an extremely parsimonious and elegant manner.
MOND doesn't have the same theoretical foundation or level of generality, and some of its relativistic generalizations like TeVeS don't meet certain observational tests.
MOG requires a scalar-vector-tensor theory, while Deur manages to get the same results with a single tensor field.
Deur claims that he is introducing no new physically measured fundamental constants beyond Newton's constant G, but he doesn't carry out this derivation for the constant, corresponding to a(0), that he determines empirically for spiral galaxies. That conclusion, if true, would be an additional remarkable accomplishment, but I take it with a grain of salt.
Deur's explanation for dark energy phenomena also sets it apart. It dispenses with the need for the cosmological constant (thus preserving global conservation of mass-energy), in a way that is clever, motivated by conservation of energy principles at the galaxy scale related to the dark matter phenomena explanation of the theory, and is not used by any other modified gravity theories of which I am aware. It also provides an explanation for the apparent observation that the Hubble constant hasn't remained constant over the life of the universe, which flows naturally from Deur's theory and is deeply problematic in a theory with a simple cosmological constant.
So, I think that it is highly likely that Deur's resolution of dark matter and dark energy phenomena, or a theory that looks very similar, is the right solution to these unsolved problems in astrophysics and fundamental physics.
A Recap Of What We Know About Neutrino Mass
This post about the state of research on the neutrino masses was originally made (with minor modifications for this blog post) at Physics Forums. Some of this material borrows heavily from prior posts at this blog tagged "neutrino".
Cosmological measurements of the cosmic microwave background temperature and polarization information, baryon acoustic oscillations, and local distance ladder measurements lead to an estimate that Σᵢ m(i) < 90 meV at 90% CL, which mildly disfavors the inverted ordering (IO) over the normal ordering (NO), since Σᵢ m(i) ≥ 60 meV in the NO and ≥ 110 meV in the IO; although these results depend on one’s choice of prior for the absolute neutrino mass scale.

Significant improvements are expected to reach the σ(Σm(ν)) ∼ 0.04 eV level with upcoming data from DESI and VRO, see the CF7 report, which should be sufficient to test the results of local oscillation data in the early universe at high significance, depending on the true values.
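The quoted floors on the mass sum follow from the measured oscillation splittings. Here is a quick back-of-envelope Python check (my own arithmetic, assuming commonly quoted splitting values and a massless lightest neutrino; the exact IO floor depends on the inputs used):

    import math

    # Assumed oscillation splittings (illustrative values):
    DM21_SQ = 7.4e-5  # eV^2, solar splitting
    DM31_SQ = 2.5e-3  # eV^2, atmospheric splitting (magnitude)

    # Normal ordering: m1 = 0, m2 = sqrt(dm21^2), m3 = sqrt(dm31^2).
    no_floor = math.sqrt(DM21_SQ) + math.sqrt(DM31_SQ)
    # Inverted ordering: m3 = 0, m1 = sqrt(dm31^2), m2 = sqrt(dm31^2 + dm21^2).
    io_floor = math.sqrt(DM31_SQ) + math.sqrt(DM31_SQ + DM21_SQ)

    print(f"NO floor: {1000 * no_floor:.0f} meV")  # ~59 meV, i.e. the ~60 meV quoted
    print(f"IO floor: {1000 * io_floor:.0f} meV")  # ~101 meV, near the ~110 meV quoted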
Tuesday, August 22, 2023
Old But Interesting
We show that, in the application of Riemannian geometry to gravity, there exists a superpotential Vij of the Riemann-Christoffel tensor which is the tensor generalization of Poisson's classical potential. Leaving open the question of a zero or nonzero rest mass k of the graviton, we show that, in the latter case, k²Vij is an energy momentum density, or “Maxwell-like tensor,” of the gravity field itself, adding to the “material tensor” in the right-hand sides of both the (generalized) Poisson equation and the Einstein gravity equation, but that, nevertheless, Einstein's requirement of geodesic motion of a point particle is rigorously preserved.
Two interesting possibilities are thus opened: a tentative explanation of the cosmological “missing mass” and quantization of the Riemannian gravity field along a standard procedure.
Wednesday, August 16, 2023
Ötzi the Iceman’s DNA Revisited
In 2012, scientists compiled a complete picture of Ötzi’s genome; it suggested that the frozen mummy found melting out of a glacier in the Tyrolean Alps had ancestors from the Caspian steppe . . . The Iceman is about 5,300 years old. Other people with steppe ancestry didn’t appear in the genetic record of central Europe until about 4,900 years ago. Ötzi “is too old to have that type of ancestry,” says archaeogeneticist Johannes Krause of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. The mummy “was always an outlier.”
Krause and colleagues put together a new genetic instruction book for the Iceman. The old genome was heavily contaminated with modern people’s DNA, the researchers report August 16 in Cell Genomics. The new analysis reveals that “the steppe ancestry is completely gone.”

About 90 percent of Ötzi’s genetic heritage comes from Neolithic farmers, an unusually high amount compared with other Copper Age remains. . . The Iceman’s new genome also reveals he had male-pattern baldness and much darker skin than artistic representations suggest. Genes conferring light skin tones didn’t become prevalent until 4,000 to 3,000 years ago when early farmers started eating plant-based diets and didn’t get as much vitamin D from fish and meat as hunter-gatherers did. . . .

“People that lived in Europe between 40,000 years ago and 8,000 years ago were as dark as people in Africa. . . . We have always imagined that [Europeans] became light-skinned much faster. But now it seems that this happened actually quite late in human history.”
The Tyrolean Iceman is known as one of the oldest human glacier mummies, directly dated to 3350–3120 calibrated BCE. A previously published low-coverage genome provided novel insights into European prehistory, despite high present-day DNA contamination. Here, we generate a high-coverage genome (15.3×) with low contamination to gain further insights into the genetic history and phenotype of this individual. Contrary to previous studies, we found no detectable Steppe-related ancestry in the Iceman. Instead, he retained the highest Anatolian-farmer-related ancestry among contemporaneous European populations, indicating a rather isolated Alpine population with limited gene flow from hunter-gatherer-ancestry-related populations. Phenotypic analysis revealed that the Iceman likely had darker skin than present-day Europeans and carried risk alleles associated with male-pattern baldness, type 2 diabetes, and obesity-related metabolic syndrome. These results corroborate phenotypic observations of the preserved mummified body, such as high pigmentation of his skin and the absence of hair on his head.
We found that the Iceman derives 90% ± 2.5% ancestry from early Neolithic farmer populations when using Anatolia_N as the proxy for the early Neolithic-farmer-related ancestry and WHGs as the other ancestral component (Figure 3; Table S4). When testing with a 3-way admixture model including Steppe-related ancestry as the third source for the previously published and the high-coverage genome, we found that our high-coverage genome shows no Steppe-related ancestry (Table S5), in contrast to ancestry decomposition of the previously published Iceman genome. We conclude that the 7.5% Steppe-related ancestry previously estimated for the previously published Iceman genome is likely the result of modern human contamination. . . .
Compared with the Iceman, the analyzed contemporaneous European populations from Spain and Sardinia (Italy_Sardinia_C, Italy_Sardinia_N, Spain_MLN) show less early Neolithic-farmer-related ancestry, ranging from 27.2% to 86.9% (Figure 3A; Table S4). Even ancient Sardinian populations, who are located further south than the Iceman and are geographically separate from mainland Europe, derive no more than 85% ancestry from Anatolia_N (Figure 3; Table S4). The higher levels of hunter-gatherer ancestry in individuals from the 4th millennium BCE have been explained by an ongoing admixture between early farmers and hunter-gatherers in the Middle and Late Neolithic in various parts of Europe, including western Europe (Germany and France), central Europe, Iberia, and the Balkans. Only individuals from Italy_Broion_CA.SG found to the south of the Alps present similarly low hunter-gatherer ancestry as seen in the Iceman. We conclude that the Iceman and Italy_Broion_CA.SG might both be representatives of specific Chalcolithic groups carrying higher levels of early Neolithic-farmer-related ancestry than any other contemporaneous European group. This might indicate less gene flow from groups that are more admixed with hunter-gatherers or a smaller population size of hunter-gatherers in that region during the 5th and 4th millennium BCE. . . .
We estimated the admixture date between the early Neolithic-farmer-related (using Anatolia_N as proxy) and WHG-related ancestry sources using DATES to be 56 ± 21 generations before the Iceman’s death, which corresponds to 4880 ± 635 calibrated BCE assuming 29 years per generation (Figure 3B; Table S7) and considering the mean C14 date of this individual. Alternatively, using Germany_EN_LBK as the proxy for early Neolithic-farmer-related ancestry, we estimated the admixture date to be 40 ± 15 generations before his death (Table S7), or 4400 ± 432 calibrated BCE, overlapping with estimates from nearby Italy_Broion_CA.SG, which is located to the south of the Alps (Figure 3B). When compared with the admixture time between early Neolithic farmers and hunter-gatherers in other parts of southern Europe, for instance in Spain and southern Italy, we found that the admixture with hunter-gatherers seen in the Iceman and Italy_Broion_CA.SG is particularly recent (Figure 3B; Table S3), suggesting a potentially longer survival of hunter-gatherer-related ancestry in this geographical region.
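The conversion from generations to calendar dates in the quoted passage is simple arithmetic, and easy to check. A short Python sketch (my own illustration, assuming 29 years per generation and taking roughly 3250 BCE, the midpoint of the 3350–3120 BCE range, as the Iceman's date):

    # Converting the quoted admixture dates from generations into calendar
    # years; the generation length and the Iceman's mean date are assumptions.
    YEARS_PER_GEN = 29
    ICEMAN_BCE = 3250

    for gens, gens_err in ((56, 21), (40, 15)):
        date_bce = ICEMAN_BCE + gens * YEARS_PER_GEN
        err_years = gens_err * YEARS_PER_GEN
        print(f"{gens} +/- {gens_err} generations -> {date_bce} +/- {err_years} BCE")
    # 56 +/- 21 generations -> 4874 +/- 609 BCE (the paper quotes 4880 +/- 635)
    # 40 +/- 15 generations -> 4410 +/- 435 BCE (the paper quotes 4400 +/- 432)

The small differences from the paper's numbers presumably reflect the C14 dating uncertainty, which this sketch omits.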
Climate And Archaic Hominins
John Hawks has an intriguing analysis of a new paper on how the range and interactions of Neanderthals and Denisovans may have had a climate component. We know from the existence of genetic evidence showing Neanderthal-Denisovan admixture that there was some interaction.
He is skeptical of some aspects of the paper, including the hypothesis that Denisovans were systematically more cold tolerant, and the underlying concept that particular species occupied geographic ranges that were stable over time, with frontiers that were rarely crossed. He acknowledges, however, that there is a wide geographic range where there could have been Neanderthal-Denisovan interaction. He also notes that:
The conclusion I draw from Ruan and colleagues' study is that no strong east-west climate barriers could have kept these populations apart for the hundreds of thousands of years of their evolution. That leaves open the possibility that other aspects of the environment besides temperature, rainfall, and general biome composition could have shaped their evolution. The alternative is that the survival and local success of hominin groups was itself so patchy over the long term that only a handful of lineages could persist.
One hypothesis that I've advanced over the years is that the jungles and hominin occupants of mainland Southeast Asia formed a barrier to Neanderthal and modern human expansion until the Toba eruption at least temporarily removed that barrier.
I reproduce two images he borrows from papers he discusses below:
Tuesday, August 15, 2023
A New Higgs Boson Mass Measurement
The mass of the Higgs boson is measured in the H→γγ decay channel, exploiting the high resolution of the invariant mass of photon pairs reconstructed from the decays of Higgs bosons produced in proton-proton collisions at a centre-of-mass energy √s = 13 TeV. The dataset was collected between 2015 and 2018 by the ATLAS detector at the Large Hadron Collider, and corresponds to an integrated luminosity of 140 fb−1. The measured value of the Higgs boson mass is 125.17±0.11(stat.)±0.09(syst.) GeV and is based on an improved energy scale calibration for photons, whose impact on the measurement is about four times smaller than in the previous publication. A combination with the corresponding measurement using 7 and 8 TeV pp collision ATLAS data results in a Higgs boson mass measurement of 125.22±0.11(stat.)±0.09(syst.) GeV. With an uncertainty of 1.1 per mille, this is currently the most precise measurement of the mass of the Higgs boson from a single decay channel.
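The quoted "1.1 per mille" figure is easy to verify by combining the statistical and systematic uncertainties in quadrature, as this quick Python check (my own arithmetic) shows:

    import math

    # Checking the quoted "1.1 per mille" precision of the combined measurement.
    mass, stat, syst = 125.22, 0.11, 0.09          # GeV
    total = math.hypot(stat, syst)                 # quadrature sum: ~0.14 GeV
    print(f"{1000 * total / mass:.1f} per mille")  # -> 1.1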
Monday, August 14, 2023
Pompeii Scrolls May Be Recoverable
Thursday, August 10, 2023
An Improved Muon g-2 Measurement
A comprehensive prediction for the Standard Model value of the muon magnetic anomaly was compiled most recently by the Muon g−2 Theory Initiative in 2020 [20], using results from [21–31]. The leading order hadronic contribution, known as hadronic vacuum polarization (HVP), was taken from e+e− → hadrons cross section measurements performed by multiple experiments. However, a recent lattice calculation of HVP by the BMW collaboration [30] shows significant tension with the e+e− data. Also, a new preliminary measurement of the e+e− → π+π− cross section from the CMD-3 experiment [32] disagrees significantly with all other e+e− data. There are ongoing efforts to clarify the current theoretical situation [33].
While a comparison between the Fermilab result from Run-1/2/3 presented here, aµ(FNAL), and the 2020 prediction yields a discrepancy of 5.0σ, an updated prediction considering all available data will likely yield a smaller and less significant discrepancy.
In units of 10^-11:

World Experimental Average (2023): 116,592,059(22)
Fermilab Run 1+2+3 data (2023): 116,592,055(24)
Fermilab Run 2+3 data (2023): 116,592,057(25)
Combined measurement (2021): 116,592,061(41)
Fermilab Run 1 data (2021): 116,592,040(54)
Brookhaven's E821 (2006): 116,592,089(63)
Theory Initiative calculation: 116,591,810(43)
BMW calculation: 116,591,954(55)
It is important to note that every single experimental measurement and every single theoretical prediction matches up exactly to 116,592 times 10^-8, rounded to the nearest 10^-8 (the raw number before conversion to g-2 form is 2.00233184, which has nine significant digits). The spread from the highest best-fit experimental value to the lowest best-fit theoretical prediction spans only 279 times 10^-11, which is equivalent to a plus or minus two sigma uncertainty of 70 times 10^-11 from the midpoint of that range. So, all of the experimental and theoretical values are ultra-precise.
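The headline tensions follow directly from the table above. A short Python sketch (my own arithmetic, assuming independent Gaussian uncertainties, and using the world average rather than the FNAL-only result quoted in the paper):

    import math

    # Quick arithmetic on the numbers above (all in units of 1e-11).
    exp_avg, exp_err = 116_592_059, 22    # world experimental average (2023)
    ti_pred, ti_err = 116_591_810, 43     # Theory Initiative prediction
    bmw_pred, bmw_err = 116_591_954, 55   # BMW lattice calculation

    def tension(a, da, b, db):
        """Discrepancy between two values in standard deviations."""
        return (a - b) / math.hypot(da, db)

    print(f"vs. Theory Initiative: {tension(exp_avg, exp_err, ti_pred, ti_err):.1f} sigma")
    print(f"vs. BMW lattice: {tension(exp_avg, exp_err, bmw_pred, bmw_err):.1f} sigma")
    print(f"full spread: {116_592_089 - 116_591_810}")  # 279, as stated above
    # -> roughly 5.2 sigma against the Theory Initiative prediction, but only
    #    about 1.8 sigma against the BMW lattice calculation.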
The experimental value is already twice as precise as the theoretical prediction of its value in the Standard Model, and is expected to ultimately be about four times more precise than the current best available theoretical predictions, as illustrated below.
The completed Runs 4 and 5 and the in-progress Run 6 are anticipated to reduce the uncertainty in the experimental measurement by about 50% over the next two or three years, with most of the gain coming from Run 4, which should release its results sometime around October of 2024. The additional experimental precision anticipated from Run 5 and Run 6 is expected to be fairly modest.
The π+π− channel gives the major part of the hadronic contribution to the muon anomaly, (506.0±3.4)×10−10 out of the total a(HVP)µ = (693.1±4.0)×10−10 value. It also determines (together with the light-by-light contribution) the overall uncertainty ∆aµ = ±4.3×10−10 of the standard model prediction of muon g−2 [5].
To conform to the ultimate target precision of the ongoing Fermilab experiment [16,17], ∆a(exp)µ[E989] ≈ ±1.6×10−10, and the future J-PARC muon g-2/EDM experiment [18], the π+π− production cross section needs to be known with a relative overall systematic uncertainty of about 0.2%.
Several sub-percent precision measurements of the e+e− → π+π− cross section exist. The energy scan measurements were performed at the VEPP-2M collider by the CMD-2 experiment (with a systematic precision of 0.6–0.8%) [19,20,21,22] and by the SND experiment (1.3%) [23]. These results have somewhat limited statistical precision. There are also measurements based on the initial-state radiation (ISR) technique by KLOE (0.8%) [24,25,26,27], BABAR (0.5%) [28] and BES-III (0.9%) [29]. Due to the high luminosities of these e+e− factories, the accuracy of the results from these experiments is less limited by statistics; however, they are not fully consistent with each other within the quoted systematic uncertainties.
One of the main goals of the CMD-3 and SND experiments at the new VEPP-2000 e+e− collider at BINP, Novosibirsk, is to perform a new high-precision, high-statistics measurement of the e+e− → π+π− cross section.
Recently, the first SND result based on about 10% of the collected statistics was presented with a systematic uncertainty of about 0.8%[30].
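The arithmetic behind the quoted 0.2% target is worth spelling out. A brief Python illustration (my own back-of-envelope, using the numbers quoted above):

    # Why the pi+pi- channel dominates the error budget (numbers quoted above,
    # in units of 1e-10; the arithmetic is my own illustration).
    pipi = 506.0       # pi+pi- contribution to HVP
    hvp_total = 693.1  # total leading-order HVP contribution

    print(f"pi+pi- share of HVP: {100 * pipi / hvp_total:.0f}%")  # ~73%
    # A 0.2% relative systematic on the pi+pi- cross section feeds through to:
    print(f"impact on a_mu: ~{0.002 * pipi:.1f} x 1e-10")  # ~1.0e-10

That is comparable to the ±1.6×10−10 ultimate Fermilab target, which is why the cross section has to be known at the 0.2% level.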
Monday, August 7, 2023
What Kind of Hominin Is The Latest Chinese Find?
Time For A New Dark Matter Phenomena Paradigm
The phenomenon of dark matter baffles researchers: the underlying dark particle has so far escaped detection, and its astrophysical role appears complex and entangled with that of the standard luminous particles. We propose that, in order to act efficiently, alongside abandoning the current ΛCDM scenario, we also need to shift the paradigm from which it emerged.
Thursday, August 3, 2023
How Do People Decide Which Scientists To Believe?
Uncertainty that arises from disputes among scientists seems to foster public skepticism or noncompliance. Communication of potential cues to the relative performance of contending scientists might affect judgments of which position is likely more valid. We used actual scientific disputes—the nature of dark matter, sea level rise under climate change, and benefits and risks of marijuana—to assess Americans’ responses (n = 3150).
Seven cues—replication, information quality, the majority position, degree source, experience, reference group support, and employer—were presented three cues at a time in a planned-missingness design. The most influential cues were majority vote, replication, information quality, and experience. Several potential moderators—topical engagement, prior attitudes, knowledge of science, and attitudes toward science—lacked even small effects on choice, but cues had the strongest effects for dark matter and weakest effects for marijuana, and general mistrust of scientists moderately attenuated top cues’ effects.
Risk communicators can take these influential cues into account in understanding how laypeople respond to scientific disputes and in improving communication about such disputes.
Wednesday, August 2, 2023
A Strict Experimental Bound On A Quantum Gravity Effect From IceCube
Neutrino oscillations at the highest energies and longest baselines provide a natural quantum interferometer with which to study the structure of spacetime and test the fundamental principles of quantum mechanics. If the metric of spacetime has a quantum mechanical description, there is a generic expectation that its fluctuations at the Planck scale would introduce non-unitary effects that are inconsistent with the standard unitary time evolution of quantum mechanics. Neutrinos interacting with such fluctuations would lose their quantum coherence, deviating from the expected oscillatory flavor composition at long distances and high energies.
The IceCube South Pole Neutrino Observatory is a billion-ton neutrino telescope situated in the deep ice of the Antarctic glacier. Atmospheric neutrinos detected by IceCube in the energy range 0.5-10 TeV have been used to test for coherence loss in neutrino propagation. No evidence of anomalous neutrino decoherence was observed, leading to the strongest experimental limits on neutrino-quantum gravity interactions to date, significantly surpassing expectations from natural Planck-scale models. The resulting constraint on the effective decoherence strength parameter within an energy-independent decoherence model is Γ0≤1.17×10^−15 eV, improving upon past limits by a factor of 30. For decoherence effects scaling as E^2, limits are advanced by more than six orders of magnitude beyond past measurements.
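To get a feel for the scale of the quoted limit, here is a short Python sketch (my own illustration). It assumes the standard Lindblad-style ansatz, in which oscillation interference terms are damped by a factor exp(−Γ·L) in natural units, and an Earth-diameter baseline; both are assumptions for illustration, not parameters from the paper:

    import math

    # Damping of oscillation coherence from an energy-independent decoherence
    # rate, evaluated at the quoted IceCube upper limit.
    HBAR_C = 1.973e-7   # eV * m, to convert the baseline to natural units
    GAMMA0 = 1.17e-15   # eV, the quoted IceCube upper limit
    L = 1.27e7          # m, roughly the diameter of the Earth

    damping = math.exp(-GAMMA0 * L / HBAR_C)
    print(f"surviving coherence over an Earth-crossing baseline: {damping:.2f}")
    # -> ~0.93: even at the limit, at most a few percent coherence loss.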
A (Weak) Challenge To Relativistic MOND
In this paper, we present several explicit reconstructions for a novel relativistic theory of modified Newtonian dynamics (RMOND) derived from the background of Friedmann-Lemaître-Robertson-Walker cosmological evolution. It is shown that the Einstein-Hilbert Lagrangian with a positive cosmological constant is the only Lagrangian capable of accurately replicating the exact expansion history of the Λ cold dark matter (ΛCDM) universe filled solely with dust-like matter, and the only way to achieve this expansion history for the RMOND theory is to introduce additional degrees of freedom to the matter sectors. Besides, we find that the ΛCDM era can also be replicated without any real matter field within the framework of the RMOND theory, and that the cosmic evolution exhibited by both the power-law and de Sitter solutions can also be obtained.