This article discusses a methodological issue of wide interdisciplinary importance: how to deal with "unknown unknowns" so as not to be overconfident about scientific results, without throwing out the baby with the bathwater and retreating to the nihilistic position that we know nothing.
It demonstrates an approach to estimating the uncertainty of results even when the precise sources of that uncertainty, including possible researcher fraud, are not known.
Uncertainty quantification is a key part of astronomy and physics; researchers attempt to model both the statistical and systematic uncertainties in their data as well as possible, often within a Bayesian framework. Decisions might then be made on the basis of the resulting uncertainty quantification -- perhaps whether or not to believe a certain theory, or whether to take certain actions.
However, it is well known that most statistical claims should be taken contextually: even if certain models are excluded at a very high degree of confidence, researchers are aware that there may be systematics that were not accounted for, and so will typically require confirmation from multiple independent sources before any novel result is truly accepted.
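For concreteness, here is a minimal sketch of the standard picture (my own illustration in Python, not taken from the paper; all variable names and numbers are made up): a single parameter is fitted to noisy measurements, and one acknowledged systematic offset is marginalised over on a grid, so the posterior width reflects the statistical scatter plus that one known systematic, and nothing else.

```python
# Minimal sketch (illustrative only): Bayesian fit of a parameter mu from
# repeated measurements, marginalising over a *known* systematic offset.
import numpy as np

rng = np.random.default_rng(0)
sigma_stat = 0.1            # per-measurement statistical uncertainty
sigma_sys = 0.3             # prior width of a shared, acknowledged systematic offset
data = 1.0 + 0.2 + rng.normal(0.0, sigma_stat, size=20)   # 0.2 = true offset

mu_grid = np.linspace(-0.5, 3.0, 400)       # parameter of interest
off_grid = np.linspace(-1.5, 1.5, 400)      # nuisance systematic offset

# log-likelihood on the (mu, offset) grid, Gaussian scatter about mu + offset
MU, OFF = np.meshgrid(mu_grid, off_grid, indexing="ij")
resid = data[None, None, :] - (MU[..., None] + OFF[..., None])
loglike = -0.5 * np.sum((resid / sigma_stat) ** 2, axis=-1)
logprior = -0.5 * (OFF / sigma_sys) ** 2     # flat prior on mu, Gaussian on offset

logpost = loglike + logprior
post = np.exp(logpost - logpost.max())
post_mu = post.sum(axis=1)                   # marginalise over the systematic
post_mu /= np.trapz(post_mu, mu_grid)

mean = np.trapz(mu_grid * post_mu, mu_grid)
std = np.sqrt(np.trapz((mu_grid - mean) ** 2 * post_mu, mu_grid))
print(f"mu = {mean:.3f} +/- {std:.3f}")
print(f"statistical-only error would be ~{sigma_stat / np.sqrt(len(data)):.3f}")
```

The posterior is appropriately broadened by the systematic that was put into the model, but it says nothing about systematics nobody thought of, which is exactly the gap the paper addresses.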
In this paper we compare two methods from the astronomical literature that seek to quantify these "unknown unknowns" -- in particular, attempting to produce realistic thick tails in the posteriors of parameter estimation problems that account for the possible existence of very large unknown effects.
We test these methods on a series of case studies, and discuss how robust they would be in the presence of malicious interference with the scientific data.
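As a generic illustration of how such thick tails can arise (this is not one of the two specific methods the paper compares; the names and numbers below are purely illustrative), one simple approach is a mixture likelihood: with some small prior probability the quoted error budget is missing a large shared systematic, so the posterior retains a broad component that internally consistent measurements cannot eliminate.

```python
# Illustrative thick-tailed posterior via a two-component mixture:
# either the quoted errors are the whole story, or a large shared
# systematic of unknown sign is present with small prior probability.
import numpy as np
from scipy.stats import norm

data = np.array([0.98, 1.02, 1.01, 0.99, 1.03])   # toy measurements
sigma = 0.02          # quoted per-measurement statistical error
f_unknown = 0.05      # prior probability of a large unrecognised systematic
sigma_unknown = 0.5   # scale such a systematic could plausibly have

mu_grid = np.linspace(-1.0, 3.0, 4000)
dmu = mu_grid[1] - mu_grid[0]

def likelihood(mu, offset_scale):
    """Likelihood of the data given mu, marginalised over a shared offset
    drawn from N(0, offset_scale); offset_scale = 0 means no unknown systematic."""
    eff = np.sqrt(sigma**2 / len(data) + offset_scale**2)
    # the internal-scatter factor is identical for both hypotheses and cancels,
    # so only the dependence on mu through the sample mean is kept
    return norm.pdf(data.mean(), mu, eff)

post_plain = likelihood(mu_grid, 0.0)
post_thick = ((1 - f_unknown) * likelihood(mu_grid, 0.0)
              + f_unknown * likelihood(mu_grid, sigma_unknown))

for name, post in [("plain Gaussian", post_plain), ("thick-tailed", post_thick)]:
    post = post / (post.sum() * dmu)
    far = np.abs(mu_grid - data.mean()) > 10 * sigma / np.sqrt(len(data))
    print(f"{name:15s}: P(result off by > 10 'sigma') = {post[far].sum() * dmu:.3g}")
```

The broad component carries roughly the mixture weight of the posterior mass no matter how tightly the individual measurements agree with each other, which is the qualitative behaviour a thick-tailed posterior is meant to capture; how the paper's two methods set and test that weight in realistic case studies, including under deliberate tampering, is the substance of the comparison.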
Peter Hatfield, "Quantification of Unknown Unknowns in Astronomy and Physics," arXiv:2207.13993 (July 28, 2022).