A new paper uses an improved neural network method to estimate the masses of various exotic hadrons (i.e., tetraquarks and pentaquarks) and of doubly charmed and doubly bottomed baryons (conventional hadrons with three valence quarks). It achieves results comparable to a couple of other analytic QCD-based methods and significantly improves on prior neural network methods.
The paper illustrates that while it is possible to make "reasonable predictions" of hadron masses, most of the approximations are quite crude compared to the experimental data, and even state-of-the-art, computationally demanding lattice QCD methods still only approach a claimed accuracy of about one part per two thousand.
A cynical person could argue that this method is merely trend fitting and does not reflect the underlying QCD physics that gives rise to the result in any theoretically justifiable way. But given the great uncertainties and immense computational effort that go into theoretically justified QCD calculations of hadron masses with current methods, and given that this approach can distinguish between hadrons that some of the alternatives cannot, this shortcoming is excusable for the time being.
Another important shortcoming of this method is that the uncertainty in its estimates is not well quantified, although comparing its predictions to experimental results allows a reasonable guess at the order of magnitude (roughly 1% to 11%). That is competitive with currently viable alternatives such as older neural network methods, the Gaussian process method, and a Constituent Quark Model approach. Moreover, the claimed errors of some of those alternative predictions are not consistent with the experimental results.
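For concreteness, that order-of-magnitude guess amounts to computing the relative deviation of each prediction from its measured counterpart. A minimal sketch, with placeholder numbers rather than the paper's values:

```python
import numpy as np

# Placeholder prediction/measurement pairs in MeV -- illustrative
# numbers, not values taken from the paper.
predicted = np.array([3900.0, 4430.0, 6200.0, 3650.0])
measured  = np.array([3871.7, 4478.0, 6274.5, 3621.6])

# Relative deviation of each prediction from experiment.
rel_err = np.abs(predicted - measured) / measured
print(f"relative errors: {rel_err}")
print(f"range: {rel_err.min():.1%} to {rel_err.max():.1%}")
```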
The tables reprinted below also compactly review the valence quark structure of these hadrons (notwithstanding the fact that there is not complete consensus on this point) and the literature measuring and estimating their masses.
From the paper's abstract:

Recently, there have been significant developments in neural networks; thus, neural networks have been frequently used in the physics literature. This work estimates the masses of exotic hadrons, doubly charmed and bottomed baryons from the meson and baryon masses using neural networks. Subsequently, the number of data has been increased using the artificial data augmentation technique proposed recently. We have observed that the neural network's predictive ability increases using augmented data. This study has shown that data augmentation techniques play an essential role in improving neural network predictions; moreover, neural networks can make reasonable predictions for exotic hadrons, doubly charmed, and doubly bottomed baryons. The results are also comparable to Gaussian Process and Constituent Quark Model.
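The abstract leaves the augmentation recipe unspecified. One common version of the idea, resampling each measured mass within its experimental uncertainty to enlarge a small training set, might look like the following sketch (the quark-content encoding and error bars are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def augment(X, y, y_err, copies=10, rng=None):
    """Enlarge a small hadron-mass dataset by resampling each measured
    mass within its (assumed Gaussian) experimental uncertainty.
    One plausible reading of the augmentation step, not necessarily
    the paper's exact recipe."""
    rng = rng if rng is not None else np.random.default_rng(0)
    X_aug = np.repeat(X, copies, axis=0)             # duplicate input features
    noise = rng.normal(0.0, np.repeat(y_err, copies))
    y_aug = np.repeat(y, copies) + noise             # jitter masses within errors
    return X_aug, y_aug

# Toy inputs: one row per hadron (a crude (n_u, n_d, n_s) quark-content
# encoding) and masses in MeV; the error bars are illustrative, not PDG values.
X = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 0.0], [1.0, 1.0, 1.0]])
y = np.array([938.3, 939.6, 1115.7])     # proton, neutron, Lambda
y_err = np.array([1.0, 1.0, 1.0])        # illustrative 1 MeV uncertainties
X_aug, y_aug = augment(X, y, y_err)
print(X_aug.shape, y_aug.shape)          # (30, 3) (30,)
```

Resampling within quoted uncertainties keeps the augmented points physically plausible while giving the network many more training examples than the handful of measured states.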
It's not too clear to me what the utility of using ML in this manner is:
- if you wanted an order of magnitude estimate for where to look in experiments, even a simple model like a fit to 2*(mass of heavy quark)+(constant binding energy) is fine
- if you want precision/ab-initio predictions, you need lattice QCD
- if you want 'explainability', then a simple fit of the form I mentioned above, with some parameters for the quantum numbers and light quark degrees of freedom, would provide just as good a fit and explain the trends (see the sketch after this comment).
What exactly is the utility/takeaway of this study?
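For illustration, the crude fit described in the first bullet, hadron mass as a sum of heavy-quark masses plus a constant binding term, can be set up in a few lines. The states and PDG masses below were chosen for the example and are not taken from the paper or the comment:

```python
import numpy as np

# Least-squares fit of M(hadron) ~ n_c*m_c + n_b*m_b + E_bind.
# Columns of A: (number of charm quarks, number of bottom quarks, 1).
# Deliberately crude: no quantum numbers, no light-quark terms.
A = np.array([
    [1, 0, 1],   # D+          (c dbar)
    [0, 1, 1],   # B+          (u bbar)
    [2, 0, 1],   # J/psi       (c cbar)
    [0, 2, 1],   # Upsilon(1S) (b bbar)
    [1, 1, 1],   # B_c+        (c bbar)
], dtype=float)
M = np.array([1869.7, 5279.3, 3096.9, 9460.3, 6274.5])  # PDG masses, MeV

coef, *_ = np.linalg.lstsq(A, M, rcond=None)
m_c, m_b, E_bind = coef
print(f"effective m_c ~ {m_c:.0f} MeV, m_b ~ {m_b:.0f} MeV, "
      f"binding term ~ {E_bind:.0f} MeV")
```

Even this toy version reproduces the input masses to within a few percent, which is roughly the ballpark the comment has in mind.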
I think that this is simply applying the "try everything and see what works" approach.
If you could get significantly closer to lattice QCD with less computing burden than a simple regression, that would be valuable.
There do exist various other models for predicting hadron masses without lattice input, e.g. bag models, skyrmionic models, and constituent quark models. What all of these models have in common is that, although they are computationally very cheap, they are not systematically improvable: even if you put in a lot of effort, you have no idea whether your answer is actually correct. Lattice remains the only systematically improvable way to access nonperturbative information.
These phenomenological models do have one advantage, which is explainability: they offer a way to understand the nonperturbative structure, some sort of intuition, that may be applicable in other QFTs, condensed matter contexts, and so on.
This kind of ML approach seems to offer neither: it is not systematically improvable (it really is just an order-of-magnitude estimate of hadron masses), and it offers no insight into why the hadron masses are what they are.
@JoshuLin
Fair analysis.