**tl;dr**

Scientists have finally figured out how to do a key calculation about what comes out when you collide protons at close to the speed of light, using only a select few of the couple dozen experimentally measured physical constants (which you can find on this blog) and a few of the single page's worth of forty-year-old equations that make up the Standard Model of Particle Physics (which you can also find, in bare-bones form without full explanations, on this blog).

Previously, the end product of these calculations, called a PDF (not to be confused with the computer file format), had to be figured out through the brute force collection of billions of data points, because the math involved in solving the equations was so hard.

Figuring this out the hard way is a cottage industry in high energy physics, because every other calculation that is done depends upon getting these PDFs right as one of its steps. Progress on the brute force approach to figuring out PDFs, backed by large experimental collaborations, is reported in at least one or two published physics journal articles almost every week of the year.

Scientists realized that it was theoretically possible to do these calculations when they came up with the equations of the Standard Model about forty years ago. But this month, for the first time, they managed to do them. And the Standard Model worked: the results of the calculations were a good fit to the billions of data points that had been collected in the meantime.

You can look at the last chart in this post to see that while the calculated prediction wasn't a tight fit to the data (the data is the thick black lines), because the prediction had significant margins of error, the data did fit within the predicted range (which is shaded) everywhere, and the margins of error weren't absurdly large.


**A Less Abbreviated Description Of What They Did**

A parton distribution function, also known as a "PDF", gives the probability of finding a parton (a quark or gluon) in a hadron (for example, a proton) as a function of the fraction x of the hadron's momentum carried by the parton.

*A sample PDF from the Wikipedia link above. "The CTEQ6 parton distribution functions in the MS renormalization scheme and Q = 2 GeV for gluons (red), up (green), down (blue), and strange (violet) quarks. Plotted is the product of longitudinal momentum fraction x and the distribution functions f versus x." The Y axis corresponds to a probability from 0 to 1 (which is 100%). The X axis corresponds to the fraction of the proton's momentum carried by the parton from 0 to 1 (which is 100%).*
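The general shape of such curves can be illustrated with a toy parameterization. The functional form x·f(x) = N·x^a·(1−x)^b below is a common starting ansatz in global PDF fits, but the parameter values here are purely illustrative, not taken from any real fit; integrating x·f(x) over x gives the fraction of the hadron's momentum carried by that parton species.

```python
import numpy as np

# Toy PDF parameterization x*f(x) = N * x^a * (1-x)^b.
# The parameter values are illustrative only, not from any real fit.
def xf(x, N=1.0, a=0.5, b=3.0):
    return N * x**a * (1.0 - x)**b

x = np.linspace(0.0, 1.0, 10_001)
y = xf(x)

# Trapezoid-rule integral of x*f(x): the fraction of the hadron's
# momentum carried by this one (toy) parton species.
momentum_fraction = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
print(f"momentum fraction carried by this toy parton: {momentum_fraction:.3f}")
```

Summed over all parton species, these momentum fractions must add up to 1 (the "momentum sum rule"), which is one of the constraints real fits impose.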

In other words, you can use it to help determine the probability of particular kinds of particles spewing out from a collider collision of composite particles, each of which is bound by the strong force of quantum chromodynamics, a.k.a. QCD.

This is not a trivial matter, because a proton, for example, is really much more complicated than the two valence up quarks and one valence down quark, bound by gluons, used to describe it on a simplified basis. That simplified picture is sufficient for many purposes, but not for describing the collisions of protons at relativistic speeds. A proton actually also includes a virtual sea of almost every other imaginable fundamental particle in the Standard Model, albeit with much lower probabilities of encountering them.

For example, even though a proton does not have a strange quark as one of its valence quarks, a proton-proton collision with the right amount of collision momentum can produce a kaon, which does have a strange quark as one of its valence quarks.

Also, calculating PDFs requires you to work in what is called the "non-perturbative QCD" regime, which is much harder to do calculations in, since non-linear effects overwhelm linear ones. In contrast, most kinds of QCD calculations that we can do well involve the usually higher energy perturbative QCD regime, which is more tractable and linear mathematically.
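The divide between the two regimes can be made concrete with the standard one-loop running of the strong coupling constant: perturbation theory is an expansion in alpha_s, which is small at high energies but grows toward order one near 1 GeV, where the expansion breaks down. The sketch below uses the textbook one-loop formula with the measured world-average alpha_s(M_Z) ≈ 0.118; holding the number of quark flavors fixed at five is a simplification.

```python
import math

# One-loop running of the strong coupling alpha_s(Q).
# Small alpha_s at high Q -> perturbation theory works;
# alpha_s of order one near 1 GeV -> the non-perturbative regime.
ALPHA_S_MZ = 0.118   # measured coupling at the Z boson mass
M_Z = 91.19          # Z boson mass in GeV
N_F = 5              # active quark flavors (held fixed for simplicity)

def alpha_s(Q):
    b0 = (33 - 2 * N_F) / (12 * math.pi)
    return ALPHA_S_MZ / (1 + ALPHA_S_MZ * b0 * math.log(Q**2 / M_Z**2))

for Q in (100.0, 10.0, 2.0, 1.0):
    print(f"alpha_s({Q:>5} GeV) = {alpha_s(Q):.3f}")
```

The coupling roughly triples between 100 GeV and 1 GeV, which is why expansions that converge nicely at collider energies fail inside a proton.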

**In practice, PDFs are actually determined by looking at lots of collisions, seeing what falls out, and smoothing the vast amounts of data collected, from billions of particle collisions at all sorts of different energies, into a continuous function.**
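That smoothing step can be sketched in miniature: generate synthetic "measurements" from a known toy shape and adjust the parameters of a trial function until it matches. The functional form, parameter values, and noise level below are all invented for illustration; real global fits use far more flexible parameterizations and actual cross-section data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy functional form for x*f(x); real fits use richer parameterizations.
def xf(x, N, a, b):
    return N * x**a * (1.0 - x)**b

# Synthetic "data": a known shape plus small Gaussian noise.
rng = np.random.default_rng(0)
x_data = np.linspace(0.05, 0.95, 40)
true_params = (2.0, 0.6, 3.2)
y_data = xf(x_data, *true_params) + rng.normal(0.0, 0.005, x_data.size)

# Least-squares fit recovers a smooth continuous function from the points.
popt, pcov = curve_fit(xf, x_data, y_data, p0=(1.0, 0.5, 3.0))
print("fitted (N, a, b):", popt)
```

With noisy data the fitted parameters come back close to, but not exactly at, the values used to generate the points, which is the toy analogue of a global fit's uncertainty bands.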

But, the PDFs of particles in the Standard Model of Particle Physics aren't fundamental experimentally measured constants.

**In theory, you can calculate them using only the couple dozen experimentally measured fundamental constants of the Standard Model and its equations** (which you could write readably by hand in chalk on a single normal classroom-sized blackboard). It is just hard to do so. **A new paper, the abstract of which is below, makes huge progress on this front with the first "from scratch" theoretical calculation of a PDF for protons and neutrons that is fully consistent with experimental results to within the relevant margins of error** (which aren't small, as in all QCD calculations).

**The details of the final results in the charts below are too technical to explain in this format. But, in the chart below, the thick black line is the experimental data, and the shaded areas are the theoretical predictions including the error** due to the numerical approximations made in the calculations and the inexactly measured physical constants used in them. I reproduce it here to convey visually a sense of how accurate or inaccurate the calculations done in this paper were within the overall parameter space. The different colored shadings reflect the different kinds of error sources included in each.

For technical reasons, predictions when x (on the X-axis of the charts below) is very small are harder to calculate.

There are at least a few reasons that this is significant.

First,

**a good theory is one that gives you information for free**.[1] (Cf. "A good theory explains a lot but postulates little." - Richard Dawkins.) The case of deriving PDFs, which previously took billions of data points to determine, from a couple dozen physical constants and a few equations of the Standard Model is an extreme example of a good theory, greatly improving the leverage we get from the Standard Model in terms of predicted results.
Second,

**the flow of information goes both ways.** **A theoretical determination of PDFs in terms of fundamental physical constants provides a new means of indirectly tuning our determinations of the values of some of those physical constants.** This can be done by comparing the experimental data previously used to empirically determine PDFs to the theoretical predictions, and in the process finding which values of those physical constants provide the best fit to the PDF data, in a manner independent of the other hadron property measurements previously used for those purposes. At this point, the PDF determinations probably aren't precise enough to bound the values of many of the fundamental constants of the Standard Model very tightly, but even then, they provide independent corroboration of values for those constants measured by other means.
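The tuning idea can be sketched with a toy chi-squared scan. Here "g" is a made-up stand-in for a physical constant and the predicted shape is invented; the point is only the mechanics of extracting a constant's value by comparing theoretical predictions against PDF-like data.

```python
import numpy as np

# Invented prediction: the "theory" says the PDF shape scales with a
# hypothetical constant g. Nothing here is real physics.
def predicted_xf(x, g):
    return g * x**0.5 * (1.0 - x)**3

x = np.linspace(0.05, 0.95, 50)
data = predicted_xf(x, 1.7)   # pretend these are measured PDF values
sigma = 0.01                  # pretend uniform measurement uncertainty

# Scan candidate values of g and pick the one minimizing chi-squared.
g_grid = np.linspace(1.0, 2.5, 301)
chi2 = [np.sum(((predicted_xf(x, g) - data) / sigma) ** 2) for g in g_grid]
best_g = g_grid[int(np.argmin(chi2))]
print("best-fit constant g:", best_g)  # minimum sits at the true value 1.7
```

In the real case, the width of the chi-squared valley around the minimum is what sets how tightly (or loosely) the PDF data bounds the constant.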
Third, as the abstract notes,

**the formula helps us to better understand the internal properties of protons, neutrons and other hadrons in a more compact and analytical way**, as opposed to the black-box, big-data approaches by which PDFs are currently determined.
Fourth,

**the fact that this can be done is a good and robust non-parametric test of the conceptual soundness and completeness of the Standard Model**. The fact that the same values of the same physical constants and the same equations can be applied in many diverse applications and produce consistent results is an excellent sign that the theory is sound and that any flaws in it are quantitatively small. There are lots of theories you can use to predict the hadron mass spectrum, for example. But only really good ones can also accurately predict hadron PDFs and determine the mean lifetimes of hadrons from first principles. A PDF is highly sensitive to lots of beyond-the-Standard-Model New Physics that one could imagine exists (particularly new physics giving rise to additional particle content beyond the Standard Model), and the fact that no such deviation is seen places empirical global constraints on the extent to which any true laws of Nature can deviate from the Standard Model.
[1] "The cognitive psychologist Pascal Boyer introduced me to the phrase 'theory is information for free.' It's a succinct way of saying that if you have a theoretical framework you can deduce and extrapolate a lot about the world without having to know everything. And, you can take new information and fit it quickly into your model and generate more propositions (you may not need to know everything, but you do need to know something)." - Razib Khan.

[Submitted on 5 May 2020]

# Parton distribution functions from lattice QCD at physical quark masses via the pseudo-distribution approach

One of the great challenges of QCD is to determine the partonic structure of the nucleon from first principles. In this work, we provide such a determination of the unpolarized parton distribution function (PDF), utilizing the non-perturbative formulation of QCD on the lattice. We apply Radyushkin's pseudo-distribution approach to lattice results obtained using simulations with the light quark mass fixed to its physical value; this is the first ever attempt for this approach directly at the physical point. The extracted coordinate-space matrix elements are used to find the relevant physical Ioffe time distributions from a matching procedure. The full Bjorken-x dependence of PDFs is resolved using several reconstruction methods to tackle the ill-conditioned inverse problem encountered when using discrete lattice data. Another novelty of this calculation is the consideration of the combination with antiquarks, qv + 2q̄. The latter, together with the non-singlet valence quark PDF qv, provides information on the full distribution. Good agreement is found with PDFs from global fits already within statistical uncertainties, and it is further improved by quantifying several systematic effects. The results presented here are the first ever ab initio determinations of PDFs fully consistent with global fits in the whole x-range. Thus, they pave the way to investigating a wider class of partonic distributions, such as e.g. singlet PDFs and generalized parton distributions. Therefore, essential and yet missing first-principle insights can be achieved, complementing the rich experimental programs dedicated to the structure of the nucleon.
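The "ill-conditioned inverse problem" mentioned in the abstract can be illustrated with a toy model (this is not the paper's actual reconstruction method): the lattice yields only a handful of discrete, Fourier-like moments of the distribution, and recovering the continuous x-dependence from them is unstable without some form of regularization. Below, a Tikhonov (ridge) term stabilizes the inversion.

```python
import numpy as np

# Toy x-space distribution we pretend not to know.
x = np.linspace(1e-3, 1.0, 200)
true_pdf = x**0.5 * (1.0 - x)**3

# Toy "lattice data": a few cosine moments of the distribution,
# loosely mimicking discrete Ioffe-time samples.
nu = np.arange(1, 9)
dx = x[1] - x[0]
A = np.cos(np.outer(nu, x)) * dx   # 8 equations, 200 unknowns
data = A @ true_pdf

# Naive inversion of such an underdetermined, ill-conditioned system
# amplifies noise wildly; a Tikhonov penalty (lam * I) stabilizes it.
lam = 1e-8
recon = np.linalg.solve(A.T @ A + lam * np.eye(x.size), A.T @ data)
print("data residual:", np.linalg.norm(A @ recon - data))
```

The regularized solution reproduces the few measured moments almost exactly, but many different x-shapes do so too; the paper's several reconstruction methods are, in essence, principled ways of picking among them.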
