
Thursday, February 28, 2019

The Truth About Calculus

Alt Text: "'Symbolic integration' is when you theatrically go through the motions of finding integrals, but the actual result you get doesn't matter because it's purely symbolic."

("Symbolic integration" actually means solving an integral analytically in a general indefinite integral form, rather than numerically.)
A procedure called the Risch algorithm exists which is capable of determining whether the integral of an elementary function (a function built from a finite number of exponentials, logarithms, constants, and nth roots through composition and combination using the four elementary operations) is itself elementary, and of returning that integral if it is. In its original form, the Risch algorithm was not suitable for direct implementation, and its complete implementation took a long time. It was first implemented in Reduce for the case of purely transcendental functions; the case of purely algebraic functions was solved and implemented in Reduce by James H. Davenport; the general case was solved and implemented in Axiom by Manuel Bronstein.
However, the Risch algorithm applies only to indefinite integrals, and most of the integrals of interest to physicists, theoretical chemists and engineers are definite integrals, often related to Laplace transforms, Fourier transforms and Mellin transforms. Lacking a general algorithm, the developers of computer algebra systems have implemented heuristics based on pattern-matching and the exploitation of special functions, in particular the incomplete gamma function.[1] Although this approach is heuristic rather than algorithmic, it is nonetheless an effective method for solving many definite integrals encountered in practical engineering applications. Earlier systems such as Macsyma had a few definite integrals related to special functions within a look-up table. However, this particular method, involving differentiation of special functions with respect to their parameters, variable transformation, pattern matching and other manipulations, was pioneered by developers of the Maple[2] system and later emulated by Mathematica, Axiom, MuPAD and other systems.
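To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not anything from the comic or the quoted material): sympy's integrate() attempts a closed-form antiderivative, drawing on pattern-matching heuristics and a partial implementation of the Risch algorithm, while scipy's quad() just returns a number for one definite integral.

    import math

    import sympy as sp
    from scipy.integrate import quad

    x = sp.symbols('x')

    # Symbolic integration: a closed-form antiderivative, valid for every x.
    F = sp.integrate(sp.exp(-x) * sp.sin(x), x)
    print(F)  # -exp(-x)*sin(x)/2 - exp(-x)*cos(x)/2

    # Numerical integration: one floating-point number for one definite integral.
    val, err = quad(lambda t: math.exp(-t) * math.sin(t), 0.0, 10.0)
    print(val, err)  # ~0.49997, plus an error estimate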
The fact that an operation in calculus and its inverse (here, differentiation and integration) can be profoundly different in difficulty is very non-intuitive, but it is definitely true. The assumption that they should be similar in difficulty is akin to the faulty reasoning behind "naturalness" as a hypothesis generator and evaluator in physics.

The alt text, while seemingly just tongue-in-cheek wordplay, actually hints at a deeper truth as well. While "symbolic integration" doesn't mean what the alt text says it does, it isn't actually uncommon in theoretical physics to have a paper that calculates something as a proof of concept or a demonstration of a method, where the actual result of the calculation doesn't matter.

FYI: This blog is currently one post short of its 3% humor quota.

Tuesday, February 26, 2019

Sean Carroll On Cosmology

  1. The Big Bang model is simply the idea that our universe expanded and cooled from a hot, dense, earlier state. We have overwhelming evidence that it is true.
  2. The Big Bang event is not a point in space, but a moment in time: a singularity of infinite density and curvature. It is completely hypothetical, and probably not even strictly true. (It’s a classical prediction, ignoring quantum mechanics.)
  3. People sometimes also use “the Big Bang” as shorthand for “the hot, dense state approximately 14 billion years ago.” I do that all the time. That’s fine, as long as it’s clear what you’re referring to.
  4. The Big Bang might have been the beginning of the universe. Or it might not have been; there could have been space and time before the Big Bang. We don’t really know.
  5. Even if the BB was the beginning, the universe didn’t “pop into existence.” You can’t “pop” before time itself exists. It’s better to simply say “the Big Bang was the first moment of time.” (If it was, which we don’t know for sure.)
  6. The Borde-Guth-Vilenkin theorem says that, under some assumptions, spacetime had a singularity in the past. But it only refers to classical spacetime, so says nothing definitive about the real world.
  7. The universe did not come into existence “because the quantum vacuum is unstable.” It’s not clear that this particular “Why?” question has any answer, but that’s not it.
  8. If the universe did have an earliest moment, it doesn’t violate conservation of energy. When you take gravity into account, the total energy of any closed universe is exactly zero.
  9. The energy of non-gravitational “stuff” (particles, fields, etc.) is not conserved as the universe expands. You can try to balance the books by including gravity, but it’s not straightforward.
  10. The universe isn’t expanding “into” anything, as far as we know. General relativity describes the intrinsic geometry of spacetime, which can get bigger without anything outside.
  11. Inflation, the idea that the universe underwent super-accelerated expansion at early times, may or may not be correct; we don’t know. I’d give it a 50% chance, lower than many cosmologists but higher than some.
  12. The early universe had a low entropy. It looks like a thermal gas, but that’s only high-entropy if we ignore gravity. A truly high-entropy Big Bang would have been extremely lumpy, not smooth.
  13. Dark matter exists. Anisotropies in the cosmic microwave background establish beyond reasonable doubt the existence of a gravitational pull in a direction other than where ordinary matter is located.
  14. We haven’t directly detected dark matter yet, but most of our efforts have been focused on Weakly Interacting Massive Particles. There are many other candidates we don’t yet have the technology to look for. Patience.
  15. Dark energy may not exist; it’s conceivable that the acceleration of the universe is caused by modified gravity instead. But the dark-energy idea is simpler and a more natural fit to the data.
  16. Dark energy is not a new force; it’s a new substance. The force causing the universe to accelerate is gravity.
  17. We have a perfectly good, and likely correct, idea of what dark energy might be: vacuum energy, a.k.a. the cosmological constant. An energy inherent in space itself. But we’re not sure.
  18. We don’t know why the vacuum energy is much smaller than naive estimates would predict. That’s a real puzzle.
  19. Neither dark matter nor dark energy are anything like the nineteenth-century idea of the aether.
From Sean Carroll's blog (a January 12, 2019 post).

He is mostly, but not entirely, correct. Below, I spell out point by point which of his 19 statements about cosmology I agree with, and which I think are wrong or overstated.

I agree with 1-12, 14, and 18.

I disagree with 13 ("Dark matter exists. Anisotropies in the cosmic microwave background establish beyond reasonable doubt the existence of a gravitational pull in a direction other than where ordinary matter is located."). Dark matter phenomena definitely exist and require "new physics" to explain, but the interpretation he gives to the CMB is more model dependent than he acknowledges. There are, at most, 50-50 odds that these phenomena are caused by dark matter particles rather than gravity modification or something similar. Also, many of the more viable dark matter particle theories require a fifth force or gravity modification in addition to dark matter particles. Personally, I think that an explanation predominantly from gravity modification (including subtle refinements of GR in either a classical or quantum gravity mode) is more likely than not to be correct.

The first sentence of 15 ("Dark energy may not exist; it’s conceivable that the acceleration of the universe is caused by modified gravity instead.") is true. The second ("But the dark-energy idea is simpler and a more natural fit to the data.") is not. 

I disagree with 16 ("Dark energy is not a new force; it’s a new substance. The force causing the universe to accelerate is gravity."). This is a possibility, but not anything approaching a certainty. Indeed 16 is internally inconsistent with 15.

The last sentence of 17 ("But we’re not sure.") is true. The first two sentences of 17 ("We have a perfectly good, and likely correct, idea of what dark energy might be: vacuum energy, a.k.a. the cosmological constant. An energy inherent in space itself.") are mostly true except for the "likely correct" part.

19 is mostly true, but "anything like" in 19 is susceptible to different interpretations, and if your standard for similarity is low, it isn't true. So 19 slightly overstates the proposition.

Monday, February 25, 2019

The NYT On Dark Energy

Today's New York Times has an article discussing recent research efforts related to the phenomena attributed to dark energy.

The article discusses two basic kinds of astronomy observations: the first involves discrepancies on the order of 9% between different kinds of measurements of Hubble's constant, which measures the rate at which the universe is expanding; the second involves discrepancies between the apparent rate of expansion over time and a constant value, based upon myriad observations of very old quasars.

Both of these results could be due to systematic errors in astronomy measurements which are hard to quantify, or could be solved by new physics such as quintessence or phantom energy theories in which the amount of dark energy is not constant (as it is if the cosmological constant is merely added to the equations of general relativity, the leading and simplest explanation for what is observed).

The problem with the new physics approaches discussed in the article is that the kinds of new physics that would be necessary to reproduce what the observational evidence seems to show are very weird and ill-motivated. Even the proponents of these new physics explanations justify them more as a proof of concept, showing that it is possible to come up with some sort of new physics that could explain the data, rather than strongly arguing that their crazy mechanisms are actually what is causing the observational discrepancies that we see.

I don't rule out the possibility of new physics in this area that will help explain the data either, although at least some of the discrepancies are almost surely due to systematic errors in astronomy observations that aren't well quantified. But, if there is a new physics solution, it seems very unlikely that the ones proposed (such as a 100,000 year period in the early universe where extra dark energy appears for a while and then vanishes, or a theory in which energy is not conserved when things go fast enough) are actually the right ones.

Negative Mass Models Of Dark Energy And Dark Matter Don't Work

Another explanation of dark energy and dark matter fails miserably.

Can a negative-mass cosmology explain dark matter and dark energy?

A recent work by Farnes (2018) proposed an alternative cosmological model in which both dark matter and dark energy are replaced with a single fluid of negative mass. This paper presents a critical review of that model. A number of problems and discrepancies with observations are identified. For instance, the predicted shape and density of galactic dark matter halos are incorrect. Also, halos would need to be less massive than the baryonic component or they would become gravitationally unstable. Perhaps the most challenging problem in this theory is the presence of a large-scale version of the `runaway' effect, which would result in all galaxies moving in random directions at nearly the speed of light. Other more general issues regarding negative mass in general relativity are discussed, such as the possibility of time-travel paradoxes.
Comments: Submitted to A&A
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA); General Relativity and Quantum Cosmology (gr-qc)
Cite as: arXiv:1902.08287 [astro-ph.CO]
(or arXiv:1902.08287v1 [astro-ph.CO] for this version)
Certain classes of modified gravity theories are also ruled out by new observations:

Constraints of general screened modified gravities from comprehensive analysis of binary pulsars

Testing gravity by binary pulsars nowadays becomes a key issue. Screened modified gravity is a kind of scalar-tensor theory with screening mechanism in order to satisfy the tight Solar System tests. In this paper, we investigate how the screening mechanism affects the orbital dynamics of binary pulsars, and calculate in detail the five post-Keplerian (PK) parameters in this theory. These parameters differ from those of general relativity (GR), and the differences are quantified by the scalar charges, which lead to the dipole radiation in this theory. We combine the observables of PK parameters for the ten binary pulsars, respectively, to place the constraints on the scalar charges and possible deviations from GR. The dipole radiation in the neutron star (NS) - white dwarf (WD) binaries leads to more stringent constraints on deviations from GR. The most constraining systems for the scalar charges of NS and WD are PSR~B1913+16 and PSR~J1738+0333, respectively. The results of all tests exclude significant strong-field deviations and show good agreement with GR.
Comments: 14 pages, 20 figures, 2 tables, ApJ accepted
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA)
Cite as: arXiv:1902.08374 [gr-qc]
(or arXiv:1902.08374v1 [gr-qc] for this version)
Also, LIGO rules out certain kinds of compact dark matter objects in the solar system:

Gravitational waves from compact dark matter objects in the solar system

Dark matter could be composed of compact dark objects (CDOs). We find that a close binary of CDOs orbiting inside solar system bodies can be a loud source of gravitational waves (GWs) for the LIGO and VIRGO detectors. An initial search of data from the first Advanced LIGO observing run (O1), sensitive to h0 ≈ 10⁻²⁴, rules out close binaries orbiting near the center of the Sun with GW frequencies (twice the orbital frequency) between 50 and 550 Hz and CDO masses above approximately 10⁻⁹ M_sun.
Comments: 5 pages, 3 figures
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Astrophysical Phenomena (astro-ph.HE); Instrumentation and Methods for Astrophysics (astro-ph.IM)
Cite as: arXiv:1902.08273 [gr-qc]
(or arXiv:1902.08273v1 [gr-qc] for this version)
On the other hand, this new gravity based paper looks interesting:

Dark matter effect attributed to the inherent structure of cosmic space

We propose that anomalous gravitational effects currently attributed to dark matter can alternatively be explained as a manifestation of the inherent structure of space at galactic length scales. Specifically, we show that the inherent curvature of space amplifies the gravity of ordinary matter such that the effect resembles the presence of the hypothetical hidden mass. Our study is conducted in the context of weak gravity, nearly static conditions, and spherically symmetric configuration, and leverages the Cosmic Fabric model of space developed by Tenev and Horstemeyer [T. G. Tenev and M. F. Horstemeyer, Int. J. Mod. Phys. D 27 (2018) 1850083; T. G. Tenev and M. F. Horstemeyer, Rep. Adv. Phys. Sci. 2 (2018) 1850011].
Subjects: General Relativity and Quantum Cosmology (gr-qc)
MSC classes: 83D05, 74L99
Journal reference: International Journal of Modern Physics D (2019) 1950082
DOI: 10.1142/S0218271819500822
Cite as: arXiv:1902.08504 [gr-qc]
(or arXiv:1902.08504v1 [gr-qc] for this version)

This is summed up in the preprint's conclusion:
We showed that the inherent curvature of physical space (that is curvature uncaused by matter) amplifies the gravitational effects of ordinary matter and produces the kind of gravitational anomalies that are currently attributed to the presence of dark matter (DM). We proposed the Inherent Structure Hypothesis (ISH) stating that the so called DM effect is the manifestation of the inherent structure of space at galactic length-scales, and not the result of invisible mass. 
We demonstrated that any DM effect, which can be explained by the Modified Newtonian Dynamics (MOND) theory or by the presence of a DM halo, can be equally well explained by the ISH. At the same time, we showed, ISH allows for DM effects that cannot be explained by MOND or by DM halos. Therefore, we concluded that the Inherent Structure and DM explanations are observationally equivalent with each other to within some distance from the center of a gravitating system. However, beyond such distance, the ISH predicts that the gravitational impact of the hypothetical dark matter begins to be reversed and is nearly completely eliminated at sufficiently far distances. This is a verifiable prediction that would distinguish our model from other explanations of the DM effect. 
In the comparison between the ISH and MOND we noted an interesting relationship between the size of a gravitational system and its Schwartzchild radius through the MOND parameter a0. Such relationship hinted at the structural underpinnings of the DM effect. 
The Inherent Structure Hypothesis stems from the principle that structure is a fundamental aspect of matter, space, and nature in general, and as such can be incorporated into cosmological models that subscribe to the same principle. 

Friday, February 22, 2019

Strong Force Coupling Constant Measured With Unprecedented Accuracy

We present state-of-the-art extractions of the strong coupling based on N3LO+NNLL accurate predictions for the two-jet rate in the Durham clustering algorithm at e+e− collisions, as well as a simultaneous fit of the two- and three-jet rates taking into account correlations between the two observables. The fits are performed on a large range of data sets collected at the LEP and PETRA colliders, with energies spanning from 35 GeV to 207 GeV. Owing to the high accuracy of the predictions used, the perturbative uncertainty is considerably smaller than that due to hadronization. Our best determination at the Z mass is αs(MZ) = 0.11881 ± 0.00063 (exp.) ± 0.00101 (hadr.) ± 0.00045 (ren.) ± 0.00034 (res.), which is in agreement with the latest world average and has a comparable total uncertainty.
Andrii Verbytskyi, et al., "High precision determination of αs from a global fit of jet rates" (February 21, 2019).

Thus, bottom line: combining all four components of the margin of error in quadrature, the measurement of the strong force coupling constant at the Z boson mass energy scale is 0.11881(132). The last digit in the measured value and in the margin of error, however, is not really a significant digit, so it would be more proper to express the result as 0.1188(13), which disavows spurious accuracy.
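That combination is easy to verify; a quick sketch, assuming (as is standard for independent error sources) that the four components add in quadrature:

    import math

    # exp., hadr., ren., res. uncertainties from the abstract quoted above
    components = [0.00063, 0.00101, 0.00045, 0.00034]
    total = math.sqrt(sum(c * c for c in components))
    print(round(total, 5))  # 0.00132, i.e. alpha_s(MZ) = 0.11881(132)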

According to the Particle Data Group, the global average value for this constant is currently 0.1181(11).

If one combines the new result with the old global average in an error weighted average, you end up with a global average value of about 0.1184 with a margin of error of roughly ± 0.0008 (see the sketch below), although I suspect that the error bars in these measurements, across the board, are overstated by about a factor of two. Thus, we know the value of this constant to about 3 significant digits.
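The standard inverse-variance weighting formula goes like this (a sketch, taking the stated error bars at face value and treating the two values as independent, which the next paragraph explains is not exactly right):

    def weighted_average(measurements):
        # Inverse-variance weighting for independent measurements:
        # mean = sum(x / s^2) / sum(1 / s^2), error = 1 / sqrt(sum(1 / s^2)).
        weights = [1.0 / sigma ** 2 for _, sigma in measurements]
        mean = sum(w * x for w, (x, _) in zip(weights, measurements)) / sum(weights)
        return mean, sum(weights) ** -0.5

    mean, sigma = weighted_average([(0.1188, 0.0013), (0.1181, 0.0011)])
    print(round(mean, 4), round(sigma, 4))  # 0.1184 0.0008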

This back of napkin analysis isn't completely correct, because to do it absolutely right, you would have to consider the correlations in the contributions that the LEP and PETRA data make to the global averages with a different analysis of the data. But, those adjustments are almost surely immaterial relative to the margin of error in the respective measurements.

Relevance

When the new data only results in adjustments on the order of parts per thousand, the result isn't exactly revolutionary or mind blowing. But, every little improvement in the precision with which this constant can be measured is important, because, absent special symmetries in an experiment, no calculation of a QCD predicted value can ever produce an answer more precise than the best available value of αs(MZ).

The new result shifts the central value of the strong force coupling constant by about one part in 394, which is a quantity large enough to produce just barely observable, albeit subtle, differences in QCD predictions made using the respective values of the strong force coupling constant.

A newly refined methodology for extracting this measurement from the data also provides some incrementally greater reassurance that the quantity being measured really is a single fundamental constant of nature that each experiment is measuring with more or less accuracy, and that the combined measurement is robust, as opposed to being strongly dependent upon the technique used to make the measurement.

The more significant digits you have in the least precisely known of the fundamental constants of the Standard Model, the more meaningful numerological hypotheses about the relationships between these physical constants that are consistent with the data become. It is much easier, for example, to come up with spurious relationships between the coupling constants of the Standard Model that hold to three significant digits of accuracy than it is to find ones that hold to a precision of five or more significant digits.

It is also worth mentioning in passing that no deviations from the Standard Model related to the running of the strong force coupling constant with energy scale have been observed to date, despite the existence of data from energy scales as low as those found within a single stable or metastable hadron at rest, up to the largest 13 TeV to 14 TeV energy scales of the LHC (which translate into usable exclusion ranges up to about a tenth of those energies). These results are contrary to the expectation of a version of the Minimal Supersymmetric Standard Model (MSSM) in which supersymmetric phenomena begin to arise around the electroweak scale (i.e. low hundreds of GeV).

But, proponents of supersymmetry do fairly point out that the precision of strong force coupling constant measurements at very high energies is not great enough to definitively distinguish between the Standard Model and MSSM expectations at even the two sigma level, even though those deviations are quite material.

For example, as discussed in a June 19, 2018 post at this blog:
The strong force coupling constant, which is 0.1184(7) at the Z boson mass, would be about 0.0969 at 730 GeV and about 0.0872 at 1460 GeV, in the Standard Model and the highest energies at which the strong force coupling constant could be measured at the LHC is probably in this vicinity. 
In contrast, in the MSSM [minimal supersymmetric standard model], we would expect a strong force coupling constant of about 0.1024 at 730 GeV (about 5.7% stronger) and about 0.0952 at 1460 GeV (about 9% stronger).
Current individual measurements of the strong force coupling constant at energies of about 40 GeV and up (i.e. without global fitting or averaging over multiple experimental measurements at a variety of energy scales), have error bars of plus or minus 5% to 10% of the measured values. But, even a two sigma distinction between the SM prediction and SUSY prediction would require a measurement precision of about twice the percentage difference between the predicted strength under the two models, and a five sigma discovery confidence would require the measurement to be made with 1%-2% precision (with somewhat less precision being tolerable at higher energy scales).
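As an illustration of what such running looks like, here is a back-of-envelope, one-loop Standard Model evolution of the strong coupling above the Z mass (my own sketch; the numbers quoted above come from a more careful multi-loop treatment with threshold matching, so the one-loop output lands in the same ballpark rather than matching exactly):

    import math

    MZ = 91.1876         # Z boson mass, GeV
    MTOP = 173.1         # top quark pole mass, GeV (PDG 2018 value quoted later in this post)
    ALPHA_S_MZ = 0.1184  # the value at the Z mass used in the quoted post

    def beta0(nf):
        # One-loop QCD beta function coefficient for nf active quark flavors.
        return (33.0 - 2.0 * nf) / (12.0 * math.pi)

    def run(alpha, q_from, q_to, nf):
        # One-loop evolution of alpha_s from scale q_from to scale q_to.
        return alpha / (1.0 + alpha * beta0(nf) * math.log(q_to ** 2 / q_from ** 2))

    def alpha_s(q):
        # Five active flavors up to the top threshold, six above it.
        if q <= MTOP:
            return run(ALPHA_S_MZ, MZ, q, nf=5)
        return run(run(ALPHA_S_MZ, MZ, MTOP, nf=5), MTOP, q, nf=6)

    for q in (730.0, 1460.0):
        print(q, round(alpha_s(q), 4))  # ~0.092 and ~0.086 at one loop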
UPDATE from the comments February 24, 2019:

There is another new evaluation of this constant using Lattice QCD, using a different renormalization scheme, apparently derived from the physical pion mass:

The strong running coupling from the gauge sector of Domain Wall lattice QCD with physical quark masses

We report on the first computation of the strong running coupling at the physical point (physical pion mass) from the ghost-gluon vertex, computed from lattice simulations with three flavors of Domain Wall fermions. We find α_MS(mZ²) = 0.1172(11), in remarkably good agreement with the world-wide average. Our computational bridge to this value is the Taylor-scheme strong coupling, which has been revealed of great interest by itself because it can be directly related to the quark-gluon interaction kernel in continuum approaches to the QCD bound-state problem.
The introductory material hits a lot of key high points:
Quantum Chromodynamics (QCD), the non-Abelian gauge quantum field theory describing the strong interaction between quarks and gluons, can be compactly expressed in one line with a few inputs; namely, the current quark masses and the strong coupling constant, αs [1]. The latter is a running quantity which sets the strength of the strong interaction for all momenta. This running can be, a priori, inferred from the theory and encoded in the Renormalization Group equation (RGE) of αs, the value of which can be thus propagated from one given momentum to any other. The strong coupling is expressed by either the boundary condition for its RGE, generally dubbed ΛQCD, or its value at a reference scale, typically the Z⁰-pole mass. This value is considered one of the QCD fundamental parameters, to be fitted from experiments, and amounts to αs(mZ²) = 0.1181(11) [2], in the MS-bar renormalization scheme. Its current uncertainty of about 1% renders it the least precisely known of all fundamental coupling constants in nature. But at the same time, it is interesting to mention that a plethora of computations of LHC processes depend on an improved knowledge of αs to reduce their theoretical uncertainties [3]. Especially in the Higgs sector, the uncertainty of αs dominates that for the H → cc̄, gg branching fractions and, after the error in the bottom mass, the one for the dominant H → bb̄ partial decay. And contrarily to other sources of uncertainty, as parton distribution functions, which reduced substantially [4], that for αs has not significantly changed in the last decade. Moreover, the αs running and its uncertainty also has a non-negligible impact in the study of the stability of the electroweak vacuum, in the determination of the unification scale for the interaction couplings and, generally, in discriminating different New Physics scenarios.
There are many methods to determine the QCD coupling constant based on precision measurements of different processes and at different energy scales. A description of which can be found in the last QCD review of Particle Data Group (PDG) [2] or in specific reviews as, for instance, Ref. [5]. Alternatively, lattice QCD can be applied as a tool to convert a very precise physical observation, used for the lattice spacing setting, into ΛQCD. Thus, lattice QCD calculations can potentially be of a great help to increase the accuracy of our knowledge of αs. A review of most of the procedures recently implemented to determine the strong coupling from the lattice can be found in Ref. [6]. Among these procedures, there are those based on the computation of QCD Green’s functions (see for instance [7–9]), the most advantageous of which exploits the ghost-gluon vertex renormalized in the so called Taylor scheme [10–17] such that the involved coupling can be computed from two-point Green’s functions. As a bonus, this coupling is connected to the quark-gluon interaction kernel in continuum approaches to the QCD bound-state problem [18–22]. In this letter, we shall focus on this method and evaluate the Taylor coupling from lattice simulations with three Domain Wall fermions (DWF) at the physical point. DWF (cf. [23, 24] for two interesting reviews), owing to their very good chiral properties, are expected to suffer less the impact of discretization artifacts.
END UPDATE

SECOND UPDATE February 25, 2019:

The FLAG group, a competitor to the PDG, has this to say about the strong force coupling constant in their 2019 review:
9.10.4 Conclusions 
With the present results our range for the strong coupling is (repeating Eq. (346)) 
α^(5)_MS(MZ) = 0.11823(81) Refs. [13, 15, 24, 80–83],
and the associated Λ parameters 
Λ^(5)_MS = 211(10) MeV Refs. [13, 15, 24, 80–83], (353)
Λ^(4)_MS = 294(12) MeV Refs. [13, 15, 24, 80–83], (354)
Λ^(3)_MS = 343(12) MeV Refs. [13, 15, 24, 80–83]. (355)
Compared to FLAG 16, the errors have been reduced by about 30% due to new computations. As can be seen from Fig. 38, when surveying the green data points, the individual lattice results agree within their quoted errors. Furthermore those points are based on different methods for determining αs, each with its own difficulties and limitations. Thus the overall consistency of the lattice αs results and the large number of ⋆ in Tab. 38, engenders confidence in our range. 
It is interesting to compare to the Particle Data Group average of nonlattice determinations of recent years, 
α^(5)_MS(MZ) = 0.1174(16), PDG 18, nonlattice [136] (281)
α^(5)_MS(MZ) = 0.1174(16), PDG 16, nonlattice [198] (356)
α^(5)_MS(MZ) = 0.1175(17), PDG 14, nonlattice [167] (357)
α^(5)_MS(MZ) = 0.1183(12), PDG 12, nonlattice [798] (358)
(there was no update in [136]). There is good agreement with Eq. (346). Due to recent new determinations the lattice average is by now a factor two more precise than the nonlattice world average and an average of the two [Eq. (346) and Eq. (281)] yields 
α^(5)_MS(MZ) = 0.11806(72), FLAG 19 + PDG 18. (359)
In Fig. 38 we also depict the various PDG pre-averages which lead to the PDG 2018/2016 nonlattice average. They are on a similar level as our pre-ranges (grey bands in the graph): each one corresponds to an estimate (by the PDG) of αs determined from one set of input quantities. Within each pre-average multiple groups did the analysis and published their results as displayed in Ref. [136]. The PDG performed an average within each group; we only display the latter in Fig. 38.
The fact that our range for the lattice determination of αMS(MZ) in Eq. (346) is in excellent agreement with the PDG nonlattice average Eq. (281) is an excellent check for the subtle interplay of theory, phenomenology and experiments in the nonlattice determinations. The work done on the lattice provides an entirely independent determination, with negligible experimental uncertainty, which reaches a better precision even with our quite conservative estimate of its uncertainty. 
Given that the PDG has not updated their number, Eq. (359) is presently the up-to-date world average. 
We finish by commenting on perspectives for the future. The step-scaling methods have been shown to yield a very precise result and to satisfy all criteria easily. A downside is that dedicated simulations have to be done and the method is thus hardly used. It would be desirable to have at least one more such computation by an independent collaboration, as also requested in the review [661]. For now, we have seen a decrease of the error by 30% compared to FLAG 16. There is potential for a further reduction. Likely there will be more lattice calculations of αs from different quantities and by different collaborations. This will enable increasingly precise determinations, coupled with stringent cross-checks.
Other Fundamental Physical Constants

It is useful to review, for context, the other experimentally measured fundamental physical constants that are known and the precision with which they are known.

It is customary to distinguish the fundamental physical constants which are experimentally measured (of which about 30 are necessary, in principle, to calculate everything in the Standard Model and General Relativity, although the exact list may vary, because which constants you call fundamental and which you call derived can vary when the constants are related to each other) from the physical constants whose exact values are believed to be known exactly as a matter of theory.

Generally speaking, most physicists hope that we will someday discover how to reduce the number of experimentally measured fundamental constants in physics by deriving some of them from others, or from new, more fundamental physical constants, according to some deeper theory. That deeper theory could be either a "beyond the Standard Model" theory, if it makes predictions different from the Standard Model in some circumstances, or what I call a "within the Standard Model" theory, which provides a more fundamental way of explaining the predictions and features of the Standard Model while not actually differing in phenomenology in any discernible respect.

These theoretically assumed physical constants which are properties of the fundamental laws of physics (i.e. basically the Standard Model, Special Relativity, and General Relativity) include:
  • the electric charge, hypercharge, and weak isospin of the fundamental particles, 
  • the total angular momentum (commonly called "spin" or "J") of the fundamental particles,
  • the conservation of angular momentum, 
  • the conservation of linear momentum, 
  • the exactly zero rest mass of photons and gluons (which can have eight kinds of QCD color charge), 
  • the fact that CPT is conserved, 
  • the conservation of baryon number and the baryon number of various particles, 
  • the conservation of lepton number and the lepton number of various particles, 
  • the non-existence of right parity neutrinos and left parity antineutrinos, 
  • the relationship between the masses of the fundamental particles (with the possible exception of neutrinos) and their coupling to the Higgs boson (also called a Yukawa) including its self-coupling, 
  • the conservation of quark type and lepton type in the absence of W boson interactions and neutrino oscillations, 
  • the conservation of mass-energy (mass and energy convert into each other only according to the formula E = mc²), except possibly with regard to the effects of the cosmological constant and/or "dark energy," which is another term used to describe effects usually described in cosmology with the cosmological constant, 
  • the manner in which all particles always obey the Lorentz transformations of special relativity, 
  • the unitary nature of probabilities in Standard Model calculations, 
  • the fact that charge-parity (a.k.a. CP) is conserved in all interactions that do not involve the W boson interacting with a fundamental fermion (thus, the strong force, Z boson interactions, electromagnetic interactions and gravitational interactions preserve CP), 
  • the fact that fundamental fermions have only two possible parities (commonly called "left" and "right"),
  • the differences in properties between integer spin particles called bosons and half integer spin particles called fermions, and the absence of total angular momentum other than in multiples of 1/2,
  • the exact terms and the operation of the beta functions of all of the experimentally measured constants of the Standard Model (i.e. how they change with momentum transfer scale a.k.a. energy scale), 
  • the relationship between a photon's energy and its frequency, 
  • the principles that govern how a particle is allowed to decay and with what frequency it does so given the experimentally measured constants that apply, 
  • the form of the propagator function of Standard Model particles,
  • the magnitude and types of allowed QCD color charges for quarks and gluons, 
  • the existence of exactly three dimensions of space and one dimension of time, 
  • the conservation of electric charge, the conservation of color charge, 
  • the "generation" of a particular fundamental fermion and the fact that there are exactly three generations of each kind of fundamental fermion, 
  • the fact that "right handed" parity particles and "left handed" parity antiparticles don't interact with the weak force, 
  • the fact that antiparticles are identical to ordinary particles in all respects except that they have reversed charge and parity,
  • the principle of gauge invariance a.k.a. background independence that means that the choice of units and coordinate systems used to describe and measure phenomena cannot influence the behavior of the fundamental laws of physics, 
  • the fact that the Higgs boson has "even" rather than "odd" parity (i.e. that a Higgs boson is scalar rather than pseudoscalar), and 
  • the assumption that charged leptons are equally likely, adjusting only for mass, to be transformed into other charged lepton types in W boson interactions (sometimes called the "democratic principle").
While quantum gravity is still only a hypothetical theory, it is widely assumed that the only additional particle involved in most quantum gravity theories is a massless, spin-2 boson called a graviton that couples to all other fundamental particles (including other gravitons) with a strength proportional to the mass-energy of those particles and Newton's constant. Basically, the properties of the graviton in most quantum gravity theories are completely determined by theory, but the mathematics involved in making phenomenological predictions with this theory is completely intractable outside the most simplified cases.

The Quark Masses

FLAG 19 has notably also updated its estimates of five of the six quark masses.

Its Nf = 2 + 1 + 1 estimate of the mass of the charm quark (as the running mass evaluated at its own scale, mc(mc)) is:
Combining all four results yields mc(mc) = 1.282(17) GeV Refs. [8, 9, 15, 22]. (59)
Its Nf = 2 + 1 + 1 estimate of the mass of the bottom quark (again as mb(mb)) is:
Nf = 2 + 1 + 1 : mb(mb) = 4.198(12) GeV Refs. [8, 15, 26–28]. (67)
Thus, we know the masses of the up and down quarks individually to 2 significant digits, and we know to 3 significant digits the average of the up and down quark masses, the mass of the strange quark, the masses of the charm and bottom quarks, the strong force coupling constant, and the top quark pole mass (which is 173.1 ± 0.9 GeV according to the Particle Data Group's 2018 values).


The Other Standard Model Coupling Constants

We know the coupling constants of each of the other three fundamental forces of physics to significantly greater precision (i.e. to three to eight additional orders of magnitude).

The corresponding physical constant for electromagnetism, the fine structure constant, is 7.297 352 5664(17) × 10⁻³. Thus, we know the value of this constant to about 10 significant digits.

Fermi's coupling constant, which is proportional to the weak force coupling constant, is 1.166 378 7(6) × 10⁻⁵ GeV⁻². Thus, we know the value of this constant to about 8 significant digits.
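One way to make the "significant digits" comparisons throughout this post concrete is to count one digit per decade of relative precision, plus one for the leading digit. A small helper along those lines (my own rough heuristic, applied to values quoted in this post):

    import math

    def significant_digits(value, sigma):
        # One significant digit per decade of value/error, plus one for the leading digit.
        return math.log10(abs(value) / sigma) + 1.0

    print(round(significant_digits(0.1181, 0.0011), 1))            # alpha_s: ~3.0
    print(round(significant_digits(7.2973525664e-3, 1.7e-12), 1))  # fine structure constant: ~10.6
    print(round(significant_digits(1.1663787e-5, 6e-12), 1))       # Fermi's constant: ~7.3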

Other Standard Model Physical Constants Apart From The Neutrino Sector

We know a variety of other fundamental or nearly fundamental physical constants to precisions of 1 to 10 significant digits.

The electron mass is 0.510 998 946 1 ± 0.000 000 003 1 MeV, which we know to 10 significant digits.

The muon mass is 105.658 374 5 ± 0.000 002 4 MeV, which we know to 8 significant digits.

The tau lepton mass is 1776.86 ± 0.12 MeV, which we know to 5 significant digits.

Incidentally, the masses of the electron, muon and tau lepton are related to each other in a manner that is consistent with Koide's rule to within the current margins of error in these measurements. Koide's rule, formulated in 1981, six years after the tau lepton was first discovered, made a prediction regarding the mass of the tau lepton as a function of the masses of the electron and the muon. If Koide's rule is correct, the mass of the tau lepton, given the known electron mass and muon mass, is 1776.96894(7) MeV, which would be precise to 9 significant digits. The difference between the measured mass of the tau lepton and its predicted value given Koide's rule is about 0.1 MeV, which is less than one standard deviation of measurement error from the predicted value.
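Koide's rule says that (me + mμ + mτ) divided by (√me + √mμ + √mτ)² is exactly 2/3. Given the measured electron and muon masses, solving that relation for the tau mass is just the quadratic formula; a sketch (the input masses are the PDG values):

    import math

    M_E = 0.5109989461   # electron mass, MeV
    M_MU = 105.6583745   # muon mass, MeV

    # Koide: (me + mmu + mtau) = (2/3) * (sqrt(me) + sqrt(mmu) + sqrt(mtau))**2.
    # Substituting s = sqrt(mtau) gives s**2 - 4*c*s + (3*m - 2*c**2) = 0, with
    # c = sqrt(me) + sqrt(mmu) and m = me + mmu; the physical (larger) root is:
    c = math.sqrt(M_E) + math.sqrt(M_MU)
    m = M_E + M_MU
    s = 2.0 * c + math.sqrt(6.0 * c ** 2 - 3.0 * m)
    print(round(s * s, 5))  # ~1776.969 MeV vs. the measured 1776.86 +/- 0.12 MeV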

The W boson mass is 80.379 ± 0.012 GeV, which we know to 4 significant digits. A global electroweak fit suggests that the true value is closer to 80.356 ± 0.009 GeV.

The Z boson mass is 91.1876 ± 0.0021 GeV, which we know to 5 significant digits.

The Higgs boson mass (per the 2018 Particle Data Group average, which is not quite up to date, but is close enough for purposes of significant digits of precision) is 125.18 ± 0.16 GeV, which we know to 4 significant digits.

The four parameters of the CKM matrix, in the Wolfenstein parameterization, are λ = 0.22537 ± 0.00061, A = 0.814 +0.023/−0.024, ρ̄ = 0.117 ± 0.021, and η̄ = 0.353 ± 0.013. One of them is known to 4 significant digits; the other three are known to 3 significant digits.

While they are not usually considered fundamental Standard Model constants, two other key physical constants are used in the Standard Model (in addition to the speed of light, discussed below):

The Higgs vacuum expectation value (a.k.a. Higgs vev) is 246.227 957 9 ± 0.000 001 0 GeV, which is known to 10 significant digits. While the Higgs boson mass was discovered only in 2012, the Higgs vev has been known to great precision for decades.

Planck's constant is 6.626 069 57(29) × 10⁻³⁴ J·s. It is known to 8 significant digits.

The mean lifetime of each of the various fundamental and composite particles in the Standard Model, which is inversely proportional to their "decay width" and proportional to their half-life, is not a fundamental constant in the Standard Model. Instead, it is calculated (in principle) from the fundamental constants of the Standard Model.
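The conversion is the usual quantum mechanical one, τ = ħ/Γ. A sketch (the Higgs width used here is the Standard Model predicted value of roughly 4.1 MeV, an assumed input, since the width has not been directly measured):

    HBAR_GEV_S = 6.582119514e-25   # reduced Planck constant, GeV*s

    def mean_lifetime(width_gev):
        # Mean lifetime in seconds from a decay width in GeV: tau = hbar / Gamma.
        return HBAR_GEV_S / width_gev

    print(mean_lifetime(4.1e-3))   # Higgs boson, ~1.6e-22 s (assumed SM width)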

The Seven Standard Model Neutrino Physics Constants

There are four parameters of the PMNS matrix, three of which are known to 2 significant digits. These three parameters are θ12 = 33.36 +0.81/−0.78 degrees, θ23 = 40.0 +2.1/−1.5 degrees or 50.4 +1.3/−1.3 degrees, and θ13 = 8.66 +0.44/−0.46 degrees. We also don't know the octant of θ23 (i.e. if it is under or over 45 degrees). (θ is pronounced "theta" and is a Greek letter often used to describe angles in physics.)

The CP violating phase of the PMNS matrix is known to 1 significant digit, which is just enough to confirm that it is not zero. 

We aren't 100% certain, but it is very likely (probably more than 90% likely) that the neutrino mass eigenstates have a "normal" hierarchy. Their absolute masses, if there is a normal hierarchy, are:

ν1: 0 meV to 12 meV (a two sigma range, ± 6 meV)
ν2: 8.42 meV to 21.9 meV (a two sigma range, ± 6.74 meV)
ν3: 56.92 meV to 72.4 meV (a two sigma range, ± 7.74 meV)

(1 meV = 0.001 electron volts.)

Thus, the absolute neutrino masses are known to about 1 significant digit; the vast majority of the uncertainty is completely correlated between the three absolute neutrino masses, and the balance of it (the mass splittings) is known to 2-3 significant digits.

The difference between the second and third neutrino mass state is roughly 49.5 ± 0.5 meV, which is 3 significant digits and the difference between the first and second neutrino mass state is roughly 8.66 ± 0.12 meV, which is 2 significant digits.

The minimum sum of the three neutrino mass eigenstates, at 95% confidence, from neutrino oscillation data, is 64.4 meV, which is known to 3 significant digits. The maximum sum of the three neutrino mass eigenstates at 95% confidence is derived from cosmology models and is known to 2 significant digits; it is independent of the neutrino hierarchy, and it is one of the main pieces of evidence in favor of a normal neutrino mass hierarchy as opposed to an "inverted" neutrino mass hierarchy.
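The structure of the underlying arithmetic is simple: oscillation data pin down two mass-squared splittings, and the minimal sum in a normal hierarchy follows from setting the lightest mass to zero. A sketch with representative global-fit splittings (my assumed inputs, so the output lands near, rather than exactly on, the figures quoted above):

    import math

    # Representative mass-squared splittings from oscillation fits (assumed inputs):
    DM21_SQ = 7.5e-5   # "solar" splitting, eV^2
    DM31_SQ = 2.5e-3   # "atmospheric" splitting, eV^2, normal hierarchy

    def masses_normal(m1):
        # Mass eigenstates (eV) in a normal hierarchy, given the lightest mass m1 (eV).
        return m1, math.sqrt(m1 ** 2 + DM21_SQ), math.sqrt(m1 ** 2 + DM31_SQ)

    m1, m2, m3 = masses_normal(0.0)  # minimal case: massless lightest state
    print([round(m * 1000, 2) for m in (m1, m2, m3)])  # meV: [0.0, 8.66, 50.0]
    print(round((m1 + m2 + m3) * 1000, 1))             # minimal sum: ~58.7 meV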

We also don't know if the neutrino masses are "Dirac" or "Majorana". If the neutrino masses are Majorana, neutrinoless double beta decay is possible. Neutrinoless double beta decay has not been observed to date in any widely accepted experimental findings (one positive result from a Moscow experiment has been repeatedly ruled out by multiple other experiments to much greater precision than the claimed detection). But, existing limits on the maximum rates of neutrinoless double beta decay are not inconsistent with the possibility that neutrinos with masses as small as those summarized above could be Majorana (the heavier Majorana neutrinos are, the larger their rates of neutrinoless double beta decay).

If neutrinos have Majorana mass, there are additional CP phase constants.

The possibility that there is a fourth "sterile" neutrino (i.e. one that has no weak force interactions) that oscillates with the three "active" neutrino types has also not been ruled out, although on the whole, this appears to be disfavored by the experimental data. If there is a fourth "sterile" neutrino, it would have a mass eigenstate and there would be three more (completely correlated) PMNS matrix elements to fully describe neutrino oscillation possibilities.

Physical Constants Associated With Gravity and the Speed of Light

The speed of light is 299,792,458 m·s⁻¹ and is an experimentally measured constant (even though the speed of light's value is now part of the definition of the meter) that was known to a precision of 9 significant digits when it was used to define the meter. It is used in the Standard Model, General Relativity and Special Relativity.

Newton's constant of gravitation, G, is 6.674 08(31) × 10⁻¹¹ m³·kg⁻¹·s⁻². Thus, we know the value of this constant to about 5 significant digits. Despite the name, this constant is also used in General Relativity.

The cosmological constant is 1.1056 × 10⁻⁵² m⁻², which is nominally known to 3 significant digits (in the Standard Model of Cosmology, a.k.a. the ΛCDM model, it is a function of the dark energy fraction of the mass-energy of the universe and of Hubble's constant). This accuracy is overstated, however, as there are significant tensions between measurements of Hubble's constant made with different techniques. Given those tensions, the actual precision is closer to 1 significant digit.
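For what it's worth, the quoted figure can be reproduced from the ΛCDM relation Λ = 3ΩΛH₀²/c²; a sketch (the Hubble constant and dark energy fraction are assumed, Planck-style inputs):

    # Cosmological constant from Hubble's constant and the dark energy fraction.
    C = 299_792_458.0        # speed of light, m/s
    MPC = 3.0857e22          # one megaparsec, in meters
    H0 = 67.66e3 / MPC       # assumed Hubble constant of 67.66 km/s/Mpc, in 1/s
    OMEGA_LAMBDA = 0.689     # assumed dark energy fraction of the critical density

    lam = 3.0 * OMEGA_LAMBDA * H0 ** 2 / C ** 2
    print(lam)  # ~1.106e-52 m^-2, matching the value quoted above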

The age of the Universe is 13.80 ± 0.04 billion years. Thus, the age of the Universe is known to 4 significant digits. The estimated amount of time between the Big Bang and certain significant events in the history of the Universe, however, is believed in standard cosmology to be known to as little as tiny fractions of a second. In high energy physics, something that has a mean lifetime on the order of magnitude of the lifetime of the universe, but not an infinite mean lifetime, is called "metastable." The age of the Universe is not really a "fundamental constant" of physics, because it is not a number which is always true in describing the laws of physics, and indeed isn't itself an input into any of the laws of physics, but it does describe the universe that we currently live in, in useful ways.

Planck Units

Some physical constants that are functions of Planck's constant, Newton's constant and the speed of light, determined by dimensional reasoning, are relevant to beyond the Standard Model physics:

The Planck length is 1.616 229(38) × 10⁻³⁵ meters. By comparison, a pion, the lightest and one of the smallest composite particles in the Standard Model, is about 10²⁰ Planck lengths in diameter.

The Planck time is 5.391 16(13) × 10⁻⁴⁴ seconds. By comparison, the fastest known process in the Standard Model takes about 2 × 10²⁰ Planck units of time to occur, on average.

The Planck length and Planck time are minimum units of length and time at which measurement becomes ill-defined in the Standard Model, basically due to the Heisenberg uncertainty principle. In practice, it is not possible to measure quantities with precision anywhere near the Planck length or Planck time, as a matter of non-fundamental technological and engineering limitations.

The Planck mass, a.k.a. Planck energy, is 1.220 910(29) × 10¹⁹ GeV/c² = 2.176 47(5) × 10⁻⁸ kilograms = 543.36 kilowatt hours; the corresponding Planck temperature is about 1.417 × 10³² degrees kelvin. This temperature is estimated by cosmologists to have existed only one unit of Planck time after the Big Bang.
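All of these are fixed by dimensional analysis on the reduced Planck constant, Newton's constant, and the speed of light. A sketch that reproduces the values above (the Boltzmann constant is included to get the temperature):

    import math

    HBAR = 1.054571800e-34  # reduced Planck constant, J*s
    G = 6.67408e-11         # Newton's constant, m^3 kg^-1 s^-2
    C = 299_792_458.0       # speed of light, m/s
    K_B = 1.38064852e-23    # Boltzmann constant, J/K

    l_planck = math.sqrt(HBAR * G / C ** 3)  # ~1.616e-35 m
    t_planck = math.sqrt(HBAR * G / C ** 5)  # ~5.391e-44 s
    m_planck = math.sqrt(HBAR * C / G)       # ~2.176e-8 kg
    temp_planck = m_planck * C ** 2 / K_B    # ~1.417e32 K
    print(l_planck, t_planck, m_planck, temp_planck)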

The Planck energy, viewed as an energy scale of momentum transfer in Standard Model particle interactions (i.e. basically as an extremely high temperature defined at the subatomic level), is viewed by many theorists as a possible ultraviolet completion point of the renormalization group running of the fundamental physical constants that run with energy scale in quantum field theories such as the Standard Model, at which there may be what amounts to a phase change in the laws of physics.

END SECOND UPDATE