Showing posts with label natural philosophy.

Tuesday, August 23, 2022

A Hypothetical Pre-Big Bang Universe And More Conjectures

There are at least three plausible solutions to the question of why we live in a matter dominated universe when almost all processes experimentally observed conserve the number of matter particles minus the number of antimatter particles. 

One is that the initial Big Bang conditions were matter dominated (as our post-Big Bang universe almost surely was a mere fraction of a second after the Big Bang). There is no scientific requirement that the universe had any particular initial conditions.

A second is that there is new, non-equilibrium physics beyond the Standard Model that does not conserve baryon and lepton number and is strongly CP violating at extremely high energies. No new physics is necessary for the Standard Model to continue to make mathematical sense up to more than 10^16 GeV, known as the grand unified theory (GUT) scale. And, there is no evidence of such physics yet. But the most powerful particle colliders, and the natural experiments that function as particle colliders, have interaction energies far below 10^16 GeV. The most powerful man-made collider, the Large Hadron Collider (LHC), probes energies on the order of 10^4 GeV, about a trillion times lower than those in the immediate temporal vicinity of the Big Bang.

A third is that matter, which can be conceived of as particles moving forward in time, dominates our post-Big Bang universe, while there is a parallel pre-Big Bang mirror universe dominated by antimatter, which can be conceived of as particles moving backward in time. To the extent that this calls for beyond the Standard Model physics, the extensions required are very subtle and apply only at the Big Bang singularity itself. 

I tend to favor this quite elegant approach, although the evidence is hardly unequivocal in favoring it over the alternatives, and it may never be possible to definitively resolve the question.

The introduction to a new paper and its conclusion, below, explain the features and virtues of this third scenario. 

The paper argues that the primary arrow of time (since fundamental physics observes CPT symmetry to high precision) is the growth of entropy as one gets more distant in time from the Big Bang, that cosmological inflation and primordial gravitational waves are not necessary in this scenario, and that, in this scenario, it makes sense that the strong force does not violate CP symmetry, despite the fact that there is an obvious way to insert strong force CP violation into the Standard Model Lagrangian. 

In contrast, cosmological inflation is quite an ugly theory, with hundreds of variants, many of which can't be distinguished from each other with existing observations.

In a series of recent papers, we have argued that the Big Bang can be described as a mirror separating two sheets of spacetime. Let us briefly recap some of the observational and theoretical motivations for this idea. 

Observations indicate that the early Universe was strikingly simple: a fraction of a second after the Big Bang, the Universe was radiation-dominated, almost perfectly homogeneous, isotropic, and spatially flat; with tiny (around 10^−5 ) deviations from perfect symmetry also taking a highly economical form: random, statistically gaussian, nearly scale-invariant, adiabatic, growing mode density perturbations. Although we cannot see all the way back to the bang, we have this essential observational hint: the further back we look (all the way back to a fraction of a second), the simpler and more regular the Universe gets. This is the central clue in early Universe cosmology: the question is what it is trying to tell us. 

In the standard (inflationary) theory of the early Universe one regards this observed trend as illusory: one imagines that, if one could look back even further, one would find a messy, disordered state, requiring a period of inflation to transform it into the cosmos we observe. 

An alternative approach is to take the fundamental clue at face value and imagine that, as we follow it back to the bang, the Universe really does approach the ultra-simple radiation-dominated state described above (as all observations so far seem to indicate). Then, although we have a singularity in our past, it is extremely special. Denoting the conformal time by τ , the scale factor a(τ) is ∝ τ at small τ so the metric gµν ∼ a(τ)^2ηµν has an analytic, conformal zero through which it may be extended to a “mirror-reflected” universe at negative τ. 

[W]e point out that, by taking seriously the symmetries and complex analytic properties of this extended two-sheeted spacetime, we are led to elegant and testable new explanations for many of the observed features of our Universe including: (i) the dark matter; (ii) the absence of primordial gravitational waves, vorticity, or decaying mode density perturbations; (iii) the thermodynamic arrow of time (i.e. the fact that entropy increases away from the bang); and (iv) the homogeneity, isotropy and flatness of the Universe, among others. 

In a forthcoming paper, we show that, with our new mechanism for ensuring conformal symmetry at the bang, this picture can also explain the observed primordial density perturbations. 

In this Letter, we show that: (i) there is a crucial distinction, for spinors, between spatial and temporal mirrors; (ii) the reflecting boundary conditions (b.c.’s) at the bang for spinors and higher spin fields are fixed by local Lorentz invariance and gauge invariance; (iii) they explain an observed pattern in the Standard Model (SM) relating left- and right-handed spinors; and (iv) they provide a new solution of the strong CP problem. . . . 

In this paper, we have seen how the requirement that the Big Bang is a surface of quantum CT symmetry yields a new solution to the strong CP problem. It also gives rise to classical solutions that are symmetric under time reversal, and satisfy appropriate reflecting boundary conditions at the bang. 

The classical solutions we describe are stationary points of the action and are analytic in the conformal time τ. Hence they are natural saddle points to a path integral over fields and four-geometries. The full quantum theory is presumably based on a path integral between boundary conditions at future and past infinity that are related by CT-symmetry. The cosmologically relevant classical saddles inherit their analytic, time-reversal symmetry from this path integral, although the individual paths are not required to be time-symmetric in the same sense (and, moreover may, in general, be highly jagged and non-analytic). 

We will describe in more detail the quantum CT-symmetric ensemble which implements (12), including the question of whether all of the analytic saddles are necessarily time-symmetric, and the calculation of the associated gravitational entanglement entropy, elsewhere.
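To unpack the one equation-heavy claim in the quoted introduction (the gloss that follows is mine, not the authors'): in conformal time τ, a radiation-dominated Friedmann universe has

\[
ds^{2} = a(\tau)^{2}\left(-d\tau^{2} + d\vec{x}^{2}\right), \qquad a(\tau) \propto \tau ,
\]

so the metric gµν = a(τ)^2 ηµν ∝ τ^2 ηµν goes to zero at τ = 0, but it does so analytically (as a simple polynomial in τ). That is the "analytic, conformal zero" through which the same classical solution can be continued to negative τ, which is the pre-Big Bang mirror sheet of the two-sheeted spacetime.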

The paper and its abstract are as follows:
We argue that the Big Bang can be understood as a type of mirror. We show how reflecting boundary conditions for spinors and higher spin fields are fixed by local Lorentz and gauge symmetry, and how a temporal mirror (like the Bang) differs from a spatial mirror (like the AdS boundary), providing a possible explanation for the observed pattern of left- and right-handed fermions. By regarding the Standard Model as the limit of a minimal left-right symmetric theory, we obtain a new, cosmological solution of the strong CP problem, without an axion.
Latham Boyle, Martin Teuscher, Neil Turok, "The Big Bang as a Mirror: a Solution of the Strong CP Problem" arXiv:2208.10396 (August 22, 2022).

Some key earlier papers by some of these authors (which I haven't yet read and don't necessarily endorse) are: "Gravitational entropy and the flatness, homogeneity and isotropy puzzles" arXiv:2201.07279, "Cancelling the vacuum energy and Weyl anomaly in the standard model with dimension-zero scalar fields" arXiv:2110.06258, "Two-Sheeted Universe, Analyticity and the Arrow of Time" arXiv:2109.06204, "The Big Bang, CPT, and neutrino dark matter" arXiv:1803.08930, and "CPT-Symmetric Universe" arXiv:1803.08928.

Moreover, if Deur's evaluation of gravitational field self-interactions (which is most intuitive from a quantum gravity perspective, but which he claims can be derived from purely classical general relativity) is correct, then the observations attributed to dark matter and dark energy (or equivalently, a cosmological constant) in the LambdaCDM Standard Model of Cosmology can be explained by these non-Newtonian general relativity effects in weak gravitational fields, predominantly involving galaxy and galaxy cluster scale agglomerations of matter. 

This would dispense with the need for any new particle content in a theory of everything beyond the almost universally predicted, standard, plain vanilla, massless, spin-2 graviton giving rise to a quantum gravity theory that could be theoretically consistent with the Standard Model. 

It would also imply that no new high energy physics needs to be discovered, apart from the physics at the very Big Bang singularity itself, where matter and antimatter pairs created according to Standard Model rules segregate between the post-Big Bang and pre-Big Bang universes at this point of minimum entropy, to explain all of our current observations. 

We would need no dark matter particles, no quintessence, no inflatons, no axions, no supersymmetric particles, no sterile neutrinos, no additional Higgs bosons, and no new forces.

We aren't quite there. We have some final details about neutrino physics to pin down. Our measurements of the fundamental particle masses, CKM matrix elements, and PMNS matrix elements need greater precision to decisively support a theory of the source of these physical constants. We have QCD to explain hadrons, but we can't yet do calculations sufficient to derive the spectrum of all possible hadrons and all of their properties, even though it is theoretically possible to do so. And, of course, there are lots of non-fundamental physics questions, in both atomic and larger scale lab physics and in the formation of the universe, that are almost certainly emergent from these basic laws of physics in complex circumstances, which we haven't yet fully explained.

There would also be room for further "within the Standard Model" physics to derive its three forces plus gravity, and its couple dozen physical constants, from a more reductionist core, but that is all. And, there is even some room, in the form of conjectures about variants on an extended Koide's rule and the relationship between the Higgs vacuum expectation value and the Standard Model fundamental particle masses, to take that further.

It is also worth noting that even if Deur's treatment of gravitational field self-interactions is not, as claimed, derivable from ordinary classical General Relativity, either because it is a subtle modification of Einstein's field equations or because it is actually a quantum gravity effect, there is still every reason to prefer his gravitational approach. It explains all dark matter and dark energy phenomena, is consistent with the astronomy observations pertinent to cosmology (for example, explaining the CMB peaks and resolving the impossible early galaxy problem), neatly parallels QCD, and has at least two fewer degrees of freedom than the LambdaCDM Standard Model of Cosmology, despite fitting the observational data better at the galaxy and galaxy cluster scales.

And, Deur's approach is pretty much the only one that can explain the data attributed to dark energy in a manner that does not violate conservation of mass-energy (a nice complement to a mirror universe cosmology that is time symmetric, since conservation of mass-energy is deeply related to time symmetry).

Deur's paradigm has the potential to blow away completely the Standard Model of Cosmology, and the half century or so of astronomy work driven by it and modest variations upon it, in addition to depriving lots of beyond the Standard Model particle physics concepts of any strong motivation.

Milgrom's Modified Newtonian Dynamics (MOND) has actually done a lot of the heavy lifting in showing that the observational data for galaxies can be explained without dark matter, at least within this toy model theory's limited domain of applicability. 

But Deur's theory unifies and glows up MOND's conclusions in a way that makes a gravitational explanation of dark matter far more digestible and attractive to astrophysicists who have so far clung to the increasingly ruled out dark matter particle hypotheses. It does so by providing a deeper theoretical justification for the MOND effects that it reproduces, by extending those results to galaxy clusters and cosmology scale phenomena, by being naturally relativistic in a manner fully consistent with all experimental confirmations of General Relativity, and by providing an elegant solution to the observations seemingly consistent with dark energy or a cosmological constant.

A mirror universe cosmology, likewise, has the potential to stamp out the theoretical motivation for all sorts of new physics proposals that simply aren't necessary to explain what we observe as part of a new paradigm of the immediate Big Bang era cosmology.

We are now in a position where physicists can see fairly clearly the metaphorical promised land of a world in which the laws of physics are completely known, even if the scientific consensus hasn't yet caught up with this vision.

It turns out that many of the dominant topics of theoretical physics discussion over the last half-century, from dark matter, to dark energy, to cosmological inflation, to supersymmetry, to string theory, to the multiverse, to cyclic cosmologies, to the anthropic principle, to technicolor, to multiple Higgs doublets, to a grand unified theory or theory of everything uniting physics into a single master Lie group or Lie algebra, do not play an important role in that future vision. Likewise, this would dispense with the need for the many heavily analyzed variations on Einstein's Field Equations as conventionally applied (all less subtle than Deur's approach) that are the subject of regular research.

If the scientific method manages to prevail over scientific community sociology, in a generation or two, all of these speculative beyond the Standard Model physics proposals will be discarded, and we will be left with a moderately complicated explanation for the universe that nonetheless explains pretty much everything. 

I may not live to see that day come, but I have great hope that my grandchildren or great-grandchildren might live in this not so distant future when humanity has grandly figured out all of the laws of physics in a metaphysically naturalist world.

Friday, July 29, 2022

Unknown Unknowns

This article discusses a methodological issue of wide interdisciplinary importance: how to deal with "unknown unknowns" so as not to be overconfident about scientific results, without throwing the baby out with the bathwater and retreating to the nihilistic position that we know nothing. 

It demonstrates an approach to estimating the uncertainty of results even though we don't know the precise sources of the uncertainties, including possible researcher fraud.
Uncertainty quantification is a key part of astronomy and physics; scientific researchers attempt to model both statistical and systematic uncertainties in their data as best as possible, often using a Bayesian framework. Decisions might then be made on the resulting uncertainty quantification -- perhaps whether or not to believe in a certain theory, or whether to take certain actions. 
However it is well known that most statistical claims should be taken contextually; even if certain models are excluded at a very high degree of confidence, researchers are typically aware there may be systematics that were not accounted for, and thus typically will require confirmation from multiple independent sources before any novel results are truly accepted. 
In this paper we compare two methods in the astronomical literature that seek to attempt to quantify these `unknown unknowns' -- in particular attempting to produce realistic thick tails in the posterior of parameter estimation problems, that account for the possible existence of very large unknown effects. 
We test these methods on a series of case studies, and discuss how robust these methods would be in the presence of malicious interference with the scientific data.
Peter Hatfield, "Quantification of Unknown Unknowns in Astronomy and Physics" arXiv:2207.13993 (July 28, 2022).
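As a toy illustration of the general idea (this sketch is mine and is not either of the two specific methods compared in Hatfield's paper), you can thicken the tails of an otherwise Gaussian posterior by giving a small prior weight to the possibility that some unknown systematic has inflated the real error bar; the central value barely moves, but extreme deviations stop being astronomically improbable:

import numpy as np

# Toy sketch: posterior for a parameter mu from one measurement x_obs = 1.0 with
# quoted uncertainty sigma = 0.1, plus a 5% prior probability that an unknown
# systematic has inflated the true error bar by a factor of 10.
x_obs, sigma = 1.0, 0.1
p_unknown, inflation = 0.05, 10.0

mu = np.linspace(-4.0, 6.0, 20001)   # grid over a flat prior for the parameter
dmu = mu[1] - mu[0]

def gaussian(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Mixture likelihood: usually the quoted error bar, occasionally a much wider one.
like = (1.0 - p_unknown) * gaussian(x_obs, mu, sigma) \
       + p_unknown * gaussian(x_obs, mu, inflation * sigma)
post = like / (like.sum() * dmu)      # normalized posterior on the grid

# Probability that mu deviates from the measurement by more than 5 quoted sigma.
mask = np.abs(mu - x_obs) > 5.0 * sigma
print("P(>5 sigma deviation):", round(float(post[mask].sum() * dmu), 3))
# prints about 0.03, versus about 6e-7 for a pure Gaussian: the tails are now "thick".

The point of exercises like this is that a quoted five sigma exclusion becomes much weaker once unmodeled systematics (or fraud) are given any non-negligible prior probability.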

Friday, March 11, 2022

Naturalness And Similar Delusions

"Naturalness" is not a real physics problem. The "hierarchy problem" and the "strong CP problem" and the "baryon asymmetry of the Universe problem", are likewise not real physics problems. These are just cases of unfounded conjectures about how Nature ought to be that are wrong.
At Quanta magazine, another article about the “naturalness problem”, headlined A Deepening Crisis Forces Physicists to Rethink Structure of Nature’s Laws. This has the usual problem with such stories of assigning to the Standard Model something which is not a problem for it, but only for certain kinds of speculative attempts to go beyond it. John Baez makes this point in this tweet:
Indeed, calling it a “crisis” is odd. Nothing that we really know about physics has become false. The only thing that can come crashing down is a tower of speculations that have become conventional wisdom.
James Wells has a series of tweets here, starting off with
The incredibly successful Standard Model does not have a Naturalness problem. And if by your criteria it does, then I can be sure your definition of Naturalness is useless.
He points to a more detailed explanation of the issue in section 4 of this paper.
My criticisms of some Quanta articles are motivated partly by the fact that the quality of the science coverage there is matched by very few other places. If you want to work there, they have a job open.

I share Woit's opinion that Quanta is, on average, one of the better sources of science journalism directed to educated laypersons in the English language.

Sunday, December 19, 2021

Superdeterminism and More

Sabine Hossenfelder is an advocate (although not necessarily a very dogmatic one) for superdeterminism in quantum mechanics, and has published papers on it. She has a new blog post on the topic.

Some of the weirdest aspects of quantum mechanics are its seeming non-locality, particularly but not only when there is entanglement, and the fact that measurement changes how particles behave, with what constitutes a measurement not defined in a very satisfactory manner. Superdeterminism is a theory that seeks to explain these weird aspects of quantum mechanics in a way that seems less weird.

Basically, superdeterminism is a hidden variables theory (with properties, like a lack of statistical independence, that allow it to escape Bell's Inequality) which argues that the apparently non-local effects in quantum mechanics are really due to individual quantum mechanical particles having pre-determined, non-linear properties that are simply revealed when a measurement happens.

So, for example, the slit that a photon goes through in a two slit experiment is, in a superdeterminism framework, already determined when it is emitted.

Superdeterminists assert that the behavior of quanta is too mathematically chaotic to be measured or predicted otherwise, leaving us with average outcomes of chaotic processes that are functionally random from the point of view of an observer, even though they are actually deterministic at the level of the individual particle.
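A minimal toy illustration of that claim (my own sketch in Python, not anything from Hossenfelder's papers, and not a model of any real quantum system): a completely deterministic but chaotic map quickly loses all practical predictability, and a coarse "which outcome" observable built from it looks statistically random:

import numpy as np

# The logistic map at r = 4 is fully deterministic but chaotic: two initial
# conditions differing by one part in 10^12 end up on unrelated orbits.
def orbit(x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

a = orbit(0.123456789, 60)
b = orbit(0.123456789 + 1e-12, 60)   # "the same" preparation, imperceptibly different
print("difference at step 10:", abs(a[10] - b[10]))                 # still tiny
print("max difference, steps 40-60:", np.abs(a[40:] - b[40:]).max())  # order one: predictability is gone

# A coarse binary "which slit" observable: threshold the state at 0.5.
outcomes = (orbit(0.3141592653, 10000)[1:] > 0.5).astype(int)
print("fraction of outcome 1:", outcomes.mean())   # roughly 0.5, statistically random-looking

In a superdeterministic model, something like this is supposed to be going on under the hood of each individual measurement, with the apparent randomness of quantum outcomes reflecting our ignorance of the exact hidden initial conditions.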

She also makes the important point that the colloquial understanding of free will is not consistent with the purely stochastic leading theory of quantum mechanics any more than it is with determinism, since we have no choice regarding how the pure randomness of quantum mechanics manifests itself. 

The way that the term "free will" is used in quantum mechanics, which involves statistical independence, is a technical meaning that is a false friend and does not imply what "free will" means in colloquial discussion.

I am not convinced that superdeterminism is correct. And, she acknowledges that we lack the instrumentation to tell at this point, while bemoaning the scientific establishment's failure to invest in what we would need to get closer to finding out. 

But, her points on free will, and on the sloppy way that Bell's Inequality is assumed to rule out more hidden variables theories than it actually does, are well taken.

Monday, March 16, 2020

Superdeterminism

Quantum mechanics is commonly assumed to be stochastic, which leads to the paradox that it cannot be simultaneously "real", "local" and "causal" (all defined terms of art). But, superdeterminism is one way to approach that puzzle. 
Superdeterminism, a long-abandoned idea, may help us overcome the current crisis in physics. 
BY SABINE HOSSENFELDER & TIM PALMER 
Quantum mechanics isn’t rocket science. But it’s well on the way to take the place of rocket science as the go-to metaphor for unintelligible math. Quantum mechanics, you have certainly heard, is infamously difficult to understand. It defies intuition. It makes no sense. Popular science accounts inevitably refer to it as “strange,” “weird,” “mind-boggling,” or all of the above. 
We beg to differ. Quantum mechanics is perfectly comprehensible. It’s just that physicists abandoned the only way to make sense of it half a century ago. Fast forward to today and progress in the foundations of physics has all but stalled. The big questions that were open then are still open today. We still don’t know what dark matter is, we still have not resolved the disagreement between Einstein’s theory of gravity and the standard model of particle physics, and we still do not understand how measurements work in quantum mechanics. 
How can we overcome this crisis? We think it’s about time to revisit a long-forgotten solution, Superdeterminism, the idea that no two places in the universe are truly independent of each other. This solution gives us a physical understanding of quantum measurements, and promises to improve quantum theory. Revising quantum theory would be a game changer for physicists’ efforts to solve the other problems in their discipline and to find novel applications of quantum technology.
Hat tip to Sabine's blog which also features another rather philosophical essay. 

Sunday, September 15, 2019

Why Is The Universe So Complicated In Ways That Don't Matter?

The Standard Model of particle physics sets forth a huge range of possible phenomena and interactions. But, most of them are observable only in high energy collider experiments recreating circumstances that have not existed naturally in the near vicinity of Earth or the Sun for many billions of years, since long before life as we know it came into being.

Why do we have such a sophisticated set of parts and rules for such a simple universe that makes so little use of so many of those parts and rules?

There are six kinds of quarks: top, bottom, charm, strange, down and up (in order of mass). But, all ordinary matter made up of quarks is made up of up and down quarks, with an occasional strange quark flitting into and out of existence in a kaon within an atomic nucleus. Gluons and quarks are also always confined within hadrons at temperatures cool enough to come into being on Earth or in the Sun, so we never see them in isolation. There are hundreds of possible baryons and mesons (even before venturing into tetraquarks, pentaquarks, and hexaquarks), but protons, neutrons, and the several mesons with no significant strange quark or heavy quark components that are involved in the nuclear force (i.e. the pion, the omega meson, the rho meson and the sigma meson) suffice to describe almost everything in the observable world to a high degree of accuracy.

Apart from nuclear physics, which has only a narrow range of engineering and scientific applications (apart from understanding fundamental physics for its own sake), we don't need to know anything about the strong and weak nuclear forces at all, beyond the fact that nuclei hold together in the absence of nuclear fission, nuclear fusion and radioactive decay, which could be described with far simpler phenomenological models (and, as a practical matter, are dealt with that way even today in most engineering applications). Nuclear weapons, nuclear fission reactors and the early forms of nuclear medicine were invented as practical applications of nuclear physics before QCD or electroweak theory had reached their modern Standard Model form.

We know that there are three kinds of neutrinos, and three kinds of anti-neutrinos, and we know some of their rather mysterious properties, like neutrino oscillation, but they interact so weakly that there aren't many applications in which knowledge of them is helpful, and there are even fewer applications in which it is necessary, possible and helpful to distinguish between neutrino flavors.

We know that there are two kinds of particles (muons and tau leptons) that are just like electrons but more massive, and how they behave, and we even use muons in a number of practical applications.

But, in a universe with twelve kinds of fundamental fermions, most of what we observe can be understood with just three of them (up quarks, down quarks and electrons), throwing in the electron neutrinos, muon neutrinos and muons and bringing the total to six, for the truly sophisticated. A world without second and third generation fermions at all would be almost impossible for a casual observer to distinguish from our own.

We live in a universe with twelve or thirteen kinds of fundamental bosons, but the eight kinds of gluons are always confined, the Z boson has negligible practical relevance, the W+ and W- bosons can be summed up in a black box theory of weak force decay for most practical purposes, and the photon, and possibly the hypothetical graviton, are all that we need to deal with for most purposes.

One needs to understand special relativity for many practical purposes, but the far more mathematically and conceptually difficult general relativity for far fewer. We need to understand quantum electrodynamics for many practical applications, but electroweak theory and quantum chromodynamics for only a very few.

We know all of the fundamental physics that will ever be necessary to understand chemistry and biology and geology from first principles.

We understand the least about gravity, but fortunately, while knowing more about it is important in terms of cosmology, and explaining what astronomers see in the very distant depths of the universe, none of the mysteries of dark matter and dark energy have any practical relevance to a species that may never settle more than a handful of nearby star systems, a scale too small for either of those phenomena to have any real relevance. For all but the highest precision applications, even in the solar system and its nearby neighboring star systems, plain old Newtonian gravity and special relativity are quite sufficient to meet our needs.

And, I think that we will probably master the main problems of quantum gravity, dark matter and dark energy, if not in my lifetime (not an unlikely possibility if I live to a ripe old age), then in the lifetime of my children or grandchildren (assuming that I will have any). Knowing this won't have many applications, but it will be satisfying, and I suspect it will overhaul a lot of mainstream astrophysics conventional wisdom about cosmological inflation, the early universe, dark matter and dark energy, in a way that may leave some philosophical ripples that escape into the larger culture.

Cracking the unsolved problems of high energy physics is something we crave, and we might someday discover a layer beneath what we know now that explains Standard Model physics in terms of something simpler at a more microscopic level that unifies it and provides a means from which its many arbitrary constants can be derived from first principles. But, it increasingly looks as if there is no beyond the Standard Model physics that would make differing predictions from the Standard Model in any way that matters.

I am skeptical that we will penetrate that deeper level any time in the next several centuries, no matter how many billions of dollars of resources we throw at it, and I am even more skeptical that we will be able to find any technological applications for it if we do. Understanding the deeper underpinnings of the Standard Model, or at least some of them, will probably only satisfy our intellectual curiosity and make our knowledge of a few physical constants, which could then be calculated from first principles, a few orders of magnitude more precise than what we have measured experimentally, something that is already quite precise in most cases.

Really the only bright spots in terms of progress for the next few centuries in Standard Model physics are an improved understanding of neutrinos, increasingly accurately measured fundamental physical constants, and an increased ability to apply QCD to high energy physics experiments, neutron star properties and the Big Bang.

Much of what we know already is only applicable to the early moments of the universe right after the Big Bang and has little application once nucleosynthesis has run its course.

While we have not yet reached the point of complete scientific knowledge, our understanding of fundamental physics is such that we already know almost everything that could have a technological application that would be economically useful in any way.

We already know much more than what is economically or technologically useful, and yet this knowledge is already esoteric to a great extent.

Now, this doesn't mean that, just because we know all of the fundamental rules of physics that we need to use, all of the laws of nature that matter to us, there isn't lots of critical and economically valuable science to be done explaining all of the implications of those fundamental rules that are relevant to our complicated world. In areas like condensed matter physics, nuclear engineering, genetics and medicine, there is much to be learned. But, you can go a long way towards doing that with QED, practical simplifications of those fundamental rules for the circumstances where they need to be applied, and experimental measurements more precise than those that could be derived from first principles.

Monday, June 10, 2019

There Is No Experimental Or Observational Evidence To Support A Zero Aggregate Baryon Number At T=0

A recent physics paper described, for the first time at the more than five sigma discovery threshold in a reputable peer reviewed physics journal, CP violation in charmed hadron decays, measured to be of just the magnitude that the Standard Model of Particle Physics predicts. The paper is R. Aaij et al. (LHCb Collaboration), "Observation of CP Violation in Charm Decays." 122 Phys. Rev. Lett. 211803 (May 29, 2019).

In response to this paper, I noted at the Physics Forums (underlined emphasis added in this post):
Basically, this is just one more confirmation of a Standard Model prediction, made possible by improved experimental detection capacity at the LHCb. 
The introduction of the Letter notes that:
The noninvariance of fundamental interactions under the combined action of charge conjugation (C) and parity (P) transformations, so-called CP violation, is a necessary condition for the dynamical generation of the baryon asymmetry of the universe. The standard model (SM) of particle physics includes CP violation through an irreducible complex phase in the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix. The realization of CP violation in weak interactions has been established in the K- and B-meson systems by several experiments, and all results are well interpreted within the CKM formalism. However, the size of CP violation in the SM appears to be too small to account for the observed matter-antimatter asymmetry, suggesting the existence of sources of CP violation beyond the SM. The observation of CP violation in the charm sector has not been achieved yet, despite decades of experimental searches. Charm hadrons provide a unique opportunity to measure CP violation with particles containing only up-type quarks. The size of CP violation in charm decays is expected to be tiny in the SM, with asymmetries typically of the order of 10^−4 − 10^−3, but due to the presence of low-energy strong-interaction effects, theoretical predictions are difficult to compute reliably.
The observed amount of CP violation is of just the magnitude that the Standard Model predicts: 
(−15.4 +/- 2.9) × 10^−4 
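As a quick sanity check of the "more than five sigma" characterization (my arithmetic on the quoted numbers):

\[
\frac{|\Delta A_{CP}|}{\sigma} \approx \frac{15.4\times 10^{-4}}{2.9\times 10^{-4}} \approx 5.3 ,
\]

just over the conventional five standard deviation discovery threshold.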
The paper notes in its conclusion that:
The result is consistent with, although in magnitude at the upper end of, SM expectations, which lie in the range 10^−4 − 10^−3. In particular, the result challenges predictions based on first principle QCD dynamics. It complies with predictions based on flavor-SU(3) symmetry, if one assumes a dynamical enhancement of the penguin amplitude.
The researchers' belief that it is likely that there will be BSM phenomena that will explain the baryon asymmetry of the universe is a case of hope triumphing over experience. Every single bit of available empirical and observational evidence suggests that the baryon asymmetry of the universe was part of its initial conditions, and these initial conditions do not violate any requirement theoretically necessary for a consistent cosmology model. But, because a Big Bang made of "pure energy" that is then deviated from due to CP violation somehow seems prettier than a universe that starts with a non-zero baryon number, researchers have presumptuously convinced themselves that they must be missing something.
Another very knowledgeable participant in the discussion (mfb) responded to the language underlined above, stating:
Please name such a piece of evidence, because I think that statement is blatantly wrong (unless you say “0 out of 0 is 100%”).
The post that follows was my response:
1. There has never been an observation of non-conservation of baryon number. This has been tested in multiple processes, e.g. proton decay, flavor changing neutral currents, etc. The experimental bounds on proton decay and neutron oscillation are both very strict. "No baryon number violating processes have yet been observed." Lafferty (2006) citing S. Eidelman et al. (Particle Data Group), Phys. Lett. B592 (2004). 
"Despite significant experimental effort, proton decay has never been observed. If it does decay via a positron, the proton's half-life is constrained to be at least 1.67×10^34 years." Yet, the universe is roughly 1.4*10^9 years old. This experimental result has been a leading means by which GUT theories are ruled out. 
Similarly, neutron-antineutron oscillation is not observed, but if baryon asymmetry involves this process, there "is an absolute upper limit on the n − n¯ oscillation time τn−n¯ of 5 × 10^10 sec. irrespective of the B − L breaking scale, which follows from the fact that we must generate enough baryon asymmetry via this mechanism" (according to the linked 2013 paper). The limit on neutron-antineutron oscillation as of 2009 was τn−n¯ ≥ 10^8 sec. See also the confirming experimental result here.
Flavor changing neutral currents at the tree level have also not been observed, although the exclusions are less precise:
In the SM, flavor-changing neutral currents (FCNC) are forbidden at tree level and are strongly suppressed in loop corrections by the Glashow–Iliopoulos–Maiani (GIM) mechanism with the SM branching fraction of t → qH predicted to be O(10^−15). Several extensions of the SM incorporate significantly enhanced FCNC behavior that can be directly probed at the CERN LHC.
In top quark decays they are excluded to a branching fraction of not more than about 0.47% (per the link above). 
2. Likewise, there are no processes which have ever been observed which do not conserve lepton number (e.g. there is no observational evidence of neutrinoless double beta decay). These bounds are very strict already. 
The universe is roughly 1.4*10^10 years old, so the current limit from GERDA (from 2015) means that no more than roughly one in 4*10^15 of the nuclei that could have done so have actually experienced neutrinoless double beta decay since the formation of the universe. 
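For transparency, the arithmetic behind that fraction (mine; it assumes a GERDA half-life limit of roughly 5.3 × 10^25 years, which is the value implied by the numbers in the original post) is simply

\[
\text{fraction decayed} \;\lesssim\; \frac{t_{\text{universe}}}{T_{1/2}} \;\approx\; \frac{1.4\times 10^{10}\ \mathrm{yr}}{5.3\times 10^{25}\ \mathrm{yr}} \;\approx\; 2.6\times 10^{-16} \;\approx\; \frac{1}{4\times 10^{15}} .
\]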
3. There has never been an observation of a sphaleron interaction (which would not conserve baryon number or lepton number), but the energies at which sphaleron interactions would take place (about 10 TeV and up) are beyond current experimental reach, and the rates at which they would occur in the SM (whose parameters and equations are well tested), if they do exist, are too small to generate the observed asymmetry, particularly in light of the small CP violating phase in the CKM matrix (which has been carefully measured). See also, e.g., Koichi Funakubo, "Status of the Electroweak Baryogenesis" ("[W]e find that the sphaleron process is in chemical equilibrium at T between 100 GeV and 10^12 GeV.") 
4. It is widely accepted, and has been shown (as the linked article below acknowledges), that (1)-(3) imply that the baryon number of the initial conditions is positive and non-zero in the absence of BSM physics featuring particular baryon number violating, lepton number violating, and CP violating processes that occur (only) out of equilibrium.
These are known as Sakharov’s conditions (Yoshimura is also sometimes given credit for them). This source also notes that:
Another way to view things consists in assuming that the primordial Universe developed through interactions of gravity and other fundamental forces, e.g. through the amplification of vacuum fluctuations. In such a case, gravity being blind to the difference between matter and antimatter, equal initial numbers of baryons and antibaryons are expected, and the current unbalance must be induced by subsequent interactions. . . . We only mention for completeness the possibility that the observed baryon excess is a local artefact, and that the Universe is constituted with domains with either baryon or antibaryon excess. The gamma rays arising from annihilation at the boundary of such domains would be a tell-tale sign, and the fact that they have not been observed rejects such a possibility to the limit of the observable Universe.
See also Paolo S. Coppi, "How Do We Know Antimatter Is Absent?" (2004) (reviewing the evidence against spatial anti-matter domains). 
Thus, no theory of quantum gravity alone can solve the problem unless it has CP violation which no leading theory of quantum gravity does. There is no experimental evidence of CP violation in gravity at the local level. 
In the Standard Model, neither the strong force nor the electromagnetic force have any CP violation either. 
The sole source of CP violation in the Standard Model is the weak force, a force in which the coupling constant gets smaller, not larger, at higher energies (as shown in the left hand panel of the famous MSSM gauge coupling constant unification illustration; no anomalies in the running of any of the SM coupling constants with energy scale have been observed at the LHC so far), which is the opposite of the direction needed if it is to provide a source of CP violation sufficient to explain the baryon asymmetry of the universe ("BAU") given an assumption that the aggregate baryon number at the time of the Big Bang was zero.
See also Wikipedia articles on Baryon asymmetry and Baryogenesis
5. The Higgs boson mass, and the associated beta function, imply that the SM maintains unitarity up to Big Bang energies. There is nothing that would cause the SM to break down mathematically if there were no new physics at all at any scale above what has been measured, and the electroweak vacuum is at least metastable (the Higgs boson and top quark masses haven't been measured precisely enough to determine whether it is fully stable or merely metastable if there are no laws of physics other than the Standard Model). See also, e.g., Koichi Funakubo, "Status of the Electroweak Baryogenesis" (noting that Higgs boson masses of more than 120 GeV are problematic for models creating the BAU from a starting point of zero, when the global average measured value as of 2019 is 125.10 ± 0.14 GeV). 
We know empirically that the SM laws of physics remain valid at least up to Big Bang Nucleosynthesis energy scales (see below) and Large Hadron Collider ("LHC") energies, such as those necessary to create a quark-gluon plasma. 
6. No one disputes that the aggregate mass-energy of the universe at the Big Bang was non-zero so it isn't as if there is a precedent that every aggregate parameter of the universe had to be zero at time equals zero (there is dispute, however, over whether gravitational energy is conserved globally in general relativity). 
7. Another quantity that is conserved in the Standard Model, locally and in the aggregate, is electromagnetic charge (for example, the decay e → νe γ has never been observed, with astrophysical limits on the electron lifetime of more than 4.6 × 10^26 yr at CL = 90%). The aggregate electromagnetic charge of the universe is still indistinguishable from zero now, and at all observable times in the history of the universe, and hence would not require wildly different laws from the SM if it were zero in the aggregate at the time of the Big Bang. 
This is particularly notable because the aggregate baryon number is equal to one-third of the difference between the number of quarks and the number of anti-quarks in the universe, and all quarks are also electromagnetically charged. Thus, any baryon number violating or lepton number violating process must also be electromagnetically charge neutral. 
8. There are no traces in the predictions of Big Bang Nucleosynthesis that imply that there was not baryon asymmetry in the initial conditions of the universe. Indeed, J.-M. Frère, "Introduction to Baryo- and Leptogenesis" (2005) notes that:
based on nucleosynthesis (which occurs late in the history of the Universe and is therefore not too sensitive to the various scenarios – even if it can be affected by the number of neutrino species and the neutrino background) indicate a stricter, but compatible bound: 4 × 10^−10 < nB/nγ < 7 × 10^−10.
Any baryon number violating process must take place at T > 200 MeV (the QCD phase transition temperature), otherwise the success of nucleosynthesis will be spoiled. This temperature is about 400,000,000 times the temperature of the surface of the Sun and is believed to correspond to a time one microsecond after the Big Bang in the conventional chronology of the universe. One microsecond is about the time it takes a muon to decay. BBN itself is assumed to take place 10 to 1000 seconds after the Big Bang. This temperature is in the ballpark of the highest temperatures arising at the Large Hadron Collider (a temperature scale at which the Standard Model continues to perform as expected in myriad experimental tests). 
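For the curious, the temperature comparison works out as follows (my arithmetic, using T = E/k_B and the Sun's roughly 5,800 K surface temperature):

\[
T \approx \frac{200\ \mathrm{MeV}}{k_B} = \frac{2\times 10^{8}\ \mathrm{eV}}{8.6\times 10^{-5}\ \mathrm{eV/K}} \approx 2.3\times 10^{12}\ \mathrm{K} \approx 4\times 10^{8} \times 5{,}800\ \mathrm{K}.
\]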
Put another way, even advocates of a zero baryon number initial condition (and this would be a majority of theoretical physicists and cosmologists notwithstanding the lack of empirical or observational evidence for it) pretty much agree based upon observation and empirical evidence and well established SM equations and reasonable extrapolations beyond the Standard Model, that the baryon asymmetry of the universe had to be in place around one microsecond after the Big Bang. 
The main reason we can't rule out baryon number violation prior to one microsecond after the Big Bang is that we have no way to observe it. 
9. There are no traces in the CMB that imply that there was not baryon asymmetry in the initial conditions of the universe. (This is unsurprising given that BBN happens much earlier in the cosmology timeline than the CMB, which traces conditions at around t = 380,000 years.) See also the implications of the CMB for inflation
Indeed, both of these windows into the very early universe, (8) and (9), imply that if there was no baryon asymmetry at time equal to zero, then baryon asymmetry had to arise completely very, very quickly. 
10. As of 2019, all experimentally measured CP violation observed in Nature (apart from neutrino oscillation data, where the experimental uncertainties are too great to say anything more than that CP violation occurs in these oscillations, which may be possible to characterize with a single parameter of the PMNS matrix) is defined by a single parameter, out of four parameters in all, in the CKM matrix, which, as noted above, is insufficient in magnitude to explain the baryon asymmetry of the universe.
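One standard way to quantify just how small that single CP violating parameter is (this aside is mine, not part of the original forum post) is the Jarlskog invariant, the parameterization-independent measure of CKM CP violation:

\[
J = \operatorname{Im}\left(V_{us}V_{cb}V_{ub}^{*}V_{cs}^{*}\right) \approx 3\times 10^{-5},
\]

compared to a mathematically maximal possible value of 1/(6√3) ≈ 0.096, which is one concrete way of seeing why Standard Model CP violation falls many orders of magnitude short of what baryogenesis from symmetric initial conditions would require.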
The present consistency of global CKM fits is displayed in Fig. 4. Each coloured band defines the allowed region of the apex of the unitarity triangle, according to the measurement of a specific process. Such a consistency represents a tremendous success of the CKM paradigm in the SM: all of the available measurements agree in a highly profound way. In presence of BSM physics affecting the measurements, the various contours would not cross each other into a single point. Hence the quark-flavour sector is generally very well described by the CKM mechanism, and one must look for small discrepancies.

There are also no experimentally measured deviations from CPT symmetry in non-gravitational physics. See generally, Thomas Mannel "Theory and Phenomenology of CP Violation" (2006). 
But see Belfatto, et al. (pre-print 2019) (arguing that there is a 4 sigma tension with unitarity in the measurements of the CKM elements involving the up quark, although such a tension, even if it is more than a fluke measurement wouldn't be remotely sufficient to explain BAU and primarily involves the non-CP violating parameters of the CKM matrix). 
11. Attempts to fit cosmology data to inflation theories allow for only a reasonably narrow number of e-folds (ca. 20-80 at the outside, with 40-60 cited more often as consistent with the data), which implicitly imposes strict minimum boundaries on the amount of baryon asymmetry that has to emerge per e-fold, since the available time in which cosmological inflation must occur and the available time in which baryon asymmetry must arise, if you start from zero aggregate baryon number, are basically the same. But, we don't have any indication whatsoever that there is a process that is both baryon number violating and CP violating to the necessary degree, or anything remotely close to that. 
12. We don't need it to get dark energy. Indeed, while the conventional cosmological constant starts with a near zero dark energy and then has it grow proportionately to the volume of space over time, a transition of zero baryon number to massive baryon asymmetry would imply a huge surplus of energy very close to time zero that is not needed to make the Lambda-CDM model (a.k.a. the Standard Model of Cosmology) work. 
13. We don't need it to get 21cm observations to coincide with what is observed. These measure conditions tens to hundreds of millions of years after the Big Bang, so we wouldn't expect them to show signs of baryon symmetry violating processes. 
14. We don't need an initial condition of baryon number equal to zero to get Hubble's constant or a particular amount of dark matter. Assuming a dark matter particle paradigm, according to a pre-print by Yang (2015) subsequently published in Physical Review D, the lower bound on the mean lifetime of dark matter particles is 3.57×10^24 seconds. This largely rules out the possibility that dark matter could bear baryon number and serve as an escape valve around baryon number conservation that is hard or impossible to measure directly. 
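For scale (my conversion of the quoted bound):

\[
\frac{3.57\times 10^{24}\ \mathrm{s}}{3.15\times 10^{7}\ \mathrm{s/yr}} \;\approx\; 1.1\times 10^{17}\ \mathrm{yr},
\]

about eight million times the age of the universe, which is why decaying dark matter can't plausibly serve as a hidden reservoir of baryon number.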
There really are no observed phenomena in astronomy or the SM which we need an initial baryon number of zero to explain. And even if phenomena like proton decay, neutron oscillation, neutrinoless double beta decay, CP violation beyond the Standard Model, and sphaleron interactions were all observed at rates "just beyond" current experimental limits, so long as the existing experimental results remained accurate, none of these could explain the BAU starting from a hypothesis that the aggregate baryon number was zero at T=0.
A non-zero baryon number as an initial condition is the null hypothesis. It is the conclusion that we reach when we follow the available experimental and observational data, and all experimentally validated laws of physics, to their logical conclusion, and assume no modifications of those laws of physics not motivated by empirical or observational evidence. 
To be clear, it isn't impossible that the laws of physics could deviate wildly from the Standard Model at energy scales well in excess of those at the LHC. Lots of respectable physicists believe that someday, somehow, we will discover something like this, and almost all published articles in the field of baryogenesis consider the hypothesis that the initial aggregate baryon number of the universe was zero to be "well motivated". Some physicists even assume, without any evidentiary or theoretical consistency support, that the initial conditions of the universe must have included a zero baryon number. For example: 
the CP violation in the standard model is a small effect. In particular it is too small to create the observed matter-antimatter asymmetry of the universe, for which CP violation is an indispensable ingredient.
- Thomas Mannel "Theory and Phenomenology of CP Violation" (2006) (emphasis added). 
But, we have no meaningful positive evidence indicating that this happens at all, let alone that the deviation from the Standard Model violates baryon and lepton number, is strongly (basically maximally) CP violating, and occurs only in out of equilibrium systems. Indeed, some of the strongest experimental exclusions in all of physics involve searches for baryon number violating processes and lepton number violating processes.

Arguing for a non-zero aggregate baryon number at the Big Bang isn't glamorous or fun. It's like arguing that the Electoral College is a good idea, or that coal needs to be phased out gradually rather than immediately to prevent the economy from collapsing. But, all existing empirical and observational evidence to date supports this conclusion.
I will be curious to see what kind of response I get (if any).

The "agenda" I am pushing in this post is essentially the same one advanced by Sabine Hossenfelder in her 2018 book "Lost in Math: How Beauty Leads Physics Astray", which which I wholeheartedly agree (the German title, which I prefer, is "The Ugly Universe.").

The notion that the initial aggregate baryon number of the universe must have been zero at the moment of the Big Bang is very widely endorsed by physicists, with many prominent ones taking it as an article of faith. But, that is all that it is: a faith based position not supported by any empirical or observational evidence, or by any deductions from validated laws of physics, and motivated almost entirely by a sense of mathematical beauty that is ultimately in the eye of the beholder.

Like Dr. Hossenfelder, I believe that if hypothesis generation were more closely tied to empirical and observational evidence and deductions from validated theories, and less strongly driven by mathematical beauty, we would make more progress as a scientific community, instead of spending inordinate amounts of time chasing down rabbit holes at great expense while producing little scientific knowledge of value.

Wednesday, June 5, 2019

More Environmental Astronomy

Even though we don't understand the Earth and its ecosphere as a being, in many respects it acts like one, and the assumption that it does, called the Gaia hypothesis, is a reliable way to make good predictions about how the global ecosystem will respond to stress factors, including extraterrestrial impacts. This hypothesis predicts that Earth is robust in the long term and that life will find a way to stabilize things in the long run, even though there are factors that could screw it up. 

Of course, on these time scales, the "long term" may be too long to be relevant to humans.
The Gaia hypothesis postulates that life regulates its environment to be favorable for its own survival. Most planets experience numerous perturbations throughout their lifetimes such as asteroid impacts, volcanism, and the evolution of a star's luminosity. For the Gaia hypothesis to be viable, life must be able to keep the conditions of its host planet habitable, even in the face of these challenges. 
ExoGaia, a model created to investigate the Gaia hypothesis, has been previously used to demonstrate that a randomly mutating biosphere is in some cases capable of maintaining planetary habitability. However, those model scenarios assumed that all non-biological planetary parameters were static, neglecting the inevitable perturbations that real planets would experience. To see how life responds to climate perturbations to its host planet, we created three climate perturbations in ExoGaia: one rapid cooling of a planet and two heating events, one rapid and one gradual. The planets on which Gaian feedbacks emerge without climate perturbations are the same planets on which life is most likely to survive each of our perturbation scenarios. Biospheres experiencing gradual changes to the environment are able to survive changes of larger magnitude than those experiencing rapid perturbations, and the magnitude of change matters more than the sign. 
These findings suggest that if the Gaia hypothesis is correct, then typical perturbations that a planet would experience may be unlikely to disrupt it.
Olivia D. N. Alcabes, Stephanie Olson, Dorian S. Abbot, "Typical Climate Perturbations Unlikely to Disrupt Gaia Hypothesis" submitted for publication to MNRAS (June 3, 2019).

Wednesday, May 22, 2019

What do scientists mean when they say that something exists?

Sabine Hossenfelder does her usual spot on job of navigating through the weeds of what it means in science to say that something exists, using the Higgs boson, quarks, and gravitational waves as examples. An excerpt:
When we say that these experiments measured “gravitational waves emitted in a black hole merger”, we really mean that specific equations led to correct predictions.

It is a similar story for the Higgs-boson and for quarks. The Higgs-boson and quarks are names that we have given to mathematical structures. In this case the structures are part of what is called the standard model of particle physics. We use this mathematics to make predictions. The predictions agree with measurements. That is what we mean when we say “quarks exist”: We mean that the predictions obtained with the hypothesis agrees with observations. 
She goes on to discuss the philosophical concept of "realism" and to, appropriately, dismiss it as basically irrelevant. 

Thursday, February 28, 2019

The Truth About Calculus





Alt Text: "Symbolic Integration" is when you theatrically go through the motions of finding integrals, but the actual result you get doesn't matter because it's purely symbolic."

("Symbolic integration" actually means solving an integral analytically in a general indefinite integral form, rather than numerically.)
A procedure called the Risch algorithm exists which is capable of determining whether the integral of an elementary function (function built from a finite number of exponentials, logarithms, constants, and nth roots through composition and combinations using the four elementary operations) is elementary and returning it if it is. In its original form, Risch algorithm was not suitable for a direct implementation, and its complete implementation took a long time. It was first implemented in Reduce in the case of purely transcendental functions; the case of purely algebraic functions was solved and implemented in Reduce by James H. Davenport; the general case was solved and implemented in Axiom by Manuel Bronstein. 
However, the Risch algorithm applies only to indefinite integrals, and most of the integrals of interest to physicists, theoretical chemists and engineers are definite integrals often related to Laplace transforms, Fourier transforms and Mellin transforms. Lacking a general algorithm, the developers of computer algebra systems have implemented heuristics based on pattern-matching and the exploitation of special functions, in particular the incomplete gamma function. Although this approach is heuristic rather than algorithmic, it is nonetheless an effective method for solving many definite integrals encountered by practical engineering applications. Earlier systems such as Macsyma had a few definite integrals related to special functions within a look-up table. However this particular method, involving differentiation of special functions with respect to their parameters, variable transformation, pattern matching and other manipulations, was pioneered by developers of the Maple system then later emulated by Mathematica, Axiom, MuPAD and other systems.
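As a concrete illustration of what symbolic (as opposed to numerical) integration looks like in practice, here is a short sketch using the sympy computer algebra library in Python (sympy is just one convenient free example; its internals combine Risch-style steps with the pattern matching and special function heuristics described above):

from sympy import symbols, integrate, exp, sin, oo

x = symbols('x')

# An elementary antiderivative exists, and is returned exactly, not numerically.
print(integrate(x * sin(x), x))             # -x*cos(x) + sin(x)

# No elementary antiderivative exists, so the answer requires a special function.
print(integrate(exp(-x**2), x))             # sqrt(pi)*erf(x)/2

# A definite integral that comes out in closed form via special-function tricks.
print(integrate(exp(-x**2), (x, -oo, oo)))  # sqrt(pi)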
The fact that an operation in calculus (differentiation) and its inverse (integration) are profoundly different in difficulty is very non-intuitive, but it is definitely true. The assumption that they should be similar in difficulty is similar to the faulty reasoning behind "naturalness" as a hypothesis generator and evaluator in physics.

The "alt text" while seemingly just tongue in cheek word play actually hints at a deeper truth as well. While "symbolic integration" doesn't mean what the alt text says that it does, it isn't actually uncommon in theoretical physics to have a paper that calculates something as a proof of concept or a demonstration of a method when the actual result of the calculation doesn't matter.

FYI: This blog is currently one post short of its 3% humor quota.

Monday, January 28, 2019

The Anti-Universe

This paper discusses what I think is, in broad outlines (particularly the portion in bold, although not necessarily in its specifics), the most likely explanation of the baryon asymmetry of the universe, although it may never be possible to prove it (I blogged a 2017 paper on the same theme previously). See previous discussions of the concept here and here and here and here and here and here. I don't think, however, that it is necessary to inject right handed neutrinos or dark matter into the mix. 

Sabine Hossenfelder mentions and discusses this paper in a recent post.

The Big Bang, CPT, and neutrino dark matter

We investigate the idea that the universe before the Big Bang is the CPT reflection of the universe after the bang, so that the state of the universe does not spontaneously violate CPT. The universe before the bang and the universe after the bang may be viewed as a universe/anti-universe pair, created from nothing. The early universe is radiation dominated and inflationary energy is not required. We show how CPT selects a preferred vacuum state for quantum fields on such a cosmological spacetime. This, in turn, leads to a new view of the cosmological matter/anti-matter asymmetry, and a novel and economical explanation of the dark matter abundance. If we assume that the matter fields in the universe are described by the standard model of particle physics (including right-handed neutrinos), it is natural for one of the heavy neutrinos to be stable, and we show that in order to match the observed dark matter density, its mass must be 4.8×10^8 GeV. We also obtain further predictions, including: (i) that the three light neutrinos are majorana; (ii) that the lightest of these is exactly massless; and (iii) that there are no primordial, long-wavelength gravitational waves.
Comments: 43 pages
Subjects: High Energy Physics - Phenomenology (hep-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
Cite as: arXiv:1803.08930 [hep-ph]
(or arXiv:1803.08930v1 [hep-ph] for this version)