
Friday, November 1, 2024

A Novel And Intriguing Explanation For The CKM And PMNS Matrices

I've never seen anyone reach this remarkable insight before, and it is indeed very tantalizing. This is huge if true. The authors note in the body text that: 
To our knowledge, this is the first time the differing CKM and PMNS structures have arisen from a common mechanism that does not invoke symmetries or symmetry breaking.

The paper and its abstract are as follows: 

The Cabibbo-Kobayashi-Maskawa (CKM) matrix, which controls flavor mixing between the three generations of quark fermions, is a key input to the Standard Model of particle physics. In this paper, we identify a surprising connection between quantum entanglement and the degree of quark mixing. Focusing on a specific limit of 2→2 quark scattering mediated by electroweak bosons, we find that the quantum entanglement generated by scattering is minimized when the CKM matrix is almost (but not exactly) diagonal, in qualitative agreement with observation. 
With the discovery of neutrino masses and mixings, additional angles are needed to parametrize the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix in the lepton sector. Applying the same logic, we find that quantum entanglement is minimized when the PMNS matrix features two large angles and a smaller one, again in qualitative agreement with observation, plus a hint for suppressed CP violation. 
We speculate on the (unlikely but tantalizing) possibility that minimization of quantum entanglement might be a fundamental principle that determines particle physics input parameters.
Jesse Thaler, Sokratis Trifinopoulos, "Flavor Patterns of Fundamental Particles from Quantum Entanglement?" arXiv:2410.23343 (October 30, 2024).

The paper's literature review does note one prior paper making a similar analysis:
Entanglement is a core phenomenon in quantum mechanics, where measurement outcomes are correlated beyond classical expectations. In particle physics, entanglement is so ubiquitous that we often take it for granted, but every neutral pion decay to two photons (π0 → γγ) is effectively a mini Einstein–Podolsky–Rosen experiment. In the context of SM scattering processes, though, the study and quantification of entanglement in its own right has only begun relatively recently [12–26]. In terms of predicting particle properties from entanglement, the first paper we are aware of is Ref. [27] which showed that maximizing helicity entanglement yields a reasonable prediction for the Weinberg angle θ_W, which controls the mixing between electroweak bosons.

References 12-26 are from 2012 to 2024.

Ref. [27] is A. Cervera-Lierta, J. I. Latorre, J. Rojo and L. Rottoli, Maximal Entanglement in High Energy Physics, SciPost Phys. 3 (2017) 036, [1703.02989].

Footnote 6 of the main paper is also of interest and addresses the fact that, using only a one-loop calculation, they get a value of 6° for a parameter whose measured value is 13° (a toy numerical sketch of the entanglement-entropy idea follows the quoted footnote):
We happened to notice that in the limit where we neglect photon exchange, the exact value θ_C^min = 13° is recovered. However, we do not have a good reason on quantum field theoretic grounds to neglect the photon contribution. Because of the shallow entanglement minimum in Fig. 2a, a 10% increase in the charged-current process over the neutral-current one would be enough to accomplish this shift, which is roughly of the expected size for higher-order corrections.
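
To make the entanglement-minimization idea concrete, here is a minimal toy sketch of my own, not the authors' calculation: build a two-flavor final state whose coefficient matrix combines an assumed flavor-preserving piece with a piece rotated by a 2x2 Cabibbo-like angle, then scan the von Neumann entropy of one particle's reduced density matrix over that angle. The form of the toy state and the weights g_diag and g_mix are made-up assumptions, not quantities from the paper.

```python
import numpy as np

def entanglement_entropy(c):
    """Von Neumann entropy (in bits) of particle A's reduced density matrix for a
    bipartite pure state |psi> = sum_ij c[i, j] |i>_A |j>_B."""
    c = c / np.linalg.norm(c)          # normalize so Tr(c c^dagger) = 1
    rho_a = c @ c.conj().T             # reduced density matrix of A (trace out B)
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]       # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def toy_state(theta, g_diag=1.0, g_mix=1.0):
    """Assumed toy final-state coefficients: a flavor-preserving term plus a term
    in which both particles are rotated by the mixing angle theta. The weights
    g_diag and g_mix are placeholders for illustration only."""
    e1 = np.array([1.0, 0.0])
    v = np.array([np.cos(theta), np.sin(theta)])
    return g_diag * np.outer(e1, e1) + g_mix * np.outer(v, v)

# Scan the mixing angle and locate the entropy minimum of this toy model.
thetas = np.linspace(0.0, np.pi / 4, 400)
entropies = [entanglement_entropy(toy_state(t)) for t in thetas]
best = thetas[int(np.argmin(entropies))]
print(f"toy entropy minimum at theta ~ {np.degrees(best):.1f} degrees")
```

In this crude toy the minimum sits exactly at zero mixing; the point of the sketch is only the mechanics (entropy of a reduced density matrix scanned over a mixing angle), whereas the paper's actual 2→2 electroweak amplitudes are what produce the shallow minimum near a small but nonzero angle discussed in the footnote above.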
A somewhat similar prior analysis that is not cited is Alexandre Alves, Alex G. Dias, and Roberto da Silva, "Maximum Entropy Principle and the Higgs boson mass" (2015) (cited 42 times), whose abstract states the following (a simplified sketch of the entropy-maximization idea follows the quote):
A successful connection between Higgs boson decays and the Maximum Entropy Principle is presented. Based on the information theory inference approach we determine the Higgs boson mass as M(H) = 125.04 ± 0.25 GeV, a value fully compatible to the LHC measurement. 
This is straightforwardly obtained by taking the Higgs boson branching ratios as the target probability distributions of the inference, without any extra assumptions beyond the Standard Model. Yet, the principle can be a powerful tool in the construction of any model affecting the Higgs sector. We give, as an example, the case where the Higgs boson has an extra invisible decay channel within a Higgs portal model.
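
The core mechanics of that approach can be sketched in a few lines (again my own simplified sketch, not the authors' full inference): treat the Higgs branching ratios at a trial mass as a probability distribution, compute an entropy from them, and scan the trial mass for the maximum. The branching-ratio functions below are hypothetical placeholders; the actual paper builds its entropy from the distribution of many Higgs decays and uses real SM branching-ratio predictions.

```python
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy of a discrete probability distribution."""
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()                    # renormalize the supplied probabilities
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def branching_ratio_entropy(m_h, br_functions):
    """Entropy of the branching-ratio distribution at trial Higgs mass m_h (GeV).
    br_functions is a dict of callables BR_channel(m_h); they are placeholders
    here and would be replaced by tabulated SM predictions in a real analysis."""
    return shannon_entropy([br(m_h) for br in br_functions.values()])

# Hypothetical usage with user-supplied branching-ratio functions:
# br_functions = {"bb": br_bb, "WW": br_ww, "tautau": br_tautau}  # placeholders
# masses = np.linspace(100.0, 150.0, 501)
# entropies = [branching_ratio_entropy(m, br_functions) for m in masses]
# m_best = masses[int(np.argmax(entropies))]
```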
I would argue that all three of the papers linked in this post are not just "numerology" papers, as they suggest a plausible physical mechanism or theoretical principle by which the values of the SM physical constants in question can be determined.

2 comments:

  1. PF Meta Considerations

    Reference: https://www.physicsforums.com/threads/a-novel-explanation-of-ckm-and-pmns-matrix-parameters.1066657/#post-7129499
    PF sent me multiple warnings that only peer-reviewed papers could be discussed.

  2. I do cite some peer-reviewed published work, even though the main article that I tee off from is not yet published. I also engaged with it in a careful, science-oriented way, instead of just spouting off somewhat randomly without analysis. Pre-prints are often a gray area - some have even been published or are accepted for publication.
