The three physics papers discussed in this post all involve conclusions and applications of plain old Standard Model physics that are somewhat subtle or non-obvious to an educated layperson.

**Neutral B Meson Lifetime**

A new paper determines the mean lifetime of a neutral B meson based upon 2019 experimental data from Belle II. The main result is as follows (with the world average obtained from the Particle Data Group).

The estimated lifetime is τ(B⁰) = 1.48 ± 0.28 ± 0.06 ps, where the first uncertainty is statistical and the second is systematic. This value is compatible with the world average of 1.519 ± 0.004 ps.

Note that "ps" means picosecond, which means 10^-12 seconds (i.e. a trillionth of a second). So the uncertainty in the Particle Data Group value for the properties of this particle, a bound state of a down quark and a bottom antiquark held together by gluons (or its antiparticle), is one 250-trillionth of a second, which is incredibly tiny, although the relative error is not completely insignificant at about 0.26% (about one part in 400).

Note also that the mean lifetime (reported above) of something that decays in quantum or nuclear physics is proportional to its half-life. The mean lifetime equals the half-life divided by the natural logarithm of 2 (i.e. 0.693 147 180 559 945 . . . .).
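To make the conversion concrete, here is a minimal Python sketch (using the world-average mean lifetime quoted above) relating the mean lifetime to the corresponding half-life:

```python
import math

# Mean lifetime of the neutral B meson (PDG world average, in picoseconds).
mean_life_ps = 1.519

# Half-life = mean lifetime * ln(2); equivalently, mean life = half-life / ln(2).
half_life_ps = mean_life_ps * math.log(2)

print(f"half-life ≈ {half_life_ps:.3f} ps")  # ≈ 1.053 ps
```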

The combined uncertainty is ± 0.286 ps (adding the statistical and systematic uncertainties in quadrature), with the systematic uncertainty increasing the combined uncertainty only slightly beyond the statistical one. The measurement differs from the world average by only about 0.14 sigma, so the two are comfortably consistent. This is just what you would expect in any situation in which statistical uncertainty, which can be calculated more or less exactly, dominates over systematic uncertainty, which is prone to overstatement or understatement.
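The quadrature combination and the consistency check can be reproduced in a few lines of Python (treating the world-average uncertainty as negligible, since it is far smaller than the measurement's):

```python
import math

stat, syst = 0.28, 0.06            # statistical and systematic uncertainties (ps)
measured, world_avg = 1.48, 1.519  # Belle II value and PDG world average (ps)

# Combine independent uncertainties in quadrature.
combined = math.sqrt(stat**2 + syst**2)

# Deviation from the world average, in units of the combined uncertainty.
n_sigma = abs(measured - world_avg) / combined

print(f"combined uncertainty ≈ {combined:.3f} ps")  # ≈ 0.286 ps
print(f"deviation ≈ {n_sigma:.2f} sigma")           # ≈ 0.14 sigma
```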

This is a quantity that can be calculated, in principle, from the fundamental constants of the Standard Model, but that prediction is not mentioned in the paper.

In and of itself, this result isn't very notable. It gives us another obscure property of a particle created only in particle accelerators, remeasured with less accuracy than we already have. But, because the theoretical calculation of the mean lifetime of a neutral B meson (which is a pseudo-scalar) is relatively straightforward and clean, and the meson's internal structure is well understood, it does provide a good measurement for reducing the uncertainty in whichever of the fundamental constants involved in the calculation has the largest relative uncertainty.

It is also a good global test of the soundness of the Standard Model, both because the prediction can be tested at the nearly 0.3 picosecond level, and because all sorts of beyond the Standard Model physics could throw off the calculation.

**Photo-Production of Charmed B Mesons**

Many people are familiar with the notion that certain kinds of matter-antimatter collisions can produce a pair of photons in what is called an annihilation interaction (contrary to what many people erroneously believe, however, this isn't actually true of all kinds of matter-antimatter collisions). An annihilation interaction can also be called an exclusively radiative decay.

Fewer people are familiar with the inverse of an annihilation interaction: the creation of one or more matter-antimatter particle pairs from the collision of pairs of photons with each other, which is called photo-production and is governed by a "creation" operator.

If what is created is one or more quark-antiquark pairs (other than top quark-top antiquark pairs), these almost immediately hadronize, often producing mesons. One of the things that can pop out when you collide two photons at very high energies is a pseudo-scalar (i.e. spin-0, odd parity) or vector (i.e. spin-1) charmed B meson (i.e. a particle made up of a bottom quark and a charm antiquark, or vice versa, in either case bound to each other by gluons).

The Standard Model can be used to calculate the probability that a charmed B meson is photo-produced, given details about the photons.

Naively, this calculation is very scale dependent. In other words, the proportion of charmed B mesons produced from such collisions depends heavily on the amount of energy in the photons that collide with each other. And one of the biggest sources of uncertainty in particle collider measurements at the Large Hadron Collider and its predecessors has been the difficulty involved in accurately determining that scale.

But that brings us to the notable insight of this new paper, which is that if you include more terms in the calculation (for example, doing the calculations to next-to-leading order rather than leading order), the outcome is less scale dependent. So much of the scale-dependence uncertainty in particle collider experiment results may be an artifact of not including sufficiently many terms in the calculation, which exaggerates the scale dependence, rather than a situation in which actually measuring the scale of an interaction is as fundamentally important as it seems.

The paper is:

[Submitted on 15 May 2020]

# NLO QCD corrections to Bc-pair production in photon-photon collision

The Bc meson pair, including pairs of both pseudoscalar states and vector states, productions in high energy photon-photon interaction are investigated at the next-to-leading order (NLO) accuracy in the nonrelativistic quantum chromodynamics (NRQCD) factorization formalism. The corresponding cross sections at the future e+e− colliders with √s = 250 GeV and 500 GeV are evaluated. Numerical result indicates that the inclusion of the NLO corrections shall greatly suppress the scale dependence and enhance the prediction reliability. In addition to the phenomenological meaning, the NLO QCD calculation of this process subjects to certain technical issues, which are elucidated in details and might be applicable to other relevant investigations.

**Non-Physical Virtual States Matter In Bound Muon Decay**

A muon is just like an electron but about 207 times heavier and prone to decay, most often, into an electron, an anti-electron neutrino, and a muon neutrino.

A muon can, for example, be present in an atom in lieu of one of its electrons, although its physical location within the atom it is bound to would be different if it was substituted for an electron, and bound muon energy levels in stable "orbits" (orbit is not a perfect analogy at the subatomic level) are different from bound electron energy levels, because of the mass difference.

This paper asks: what is the likelihood that, when a bound muon decays to an electron and other stuff, the resulting electron will still be bound in the atom? This is mostly a function of quantum electrodynamics and the weak nuclear force, both of which can be calculated to extremely high precision.

This precision is necessary because the frequency with which this happens is very low. Specifically, it happens on the order of three to twelve times per ten billion bound muon decays. But since such a precise calculation is required, leaving out any factor that belongs in a physically correct calculation is a big problem.

This paper is notable because it demonstrates that, to get a correct answer, you need to consider non-physical negative energy states that add to the total calculation outcome, even though negative energy states can't be present in a final observable outcome. This isn't unusual in Standard Model quantum physics calculations, but this is a nice, clean, concrete example of it that has a clear effect on an observable quantity (the probability that an electron produced in the decay of a muon bound in an atom remains bound).

For example, failing to properly consider the negative energy states increases the calculated likelihood that an electron produced by the decay of a bound muon will also be bound by about 38% in an atom with 80 protons.

The paper and its abstract are:

[Submitted on 14 May 2020]

# Decay of a bound muon into a bound electron

When a muon bound in an atom decays, there is a small probability that the daughter electron remains bound. That probability is evaluated. Surprisingly, a significant part of the rate is contributed by the negative energy component of the wave function, neglected in a previous study. A simple integral representation of the rate is presented. In the limit of close muon and electron masses, an analytic formula is derived.
