**The Interval and the Feynman Photon Propagator**

More pervasive than entanglement or weak force decay patterns in quantum mechanics is a key term in the Feynman propagator, which gives the probability that a photon at point A now will end up at point B at some point in the future. This probability is calculated with a path integral that sums, in a particular way, a contribution to the final probability from every possible path from point A to point B. Except at very short distances or in stylized situations, the dominant contribution to the final answer comes from paths on which the photon travels at the speed of light. But getting the right prediction also requires contributions attributable to photons travelling at more than, or less than, the speed of light. This is impossible in the equations of classical electromagnetism, special relativity and general relativity. The contribution of each of these paths is proportional to the inverse of "the Interval," which is equal to the square of the magnitude of the spacelike vector from point A to point B minus the square of the magnitude of the timelike separation from the source to the destination, with length and time units made equivalent by a speed of light ("c") conversion factor.
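In symbols, using the sign convention just described, the idea can be sketched as follows (this is a sketch only: the full propagator also carries normalization factors and a small imaginary term that shifts the pole, which are omitted here):

```latex
s^2 \;=\; \left|\vec{x}_B - \vec{x}_A\right|^2 \;-\; c^2\,(t_B - t_A)^2,
\qquad
\text{amplitude}(A \to B) \;\propto\; \frac{1}{s^2}
```

Paths on the light cone have $s^2 = 0$, which is why speed-of-light propagation dominates, but the amplitude is nonzero for separations on either side of the light cone.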

It isn't really obvious what this means. It works, and it is a necessary part of the theory. But does it really reflect deviations from the speed of light? It could instead be rooted in space-time having an underlying structure that is almost, but not quite, local, with some points being adjacent by a certain route to points that would otherwise be non-adjacent. Loop quantum gravity, in which four-dimensional space-time is only emergent as something close to a regular ordering appears from nodes connected to each other in networks, is one example of a model with this kind of fundamental non-locality in spacetime.

It could also reflect interactions between photons and the vacuum. There is a certain probability that the vacuum will simply emit a photon seemingly out of nothing (one possible source of this may be derivable from the excess certainty that would come from knowing both the position and momentum of empty space, although there are other ways to get there), and that the new photon, rather than the original one, is the one that ends up at point B. Also, a photon can spontaneously turn into a massive particle and antiparticle pair that then annihilates and turns back into a photon, slowing the photon down for as long as the pair has rest mass. But the calculated probabilities of exceeding and undershooting the speed of light are identical, while all of the sensible explanations, like transmutation into massive particles and back, can only explain slower than light paths.

Generally, to the extent that special relativity (which is incorporated into quantum mechanics as well as general relativity) applies, one would associate a speed faster than the speed of light with movement backwards in time and the possibility of a breakdown of causality. So the fact that this possibility must be incorporated into the equations that calculate the movement of every particle in quantum mechanics (massive particle propagators add terms that elaborate on the photon propagator formula but don't remove any of the photon terms) seems pretty important in understanding the fundamental nature of time, causality and locality in the universe.

Of course, those equations aren't measurable phenomena. We don't directly observe probability amplitudes or the paths that go into the Feynman propagator path integral. We only see where the particle starts and where it ends up and come up with a theory to explain that result.

Still, non-locality and non-causality, which are two sides of the same coin, are deeply ingrained in quantum mechanics, which, as weird as it seems, works with stunning accuracy.

**Non-Locality In General Relativity**

Now, I've said that general relativity assumes continuity of space-time, and I haven't been entirely honest about that. There is one part of general relativity that some people argue has an element of non-locality to it.

While general relativity obeys the law of mass-energy conservation, with mass never being created or destroyed and energy always staying constant except where one is converted into the other according to the familiar equation E=mc^2, on both a local and a global basis, the accounting is less straightforward for one particular kind of energy, to wit, gravitational potential energy. At least some people who can handle this issue with the appropriate mathematical rigor say that the equations of general relativity sometimes conserve gravitational potential energy only non-locally: losses of gravitational potential energy in one local part of a system are sometimes offset by gains in gravitational potential energy elsewhere. Others doubt that this holding is truly rigorous and wonder whether the equations of general relativity truly conserve mass-energy at all once gravitational potential energy is considered.

All of this analysis of GR flows from the equations themselves. But a heuristic mechanism that would explain what is going on, and make it possible to understand in a less abstract way why the equations seem to allow this, isn't obvious.

**Planck Length Revisited**

One issue with the possibility that a fundamental particle might be 10^-41 meters in radius, which might explain CP violation with GR, as I suggested in a previous post, is that this is a length considerably less than the Planck length. Is that a profoundly troubling result?

Maybe not. The fundamental relationship that is a real law of physics is the uncertainty principle, which in mathematical form says that the inaccuracy of a position measurement times the inaccuracy of a momentum measurement can't be less than a quantity on the order of Planck's constant. For a particle of a given mass, that works out to a length times time unit. Thus, the real laws of physics allow for trade-offs between precision in length measurement and precision in time measurement (in speed of light based units). If it is possible to have well defined distances of, say, 10^-43 meters (the black hole size for fundamental particles according to GR), then the maximum precision with which one can measure time is not as great. There is a continuum between the minimum length and the minimum time unit, and no obvious preferred way to strike a balance between the two.
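The standard textbook forms of these relations, for reference (the length-time trade-off described above can be read off by holding the particle's mass fixed):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\Delta t \,\Delta E \;\ge\; \frac{\hbar}{2}
```

Nothing in either inequality singles out a preferred minimum length; only the product of the two uncertainties is constrained.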

Planck length and Planck time add one more constant to the mix, the gravitational constant, to strike that balance. But the choice of the gravitational constant is simply a product of the fact that it has the right units. The gravitational constant produces an answer with the right units when multiplied by Planck's constant and the speed of light in the appropriate way, and we don't have a lot of other constants floating around with units that would give us that result, so, as a matter of really nothing more than informed numerology, we use these constants to create the Planck units. But there is no real physical reason that a different fundamental unit of length, which would in turn imply a different fundamental unit of time, couldn't work just as well.
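To make the "informed numerology" concrete: the Planck length and Planck time are just the unique combinations of Planck's constant, the gravitational constant, and the speed of light that have units of length and time. A minimal sketch in Python (the constant values are the standard CODATA figures):

```python
import math

# Fundamental constants in SI units (CODATA recommended values)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# The unique dimensionally consistent combinations with units of
# length and time, respectively
l_planck = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m
t_planck = math.sqrt(hbar * G / c**5)  # ~5.4e-44 s

print(f"Planck length: {l_planck:.3e} m")
print(f"Planck time:   {t_planck:.3e} s")
```

Note that the two units are tied together by c (the Planck time is just the Planck length divided by c), which is the trade-off described above: pick a different fundamental length and a different fundamental time follows automatically.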

Indeed, the 10^-41 meter scale for a fundamental particle radius derives indirectly from the gravitational constant, which appears in the time dilation equation, just as the Planck length does. It simply adds one more ingredient, the CP violating terms of the CKM matrix and the anomalous variations from them in B meson decay, to the mix to provide a suitable length scale, one that gives us an experimental hook that could ground the relationship of Planck's constant, the speed of light and the gravitational constant at a particular place along the trade-off between length and time units. The fact that one other experiment out there also points to a fundamental distance scale (if any) smaller than the Planck length, but one that does not rule out the 10^-41 meter fundamental particle radius, is also encouraging on this front.
