The Problem: Experimentally Measured CP Violations In A Nutshell
The CP violations that have been observed fit a very narrow profile. They don't happen in forces mediated by photons or gluons (the subject of a previous speculation at my other blog, before this one was established), or in gravity. So far as I can tell, they have never been observed in Z boson decays of the weak force either. They have only been observed in decay chains in which mesons containing bottom or strange quarks decay via the W+ or W- bosons of the weak force. Top quarks almost always decay to bottom quarks, and bottom quark meson decays have been observed to show CP violation. There are probably both bottom quark and strange quark contributions to the decays of mesons that contain both bottom and strange quarks. Charm quarks often decay to strange quarks, and neutral kaons, whose strange quark content enters as part of a linear combination of two meson states, in turn show CP violation in their decays.
It is a relatively easy thing to insert a complex-valued CP-violating phase into the CKM matrix of the Standard Model, in which the squared magnitudes of the matrix entries govern the probabilities of weak force charged flavor-changing currents, to capture the CP violation in neutral kaon decay. The current version of the Standard Model does exactly that, but the CP violation predicted from that single phase is not big enough to account for the CP violation observed in mesons that include bottom quarks decaying through charged weak force currents.
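To make the bookkeeping concrete, the single CP-violating parameter enters as a complex phase delta in the standard three-angle parameterization of the CKM matrix, and its physical effect is summarized by the Jarlskog invariant J. Here is a minimal Python sketch; the angles and phase are rough illustrative values of the sort quoted in particle data tables, not a fit:

```python
import cmath
import math

import numpy as np

def ckm(theta12, theta13, theta23, delta):
    """Standard 3x3 CKM parameterization: three mixing angles
    plus a single CP-violating complex phase delta."""
    c12, s12 = math.cos(theta12), math.sin(theta12)
    c13, s13 = math.cos(theta13), math.sin(theta13)
    c23, s23 = math.cos(theta23), math.sin(theta23)
    d = cmath.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 / d],
        [-s12 * c23 - c12 * s23 * s13 * d, c12 * c23 - s12 * s23 * s13 * d, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * d, -c12 * s23 - s12 * c23 * s13 * d, c23 * c13],
    ])

# Rough illustrative angles (radians) and phase, not a fit.
V = ckm(0.227, 0.0037, 0.0418, 1.14)

# Squared magnitudes in each row are transition probabilities summing to 1.
print((np.abs(V) ** 2).sum(axis=1))  # each ~1.0

# Jarlskog invariant: a parameterization-independent measure of CP violation.
J = (V[0, 1] * V[1, 2] * V[0, 2].conj() * V[1, 1].conj()).imag
print(J)  # ~3e-5
```

The unitarity check is the degrees-of-freedom argument in miniature: once each row is forced to sum to one, a three-by-three mixing matrix leaves room for exactly one physical phase.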
This is harder to fix: if the probabilities encoded in the CKM matrix are to add up to 100% (that is, if the matrix is to remain unitary), there are only so many degrees of freedom available, and no parameters are left over to account for the excess CP violation in bottom quark decays. Hence the attractiveness of almost any beyond-the-Standard-Model theory, even one that merely adds a fourth generation of particles, turning the CKM matrix from a three-by-three into a four-by-four matrix and providing an additional degree of freedom or two with which to fit the excess CP violation seen in the decays of bottom quark mesons via charged weak force currents.
SUSY and Technicolor theories provide other ways to add degrees of freedom but have their own problems.
A Quantum Mechanics and General Relativity Paradox
Experimental evidence to date (which tightened the limits on quarks not being point-like by a factor of three to four this year or last, although those limits are still far above the Planck scale of ca. 10^-35 meters proposed by many theorists as their true size) has not found any evidence that Standard Model fundamental particles are not point-like, and has found no direct evidence of any internal structure in them. The ability of W boson decays to produce almost any left-handed fundamental particle is also a poor fit to a composite model of these particles. But a point-like particle model poses all sorts of mathematical problems.
Classical general relativity in non-discrete space-time is not consistent with any model of the other three fundamental forces that assumes that point-like particles have mass or energy. Taken to its logical extreme (as cosmologists are a bit more comfortable doing than I am), any amount of mass or energy concentrated in a mere point would have infinite density, giving rise to a black hole singularity with an event horizon larger than the point itself, which would not allow bosons of any kind to escape it and would cause the strong force, weak force and electromagnetism to cease to function.
There is a bit of a paradox here, however, in how you do the math. If you look at gravity alone, you end up with a singularity. But if you look at the scales at which the strong force, weak force and electromagnetism are observed (the first two operate over very narrow, short-range distance scales, while the latter is a long-range force that falls off with the inverse square of distance in the classical limit, just like Newtonian gravity), with the coupling constants of each of those respective forces, and compare the strength of those forces to gravity for particles of the relevant masses, gravity is vanishingly small and drops out of the equation.
Point-like particles moving randomly would also have a vanishingly small probability of ever bumping into a force-carrying boson in a geometric heuristic of quantum field theory Feynman diagram interactions, which is contrary to reality.
The strong force falls off to zero at very short distances and grows without limit at long distances, almost the opposite of the long-range electromagnetic force and gravity. The weak force is so weird that it barely qualifies as a force in the conventional sense of the word: it has so much particle-identity switching going on that the work it does pushing or pulling anything is almost incidental.
Of course, while quantum field theorists often talk about fundamental particles as point-like for the sake of convenience, quantum mechanics itself implies otherwise. Its famous uncertainty principle, which makes it fundamentally and not just practically impossible to know a particle's momentum and location at the same time, not only contradicts Newtonian clockwork-universe deterministic mechanics; it also implies that even fundamental particles are in some profound way smeared at varying densities over a volume of space-time (an infinite one in the extreme low-density limit) rather than being truly point-like.
Quantum gravity theories overcome the singularity problem in one of two ways. Either they assume that at some scale space-time is discrete (and not necessarily perfectly local) rather than continuous and local (i.e., points in space-time can't be described with a real number coordinate system in any frame of reference), reducing to general relativity in the classical limit; or they treat gravity at the quantum level as simply one more non-Abelian gauge field, mediated by a boson called the spin-2 graviton, exchanged much as the gluons of the strong force are exchanged but coupling to mass-energy rather than color charge, again reducing to general relativity in the classical limit.
String theory assumes that there are just one or two fundamental entities (strings, open or closed) that manifest as all of the other fundamental particles, and that they are not points but have at least length and possibly area, while retaining the ordinary quantum mechanical uncertainty principle that smears fundamental particles at varying densities over a region of space-time. It thus applies more than one of these fixes to the gravitational singularity simultaneously.
Approximations And The Conjecture
Still, assume that even at very short distances gravity behaves semi-classically in a manner analytically similar to general relativity, and that fundamental particles, while fundamental, are sufficiently non-point-like to avoid creating black hole singularities. The Standard Model, and anything resembling it, still produces lumps of very high matter-energy density separated by big gaps of very low matter-energy density in every case, rather than the much more homogeneous matter-energy distributions assumed for mathematical convenience in astronomical applications of general relativity. (Special relativity is built into quantum mechanics already.)
Also, assume that we are thinking about general relativity effects from a reference frame of an observer inside the fundamental particle who has the same angular momentum and linear momentum as the particle.
What does general relativity say in general about the effects of high mass-energy distributions in systems where all of the matter is at rest relative to the observer?
It says that time passes more slowly in a gravity field to the extent of that gravity field's strength according to a well defined formula. Per Wikipedia:
A common equation used to determine gravitational time dilation is derived from the Schwarzschild metric, which describes spacetime in the vicinity of a non-rotating massive spherically-symmetric object. The equation is:

t0 = tf * sqrt(1 - 2GM/(r*c^2)) = tf * sqrt(1 - r0/r)

where:
* t0 is the proper time between events A and B for a slow-ticking observer within the gravitational field,
* tf is the coordinate time between events A and B for a fast-ticking observer at an arbitrarily large distance from the massive object (this assumes the fast-ticking observer is using Schwarzschild coordinates, a coordinate system where a clock at infinite distance from the massive sphere would tick at one second per second of coordinate time, while closer clocks would tick at less than that rate),
* G is the gravitational constant,
* M is the mass of the object creating the gravitational field,
* r is the radial coordinate of the observer (which is analogous to the classical distance from the center of the object, but is actually a Schwarzschild coordinate),
* c is the speed of light, and
* r0 = 2GM/c^2 is the Schwarzschild radius of M. If a mass collapses so that its surface lies at less than this radial coordinate (or, in other words, covers an area of less than 16*pi*G^2M^2/c^4), then the object exists within a black hole.
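The quoted formula is easy to sketch numerically. Here is a minimal Python version of t0/tf = sqrt(1 - r0/r), sanity-checked against the familiar fractional clock slowdown at the Earth's surface:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """r0 = 2GM/c^2 for a mass in kilograms."""
    return 2.0 * G * mass_kg / C ** 2

def time_dilation_factor(mass_kg, r):
    """t0/tf = sqrt(1 - r0/r) for a stationary clock at Schwarzschild
    radial coordinate r outside a non-rotating spherical mass."""
    r0 = schwarzschild_radius(mass_kg)
    if r <= r0:
        raise ValueError("clock is at or inside the Schwarzschild radius")
    return math.sqrt(1.0 - r0 / r)

# Sanity check: clocks at the Earth's surface run slow by about
# 7 parts in 10^10 relative to a distant observer.
print(1.0 - time_dilation_factor(5.972e24, 6.371e6))  # ~7e-10
```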
Now, one thing that the flavor-changing charged weak force currents in decays of mesons involving heavy quarks, where CP violation is observed, have in common is that the rest mass of the particles on one side of the interaction is much greater than the rest mass of the particles on the other side of the interaction.
Thus, for our observer who follows a path along a Feynman diagram from the source heavy quark to any one of the particles that results from its decay, time is going to pass more slowly before our observer starts riding a W boson than it does at the other end of that W boson interaction.
So, if weak interactions proceed at the same speed from the perspective of an observer sitting in the particle (or in any of the particles it couples to) at either end of the interaction, then from the perspective of an outside observer time will be passing more slowly at the heavy end than at the light end, giving rise to different interaction speeds in the two directions of the interaction.
Intuitively, it makes sense that orders of magnitude heavier mass densities are going to lead to genuine CP violations and that the heavier the heaviest particle involved in the interaction, the more pronounced the CP violation will be.
It also suggests that we ought to be able to fit this model to the data: take the known rest masses of the particles involved in the decay chains of CP-violating flavor-changing weak force currents, together with the time dilation effects implied for symmetric stationary masses in general relativity, and infer the mass densities at each end of the interaction, or at least the curve of possible mass density differences between the two ends. From that, one could infer a semi-classical fundamental particle radius for the particles that participate in CP-violating weak interactions, addressing these effects in a manner mechanically quite different from the usual arbitrary, empirically determined CKM matrix approach. These semi-classical particle radii will necessarily vary less markedly than mass between the particles in this model, and it may well be possible to assign a single radius to all Standard Model fermions (and even all massive Standard Model particles, including the W and Z), at least as a first order approximation, by choosing the points where the mass difference-radius curves for the interactions that show CP violation intersect. One could also try arbitrarily using the Planck length of 1.616*10^-35 meters.
Or, one could use the Compton wavelength of the particle (Planck's constant divided by the product of the speed of light and the particle's rest mass; about 2.43*10^-12 meters for an electron) to see whether that would fit the data. The Compton wavelength already sets the approximate effective range of a Standard Model force and its cross-section of interaction, and it dovetails with the non-CP-violating nature of the electromagnetic force, which makes it particularly attractive as a candidate radius (perhaps adjusted by a fixed constant, as is done between the Bohr radius of an electron and its classical electron radius). Using the Compton wavelength, which varies proportionally to 1/mass, should still cause density to increase markedly for higher generation particles, whose masses increase dramatically with each higher generation of a given type of fermion (although I haven't done the back-of-napkin calculations to confirm that yet, and would have to dig up the time dilation formula; still, that part of the calculation would be elementary, while fitting the different time rates into the weak interaction calculations would be a bit more subtle, though probably not beyond even my abilities given a few weeks to work on it). This would further reify the heuristic intuition behind using a Compton wavelength to determine a particle's cross-section of interaction in the Standard Model (on the notion that it quantifies the probability that the particle will bump into a boson near its path).
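As a quick illustration of how sharply the Compton wavelength shrinks as fermion masses climb through the generations, here is a back-of-napkin Python sketch; the quark and lepton masses are rough particle-data-table central values in MeV/c^2, used purely for illustration:

```python
H = 6.626e-34          # Planck's constant, J*s
C = 2.998e8            # speed of light, m/s
MEV_TO_KG = 1.783e-30  # 1 MeV/c^2 expressed in kilograms

def compton_wavelength(mass_mev):
    """lambda = h / (m * c), for a rest mass given in MeV/c^2."""
    return H / (mass_mev * MEV_TO_KG * C)

# Rough illustrative rest masses in MeV/c^2.
masses = {"electron": 0.511, "strange": 95.0, "muon": 105.7,
          "charm": 1275.0, "bottom": 4180.0, "top": 173000.0}

for name, m in masses.items():
    print(f"{name:9s} {compton_wavelength(m):.3e} m")
# The electron comes out near 2.43e-12 m; the top quark is about
# five orders of magnitude smaller.
```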
Putting the Compton wavelength in for r, one gets t0 = tf*(1 - 2GM^2/hc)^(1/2). Simplified, that is: t0 = tf*(1 - KM^2)^(1/2), for K = 2G/hc. I'll have to save putting particle masses, appropriately converted from electron volt units, etc. (to four significant digits, 1 MeV/c^2 = 1.783*10^-30 kilograms; c = 2.998*10^8 meters/second; h = 6.626*10^-34 kilograms*meters^2/second; the reduced Planck's constant, which is h divided by 2pi and is natural for some quantum mechanics applications, is 1.055*10^-34 kilograms*meters^2/second; G = 6.674*10^-11 meters^3/(kilogram*seconds^2); and K with the ordinary non-reduced Planck's constant is 2.136*10^-45 for masses in MeV/c^2), into this equation and comparing them to observed forward v. backward rate ratios in the flavor changing weak force currents that show CP violations for a future post, or an update to this post.

For example, using the Compton wavelength, the time dilation for a bottom quark, which is 4200 MeV/c^2, would be about (1 - 3.8*10^-38)^(1/2), a reduction on the order of 10^-38 (almost nil). The relevant mass is more like 5700 MeV/c^2 for some of the decaying B mesons that have been shown to exhibit pronounced CP violation.

Using the Planck length, the time dilation for a bottom quark would be determined for r = 1.616*10^-35 and r0 = 1.112*10^-53, for a total of (1 - 6.9*10^-19)^(1/2), a reduction on the order of 10^-19.
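The time dilation for these two candidate radii can be computed directly in Python. One numerical caveat: for very small r0/r, a naive 1 - sqrt(1 - x) underflows to zero in double precision, so the sketch below uses log1p/expm1 to preserve the tiny difference (the conversion 1 MeV/c^2 = 1.783*10^-30 kg is standard):

```python
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                # speed of light, m/s
H = 6.626e-34              # Planck's constant, J*s
MEV_TO_KG = 1.783e-30      # 1 MeV/c^2 in kilograms
PLANCK_LENGTH = 1.616e-35  # meters

def fractional_slowdown(mass_mev, r):
    """1 - t0/tf = 1 - sqrt(1 - r0/r), computed via log1p/expm1 so
    that extremely small r0/r ratios don't vanish in floating point."""
    m = mass_mev * MEV_TO_KG
    r0 = 2.0 * G * m / C ** 2
    return -math.expm1(0.5 * math.log1p(-r0 / r))

m_b = 4200.0                     # bottom quark rest mass, MeV/c^2
lam = H / (m_b * MEV_TO_KG * C)  # its Compton wavelength

print(fractional_slowdown(m_b, lam))            # ~1.9e-38
print(fractional_slowdown(m_b, PLANCK_LENGTH))  # ~3.4e-19
```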
Clearly, the length scale would have to be much smaller to make that kind of difference. A fundamental particle scale dozens of orders of magnitude below the Planck length (very roughly 10^-51 meters for a bottom quark, if the observed CP asymmetries are of order a percent) would be necessary to get gravitational GR effects at the levels needed to fit the CP violations we observe. But, since we already have some thin evidence that space is continuous at sub-Planck length scales, that isn't necessarily a death blow to this idea.
Benefits Of This Approach
This would give us the first directly experimentally motivated estimate of the length scale of fundamental particles or of discrete distance scales (which surely shouldn't be truly point-like in a sensible theory), as opposed to the merely theoretically motivated dimensional analysis used to calculate the Planck length.
This approach would also eliminate the need to add any new particles, forces or dimensions to the Standard Model to explain the observed CP violation, although it might add a new Standard Model constant for each massive particle to determine its semi-classical radius, if the Compton wavelength, the Planck length, or some other single length for all massive fundamental particles doesn't provide the correct time dilation effect to fit the observed CP violations.
It would also point the way towards a more general semi-classical approach to integrating general relativity and quantum mechanics at tiny length scales, one consistent with the observation made this year that, by one measure at least, space-time appears to be continuous rather than discrete at a scale well below the Planck scale. String theorists could call it the characteristic string scale, and loop quantum gravity theorists could shrug and bear it. In other words, perhaps general relativity is already perfect at arbitrarily small scales and we simply need to fix a detail about particle size in quantum mechanics.
This approach would almost necessarily work to fit the data (it has a deterministic rule for fitting constants, with potentially many degrees of freedom if necessary, in the fundamental particle semi-classical radii), and if it found that the Compton wavelength or a single shared radius worked, it would add no more than one constant to the Standard Model while removing the constants associated with the CP-violating parameters of the CKM matrix, producing a net wash or a reduction in the number of arbitrary parameters of the Standard Model, with better theoretical motivation.
Of course, if the radii turn out to be all over the place for different particles, or if there is no consistency between different CP violations for the same particles for some reason, this approach might not deserve to be adopted.