Wednesday, August 31, 2011

The Experimental Precision of Select Standard Model and GR Constants

How accurately have we determined the fundamental constants of the Standard Model? Less accurately than you might expect.

The data from the Particle Data Group as of 2011 tell the story.

For quark masses, the range of values within the appropriate confidence interval (generally 95%) is as follows (the most likely value is not necessarily in the middle of the range since some confidence intervals are lopsided):

up quark = 1.7-3.1 MeV
down quark = 4.1-5.7 MeV
up/down quark mass ratio = 0.35-0.60
mean up and down quark mass = 3.0-4.8 MeV
strange quark = 80-130 MeV (central value 100 MeV).
strange quark mass/mean up and down quark mass = 22-30
charm quark mass = 1.18-1.34 GeV
bottom quark mass (by MS definition) = 4.13-4.27 GeV
bottom quark mass (by 1S definition) = 4.61-4.85 GeV
top quark mass = 171.4-174.4 GeV

The strong force coupling constant, which governs the interactions of quarks and gluons with each other and is used in quantum chromodynamics (QCD), is known to an accuracy of approximately 0.6%. At the scale of the Z boson mass it is 0.1184 ± 0.0007, although the range of values reported from a robust variety of methods suggests an actual value closer to 0.119 +/- 0.001.
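To put rough numbers on "less accurately than you might expect," here is a minimal sketch that converts each quoted range into a percentage spread around its midpoint (the midpoints are only approximations since, as noted above, some of the PDG intervals are lopsided; the variable names are mine):

# Rough fractional spreads implied by the PDG ranges quoted above.
ranges_mev = {
    "up": (1.7, 3.1),
    "down": (4.1, 5.7),
    "strange": (80.0, 130.0),
    "charm": (1180.0, 1340.0),
    "bottom (MS-bar)": (4130.0, 4270.0),
    "top": (171400.0, 174400.0),
}
for name, (lo, hi) in ranges_mev.items():
    mid = (lo + hi) / 2.0
    half_width = (hi - lo) / 2.0
    print(f"{name:16s} ~{mid:9.1f} MeV  +/- {100 * half_width / mid:.1f}%")

# For comparison, the strong coupling constant at the Z mass:
alpha_s, d_alpha_s = 0.1184, 0.0007
print(f"alpha_s(M_Z)      {alpha_s} +/- {100 * d_alpha_s / alpha_s:.1f}%")

The output runs from roughly 29% for the up quark down to about 0.9% for the top quark, and about 0.6% for the strong coupling constant, which is the gist of the comparison being made here.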

The canonical value of gluon mass in theory is zero, but a gluon "mass as large as a few MeV may not be precluded" by experimental data.

First principles estimates of the proton and neutron masses from QCD, using theory and the known constants, are accurate only to within about 5%. Given the limited accuracy with which the constants upon which these estimates rest are known, however, this is quite impressive.

For the fundamental leptons, the fundamental bosons, and quantum electrodynamics (QED), we have much more accurate estimates, although our absolute neutrino mass estimates are far less certain:

electron 0.510998910 ± 0.000000013 MeV
muon 105.658367 ± 0.000004 MeV
tau 1776.82 ± 0.16 MeV

Koide's formula for the charged leptons, which states that the square of the sum of the square roots of the three charged lepton masses, divided by the sum of those masses, equals 1.5, is established to similar precision.
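As a quick numerical check of that ratio, using the central values of the charged lepton masses listed above (a minimal sketch; error propagation is omitted):

# Koide's ratio: (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2 / (m_e + m_mu + m_tau)
from math import sqrt

m_e, m_mu, m_tau = 0.510998910, 105.658367, 1776.82   # MeV, central values from above
ratio = (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2 / (m_e + m_mu + m_tau)
print(ratio)   # ~1.5000, i.e. the relation holds to roughly the precision of the tau mass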

electron neutrino < 2 eV (probably closer to 10^-5 eV). The squared mass difference between the electron neutrino and muon neutrino mass states is on the order of 10^-5 eV^2, and the squared mass difference between the tau neutrino mass state and the electron and muon neutrino mass states is on the order of 10^-3 eV^2. But neither the neutrino mixing matrix nor the neutrino masses are known with great accuracy.

W = 80.399 ± 0.023 GeV
Z = 91.1876 ± 0.0021 GeV
Z-W = 10.4 ± 1.6 GeV

The fine structure constant, which is the coupling constant of electromagnetism, is α = 1/137.035999084(51).

There are additional constants in the Standard Model, of course: principally, the weak force coupling constant, the four parameters necessary to determine the 3x3 unitary CKM matrix (which can be parameterized in more than one equivalent way), and the four parameters necessary to determine the 3x3 unitary PMNS matrix, but those are less familiar. The first two are known with precision similar to that of the other weak force observables, while the PMNS matrix elements related to neutrino oscillation are known only to within roughly a factor of two for values close to zero (or with a comparable uncertainty in one minus the value for entries close to one).
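For readers unfamiliar with what "four parameters" means in practice, the standard parameterization of a 3x3 unitary mixing matrix such as the CKM matrix uses three mixing angles and one CP violating phase. Here is a minimal sketch (the function and the illustrative angle values are my own choices for demonstration, not the fitted 2011 values):

import numpy as np

def mixing_matrix(th12, th13, th23, delta):
    # Standard parameterization: three rotation angles and one CP violating phase.
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    ei, emi = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12 * c13,                         s12 * c13,                        s13 * emi],
        [-s12 * c23 - c12 * s23 * s13 * ei,  c12 * c23 - s12 * s23 * s13 * ei, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ei,  -c12 * s23 - s12 * c23 * s13 * ei, c23 * c13],
    ])

# Illustrative CKM-like angles: the ~13 degree Cabibbo angle dominates.
V = mixing_matrix(np.radians(13.0), np.radians(0.2), np.radians(2.4), np.radians(69.0))
print(np.round(np.abs(V), 3))               # |V_ud| ~ 0.974, |V_us| ~ 0.225, etc.
print(np.round(np.abs(V @ V.conj().T), 3))  # unitarity check: the identity matrix

The same construction, with different angle values, applies to the PMNS matrix.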

There are experimental constraints on the minimum masses of hypothetical fourth generation particles, higher generation W and Z bosons, Higgs bosons, and other hypothetical particles under various extensions of the Standard Model (e.g. leptoquarks and axions), and on a variety of interactions and decay modes prohibited by the Standard Model. These searches have largely come up empty, although some recent neutrino oscillation data is most easily explained by more than three generations of neutrinos. Some of the most theoretically well motivated extensions of the Standard Model (predicting proton decay, neutrinoless double beta decay, magnetic monopoles, etc.) turn out to face some of the most experimentally prohibitive bounds on their existence.

One implication is that experiment leaves very little room for theoretical innovation in electroweak interactions, but meaningful room for adjustments to the quark masses or even to the QCD equations.

By comparison, the uncertainty in the gravitational constant's measurement is about one part per thousand, the uncertainty in the cosmological constant's value is about 3.3%, the experimental uncertainty in the measurement of the geodetic effect of general relativity is about 0.2%, and the experimental uncertainty in the measurement of frame dragging in the vicinity of the Earth is about 19% (all of which are consistent with general relativity's predictions). The geodetic and frame dragging effects are measured in terms of deviation from the Newtonian gravitational prediction.

The speed of light in a vacuum, needed in both quantum mechanics and general relativity, is known with such precision that we use it to define the length of the meter relative to the length of the second.

It is also worth noting that, with current computational technology and mathematical methods, it is not possible to do remotely complicated calculations in QCD, or very exact calculations in general relativity for a number of bodies on the order of the number of stars in a typical galaxy. These kinds of calculations are possible in simple, stylized systems, but not in systems even remotely approaching the complexity of those seen in real life.



Distant Quasar Light Doesn't Show Discrete Space-Time Distortions

One observable that might be affected by a discrete structure of space-time is the way that light from distant quasars appears when observed (i.e. "the degradation of the diffraction images of distant sources"). Experimentally, these effects are not seen. If a=1 is the naively expected amount of distortion from a discrete space-time structure at the Planck scale, the data support distortion of no more than something on the order of 0.05 to 0.15 relative to that value.

Opium Used As Drug 4500 Years Ago In Spain

There is evidence of opium use as a drug (medicinal and ritual) from 4,500 years ago in Spain at several sites in Andalusia and Catalonia. The use of opium as a drug was previously known in the Danubian Neolithic of Eastern Europe, but evidence of this use had not been found in Southwestern Europe. The find predates the arrival of Celtic culture in the region, which is frequently viewed as the point at which the Indo-European languages arrived in Iberia; opium use was instead already present at the mid-to-late stone age farming stage of civilization there.

Tutsi v. Hutu Genetics

Razib at Gene Expression, using the results of a whole genome DNA test performed on an individual who self-identifies as 1/4 Hutu and 3/4 Tutsi, does a two part comparison of this individual's DNA with that of other populations for which data is available. The existence or absence of genetic differences between Tutsi and Hutu populations is a matter of great historical and current political interest, but no existing whole genome DNA studies have publicly available data for Hutu or Tutsi individuals.

Tutsis and Hutus are found in Rwanda, where the distinction became known to the world as a result of the attempted genocide in which Hutus sought to slaughter all Tutsis, and in Burundi, as well as in some neighboring countries. There is a long history, dating at least to Belgian colonial involvement, of minority Tutsis serving as a ruling class presiding over Hutu majorities, and of mass slaughter between these peoples, who speak the same language, share the same religion, and whose visible differences in appearance from each other can be subtle to people not attuned to them (both groups would be considered "black" in either the American or Latin American vernacular racial classification schemes).

A core question in relation to these ethnicities is whether they amount to a caste system, possibly created by the Belgians from a common population, or are the result of once ethnically distinct communities that merged in a way that produced these results. Razib reached the latter conclusion.

He concludes that a Bantu speaking peasant farmer population from Kenya is a reasonable proxy for Hutus, and that, in contrast with that proxy, this individual's DNA strongly suggests a Tutsi gene pool very similar to the pastoralist Masai, the best known of the Nilo-Saharan speaking pastoral peoples of Africa. He theorizes that the Tutsi were an intrusive pastoralist people who came to rule the antecedent Hutu farmers of the area, probably prior to significant European colonial intervention, which is a common historical pattern, but that the Tutsi eventually lost their language to that of the Hutu people whom they ruled.

As an English speaking American of South Asian descent, of course, he is not a personal participant in the controversy and comes to the issue with a relatively open, albeit informed, mind. And the full posts are full of caveats about the single genome data set, the limits of the methodology, and the importance of not making assumptions that amount to Platonic ideals in the admittedly inexact business of looking for large scale population structure where the predominant factor is admixture of previously more genetically distinct populations.

Tuesday, August 30, 2011

Wash Park Prophet Recent Physics Related Posts Index and PDG Link

I wrote a number of posts at Wash Park Prophet about physics before this blog was created and reference some of those posts from August 2010 onward (excluding some posts with that tag that have only a tangential fit to the scientific developments in physics) here for convenience:

* More Heavy Particle Decay Asymmetry (August 18, 2010)

* Non-Abelian Geometry (August 19, 2010)

* Grad Student Wins Nobel Prize (October 5, 2010) (note that the headline turned out to be somewhat misleading).

* Benoît Mandelbrot R.I.P. (October 18, 2010)

* How Many Atoms Are There In A Mole? (October 18, 2010)

* More Dark Matter Science (November 4, 2010)

* Four Flavors Of Neutrinos? (November 8, 2010)

* Lots Of Dark Matter Actually Just Dim (December 2, 2010)

* Higgs Search Update (December 14, 2010)

* Tevatron Will Shut Down in September (January 11, 2011)

* New Physics and the CKM Matrix (January 14, 2011)

* Metaphysics (January 26, 2011)

* Experimental Tests For String Theory (February 1, 2011)

* Gluons Lose Mass With Momentum (February 1, 2011)

* SUSY Cornered (February 21, 2011)

* Bayesian SUSY Parameter Odds (February 25, 2011)

* Cosmological Numerology

* A Strong CP Problem Conjecture (March 9, 2011)

* The Search For Dark Matter Continues (March 14, 2011)

* Superkamiokande: Neutrinos Not Very Weird (March 15, 2011)

* More Physics Scuttlebutt (March 15, 2011)

* Reactor Data Suggest Sterile Neutrinos (March 17, 2011)

* Fundamental Particle Spin As Emergent (March 21, 2011)

* CDF Cries New Physics, It Isn't Impossible That They're Right (April 6, 2011)

* WIMPs Very Weakly Interacting Or Not So Massive (April 18, 2011)

* Light Higgs Rumor (April 25, 2011)

* General Relativity Still Works Perfectly (May 5, 2011)

This post is also as good a place as any to put a link to the Particle Data Group, which is the standard reference for physicists summarizing the latest experimental data regarding physical constants pertinent to particle physics. For example, it is the place to go if you need the latest experimental determination of the mass of the electron, or the experimental lower bound on the half-life for proton decay.

Other standard references for physical constants are CODATA and NIST.

Another useful reference: "The Road to Reality: A Complete Guide to the Laws of the Universe" by Roger Penrose (2004).

Sunday, August 28, 2011

LHCb establishes that there is no B meson anomaly

One of the strongest signs to date of beyond the standard model physics has been experimental data not consistent with the Standard Model prediction in the decay of B mesons.

For example, these data showed stronger CP symmetry violation than the Standard Model CP violating phase derived from neutral kaon decay (a kaon is a type of meson with a strange quark in it) had predicted. The Tevatron data, had they held up, would have appeared to break the CKM matrix and to require two CP violating phases rather than just one to fit the data.

But the LHCb data contradict the Tevatron data and confirm the Standard Model prediction on this score, and thus eliminate the need to develop a beyond the Standard Model theory to explain them. As Jester explains at Resonaances:

This result is extremely disappointing. Not only LHCb failed to see any trace of new physics, but they also put a big question mark on the D0 observation of the anomalous di-muon charge asymmetry. Indeed, as can be seen from the plot on the right, the latter result could be explained by a negative phase φs of order -0.7, which is now strongly disfavored. In the present situation the most likely hypothesis is that the DZero result is wrong, although theorists will certainly construct models where both results can be made compatible. All in all, it was another disconcerting day for our hopes of finding new physics at the LHC. On the positive side, we won't have to learn B-physics after all ;-)

Meanwhile, Lubos is steaming angry that the press is reporting that SUSY is near dead when he thinks that there are lots of ways for it to remain alive and kicking.

At this point, the only piece of the Standard Model which has not yet come together experimentally is the detection of a Higgs boson, but there isn't enough data yet to rule out its existence in the 115 GeV to 130 GeV range or even make very good predictions in this mass range, even with 2 inverse femtobarns of data collected.

The B meson data also destroys much of the experimental motivation, for example, for a fourth generation of Standard Model fermions (the SM4).

Those hoping for new physics need not despair unduly, however. Without a light Higgs boson discovery, the Standard Model "blows up" and ceases to make coherent theoretical predictions in the 1 TeV to 10 TeV range. So either the Higgs boson will be discovered (which is still something new, even if it has been predicted), or we will get new physics at 1 TeV to 10 TeV as a consequence of its absence; the experimental results, whatever they may be, will surely not be indeterminate.

Friday, August 26, 2011

The Third Parent

I've talked about the non-equivalence of genetic conditions and hereditary conditions before at this blog. But, Neuroskeptic really puts it very nicely (emphasis added):

True or false: you inherit your genes from your parents.

Mostly true, but not quite. In theory, you do indeed get half of your DNA from your mother and half from your father; but in practice, there's sometimes a third parent as well, random chance. Genes don't always get transmitted as they should: mutations occur.

As a result, it's not true that "genetic" always implies "inherited". A disease, for example, could be entirely genetic, and almost never inherited. Down's syndrome is the textbook example, but it's something of a special case and until recently, it was widely assumed that most disease risk genes were inherited.

He also doesn't mention, as Razib has noted in several recent posts at Gene Expression, that you don't inherit exactly 50% of your genes from each parent, because genes come in unequal sized chunks and there is some randomness in the germline formation and recombination process. You can inherit as little as about 45% of your genes from one parent and as much as about 55%, although those are extreme outliers.

None of this has anything to do with a person "taking after" one parent or another, a phenomenon due to someone inheriting more obvious and apparent genes from one parent, or inheriting genes from one parent that express more strongly due to dominant/recessive patterns for the phenotypes influenced by those genes.

He also isn't talking about true "third parent" situations in which an individual is a genetic chimera who can have different DNA in different parts of his or her body, a phenomenon that is probably underdetected because, apart from forensic circumstances, DNA samples are typically taken from only a single site on the body.

Neutrinos and Antineutrinos Have The Same Mass

The MINOS experiment has confirmed the expectation that neutrinos and antineutrinos have the same mass.

The same post notes that the apparent top quark-antitop quark mass asymmetry reported at Tevatron by the CDF group was swiftly contradicted by a D0 result at the same facility.

The Strange Formula of Dr. Koide

This is also as good a place as any to link to a wonderful little paper on Koide's formula from 2005 with the title found in the heading of this part of the post. Koide's formula (set forth before the tau mass had been determined with anything approaching its current precision) states that the sum of the masses of the three charged leptons divided by the square of the sum of the square roots of the charged lepton masses is exactly two-thirds. This phenomenological rule, which has no theoretical grounding, has held with extreme precision.

Two significant generalizations of the rule have been made: a generalization for neutrino masses, which is correct within measurement error (it changes the sign of one of the terms in the charged lepton equation), and a generalization for all quarks other than the top quark (called Barut's formula after A.O. Barut, who published versions of it in 1965, 1978 and 1979), which is accurate to within 5% or so of measured values (some of which aren't known very precisely) but is greatly off the mark for the top quark. The generalized formula for quarks is that the mass of quark N (for N = 0, 1, 2, 3, 4 for the u, d, s, c and b quarks respectively) is equal to the mass of the electron times (1 + 3/(2*alpha) times the sum, for n = 0 through N, of n^4), where alpha (roughly equal to 1/137) is the electromagnetic coupling constant. The precision is 2% for the three charged leptons and the corresponding quarks (related by a constant), and 2.7% for the down-type quarks, but only 28% for the up-type quarks, with the formula's top quark prediction being the most grossly inaccurate: the predicted value is about 101,000 MeV (which also happens to be roughly the Higgs boson mass most preferred by electroweak measures, although no resonances show up there in particle accelerator experiments), versus the experimental value of about 174,000 MeV. The predicted value of the generalized Koide ratio for quarks is 3/2, and the measured value is in fact very nearly pi/2, so perhaps the discrepancy isn't an error at all.
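Barut's original version of this formula applies to the charged leptons themselves (N = 0, 1, 2 for the electron, muon and tau) with the same structure; the quark version discussed above uses analogous constants that I won't try to reproduce here. A minimal numerical sketch of the lepton version:

# Barut's charged lepton formula: m(N) = m_e * (1 + (3/(2*alpha)) * sum of n^4 for n = 0..N)
alpha = 1 / 137.035999   # fine structure constant
m_e = 0.510998910        # electron mass, MeV

def barut_mass(N):
    return m_e * (1 + (3 / (2 * alpha)) * sum(n ** 4 for n in range(N + 1)))

print(barut_mass(0))   # electron: 0.511 MeV by construction
print(barut_mass(1))   # ~105.5 MeV, versus the measured muon mass of 105.66 MeV
print(barut_mass(2))   # ~1786 MeV, versus the measured tau mass of 1776.8 MeV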

Also, interestingly, the square root of the difference between the sum of all six quark masses and the sum of the square roots of the first five quark masses is very close to the square root of the average of the W+, W- and Z boson masses (or, to within the accuracy of the model, the average of the W and Z boson masses), suggesting that perhaps a linear combination of these bosons, rather than the top quark, belongs at the sixth position in the hierarchy of massive particles.

The article also discusses the apparent similarities between the ratios of some of the fermion masses to each other and the coupling constants of the Standard Model forces.

The square roots of the mass matrix for quarks and the square roots of the mass matrix for leptons can also be used to recover the Cabibbo angle (13.04 degrees) of the canonical parameterization of the CKM matrix, suggesting a deep connection between the CKM matrix and the square roots of the masses of the fermions.

Amateur physicist and physics blogger Carl Brannen gets some credit for his explorations on this front (see also this 2005 paper) in this article by Rivero and Gsponer. Leonardo Chiatti published the most recent scholarly article along the same lines as Barut's formula in 2009 (another 2008 paper is here). Gsponer and Hurni, in a 2002 paper, suggested from these formulas that the top quark may be a different animal than its less massive quark companions (indeed, since it has not been observed to form mesons or baryons, possibly due to the rapid decay rate associated with its great mass, one arguably cannot say with confidence that it really interacts with the strong force).

Other interesting literature on the subject includes a 2006 paper by Kyriakos (inspired by the methods of de Broglie).

While it is all to some extent numerology, these are also empirically established relationships that any theory that would explain them from first principles would have to somehow replicate.

Thursday, August 25, 2011

Up, Down, Top, Bottom, Left, Right, Plus, Minus, Forward, and Backward

Charge parity (CP) symmetry violation (left, right, plus, minus) is equivalent to time symmetry violation (forward and backward): quantum processes happen at different rates for particles and antiparticles.

This doesn't happen in strong force or electromagnetic force events. It doesn't happen in Z boson mediated weak force events (see, e.g., here looking at D meson events). It doesn't happen in gravity. It only happens in charged weak force events mediated by W bosons.

It is more specific than that, however. CP violation has only been observed in events that involve one of two kinds of electrically neutral mesons made up entirely of down type quarks: kaons comprised of a strange quark and a down quark (one of which must, of course, be an anti-quark), and B mesons comprised entirely of down type quarks of differing generations (down, strange or bottom). All B mesons, of course, have bottom quarks in them. But, if the latest rumors from LHCb are true, CP violation has been observed only in the neutral B mesons (one type of which has a down quark and a bottom quark, and the other type of which has a strange quark and a bottom quark), which exhibit oscillation.

The possibility of charmed B meson involvement in CP violation had been hinted at in the dimuon channel discussed in the previous post at this blog, but it appears that this result from D0 may not hold up. I've also never heard of CP violation in B mesons with an up quark and a bottom quark. (It is believed that bottom and top quarks never form mesons because the top quark decays too quickly for a meson to form.)

Similarly, while all kaons have one strange quark, CP violation has been observed only in the neutral kaons, which have a down quark and a strange quark, not in the charged kaons, which have an up quark and a strange quark. (Mesons with a top quark and a strange quark don't form for the same reason that there are no B mesons with top quarks; CP violation is also not observed in mesons with both a strange quark and a charm quark.)

There are some notable pieces to this observation.

First, all CP violation observed to date has been in the context of neutral particles that oscillate. Neutrinos also oscillate, but have not been observed to violate CP symmetry at this time.

Neutrons are neutral and have internal flavour, but neutron-antineutron oscillation has not been observed, although there "are theoretical proposals that neutron–antineutron oscillations exist, a process which would occur only if there is an undiscovered physical process that violates baryon number conservation."

Notably, lepton family number violation, which is observed in oscillating neutrinos, has also not been observed in charged leptons (electrons, muons and taus); lepton number violation, as opposed to lepton family number violation, has not been observed at all.

The fact that these particles are electromagnetically charged, and that electromagnetism is not CP violating, could be relevant to this point.

Of course, leptons have not been observed to form composite particles at all. CP violations are also not observed in any charged composite particles.

Second, CP violation has not been observed in the decays of electrically neutral mesons that contain up quarks or charm quarks a.k.a. neutral D mesons (top quarks don't form mesons), despite rigorous searches for this as recently as 2008, even though 2009 research has shown that neutral D meson mixing does take place (without CP violation), just as it does for other neutral particles (see also here).

Third, CP violations are not observed in baryons (i.e. three quark composite particles) that contain up type quarks (or, for that matter, in any type of baryon), or in neutrinos.

Fourth, CP violations have not been observed in mesons with a quark and antiquark of the same type (e.g. the phi meson made of a strange and an anti-strange quark, the J/psi meson made of a charm and an anti-charm quark, or the upsilon meson made of a bottom and an anti-bottom quark, a.k.a. bottomonium).

Only neutral mesons containing down type quarks of different generations exhibit CP violations.
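The catalogue above can be restated compactly (this simply encodes the observations discussed in this post; it is not an exhaustive list of the systems that have been searched):

# Systems discussed above and whether CP violation has been observed in them.
cp_violation_observed = {
    "neutral kaons (down + anti-strange, and conjugate)": True,
    "neutral B mesons (down or strange + anti-bottom, and conjugate)": True,
    "charged kaons (up + anti-strange, and conjugate)": False,
    "neutral D mesons (charm + anti-up, and conjugate)": False,
    "same-flavor quarkonia (phi, J/psi, upsilon)": False,
    "baryons": False,
    "neutrino oscillation (to date)": False,
}
for system, seen in cp_violation_observed.items():
    print("observed" if seen else "not seen", "-", system)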

Of course, this could be because there is no experimental way to observe CP violation in an oscillating same quark type meson. The only way that these mesons can form at all is that the quark and antiquark differ in color charge (otherwise they would be exact antiparticles of each other and would annihilate). But, due to quark confinement, it isn't possible to observe color asymmetries.

There may be a fine technical point that I am overlooking about the interaction between the quantum mechanical considerations involved in neutral particle oscillation and the CKM matrix, or about the way that the need for two out-of-phase complex conjugates leads to CP violation. But I am not aware of anything in the Standard Model CKM matrix or the weak force interaction that makes the presence or absence of CP violation in the weak interactions of quarks depend in any way on the components of a composite particle. Yet both the fact that protons do not decay and the absence of CP violation in mesons of mixed up type and down type quarks suggest the possibility that this could be an omission.

The extremely specific and narrow circumstances in which CP violation and lepton family number violation are observed may provide some hints about what is going on in these interactions at a deeper level.

If it is truly impossible, even in principle, to observe CP violation in same type quark-antiquark mesons due to confinement, and there are only three generations of fermions, as the Standard Model proposes, then we have a complete set of CP violating particles in the neutral kaon and the neutral B mesons, both of which are extremely exotic and fleeting creatures never seen outside laboratory conditions. What an oddly limited way for quantum mechanics to acknowledge an arrow of time.

Rumor Mill: Another Anomaly Bites The Dust?

One of the big experimental results that seemed to contradict the Standard Model was the "like sign dimuon asymmetry" observed at an almost 4 sigma level by the D0 experiment at Tevatron starting about four years ago, most likely due to CP violations beyond the predicted amounts in B meson decays.

Rumor has it, however, that LHCb, which is running an experiment to see if it can confirm this observation from Tevatron, has seen no deviations from the Standard Model predictions and, more particularly, no like sign dimuon asymmetry. The rumors, posted in comments at a leading physics blog, were as follows:

chris said...
just last week i heard from an LHCb guy that they are desperately looking for new ideas of where to look for deviations from the SM. he explicitly said that they are frustrated by not seeing any hint of new physics at all.

23 August 2011 09:26 . . .

Anonymous said...
I also heard from an LHCb guy that they are seeing no deviation in the like sign dimuon asymmetry, completely contradicting D0.

23 August 2011 19:36

The results may be announced at Lepton-Photon in Mumbai on Saturday. We could, of course, simply wait to see what they actually say, but where would be the fun in that?

If the rumored LHCb result is accurate (and it wouldn't be the first time that a D0 experimental indication didn't pan out when attempts were made to confirm it), the motivation to devise beyond the Standard Model particle physics would be greatly reduced, although it isn't clear to me how many independent lines of experimental evidence there are for beyond the Standard Model CP violation in B meson decay, of which this may be only one example. Still, discounting this result takes some strain off apparent experimental indications that the CKM matrix is broken, in the sense that no single set of entries in the matrix (which describes the probabilities that quarks of one generation turn into quarks of another generation via the weak force) fits all of the experiments within their margins of error. The high level of CP violation in B meson decay relative to the Standard Model prediction is the main reason that the CKM matrix appears out of whack, and if LHCb establishes that those decays aren't as CP violating as this D0 experiment had indicated, then it becomes much easier to fit all of the remaining experimental data to a single theoretical set of CKM matrix entries.

This is one of several experimental results on physics blogger Jester's short list of results that could contest the Standard Model which have been seriously called into question in the last few months.

Indeed, in general, LHC has yet to find any compelling evidence of Beyond the Standard Model physics, and it has not ruled out a low mass (114 GeV to 130 GeV) Standard Model Higgs boson, as this is the mass range where the LHC experiment is least sensitive and thus requires the most data to produce a definitive result.

To my mind, the most compelling experimental evidence that is an ill fit for the Standard Model is the measured size of the muonic hydrogen atom, which is notable both for the size of the discrepancy given the accuracy of the theoretical expectation and for having been predicted by almost nobody.

Updated August 28, 2011:

Quantum Diaries Survivor tends to confirm the rumors:

[T]he recent searches for Supersymmetry by ATLAS and CMS, now analyzing datasets that by all standards must be considered "a heck of a lot of data", have returned negative results and have placed lower limits on sparticle masses at values much larger than those previously investigated (by experiments at the Tevatron and LEP II).

Similar is the tune being sung on the B-physics sector, now being probed with unprecedented accuracy by the dedicated LHCb experiment (along with again precise measurements by ATLAS and CMS, plus of course the Tevatron experiments). I have not reported on those results here yet, but will duly do so in the next weeks. In a nutshell, anyway, deviations from the Standard Model predictions are all well within one sigma or two; the hypothetical contribution of SUSY particles in virtual loops taking part in the decay of B hadrons must be very small in order to fit in this picture.

Monday, August 22, 2011

Lepton-Photon Conference Results Show Weaker Higgs Signal


The image, from here, shows the Quantum Diaries Survivor plot of what a Standard Model Higgs signal would be expected to look like with the amount of data produced to date for particular SM Higgs mass values, compared to the observations from the LHC, as opposed to the usual plot, which compares the data to the Standard Model expectation in the absence of a Standard Model Higgs. Thus, in the mid-120 GeV range, the signal seen and the signal we would expect to see if the Higgs were really there are quite similar, while at higher masses there is a huge gap between the significance of the result we have seen and the significance that we would predict if it were really there. A Higgs at 125-130 GeV would be expected to produce experimental results much more similar to the ones seen (because experimental power is low in that range, producing weak signals in general) than one at, say, 160 GeV.

The Lepton-Photon Conference this week in Bombay has offered little to encourage scientists hoping that a Standard Model Higgs boson will not be ruled out by experimental evidence.

Regarding the ATLAS experiment at the LHC (and the other major experiments):

In general, the previous Higgs signals have weakened. Some more details: A 35-minute introductory theoretical talk on the Higgs physics was followed by three 20-minute talks by ATLAS, CMS, and Tevatron. ATLAS - see their fresh new Higgs press release - excluded everything at the 95% level except for 114-146 GeV, 232-256 GeV, 282-296 GeV, and above 466 GeV. No new details about the preferred masses inside these intervals, despite 2/fb of data in some channels. . . .

CMS . . . has mentioned that the p-value for Higgs to gamma-gamma grew deeper, more significant, near 140 GeV but more shallow near 120 GeV. Nothing is seen in the tau-tau channel - which will however grow important in the future. The speaker explains the WW channel ending up with 2l 2nu; then ZZ "golden" channel with 4 leptons. Many combinations are shown. In the CMS p-value, the 120 GeV is actually slightly deeper than for 140 GeV - difference from the gamma-gamma channel itself. Surviving intervals (from the 95% exclusion) are 114-145 GeV, 216-226 GeV, 288-310 GeV, above 340 GeV - different small hills than ATLAS. The deviations grew weaker since the last time. . . . Tevatron has collected 11.5/fb or so and will close on September 30th.

From The Reference Frame.
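Since ATLAS and CMS quote different surviving windows, it is worth intersecting them. Here is a minimal sketch using the intervals quoted above (treating the open-ended "above X GeV" windows as running up to an arbitrary 600 GeV display cutoff):

# Higgs mass windows (GeV) not yet excluded at the 95% level, as quoted above.
CUTOFF = 600.0  # arbitrary display cutoff for the open-ended "above X GeV" windows
atlas = [(114, 146), (232, 256), (282, 296), (466, CUTOFF)]
cms = [(114, 145), (216, 226), (288, 310), (340, CUTOFF)]

surviving = []
for a_lo, a_hi in atlas:
    for c_lo, c_hi in cms:
        lo, hi = max(a_lo, c_lo), min(a_hi, c_hi)
        if lo < hi:
            surviving.append((lo, hi))
print(surviving)   # [(114, 145), (288, 296), (466, 600.0)]

Only the low mass window is wide; the others barely survive the overlap of the two experiments.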

The more than two sigma deviations from the Standard Model expectation are just barely more than two sigma and aren't always consistent between different experiments. They may not exclude at the 95% confidence level, but they probably do at the 90% confidence level.

One would expect a Standard Model Higgs signal to be clearer at this point (as shown in the image above) and to show more of a trendline towards increasing confidence in repeated studies, rather than a persistent, low grade, not quite excludable bump in a general area that isn't quite consistent. Instead, any Higgs signal is extremely subtle. And the studies we are looking at simply compare a null hypothesis to "something" to set their confidence level, rather than performing a relative hypothesis test between a highly specific prediction and the results seen. At this point the Higgs boson prediction has degenerated from a specific spin zero particle at a fairly specific mass (a mass that has already been ruled out and is close to the Z boson mass) to a prediction that there is something like it out there somewhere.

The "width" of a signal in one of these charts is a measure of its "cross-section" which is quite narrow for all particles in the Standard Model discovered to date. Nothing observed so far has such a shallow, wide bump as opposed to a steep narrow bump. If what Atlas and CMS are seeing as a deviation from the Standard Model is an undiscovered single particle of any kind, it is a very different animal than anything else seen to date or what theorists had expected a Higgs boson to look like (perhaps in ignorance of how a spin-0 fundmamental particle differs from either a higher spin fermion or boson, or a spin-0 meson which is composite).]

Precision electroweak data fitted to the Standard Model, and theoretical considerations in models like the minimal supersymmetric standard model (the most famous of the SUSY models), strongly disfavor any of the mass ranges for the Higgs except the 114-145 GeV range that still remains standing, barely. The Higgs boson mass range favored by precision electroweak data, ca. 90-110 GeV, has already been ruled out.

Marco Frasca is pretty much ready to write off the Standard Model Higgs boson in favor of "a strongly coupled Higgs boson" within the context of a SUSY model, as he explains in technical detail here. In the sense used, "strongly coupled" merely means having a very large self-interaction coupling constant, rather than specifically coupling to the QCD strong force through gluons, although this Higgs mass generation mechanism is analogous to the way that mass is generated in QCD through strong force interactions.

Woit sums it up thusly:

•No Higgs above 145 GeV
•In the region 135-145 GeV, both experiments are seeing somewhat more events than expected from background, but less than expected if there really was a Higgs there.
•Not enough data to say anything about 115-135 GeV, the Higgs could still be hiding there. If so, a malicious deity has carefully chosen the Higgs mass to make it as hard as possible for physicists to study it.

Particle v. Antiparticle Mass

Meanwhile, from the same link:

In recent 3 days, the ASACUSA experiment at CERN announced its accurate laser measurements of the antiproton mass. It agrees with the proton mass at the accuracy of 1 part per billion. The CPT symmetry which almost certainly holds exactly implies that matter and antimatter - whenever they can be distinguished by charges to be sure that you made the "anti-" operation correctly - have exactly the same mass. . . .

Just compare the measurement of the antiproton mass with some of the spectacular claims about the top-antitop mass difference. It's almost the same thing but CDF at the Tevatron claimed that the antitop mass differs from the top mass by 3 GeV or so - by two percent.

Try to invent a reason why the top and antitop masses would differ by 2% - but the analogous proton-antiproton mass difference (which may be affected by the top-antitop differences as well) would be smaller by more than 7 orders of magnitude. I don't say that it's strictly impossible to invent such a mechanism but it is probably going to be very awkward.

Claims that someone observes CPT violation are extraordinary claims and they require extraordinary evidence. So if one measures the top-antitop mass difference to be nonzero at a 2-sigma or 3-sigma level (by 3 GeV), he should say that "our measurements were rather inaccurate; our observation of the mass difference had the error of 3 GeV".

On one hand, I don't entirely agree with Lubos Motl (I'd put the accent marks in there if I knew my keystrokes better), in the quote above, that it is so hard theoretically to reconcile the fact that the proton and anti-proton masses are within one part per billion of each other (10^-9) with a top and antitop pair that differ by two parts per hundred (2x10^-2). The top quark weighs about 174,000 MeV/c^2, while the first generation up and down quarks that go into a proton are each almost five orders of magnitude lighter. And, in a proton, more than 90% of the mass comes from the strong nuclear force that binds its two up quarks and one down quark together, not from the quarks themselves. Thus, any quark-level mass difference signal is damped by roughly six orders of magnitude in a proton relative to a top quark, and is further damped by the need to disentangle up and down quark mass differences in a proton, when those masses aren't known with as much precision as we'd like, since quark confinement and the relative stability of the protons and neutrons made from them make it harder to measure them separately. So it isn't entirely unreasonable to see the levels of experimental error in the two systems as roughly comparable in some respects.
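To put rough numbers on that damping argument (a back-of-envelope sketch with round values, not a careful error analysis; the variable names are mine):

# Rough comparison of the two precision claims discussed above (round numbers).
m_top = 173e3                 # top quark mass, MeV
dm_top = 3e3                  # claimed top-antitop mass difference, MeV
m_proton = 938.3              # proton mass, MeV
antiproton_agreement = 1e-9   # fractional proton-antiproton agreement per ASACUSA

top_fractional = dm_top / m_top
print(top_fractional)                         # ~1.7e-2
print(top_fractional / antiproton_agreement)  # ~1.7e7, the "more than 7 orders of magnitude" gap

# Most of a proton's mass is QCD binding energy rather than quark mass,
# so any quark-level asymmetry is heavily diluted:
quark_masses_in_proton = 2 * 2.4 + 4.9   # two up + one down, central PDG values from above, MeV
print(quark_masses_in_proton / m_proton)      # ~0.01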

The fact that top quarks decay almost exclusively into bottom quarks, without forming hadrons (three quark baryons or two quark mesons) as other quarks do, also could make the signal cleaner and hence make it easier for a mass difference to appear in these decays that would be hard to see in particles subject to error inducing backgrounds from QCD hadron formation.

On the other hand, the theoretical and experimental motivations for particles and their antiparticles to have the same masses are extremely strong. There is a huge range of masses between the top quark scale and the component quarks of protons over which particle-antiparticle mass equivalence has been confirmed to considerable experimental precision. Theoretically, the need for this symmetry goes all the way back to Dirac's equation and has remained central in all subsequent elaborations of quantum mechanics. So Motl appropriately invokes the maxim that extraordinary claims require extraordinary evidence.

The only person I'm aware of who is really touting a model that entertains this kind of asymmetry is the scientifically trained physics blogger Kea from New Zealand. But, while those models are interesting, I'm not yet convinced that they are sufficiently well motivated to back the existence of a CPT violation supported by only a modest two or three sigma discrepancy in essentially one experiment.

Personal Speculation Regarding Composite Quarks

The attraction of Higgsless models and heavy Higgs models, relative to the more traditional Standard Model and SUSY variations on it, continues to surge.

For example, one quite natural way to get mass from fundamentally massless fundamental particles would be to assume that quarks are composite.

The analogy to the atomic nucleus-to-nucleon and nucleon-to-quark relationships could be quite compelling. The nuclear binding force between protons and neutrons is mediated by a composite meson (which is also a boson) called the pion, and this effective nuclear binding force is essentially overflow from the gluon-mediated strong force interactions between the quarks in individual protons and neutrons. Gluons are formally fundamental in the Standard Model, but given that each gluon must carry two color charges, one of which must be an anti-color charge, and that gluons exchange color charges between quarks, the model is just a hair short of a composite one already.

The very limited amount of direct observation of the inner workings of protons, neutrons, and mesons due to quark confinement, and the fact that QCD predictions from first principles tend to match real world observations only to about two significant digits of accuracy, despite our best efforts, leave a significant gap between the strongly theoretically and experimentally motivated QCD rules we take to be true in the Standard Model and what we have actually confirmed with any precision experimentally. For example, the experimental limits that compel the gluon to be actually massless, rather than merely having a very low mass (perhaps on the order of a neutrino mass), are quite weak, since the strong force operates at such short range.

It would not be all that earth shaking to discover that quarks and gluons are themselves composite and that the strong force is merely a spillover of a preon binding force that holds preons together in a quark or gluon. The natural energy level of such a confining force would be immense, rendering this binding force a natural source for all or most of the mass in a composite quark, just as binding energy is responsible for about 90% of the mass of a proton or neutron and a significant and measurable share of the nuclear mass of an atom. Even if these preons had some fundamental mass, it would be easy to imagine that this mass was on the same order of magnitude as the lepton masses, with the rest arising from the binding force.

Preon models aren't that hard to devise, and some fairly good ones have been developed, although it is harder for those models to explain the three generations of fermions that are observed, to identify experimental evidence within the realm of possibility that would prove them, or to explain the way that the weak force decay probabilities seem to put different fundamental particles in the Standard Model on an equal footing (although, even then, the fact that the weak force treats each color of quark as a distinct possibility, relative to a single charged lepton possibility, at each generation can be seen as an argument that quarks, but not leptons, are composite).

It also isn't too hard to imagine that the three color charges might correspond to the three spatial dimensions, while the electrical charge might correspond to the time dimension in some profound way. In this view, the strength of the color force relative to the electromagnetic force would be related to the fact that the conversion factor of the speed of light relates very large spatial distances to very small time distances. The fact that both the color force and the electromagnetic force appear to be non-CP violating would also suggest a link between the two. Now, this program of unification would leave the weak force, which has been unified with the electromagnetic force, the odd man out after decades of wedded bliss with electromagnetism, but surely some sense of its role could emerge from this line of thought.

Experimental limits on composite quarks and gluons also ease up quite a bit if you assume that the effects expected at the Planck scale actually manifest at a scale several orders of magnitude smaller than the Planck length.

Friday, August 19, 2011

Does General Relativity Minus Gravity Equal Special Relativity?

An issue of rigor here: everybody knows that general relativity provides a more accurate account of how gravity works than Newtonian gravity does, in a way that respects the principles of special relativity relating to the non-linear relationships between velocity, mass and distance.

Gravity in general relativity differs from Newtonian gravity in multiple ways. General relativity gives us concepts like black holes, the Big Bang, and gravitational time dilation. Newtonian gravity is not path dependent, while general relativity is path dependent. In general relativity both mass and energy give rise to, and are subject to, gravitational effects (allowing for phenomena like gravitational lensing), while only mass matters for Newtonian gravity. And Newtonian gravity has precisely the same effect in all directions on the Euclidean sphere surrounding an object, while in general relativity the direction of an acceleration due to gravity depends upon the mix of mass-energy components in the stress-energy tensor. A rotating sphere, for example, has a different gravitational effect than the same sphere at rest, even if the sphere at rest has a slightly greater mass equivalent to the energy attributable to the angular momentum of the rotating sphere. In a generalized form of Newtonian gravity that looked at total mass-energy rather than merely mass, those two systems would be equivalent, exerting a spherically symmetric force directed at the center of the sphere; but in general relativity the gravitational effect of the rotating sphere will have a "frame dragging" effect that gives it a bit of a twist rather than being directed straight towards the center of the sphere, while the sphere that is not rotating will not.

All of this is by way of prologue. The issue I'm interested in today is whether general relativity is simply special relativity plus a rather complex kind of gravitational force expressed through the medium of space-time distortion, or whether it adds something beyond gravity, or distinct from gravity but not independent of it, to physics beyond special relativity.

You don't, for example, need general relativity to come up with the equation E=mc^2. You can get that from special relativity alone, and it is critical to the formulation of important parts of quantum mechanics, such as Dirac's equation.
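As a minimal illustration of that point (standard special relativity bookkeeping with illustrative numbers; nothing here depends on general relativity):

# The special relativistic energy-momentum relation E^2 = (p*c)^2 + (m*c^2)^2
# already contains E = m*c^2 as the p = 0 case, and the Newtonian kinetic
# energy as the low-velocity correction.
m = 9.10938e-31      # electron mass, kg
c = 299792458.0      # speed of light, m/s

def total_energy(p):
    return ((p * c) ** 2 + (m * c ** 2) ** 2) ** 0.5

print(total_energy(0.0))             # rest energy, ~8.19e-14 J (about 0.511 MeV)
print(m * c ** 2)                    # the same number, by construction

v = 1000.0                           # a slow electron, m/s
p = m * v                            # non-relativistic momentum is adequate at this speed
print(total_energy(p) - m * c ** 2)  # ~4.55e-25 J
print(0.5 * m * v ** 2)              # matches the Newtonian kinetic energy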

My intuition is that there is more to general relativity (perhaps tucked away in usually unstated assumptions) than a quirky law of gravity. But it is tricky to define it. The equivalence principle, for example, defined to mean that gravitational mass and inertial mass are identical, is surely nonsensical in the absence of gravity. But the notion that there is no preferred reference frame and that all forces are created equal, whatever their origin, might have deeper meaning. It might be possible to ask what, if anything, is left of general relativity if one were to set the gravitational constant to zero, but I'm not entirely convinced that this procedure wouldn't throw out additional innovations of general relativity that are embedded in its field equations, by zeroing out not just terms of gravitational significance, at least in conventional formulations, but also other terms, concepts or implications that are distinguishable from gravity per se, even if not precisely independent of gravity.

I've heard people explain one difference between special relativity and general relativity as the latter's "background independence," which is not present in special relativity. I suppose one way to pose that issue would be: could you formulate physics that is consistent with special relativity, but which lacks gravity, in a background independent way that would differ in some respect (other than providing an exactly equivalent but different mathematical form of the equations) from the ordinary Minkowski space of special relativity and quantum mechanics, in a way that would make it more akin to general relativity?

Similarly, general relativity, in its usual formulation, is stated in terms of a continuous stress-energy field, rather than in terms of point masses. I imagine that one could formulate other physical theories in those terms as well. Indeed, reifying the quantum mechanical particle propagator, thinking of it as an actual continuous, smeared out location and momentum of a particle, in proportion to its amplitudes at each point, rather than as a point particle that hops from point A to point B with a certain probability, might very well be a fruitful endeavor in relation to forces other than gravity.

Honestly, while I know enough to ask the question and even to identify some of the issues inherent in trying to formulate it that might lead to different kinds of answers, I don't know general relativity well enough to answer it.

Thursday, August 18, 2011

Constrained Beyond The Standard Model Theories

There seem to be some problems with the Standard Model, but many beyond the standard model theories allow or predict phenomena that are not well motivated experimentally. One might define a "constrained" beyond the Standard Model theory as one that lacks any of a number of common flaws of such models. Some constraints that might be particularly relevant are that constrained theories predict that:

1. Baryon number is separately conserved.
2. Lepton number is separately conserved. This implies, among other things, that neutrinos have Dirac mass.
3. Neutrinoless double beta decay is not possible.
4. Proton decay is not possible.
5. Flavor changing neutral currents do not exist.
6. Net electromagnetic charge is conserved.
7. Combined CPT symmetry is observed. This implies among other things that particles and antiparticles have equal rest masses.
8. Combined mass-energy is conserved.
9. There are no magnetic monopoles.
10. The strong force does not give rise to CP violations.
11. Photons are massless.
12. "The W is a quite democratic particle: it decays with equal frequency into all the possible lepton pairs and quark pairs that are energetically allowed. Electron-electron antineutrino, muon-muon antineutrino, and tau-tau antineutrino pairs are equally likely; down-antiup and strange-anticharm quark pairs are three times more likely than the lepton pairs, because they exist in three colour-anticolour combinations." This implies that experiments can impose very strict limits on so far undiscovered quarks and leptons, essentially ruling out any new quarks or leptons with lower masses than the third generation of quarks and leptons in the Standard Model, and also probably ruling out novel classes of fermions (at least at spin 1/2), or models with more than three color charges for quarks. More deeply, it rules out any model that disrupts the fundamental equivalency in some sense of lepton and quark types shown by W decay patterns.

Notes On SM4 Limits

Incidentally, some of the precision electroweak data lower bounds on fourth generation lepton masses (from LEP II as of 2007) are as follows: "A robust lower bound on fourth–generation masses comes from LEP II. The bound on unstable charged leptons is 101 GeV, while the bound on unstable neutral Dirac neutrinos is (101, 102, 90) GeV for the decay modes ν4 → (e, μ, τ) + W. These limits are weakened only by about 10 GeV when the neutrino has a Majorana mass."

For comparison, the tau (which is the heaviest known unstable charged lepton) has a mass of about 1.8 GeV (a bit less than 1/50th of that precision lower bound), and the tau neutrino mass is probably easily a thousand times less than the tau mass, and probably closer to a million times less. The lower limits on fourth generation neutrino masses thus seem stringent, but not insurmountable, given some evidence from Neutel 2011 papers, drawn from more than one type of experiment, that there are more than three generations of neutrinos.

Limits on fourth generation quark masses (256 GeV for t' and 128 GeV for b' as of 2009) were much closer to the third generation quark masses, and hence not very constraining, although the limits imposed by experiment on CKM matrix values in a fourth generation standard model are quite strict.

Baryon Number and Lepton Number Non-Conservation

Baryon number conservation is a pretty simple thing. It states that the number calculated by subtracting the number of anti-quarks in a system from the number of quarks in the system, divided by three, is conserved.
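In code form the bookkeeping is trivial (a toy illustration, with the conventional factor of one third so that a proton carries baryon number one):

# Baryon number: (number of quarks - number of antiquarks) / 3.
def baryon_number(n_quarks, n_antiquarks):
    return (n_quarks - n_antiquarks) / 3

print(baryon_number(3, 0))   # proton or neutron: 1.0
print(baryon_number(1, 1))   # any meson: 0.0
print(baryon_number(0, 3))   # antiproton: -1.0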

The Standard Model allows certain interactions that do not conserve baryon number to take place. Indeed, while the possibility of baryon number non-conservation is pretty much irrelevant to ordinary Standard Model physics at anything remotely resembling ordinary temperatures, where the Standard Model suppresses the behavior to undetectable rarity, baryon number non-conservation at the kind of temperatures assumed by cosmologists to exist in the moments after the Big Bang is needed to produce the mix of particles we see today in cosmologies based on the Standard Model.

In the Standard Model baryon number non-conservation is associated with chiral anomalies and in particular with a process called a sphaleron that cannot be represented by a Feynman diagram.

This particular nuance of the Standard Model has a big problem. It is a feature of the Standard Model, like the potential for CP violation in strong interactions, which has never been observed, even though a modified conservation law (B-L conservation, where B stands for baryon number and L stands for lepton number) is ready and waiting to explain it and constrain it theoretically if it were ever observed. As of a 2006 paper discussing the experimental evidence for baryon number non-conservation (and citing S. Eidelman et al. (Particle Data Group), Phys. Lett. B592 (2004) 1): "No baryon number violating processes have yet been observed." Some of the processes that B-L conservation might make possible in some beyond the Standard Model theories, such as proton decay, are also not observed.

Thus, while most beyond the Standard Model theories propose B-L conserving processes that violate baryon number conservation, beyond the Standard Model theories that prohibit baryon number violation entirely are not ruled out by experimental evidence, although they would require significant revisions to cosmologies based on the Standard Model at unobservable moments in the history of the universe. Still, the cosmology problems associated with absolute baryon number conservation don't pose nearly as much of a problem for quantum gravity theories that, for example, propose a "big bounce," as loop quantum gravity does, as opposed to a pure "big bang."

A similar notion called lepton number also exists, and there are also subconcepts of lepton number for the electron, muon and tau generations of leptons called leptonic family number. Lepton family number is apparently violated in neutrino oscillation. But, while lepton number violation is permitted in cases of chiral anomalies, just as baryon number violation is within the context of the Standard Model, it isn't clear from experimental evidence if this ever actually happens. (One-loop Feynman diagrams like the triangle diagram do involve a chiral anomaly, and clearly are necessary to include in calculations that produce the right answers for pion decay, but it isn't obvious to me that these involve lepton number violations, and, if they do, it may be the lepton number violation definition rather than the existence of such a rule that is at fault.)

Lepton number conservation is closely related to a deep issue concerning the nature of neutrino mass (Dirac or Majorana):

Dirac neutrino masses can be generated by the standard Higgs mechanism. Majorana neutrino masses require a new mechanism of neutrino mass generation that is beyond the Standard Model. One of the most popular mechanisms of neutrino mass generation is the see-saw mechanism [51, 52, 53]. This mechanism is based on the assumption that the law of conservation of lepton number is violated at a scale that is much larger then the scale of violation of the electroweak symmetry. The see-saw mechanism allows to connect the smallness of neutrino masses with a large physical scale that characterizes the violation of the lepton number conservation law.

If neutrino masses are Dirac, just as all other particle masses in the Standard Model are, then lepton number non-conservation is not necessary. If they have Majorana masses, then lepton number non-conservation is necessary. Neutrinoless double beta decay, which violates lepton number conservation and is also associated with Majorana masses for neutrinos, has not been observed (see also here (2010)). This places significant limits on the nature of Majorana mass if it exists, and is also notable because it is one of the few areas of fundamental particle physics that can be examined experimentally with great precision without immensely expensive particle accelerators (such as the LHC).
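For what it is worth, the type I see-saw relation described in the quote above is usually written schematically (my gloss, not a formula taken from the quoted source) as

m_nu ≈ m_D^2 / M_R

where m_D is a Dirac mass of roughly the same order as the other Standard Model fermion masses and M_R is the heavy Majorana mass scale at which lepton number conservation is violated. For example, m_D ~ 100 GeV and M_R ~ 10^15 GeV gives m_nu ~ 0.01 eV, which is how the mechanism ties tiny neutrino masses to a very large new physics scale.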

Thus, baryon number non-conservation and lepton number non-conservation, while well motivated theoretically, like SUSY and various grand unified theories, are mere possibilities for which experiments continue to offer absolutely no evidence. It might be fruitful to focus on theories that instead conserve both baryon number and lepton number, because these are completely consistent with experiment, and the exercise of developing equations and quantum mechanical rules that observe these symmetries might provide additional useful insights for new beyond the Standard Model investigations.



Wednesday, August 17, 2011

Proud Men Die Young

People who most believe in a culture of honor -- who agree that "A real man doesn't let other people push him around" or that aggression is a reasonable response to being insulted -- told the researchers they were quite willing to engage in risky behaviors, such as bungee jumping or gambling away a week's wages. This willingness to take risks might well translate into an early death[.] . . .

Honor cultures are more powerful in rural areas, where the influence of personal reputation is higher than it is in cities. Although honor states had a 14% higher accidental death rate in the cities, they had a 19% higher rate of accidental death in more rural areas, compared to non-honor states. More than 7,000 deaths a year can be attributed to risk-taking associated with the culture of honor in the USA.

From here.

FWIW, it could also have something to do with the fact that "cultures of honor" also have disproportionate shares of blue collar workers whose jobs are more dangerous. But, the urban-rural comparison casts some doubt on that economic as opposed to cultural interpretation.

Why is garlic a food preservative?

Garlic has antibacterial properties.

Why?

"[C]ontrary to expectations . . . a group of garlic-derived organosulfur compounds has greater antimicrobial activity than garlic-derived phenolic compounds."

DNA-forming biochemicals, including exotic versions, found in meteorites

Two . . . meteorites . . . called Murchison and Lonewolf Nunataks 94102, contained a trove of nucleobases, including those also found in DNA. But these meteorites also held . . . related but exotic nucleobases never seen before, said Michael Callahan​, the NASA scientist who analyzed the space rocks. Analysis of dirt and ice found near the meteorites showed no evidence of these exotic nucleobases. Scientists also have found other building blocks of life — most notably amino acids, the links that form proteins — inside meteorites.

From here.

The case for extraterrestrial life just got a little stronger. If the building blocks for life are floating around in space in a concentrated way, then the task of making the final step to combine them in a way that gives rise to life seems less daunting, a bit like randomly assembling legos instead of randomly assembling a model airplane.

Query what would happen if we tried to create life forms in the lab that had these exotic nucleotides substituted for the four that we know and love.

I'll also mention (and track down the link sometime if I can) an information coding efficiency argument, made by a physics blogger who is also an electrical engineer, for why DNA has four nucleotides rather than some other number. Hence, even if there are more possible nucleotides, there might be good fundamental reasons that our DNA doesn't utilize them all.

A Lawyer Who Liked Math

I have always been a fan of Fermat, whose eponymous last theorem was only proved less than two decades ago, although probably not in the way that Fermat himself claimed to have proved it.

What you probably didn't know is that this 17th century intellectual light had a day job. He was a lawyer. One more reason to like the guy.

Tuesday, August 16, 2011

Starling and Fish Flock Behavior Explained By Simple Agent Models

Very simple rules applied to each member of a flock of starlings or school of fish explain quite fully the observed behavior of these groups, which is quite complex.
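A minimal sketch of the classic three-rule agent model often used for this kind of simulation (separation, alignment, cohesion), written as my own toy Python illustration rather than the specific model used in the research, looks something like this:

import numpy as np

def step(positions, velocities, r_sep=1.0, r_near=5.0,
         w_sep=0.05, w_align=0.05, w_coh=0.01, v_max=2.0, dt=1.0):
    """One update of a toy three-rule flocking model.
    positions, velocities: (N, 2) numpy arrays."""
    n = len(positions)
    new_v = velocities.copy()
    for i in range(n):
        offsets = positions - positions[i]         # vectors from agent i to every agent
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0) & (dists < r_near)      # visible neighbors
        too_close = (dists > 0) & (dists < r_sep)  # neighbors that are crowding
        if too_close.any():
            # 1. separation: steer away from neighbors that are too close
            new_v[i] -= w_sep * offsets[too_close].sum(axis=0)
        if near.any():
            # 2. alignment: steer toward the neighbors' average heading
            new_v[i] += w_align * (velocities[near].mean(axis=0) - velocities[i])
            # 3. cohesion: steer toward the neighbors' center of mass
            new_v[i] += w_coh * offsets[near].mean(axis=0)
        speed = np.linalg.norm(new_v[i])
        if speed > v_max:                          # cap the speed
            new_v[i] *= v_max / speed
    return positions + new_v * dt, new_v

# toy run: 50 agents with random starting positions and headings
rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, size=(50, 2))
vel = rng.uniform(-1, 1, size=(50, 2))
for _ in range(100):
    pos, vel = step(pos, vel)

Even rules this simple, iterated over many agents and time steps, generate the swirling, cohesive group motion seen in starling murmurations and fish schools.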

More Non-Locality In GR and QM

I've been exploring interfaces between general relativity and quantum mechanics, which have deep theoretical inconsistencies despite each working at extreme levels of precision in its own domain, including a recent post looking at, among other issues, entanglement-type non-local effects in quantum mechanics.

The Interval and the Feynman Photon Propagator

More pervasive than entanglement or weak force decay patterns in quantum mechanics is a key term in the Feynman propagator, which calculates the probability that a photon at point A now will end up at point B at some point in the future. This probability is calculated with a path integral that, in a particular way, adds up a contribution to the final probability from every possible path from point A to point B. Except in very short range or otherwise stylized situations, the dominant contribution to the final answer comes from paths that involve photons traveling at the speed of light. But, there are also contributions, necessary to get the right prediction, attributable to photons traveling at more than or less than the speed of light. This is impossible in the equations of classical electromagnetism, special relativity and general relativity. The contribution of each of these paths is equal to the inverse of "the Interval," which is the square of the magnitude of the spacelike vector from point A to point B minus the square of the magnitude of the timelike vector from the source to the destination, with length and time units made equivalent based on a speed of light ("c") conversion factor.
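In symbols (my notation, simply restating the description above), with the photon emitted at (t_A, x_A) and detected at (t_B, x_B), the Interval is

I = |x_B - x_A|^2 - c^2 * (t_B - t_A)^2

and the contribution in question goes as 1/I (in actual calculations a small imaginary "i epsilon" term is added so that paths exactly on the light cone, where I = 0, don't blow up). Paths with I > 0 correspond to apparent faster-than-light travel, paths with I < 0 to slower-than-light travel, and both kinds contribute.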

It isn't really obvious what this means. It works and is a necessary part of the theory. But, does it really reflect deviations from the speed of light? It could instead have its root in space-time having an underlying structure that is almost, but not quite, local, with some points being adjacent by a certain route to a point that would otherwise be non-adjacent. Loop quantum gravity, in which four-dimensional space-time is only emergent, as something close to a regular ordering appears from nodes connected to each other in networks, is one example of a model with this kind of fundamental non-locality in space-time.

It could also reflect interactions between photons and the vacuum. There is a certain probability that the vacuum will simply emit a photon seemingly out of nothing (one possible source of this may be derivable from the excess certainty that would otherwise come from knowing both the position and momentum of empty space, although there are other ways to get there), and that the new photon, rather than the original one, is the one that ends up at point B. Also, photons can spontaneously turn into a particle and antiparticle pair that are massive and then annihilate and turn back into a photon, slowing down the show while they have rest mass. But, the probabilities of exceeding and undershooting the speed of light in the calculation are identical, while all of the sensible explanations, like transmutation into massive particles and back, can only explain slower than speed of light paths.

Generally, to the extent that special relativity (which is incorporated into quantum mechanics as well as general relativity) applies, one would associate a speed faster than the speed of light with movement backwards in time and the possibility of a breakdown of causality. So the fact that this possibility must be incorporated into the equations used to calculate the movement of every particle in quantum mechanics (massive particle propagators add terms that elaborate on the photon propagator formula but don't remove any of the photon terms) seems pretty important in understanding the fundamental nature of time, causality and locality in the universe.

Of course, those equations aren't measurable phenomena.  We don't directly observe probability amplitudes or the paths that go into the Feynman propagator path integral.  We only see where the particle starts and where it ends up and come up with a theory to explain that result.

Still, non-locality and non-causality, which are two sides of the same coin, are deeply ingrained in quantum mechanics, which, as weird as it seems, works with stunning accuracy.

Non-Locality In General Relativity

Now, I've said that general relativity assumes continuity of space-time and I haven't been entirely honest in that.  There is one part of general relativity that some people argue has an element of non-locality to it. 

While general relativity obeys the law of mass-energy conservation, both locally and globally, with mass never being created or destroyed and energy always staying constant except where one is converted to the other according to the familiar equation E=mc^2, the accounting is less straightforward for one particular kind of energy, to wit, gravitational potential energy. At least some people who can handle this issue with the appropriate mathematical rigor say that the equations of general relativity sometimes conserve gravitational potential energy only non-locally: losses of gravitational potential energy in one local part of a system are sometimes offset by gains in gravitational potential energy elsewhere. Others doubt that this holding is truly rigorous and wonder whether the equations of general relativity truly conserve mass-energy at all once gravitational potential energy is considered.

All of this analysis of GR flows from the equations themselves.  But, a heuristic mechanism to explain what is going on and understand why the equations seem to make this possible in a less abstract way isn't obvious.

Planck Length Revisited

One issue with the possibility, suggested in a previous post, that a fundamental particle with a radius of about 10^-41 meters might explain CP violation with GR is that this is a length considerably less than the Planck length. Is that a profoundly troubling result?

Maybe not. The fundamental relationship that is a real law of physics is the uncertainty principle, which in mathematical form says that the inaccuracy in a position measurement times the inaccuracy in a momentum measurement can't be less than (roughly) Planck's constant. For a particle of a given mass, that works out to a trade off between a length unit and a time unit. Thus, the real laws of physics allow for trade offs between precision in length measurement and precision in time measurement (in speed of light based units). If it is possible to have well defined distances of, say, 10^-43 meters (the black hole size for fundamental particles according to GR), then the maximum precision with which one can measure time is not as great. There is a continuum between minimum length and minimum time unit and no obvious preferred way to strike a balance between the two.
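In its standard form (the factor of two doesn't matter for this argument; hbar is the reduced Planck's constant):

delta_x * delta_p >= hbar / 2

and since for a particle of mass m the momentum uncertainty is delta_p = m * delta_v, the same relation can be read as a trade off between position precision and velocity (and hence timing) precision of size hbar / (2m).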

Planck length and time add one more constant to the mix, the gravitational constant, to strike that balance. But, the choice of the gravitational constant is simply a product of the fact that it has the right units: it produces an answer with the right dimensions, and we don't have a lot of other constants floating around that give us that result when combined with Planck's constant and the speed of light in the appropriate way. So, as a matter of really nothing more than informed numerology, we use these constants to create the Planck units. There is no real physical reason that a different fundamental unit of length, which would in turn imply a different fundamental unit of time, couldn't work just as well.
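For concreteness, here is the usual dimensional-analysis construction of the Planck units from just those three constants (a quick sketch in Python; the constant values are the standard ones to the precision shown):

import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)  # about 1.6e-35 meters
planck_time = math.sqrt(hbar * G / c**5)    # about 5.4e-44 seconds

print(f"Planck length: {planck_length:.2e} m")
print(f"Planck time:   {planck_time:.2e} s")

Any other constant with the right units, combined with Planck's constant and the speed of light, would define an equally self-consistent "fundamental" length and time, which is the point being made above.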

Indeed, the 10^-41 meter scale for a fundamental particle radius derives indirectly from the gravitational constant, which appears in the time dilation equation, just as the Planck length does. It simply adds one more ingredient to the mix, the CP violating terms of the CKM matrix and the anomalous variations from them in B meson decay, to provide a suitable length scale, giving us an experimental hook that could pin down the relationship between Planck's constant, the speed of light and the gravitational constant at a particular point along the trade off between length and time units. The fact that at least one other experiment out there also points to a fundamental distance scale (if there is one at all) smaller than the Planck length, but not small enough to rule out a 10^-41 meter fundamental particle radius, is also encouraging on this front.

Monday, August 15, 2011

Germanic UK Influence Earlier and More Demic Than Previously Believed


The Article's Conclusions

A massive influx from Germanic Europe of whole families, complete with livestock, probably numbering 200,000 people in all crossing the North Sea from about 450 CE to 550 CE, started much closer to 407 CE, when the Roman armies left Britain after roughly three hundred and sixty years in the face of a collapsing empire, than to the 1066 CE Norman conquest of popular historic myth. Germanic populations started establishing cemeteries by 410 CE. Histories and the earliest Old English literature, while somewhat patchy from that era, support this view.

Initially, the newcomers were outnumbered 5-1 or more by native Britons, but the Germanic invaders' superiority in combat (and perhaps in running a society not under Roman rule in general) has since left half of British men with their Y-DNA legacy today. "People from rural England are more closely related to the northern Germans than to their countrymen from Wales or Scotland." This was not merely a case of a shift of a thin ruling elite of soldiers. The Germanic genetic impact is unsurprisingly lower in Scotland and Wales, where some Celtic linguistic communities also survived, is moderate in Northern Ireland (no doubt due to migrations from Britain to Ireland much later in the historic era), and is almost nil in the Irish Republic, although it is somewhat shocking to see just how much Irish-British ethnic differences are not just religious, cultural and linguistic, but are also a matter of population genetics.

The famous legend of King Arthur also originated in that era -- as a form of counter-propaganda. Historians characterize the work as a "defensive myth" created by the original Christian inhabitants (with the Holy Grail possibly symbolizing the communion cup). Perhaps the King Arthur legend is based on a mythical Celtic king who won a victory at Mount Badon around 500 A.D.

In truth, however, the army of the Britons was usually in retreat. Many fell into captivity. . . the captured Britons lived a miserable existence as "servants and maids" in the villages of the Anglo-Saxons. There were two types of grave in the cemeteries of the time: those containing swords and other weapons, and those with none. The local inhabitants, deprived of their rights, were apparently buried in the latter type of grave. . . . [T]he conquerors from the continent maintained "social structures similar to apartheid," a view supported by the laws of King Ine of Wessex (around 695). They specify six social levels for the Britons, five of which refer to slaves.

Analysis

The Context of the King Arthur Myth

The context for the King Arthur myth is also something of a surprise. In accounts rich enough for readers to discern, Arthur is sometimes cast as the last defender of a Celtic pagan social order against Christianizing invaders, rather than as the product of Romanized, Christianized British natives resisting Germanic invaders who may themselves have been pagan.

Layers of Genetic Heritage In The British Isles

There is a strong tendency for invading populations to have more impact on Y-DNA (which is patrilineally inherited) than on mtDNA (which is matrilineally inherited) or autosomal DNA (inherited from both parents, more or less equally if natural selection does not intervene). But, even if the patriline were 50% Germanic while the matriline was 0% Germanic, one would expect the British to have 25% Germanic heritage, and given the apparently more demic than commonly understood structure of the Anglo-Saxon colonization of Britain, a figure of 50% Germanic patrilines and 30% Germanic matrilines, for a combined roughly 40% Germanic autosomal ancestry, is probably a more reasonable estimate.
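The back-of-the-envelope arithmetic here is just an equal-weight average of the two lines (a crude first-order approximation that ignores multi-generation sex-biased mixing and selection):

autosomal share ≈ (patrilineal share + matrilineal share) / 2

so (0.50 + 0.00) / 2 = 0.25 in the first scenario and (0.50 + 0.30) / 2 = 0.40 in the second.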

Tools that are designed to measure the autosomal contributions of hypothetical separate ancestral populations, like Admixture, however, would probably not give you that figure, because the gene pools of Germanic peoples ca. 500 CE and of the Celts in Britain of the same era probably had significant overlap. Both, for example, were Indo-European language speakers of somewhat related branches of that linguistic family, and language shift probably didn't happen as completely as it did, at the time of Celtic settlement, without some meaningful demic component. The Indo-European Celtic communities achieved linguistic and cultural dominance, although their genetic impact can be debated because we don't know for certain what the genetic makeup of the pre-Celtic residents of the British Isles was before the Celts arrived, thousands of years earlier than the Anglo-Saxons.

On the other hand, it isn't entirely possible to rule out the possibility that some of the Y-DNA attributed to Anglo-Saxon invaders with "Frisian" DNA markers in the study illustrated at the top of this post may have been the product of more peaceful earlier migrations to Britain, either through population transfers in the Roman era, or through interchanges of people during or before the Celtic era. The close correspondence between the distribution of these DNA markers and the areas that kept Celtic language speaking communities into the 21st century, however, disfavors that possibility as an important factor.

The Celtic conquerors, who probably started to arrive in force sometime after 1500 BCE, and their Roman and Germanic successors all probably had more in common with each other genetically than any of them did with the pre-Celtic people of Britain of the Neolithic era and early metal ages associated with megaliths like Stonehenge (whose construction ceased close in time to the arrival of the Celts and whose associated religion probably ended along with the pre-Celtic language and culture).

The bearers of Neolithic culture to Britain (most notably farming and herding), in turn, probably replaced or greatly diluted the prior Upper Paleolithic hunter-gatherer societies of Britain sometime around 5,000 BCE to 3,500 BCE. They were probably closer genetically to later waves of invaders from the European continent than to the Upper Paleolithic peoples of Britain, who had lived there at least since the retreat of the glaciers after the Last Glacial Maximum (a repopulation that would have begun a few thousand years after 20,000 BCE), and possibly continuously from ten thousand years before that, in the first wave of anatomically modern humans that arrived in Europe and replaced the Neanderthals (the question of how that happened is for another day).

Thus, while Britain has been continuously inhabited by members of the genus Homo for many tens of thousands of years, maybe even hundreds of thousands of years or more, a huge component of the population genetics of England can be traced to migrants from just 1,600 years ago, and very little of the modern English gene pool can be traced back in Britain any further than migrants from 6,500 years ago. In between, starting at about 3,500 years ago (the Celts) and at about 1,900 years ago (the Romans), there were at least two significant infusions of migrants who changed the gene pool, although it is harder to quantify the extent of those impacts off the cuff.

The Romans, however, didn't conquer Ireland (although the non-demic Roman Catholic missionary effort in Ireland was a leading source of the Christianization, or re-Christianization in some cases, of Northern Europe after the fall of the Roman Empire), nor did they conquer Scotland (which entered a personal union under the same monarch as England only in the 17th century), and so presumably didn't leave the same kind of mark there.

It also seems plausible that the early farming techniques of the megalithic peoples of the British Isles were less advanced than those of later migrants, so in some places that were less suitable for their style of farming, the first major demographic transition may have been directly from Upper Paleolithic hunter-gatherers (perhaps admixed with outcast members of the megalithic population to some small extent) to a somewhat more agriculturally advanced Celtic food producing population.

Big Picture Observations and Caveats

My background in history, the advances of the last decade or two in population genetics and even more so in ancient DNA, the absence of excess Neanderthal admixture in Europe, and the archaeological, physical anthropological and linguistic evidence all incline me to favor an interpretation of ancient history and pre-history that has only weak continuity between present European populations and those of the Paleolithic era, and increasingly to see significant population genetic shifts in much of Europe since then.

It is also my tendency, within the debates over Indo-European linguistic origins, to see the evidence as pointing towards a transition to Indo-European languages in most of Europe outside its proto-language area considerably later in history than the more "conservative" Indo-Europeanists date its arrival.

Britain is an attractive area to analyze in this regard because the amount of archaeological research done there has been exhaustive, because it was one of the later parts of Europe to experience the Neolithic revolution of food production, because the gap there between food production and reasonably credible history is narrower than in almost any other place in Europe, since it was included in the literate and well documented Roman Empire, and because the linguistic, archaeological and population genetic record can fill in the gaps between the dawn of the Neolithic revolution and the arrival of the Romans fairly completely and with some confidence. Its status as an island also tends to make its instances of migration and admixture less gradual than in places on the continent. The English Channel was a powerful barrier to casual admixture at any significant rate for a long time, even though England was connected to maritime trade routes from the earliest days of its acquisition of agriculture and probably before then.

Of course, England also has the virtue of being a fairly familiar place that I can claim some understanding of at an ethnographic level, and I've been there, so the geography is not abstract to me.

Food producing pre-history is about three thousand years older in other parts of Europe (and, of course, older than that in Anatolia and the Middle East), roughly doubling the time period between the Neolithic transition and the start of a significant written historical record.

Still, the pre-history of the British Isles is much more comparable to that of the rest of Europe than it is to that of the New World.

There is good reason to think that some other parts of Europe had more continuity between the Upper Paleolithic and the Neolithic, particularly the Cardium Pottery and Iberian Southern coast of Europe, and that some parts of Europe have likewise had more continuity between the early Neolithic and modern times, something that ancient DNA suggests was far less pronounced in the far reaches of the Danubian Neolithic, for example. The greater genetic diversity in Southern Europe relative to Northern Europe, roughly speaking divided at the olive oil-butter line, is also suggestive of these different prehistorical experiences.

Looking at population genetic principal component analysis charts of Spain versus Northern European countries, for example, it is clear that Spain is much more diverse than Northern Europe and that Portugal also has stronger genetic ties to Northern Europe than the rest of the Iberian peninsula does. Far lower levels of lactose tolerance in Southern Europe are also suggestive of deeper ancestral roots, and the archaeology suggests that Southern Europe had fairly significant fishing and coastal forager settlements even before terrestrial food production took hold, and that it experienced less discontinuity than is seen in Northern Continental Europe and the British Isles.

Much of Continental Europe, particularly further South, also simply has a more tangled population history than the British Isles.

Still, even there, the population history of Europe appears to have involved more demographic upheaval, often of the violent variety, than most anthropologists, archaeologists and ancient historians of my father's generation were inclined to imagine. Britain may be one of the more extreme examples within greater Europe, but it is hardly singular or completely exceptional either.


Dark Matter Not Crucial To Large Scale Structure Formation

Computer models show that dark matter and normal matter produce very similar kinds of large scale structure formation in cosmology models.

Another Possible GR Source For CP Violation

CP violation caused by frame dragging associated with galactic rotation is considered.

ADHD Genetic Inheritance Patterns

Attention deficit hyperactivity disorder (ADHD) appears to involve a large number of rare genetic variants rather than a few common causal genes, much as autism and schizophrenia do, and some of those rare genetic variants appear to overlap between ADHD and a fair number of other conditions. Autism is comorbid with ADHD about 75% of the time.

But, ADHD without a comorbid condition is predominantly familial in inheritance, rather than mostly arising from new mutations appearing for the first time in the person with that condition, reflecting its less fitness-impairing character.

Z' Search Still Coming Up Empty

The latest research from CERN has ruled out a Z' boson with a mass of less than 1.1 TeV/c^2, and a "Randall-Sundrum Kaluza-Klein gluon with mass between 1.0 and 1.5 TeV/c^2 is also excluded at 95% C.L."

The analysis of this finding by Tommaso Dorigo is more interesting, both because it explains how distinctive a signature this kind of event would leave, which means that not many "hits" are necessary to constitute a positive experimental result, and because he hints at why one would imagine that a Z' boson might be out there. This is because something so heavy would have a high likelihood of decaying into top and anti-top quark pairs that have a very distinctive signature and can be further analyzed for total system mass-energy in a way that tells us a fair amount about the original source of this decay chain.

New heavy gauge bosons, electrically neutral and quite similar to Z bosons, are the result of adding a simple one-dimensional unitary group to the group structure of the Standard Model. Such an extension, which appears minimal and as such "perturbs" very little the low-energy behaviour of the theory, is actually the possible outcome of quite important extensions of the model. But I do not wish to delve in the details of the various flavours of U(1) extensions that have been proposed, on where these come from, and on why they are more or less appealing to theorists.

He may not want to delve, but I am certainly inclined to look at what would motivate this Standard Model extension myself at some point, because we have never found an extra generation of boson. It could be a function of the fact that gluons and photons lack rest mass, while W and Z bosons have it, but at any rate it is worth examining. I'm less inclined at first blush to think that a Kaluza-Klein gluon (Kaluza-Klein being famous for bringing us the rolled up extra dimension concept) has much theoretical motivation worth considering.

A collection of Z' annotations called the Z' hunter's guide has a nice review of the literature and competing theories that call for a Z' boson. It provides a way to explain the scale at which ordinary standard model physics segregate themselves from new physics in SUSY models, can help control the proton decay that is endemic to many grand unified theories, can explain anomalous muon magnetic moments, can provide dark matter candidates, is an extra piece that falls out of group theories devised to explain other things that needs to be characterized, and can substitute for the Higgs boson in some respects.

Alternatively, a Z' might boost the Standard Model's expected Higgs mass, whose currently preferred value has already been ruled out:

The Standard Model fit prefers values of the Higgs boson mass that are below the 114 GeV direct lower limit from LEP II. The discrepancy is acute if the 3.2 sigma disagreement for the effective weak interaction mixing angle from the two most precise measurements is attributed to underestimated systematic error. In that case the data suggests new physics to raise the predicted value of the Higgs mass. One of the simplest possibilities is a Z' boson, which would generically increase the prediction for the Higgs mass as a result of Z-Z' mixing. We explore the effect of Z-Z' mixing on the Higgs mass prediction, using both the full data set and the reduced data set that omits the hadronic asymmetry measurements of the weak mixing angle, which are more likely than the leptonic asymmetry measurements to have underestimated systematic uncertainty.

In short, efforts have been made to task the Z' with solving a wide range of possible failures of theory to match experiment, but it is something of a dark horse in terms of theoretical motivation that is nonetheless attractive to experimentalists because it should be pretty easy to identify if it is out there.

Randall-Sundrum models are models that eschew the full fledged ten or eleven or twenty-six dimensions of M theory (aka String Theory) and instead have just five dimensions: the four that we know and love, in an "anti-de Sitter" background in which the strong and electroweak forces operate, plus a fifth dimension with specific characteristics that make a graviton mediated version of general relativity work at the proper strength.

The search is also notable because it shows particle physics experiments starting to penetrate the 1 TeV energy scale, which is where a lot of theory supposes that we should see new physics. For example, it is the characteristic energy scale of a lot of the SUSY/String theory new physics that is on the verge of being ruled out by experiment.

Meanwhile, Lubos notes a presentation from Grenoble that observes, in the BaBar experiment, a 1.8 sigma excess in the number of tau leptons produced in charm quark decays relative to the Standard Model prediction. This is probably a case of a theoretical calculation subtlety or of random variation in experimental outputs (i.e., a fluke), or a product of the general failure of the Standard Model to predict sufficient CP violation in decay chains that start with bottom quarks (the decaying charm quarks in this experiment have their source in B mesons). But, if it amounted to anything, it might be supportive of SUSY variants in which charm quark decays can include decays to a charged Higgs boson at somewhat implausible tan beta and Higgs boson mass values.

Finally, the rumor mill is currently pinning its hopes on a 144 GeV Higgs boson, with a measurement suggestive of something going bump in the night at that mass at roughly 3 sigma significance. Electroweak precision fits argue for something much lighter, which clearly isn't there, but one or two screw ups in the least reliable data points (of a great many) used to make that fit could allow the other data to permit a 144 GeV Higgs boson. The observation is too low in significance (in a place where more observations are being made than anything else, making the risk of finding a fluke much higher than usual), and too vague to characterize what, if anything, is creating the signal. It also isn't nearly as definitive as one might hope, because it is spread over a wider range of energies than one would generally expect (perhaps due to the inability of the methods used to be more precise). This is at the higher end of the range permitted by previous experimental constraints, and it has the virtue of being somewhat robust, since it flows from combined measurements of multiple experiments that reinforce each other. But, the downside is that the experiments individually show fairly low significance.