Friday, July 29, 2011

Intuitions on a Higgsless Standard Model

Earlier this week I had an interesting conversation at the viXra blog about what, in particular, is screwed up in the Standard Model by the absence of a Higgs boson in a suitable mass range, which I reproduce below, omitting some of the comments that don't really get at what I'm interested in for the purposes of this post. I alternate italics and non-italics to create visual clarity about where one comment starts and the other ends.

Then, I describe my intuitions that flow from this discussion and other readings. The pith of them is that I suspect that there are terms missing from the renormalization equations, perhaps related to quantum gravity, and that it is the absence of these terms, rather than the absence of a Higgs field and Higgs boson that would address these issues in the Standard Model, that causes the Standard Model equations to blow up at the 1 TeV to 10 TeV energy level without a Higgs field and Higgs boson at an appropriate mass scale.

I further speculate that these problems in the equations may have something to do with the omission of terms related to additional generations of fermions (perhaps even an infinite number at sufficiently high energies, if there is not some physical limit on maximum energy) and/or the omission of quantum gravity considerations. The latter might have an impact either by imposing quanta of distance and time, or by virtue of the fact that the otherwise Abelian equations of quantum electrodynamics (QED) are operating on a fundamentally non-Abelian space-time, whose symmetry-breaking effects could become meaningful relative to the other terms of the QED equations at the relevant energy scale.

I also consider the less dramatic possibility that the infinite series that is approximated to calculate quantum field probability distributions, by summing up a path integral over all possible Feynman diagram paths, might have an (as yet undiscovered) exact finite series reduction that lacks the mathematical issues of the existing method of doing the calculations and eliminates the need for a Higgs boson/Higgs field to make the equations stable at higher energies. I briefly describe the considerations that drive that intuition.

Any of these resolutions would imply that an almost invisible tweak on the technical side of the approximations inherent in the equations of quantum mechanics used to make calculations today, virtually impossible to detect outside of atom smashers and Big Bang conditions, could solve almost all of the ills of the Standard Model: without any new classes of fermions or bosons (although possibly with additional generations of them), without a Higgs boson or Higgs field, and without extra dimensions or branes.

Now, I would be the first to tell you that I am no more than an educated layman with no formal instruction in physics beyond the undergraduate math major and intermediate physics class level, and that all my reading of the physics literature and popularizations of the field for educated laymen, in the far too long period since I graduated from college, probably brings me only to the point of a first year graduate student in physics (at best) in terms of my knowledge of this part of the field.

I also don't claim to have a theory that solves all of these problems. I merely suggest that I have some intuitions about what form the answer may take, and about the gist of its heuristic meaning and motivation, at the level at which it might be communicated in blogs and science journalism if it were discovered. This is a hypothesis generating post, positing a conjecture, not a conclusion in the form of a theory that flows from the hypothesis. Put this post at the point in the scientific process where Einstein starts noodling around wondering what would happen if the speed of light is fixed and the equivalence principle and background independence still hold, rather than the point at which he actually formulates special relativity and general relativity, and before data points are in that could confirm that his thought experiments were physical.

So, with no further ado, on to the discussion and the analysis.

ohwilleke says:
July 26, 2011 at 10:18 pm

People have been doing quantum field theory without a Higgs mass for a generation. Calculations have been made, predictions have been verified. Apart from indicating that the method used may lack a fully rigorous foundation, is the Higgs mass a bit like a Laplace transform or a complex value for current that only matters in an intermediate step and doesn’t matter in the final conclusion?

In other words, what are the phenomenological consequences of the Higgs boson mass having one value v. another value, apart from the fact that a few of them should be spit out in high energy collider experiments and that doesn’t happen?

The Discussion

Lawrence B. Crowell says:
July 26, 2011 at 10:30 pm

The simple fact is that something has to happen at around 1-10TeV in energy. The standard model of SU(2)xU(1) electroweak interactions has some experimental backing, at least with the massive W and Z bosons. At much higher energy than 10 TeV it is not possible to compute Feynman processes. In effect QFT becomes sick, and something must “happen.” The Higgs field is a form of potential which induces a change in the phase of the vacuum. It is similar to a statistical mechanics phase transition. So “something” does happen, but the basic Higgs theory appears to be in trouble.
. . .

Ray Munroe says:
July 26, 2011 at 11:51 pm

The purpose of the Higgs boson is to supply the couplings that provide for mass, and to explain the longitudinal degrees-of-freedom (parallel to motion dgf’s required for massive particles with intrinsic spin of 1, 2, …) for the W and Z bosons (with respective masses of 80.4 and 91.2 GeV) while simultaneously breaking Electroweak symmetry and explaining the massless photon. There are enough constraining conditions here that a simple Higgs boson cannot have ‘just any mass’.

However SUSY requires two complex scalar doublets (8 degrees-of-freedom) to properly provide for fermionic mass (there is a substantial mass difference between the top and bottom quarks). Of these 8 dgf’s, 3 yield the longitudinal modes for the W and Z bosons while the other 5 dgf’s SHOULD yield physical scalar bosons – the MSSM Higgs sector with Light, Heavy, Pseudoscalar, and plus/minus Charged Higgs. We have more non-constrained degrees-of-freedom, and more open parameter space. . . .

As Lawrence pointed out, new TeV scale Physics must exist, or else Feynman diagrams and the Renormalization Group blow up. . . .

ohwilleke says:
July 27, 2011 at 9:00 pm

It is certainly obvious that a missing SM or light SUSY Higgs has all sorts of implications for the theoretical framework that makes sense to explain particle mass.

My question was much more narrow. What are the phenomenological implications, for example, of a 116 GeV Higgs v. a 325 GeV Higgs, aside from the fact that we can discern a particle resonance at that mass?

Crowell seems to be saying that it doesn’t matter much in the low energy calculations but dramatically screws up the business of doing QFT calculations somewhere around 1TeV to 10TeV (forgive me if this is an inaccurate paraphrase) in a phase transition-like way. Are there any other implications? What is driving the blow up at 1TeV in the math?

Ray Munroe says:
July 28, 2011 at 1:34 pm

Hi Ohwilleke,

The Standard Model Higgs is highly enough constrained that it could not have a mass of 325 GeV, but we could certainly dream up more complex Higgs sectors that would be consistent with such a mass – for instance the less-constrained Minimal Supersymmetric Standard Model Higgs sector that I described above could have a Heavy Higgs in that mass range.

Radiative corrections (Feynman diagrams and the Renormalization Group Equations) SHOULD drive the Weak Scale mass (W, Z, Higgs? of ~100 GeV) up to the Planck Scale mass of 10^19 GeV. This is called the Hierarchy Problem, and the most generally accepted theoretical fix is Supersymmetry. This extra factor of 10^17-squared might as well be infinity when we are using perturbation theory to try to make accurate experimental predictions. Basically, Radiative corrections will consistently get more and more ‘incorrect’ around the TeV Scale, and will inevitably diverge without new physics at the TeV Scale.

I would agree with your paraphrase of Lawrence Crowell’s comment “it doesn’t matter much in the low energy calculations but dramatically screws up the business of doing QFT calculations somewhere around 1TeV to 10TeV”.

Lawrence B. Crowell says:
July 28, 2011 at 5:03 pm

Something does have to change at around this energy scale. The data so far is lackluster, but we are at about 1/1000 the total data expected, so there is lots more to come. Luminosities will improve and in another year or two the picture should be much clearer. The 2-σ results in the 120-150 GeV Higgs mass range are not a Hindenburg event for the standard model, but it is a Lead Zeppelin. However, Led Zeppelin was always one of my favorite rock bands. It should also be pointed out that the INTEGRAL result on the polarization of light at different wavelengths from a gamma ray burst indicates there is no quantum graininess to spacetime far below the Planck scale. So a vast archive of physics theory and phenomenology appears to be headed for the trash can. However, at the TeV scale of energy it is obvious that something does have to change in physics, so nature is likely to tell us something. We may find that our ideas about the Higgs are naïve in some way, or maybe that the entire foundations of physics suffer from some sort of fundamental dysfunction.

The Higgs particle is a form of Landau-Ginsburg potential theory used in phase transitions. Phase transitions are a collective phenomenon. With the Higgs field the thing which transitions is really the vacuum. This leads to two possible things to think about. Even if this transition of the vacuum takes place, we expect there to be a corresponding transition with QFT physics of single or a few particles. We might then have a problem that some people are familiar with. In a clean flask you can heat distilled water to above the boiling point with no phase change. If you then drop a grain of salt into the flask the water rather violently bumps. By doing single particle on particle scattering we may not have enough degrees of freedom to initiate the phase transition. The phase transition needs a measure of “noise” we are not providing. It might then be that the Higgs field will turn up in “messier” heavy ion experiments. The second possibility, which frankly I think might turn out to be the case, is that QFT has a problem with the vacuum. The Higgs field occurs in a large vacuum energy density, which in the light of matters such as the cosmological constant seems fictitious. It is the case QFT becomes a mess at 1-10 TeV, where the Higgs field becomes a sort of regulator which prevents divergences. However, the problem might in fact be that QFT is sick period, and the fix might involve something completely different from anything on the archives of theory.

If we are to stay at least somewhat in line with established physical theory, Technicolor is one option for a Higgs-less world. Technicolor is a sort of “transformation” of T-T-bar condensates into another form. Sugawara did this with u-d quarks as a way of constructing meson physics in the .1-1 GeV range back in the 1970s. This is really a similar idea. In the technicolor theory the “meson” is the Higgs boson. The mechanism for Higgs production most often looked for is T T-bar — > H, or equivalently H — > T T-bar, where the latter gives the decay channels one searches for as a Higgs signature. This sounds like a small change, one where the field that induces the symmetry breaking has dynamics, and the symmetry breaking process is not spontaneous.

However, Technicolor might lead to something. Suppose there is some momentum scale horizon, which is due to the end of conformal RG flow. This might also have something to do with the AdS ~ CFT, where gluon chains are dual to the quantum gravitation sector (graviton) on the AdS interior. We live on the boundary of the AdS, where there are no gravitons. We may find that attempting to exceed 10TeV in energy only gives more of the particles we know in the conformal broken phase. However, with the conformal breaking comes mass, and from mass we have classical gravity. So there may still be signatures of this sort of physics. The technicolor condensate might be a form of gluon chain dual to the graviton. If Technicolor leads to this type of physics, we may then have to search for different observables.

Bill K says:
July 28, 2011 at 11:50 pm

“In the technicolor theory the “meson” is the Higgs boson.”

Lawrence, I have a question about this. People often make the offhand comment, “Well, maybe the Higgs boson is composite.” But if you try to build one out of fermion-antifermion pairs, as they do in technicolor, you don’t get scalars, the mesons you get are technipions (pseudoscalar) and technirhos (vector). So it seems to me that a composite Higgs boson is going to be quite a different thing from an elementary one, and there would not be much chance of confusing the two.


My Intuitions

My intuition is that if indeed there is no Standard Model Higgs boson, and both SUSY and Technicolor are also unsupported, the problem may lie in the fine details of how renormalization is done.

Feynman himself had a strong intuition that something was wrong with the renormalization process that he developed: that it lacked rigor and hence might be inaccurate in some sets of circumstances. One possibility that he considered was that the infinite series approximation cutoff scale (which simply needs to be applied consistently, and within reason produces very nearly the same result regardless of the actual scale used) might actually be a product of a discrete rather than continuous nature of space-time that could prevent infinities from cropping up where they otherwise could in theory. The latest experiments tending to show that space-time is continuous well below the Planck scale are discouraging on that front, but don't necessarily kill that intuition as a viable theory.
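The blow-up that the commenters above attribute to radiative corrections is usually stated via the textbook one-loop correction to a scalar mass. Schematically (with λ a generic coupling and Λ the cutoff scale; this is the standard statement of the hierarchy problem, not anything novel to this post):

```latex
\delta m_H^2 \;\sim\; \frac{\lambda^2}{16\pi^2}\,\Lambda^2,
\qquad
\Lambda \sim M_{\mathrm{Planck}} \approx 1.2\times10^{19}\ \mathrm{GeV}
```

so holding the physical Higgs mass near 100 GeV requires a cancellation of roughly one part in (10^19/10^2)^2 = 10^34, unless something new intervenes near the TeV scale.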

Reading Penrose on Quantum Field Theory and considering the fundamental disconnect between quantum mechanics and general relativity at a theoretical level suggests another quantum gravity motivated issue with the renormalization process at high energy levels such as the 1TeV to 10TeV level that the Standard Model renormalization calculations encounter in the absence of a low mass Higgs.

Penrose makes a major thematic point that quantum mechanics is fundamentally a linearized approximation. The magic of calculus, which allows us to analyze equations at infinitesimal distances, permits us to ignore higher order terms of the infinitesimal, and in practice causes us to toss them out of the equations that we work from altogether (because the square or higher power of a very small number is much, much smaller than lower powers of it in an equation and generally can be discarded).
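A toy numeric sketch of that point, with illustrative names of my own choosing: for f(x) = x², the exact change under a small step eps is 2·x·eps + eps², and the eps² piece is utterly negligible while eps is tiny, but not otherwise.

```python
# Linearization drops the eps**2 term; here we watch how large the
# dropped term actually is as eps grows. Illustrative example only.

def exact_change(x, eps):
    return (x + eps) ** 2 - x ** 2

def linearized_change(x, eps):
    return 2 * x * eps  # the linearization drops the eps**2 term

for eps in (1e-8, 1e-3, 1.0):
    dropped = exact_change(1.0, eps) - linearized_change(1.0, eps)
    print(f"eps={eps:g}: dropped term ~ {dropped:.3g}")
```

The worry in the text is precisely that, at extreme energies, the analogue of the eps² term may stop being negligible.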

It could be that a truly rigorous statement of the renormalization process, if it were worked out from first principles, would contain a non-linear term that becomes material at energy levels on the order of 1-10 TeV, and that the omission of this non-linear term from the standard issue renormalization process, which was not worked out rigorously from first principles, is what is causing the mathematical glitches that make the Standard Model equations fail without a low mass Higgs.

Perhaps, for example, if you squeeze 1 TeV to 10 TeV of energy into the extremely tiny physical space involved in electroweak processes, general relativistic corrections to the spaces and times involved from a quantum gravity generalization of GR are necessary to keep the equations from blowing up, or perhaps there is some sort of self-interaction term that should be there and is normally irrelevant but is necessary to keep the equations on track at high energies.

Or, perhaps there are one or more undiscovered generations of Standard Model fermions, with masses in the 1 TeV to 10 TeV range for the heaviest ones, whose impact is prohibited by conservation of mass-energy until that energy level is reached, and which would add a set of loops to the renormalization equations that stabilize them. That, of course, merely kicks the can up to some higher energy phase state point, but perhaps there are actually a theoretically infinite number of generations, with our current three generation Standard Model as a mere practical approximation, and each new phase state point in any given finite-generation model is an indicator of the point at which the next generation of Standard Model fermions should be discovered.

The truth of the matter is that the Standard Model provides no intuition regarding what the masses of almost all of its particles should be without putting in physical constants by hand. The Higgs process is the way that those physical constants are implemented in a theory that is fundamentally massless in its construction, but it wouldn't be too surprising if there were some other way, more accurate as a representation of the real world, that generalizes the massless QFT equations in a manner more fundamental than the "afterthought" method of Professor Higgs, although probably not a terribly different one, as his method works so well up to the 1 TeV phase change region.

The current renormalization equations are analogous (more or less directly) to perturbative QCD, which involves much more difficult calculations than QED because it has a self-interaction term, is chiral, and is non-Abelian (these statements aren't entirely independent of each other).* To overcome these difficulties, the alternative approach is to use the exact rather than the perturbative form of the QCD equations on a lattice, which shifts the approximation issues to a place in the calculations where they do less harm.

It could be that the "true" equations of QED have a similarly confounding term of some sort (perhaps even a chiral self-interaction term precisely in analogy to QCD that simply doesn't manifest at lower energy levels in measurable quantities), and that its omission, and our failure to shift from a perturbative regime to an exact regime at the appropriate point, is what makes it look like we need a Higgs particle. Indeed, there is a certain elegance and unity in the notion that QCD breaks down and requires lattice approximations rather than perturbative approximations at low energy levels, while QED might require the same calculation method shift at high energy levels.
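The way a perturbative expansion can quietly go bad while an "exact" evaluation stays healthy can be seen in a standard zero-dimensional toy model of a path integral, Z(g) = ∫ exp(−x²/2 − g·x⁴) dx. This is only an analogy for the point in the text, not the electroweak calculation itself:

```python
import math

# The perturbative expansion Z(g) ≈ sqrt(2π) · Σ (-g)^n (4n-1)!! / n!
# is only asymptotic: partial sums improve up to some order and then
# diverge, no matter how small g is. Direct integration has no such
# problem. Toy analogy only.

def z_exact(g, xmax=10.0, steps=200_000):
    """Direct numerical integration (trapezoid rule)."""
    h = 2.0 * xmax / steps
    total = 0.0
    for i in range(steps + 1):
        x = -xmax + i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.exp(-0.5 * x * x - g * x ** 4)
    return total * h

def z_series(g, order):
    """Partial sum of the perturbative (asymptotic) series."""
    total, double_fact = 0.0, 1.0   # (4n-1)!!, with (-1)!! = 1
    for n in range(order + 1):
        if n > 0:
            # extend (4n-5)!! to (4n-1)!! via the two new odd factors
            double_fact *= (4 * n - 3) * (4 * n - 1)
        total += (-g) ** n * double_fact / math.factorial(n)
    return math.sqrt(2.0 * math.pi) * total

g = 0.01
exact = z_exact(g)
for order in (2, 6, 30):
    print(order, abs(z_series(g, order) - exact))
```

Around order 6 or 7 the truncation error bottoms out; push the "more terms must be better" logic further and the answer gets dramatically worse, which is the flavor of sickness the comments above describe.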

A quick Google Scholar look at "non-Abelian quantum electrodynamics" shows that efforts to formulate quantum electrodynamics as a non-Abelian gauge theory were a hot area of inquiry in the 1970s (for example, Sidney D. Drell, Helen R. Quinn, Benjamin Svetitsky, and Marvin Weinstein, "Quantum electrodynamics on a lattice: A Hamiltonian variational approach to the physics of the weak-coupling region" (1979)), with a few pioneering papers as early as the 1960s. But, apparently, the success of renormalization in QED (and, no doubt, the lack of the kind of computational power needed to do those kinds of lattice calculations in realistic models, which didn't come into being at an affordable price until the 21st century) starved the subject of interest, and publishing on the topic appears to have been more sporadic since then.

There have, however, been some papers on the subject since then, such as Stephen L. Adler, "A new embedding of quantum electrodynamics in a non-abelian gauge structure" (1989), and this paper (2005).

Another hint that this might be what is going on is the absence of CP violation in the strong force, despite the fact that the Yang-Mills equation that governs it has an expressly chiral term. Even more surprisingly, the likewise non-CP violating equations of general relativity, when expressed in the Ashtekar formulation, are also expressly chiral. The weak force, of course, and for that matter the combined electroweak unification, where we do see CP violations, is also chiral.

Given all of this, would it be so remarkable to find that QED too has an omitted chiral and non-Abelian term that screws up the perturbative approximation at high energies? After all, one of the well known features of general relativity that has been repeatedly confirmed and never contradicted is that the geometry of space-time is fundamentally non-Abelian. Even if there is nothing about QED itself that is non-Abelian, the fact remains that it is really operating in a non-Abelian geometry rather than a Minkowski background (which incorporates special relativity but not the non-Abelian geometry of general relativity). And, in a non-Abelian geometry, at some point the CP reversibility and symmetries that QED relies upon to provide highly accurate results with relatively simple quantum mechanical equations may become distorted enough to screw things up.

An effort to explore that line of reasoning (in a variation on the Standard Model with a Higgs boson) can be found at X. Calmet, B. Jurčo, P. Schupp, J. Wess and M. Wohlgenannt, "The standard model on non-commutative space-time" (2002):

We consider the standard model on a non-commutative space and expand the action in the non-commutativity parameter θ. No new particles are introduced. . . . We derive the leading order action. At zeroth order the action coincides with the ordinary standard model. At leading order in θ we find new vertices which are absent in the standard model on commutative space-time. The most striking features are couplings between quarks, gluons and electroweak bosons and many new vertices in the charged and neutral currents. We find that parity is violated in non-commutative QCD. The Higgs mechanism can be applied. QED is not deformed in the minimal version of the NCSM to the order considered.

(Calmet published a follow up paper along the same lines in 2007.)

Similarly, one can consider D.J. Toms, "Quantum gravitational contributions to quantum electrodynamics" (2010), arguing that "quantum gravity corrections to quantum electrodynamics have a quadratic energy dependence that result in the reduction of the electric charge at high energies, a result known as asymptotic freedom."

Indeed, it wouldn't be too surprising to me if all CP violations in all of quantum physics are ultimately at some fundamental level a necessary phenomenological implication of the non-Abelian geometry of the space-time upon which the processes operate.

Another similar Higgs boson free formulation that flows from somewhat different intuitions but ends up in more or less the same place is Theodore J. Allen, Mark J. Bowick and Amitabha Lahiri, "Topological Mass Generation in 3+1 Dimensions" (1991).

Another Less Dramatic Possibility

I have seen others speculate about the "brute force" way that the Feynman infinite series is summed up to get the quantum field probability distribution in the Standard Model. The sum involves truly vast numbers of terms that must be added together to get the final result, yet it can be approximated with great accuracy in the vast majority of circumstances by dramatically simpler classical electromagnetic laws, because almost all of those terms cancel out. It may therefore be susceptible to being written in a more compact, non-infinite series form that would omit the vast number of the intermediate terms that cancel out in the end.

Finding a finite series that is exactly equal to an infinite series is a non-trivial and non-obvious undertaking of creative genius and has only been done in a quite modest number of cases in the entire history of mathematics.

But, if one found the right finite series of terms that was exactly equal to the infinite series Feynman diagram based approach for calculating the quantum field probability distribution, one might be able to remove the intermediate terms that rely on the Higgs boson/Higgs field mass entirely, allowing calculations to be done at arbitrarily high energy levels without a hitch.
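The flavor of what a closed form buys you can be seen in the most elementary case, the geometric series, where the infinite sum Σ rⁿ = 1/(1−r) for |r| < 1. This is only an analogy for the idea above, not a statement about Feynman-diagram sums themselves:

```python
# Trading a term-by-term sum for an exact closed form: summing the
# geometric series accumulates many intermediate terms, while the
# closed form r -> 1/(1-r) skips every one of them. Toy analogy only.

def geometric_partial_sum(r, n_terms):
    return sum(r ** n for n in range(n_terms))

def geometric_closed_form(r):
    return 1.0 / (1.0 - r)

r = 0.5
print(geometric_partial_sum(r, 60), geometric_closed_form(r))
```

The hope expressed above is that something analogous, though vastly harder to find, exists for the diagrammatic expansion.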


Suppose that my intuition is right and that there is a term missing from the electroweak renormalization equations that would prevent them from blowing up in the absence of a Higgs boson. If this is true, it might be the case that there are no fundamental particles left to be discovered, other than possible higher generation variations of Standard Model fermions (and perhaps right handed neutrinos), contrary to the new particles that Technicolor, Supersymmetry, Supergravity, and String Theory generically predict.

Suppose too that there are only four space-time dimensions and that brane theory and Kaluza-Klein dimensions are not part of the laws of nature, contrary to what String Theory (a.k.a. M-theory) suggests, something that experiment has never given us a reason to doubt.

In that case, the only thing missing from the Standard Model is some means to reduce the number of parameters needed to assign masses to particles, fill in the CKM/PMNS matrices, and set the coupling constants of the three Standard Model forces. And, since each of these things can be empirically determined to the degree of precision necessary for any real world application, those missing pieces are essentially aesthetic considerations in a theory that is otherwise a complete description of the fundamental laws of nature at any less reductionist level.

Put another way, we could be as close as two or three equations and one or two missing terms in the existing QED equations to the Standard Model being a true Grand Unified Theory, and realistically, it might take only a couple more equations to get quantum gravity and a theory of everything as well. Indeed, quantum gravity, properly formulated, might well be the key to filling in the remaining blanks that prevent the Standard Model from being not only a Grand Unified Theory, but also a Theory of Everything.

* Self-interacting means that the boson that carries the force interacts with the force (e.g. photons don't have an electrical charge and hence aren't self-interacting, while gluons have a color charge and thus have strong force interactions with each other as well as with fermions); chiral means that left handed and right handed particles (in an intrinsic spin sense) are treated separately in the mathematics; non-Abelian means that the way that the force works is path dependent, because the equations involved don't obey the commutative law of ordinary algebra. The distinctions between these concepts are explored at some depth in this 2006 paper.
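"Non-Abelian" can be made concrete in a few lines: the order of operations matters. Here is a minimal demonstration with 2x2 matrices (the Pauli matrices sigma_x and sigma_z familiar from spin physics), where A·B and B·A come out different:

```python
# Matrix multiplication does not commute in general -- the algebraic
# heart of "non-Abelian". Pure-Python 2x2 example.

def matmul2(a, b):
    """Multiply two 2x2 matrices represented as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sigma_x = [[0, 1], [1, 0]]
sigma_z = [[1, 0], [0, -1]]

print(matmul2(sigma_x, sigma_z))  # [[0, -1], [1, 0]]
print(matmul2(sigma_z, sigma_x))  # [[0, 1], [-1, 0]]
```

The two products differ by an overall sign, so the order in which the operations are applied is physically meaningful, which is exactly the path dependence described in the footnote.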

Modern Alchemy

It is utterly unsurprising that an antiproton has precisely the same mass as a proton, now confirmed to 1.3 parts per billion. The only case where a possible deviation has ever been hinted at experimentally is in the top-antitop quark pair, and since some of the experiments that have looked at the relative masses of those have seen no statistically significant difference, and the difference is slight where it has been hinted at, the difference observed by one experimental group in the last year is very likely a statistical or experimental error. The fact that the matter and antimatter counterparts of every particle that makes up a proton have the same mass was confirmed decades ago.

What is amazing is the way that the latest experiment was done. Scientists created "anti-protonic helium."

Antimatter is extraordinarily difficult to handle . . . because upon coming into contact with ordinary matter (even the air molecules in a room), it immediately annihilates, converting into energy and new particles. . . . antiprotons produced in high-energy collisions are collected and stored in a vacuum pipe arranged in a 190-m-long racetrack shape. The antiprotons are gradually slowed down, before being transported . . . into a helium target to create and study antiprotonic helium atoms.

Normal helium atoms consist of a nucleus with two electrons orbiting around it. In antiprotonic helium, one of these electrons is replaced by an antiproton, which finds itself in an excited orbit some 100 picometres (10^-10 m) from the nucleus. Scientists fire a laser beam onto the atom, and carefully tune its frequency until the antiproton makes a quantum jump from one orbit to another. By comparing this frequency with theoretical calculations, the mass of the antiproton can be determined relative to the electron.
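The quoted method works because, in a simple hydrogen-like picture, transition frequencies scale with the reduced mass of the orbiting particle. (The real antiprotonic helium calculation is a three-body QED computation, so the following is only a toy sketch of the scaling, with approximate masses.)

```python
# Why the laser-frequency comparison pins down the antiproton's mass:
# a fractional change in the orbiter's mass shifts the transition
# frequency by a comparable fraction, via the reduced mass
# mu = m*M/(m+M). Masses below are approximate, in electron-mass units.

def reduced_mass(m_orbiter, m_nucleus):
    return m_orbiter * m_nucleus / (m_orbiter + m_nucleus)

def frequency_sensitivity(m_orbiter, m_nucleus, rel_step=1e-6):
    """Fractional frequency shift per fractional orbiter-mass shift,
    i.e. d ln(mu) / d ln(m), estimated numerically."""
    mu0 = reduced_mass(m_orbiter, m_nucleus)
    mu1 = reduced_mass(m_orbiter * (1.0 + rel_step), m_nucleus)
    return (mu1 - mu0) / (mu0 * rel_step)

M_PBAR = 1836.15   # antiproton mass (electron masses), approximate
M_HE4 = 7294.30    # helium-4 nucleus mass (electron masses), approximate

print(frequency_sensitivity(M_PBAR, M_HE4))
```

The sensitivity comes out near 0.8, so a part-per-billion shift in the antiproton's mass moves the transition frequency by a comparable part-per-billion amount, which is what makes the spectroscopy such a sharp scale.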

You won't find this element in the periodic table. Indeed, it may never have arisen in nature. But, it was possible and was created. Indeed, since the calculations regarding the forces that govern a negatively charged particle in orbit around an atomic nucleus are known with much greater precision than many other physical constants, it may ultimately be easier to measure an antiproton's mass than a proton's mass.

Hat Tip to Maju.

Tuesday, July 26, 2011

SM Higgs and MSSM Higgs Dead?

Philip Gibbs has combined the various pre- and post-Grenoble Standard Model Higgs search data and concluded that the Standard Model Higgs boson is a bust. He notes in the comments to his post that it is excluded at a 90% confidence level.

The statistically allowed region in the space above is shown in gray with the y axis corresponding to standard deviations in a one sided distribution.

As you can see there is nothing in the gray region that survives at 1 sigma level. At 95% confidence everything is excluded except a small window between 115 GeV and 122 GeV. In this region the Standard Model vacuum is unstable.
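For reference on how sigma figures like those above map onto confidence levels: for a one-sided normal distribution, the confidence level is just the normal CDF evaluated at the number of standard deviations. This is the generic statistical conversion behind statements like "excluded at 90% confidence," not anything specific to Gibbs's plots:

```python
import math

# One-sided sigma -> confidence level via the standard normal CDF,
# CL = Phi(z) = (1 + erf(z / sqrt(2))) / 2.

def one_sided_confidence(n_sigma):
    """P(Z < n_sigma) for a standard normal variable Z."""
    return 0.5 * (1.0 + math.erf(n_sigma / math.sqrt(2.0)))

for z in (1.0, 1.2816, 1.6449, 2.0):
    print(f"{z:.4f} sigma -> {one_sided_confidence(z):.2%}")
```

So a 90% one-sided exclusion corresponds to roughly 1.28 sigma and 95% to roughly 1.64 sigma.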

Not every SUSY model Higgs boson is ruled out, but the Minimal Supersymmetric Standard Model Higgs appears to be dead and some other SUSY models also appear to be disfavored.

The Higgs sector does not look like what the standard model predicts. There are hints of something in the light mass window but it does not look like the SM Higgs. It does not have sufficient cross-section and may be spread out over too wide a mass range. It is too early to say what that is, or even if anything is really there. Much more data must be collected so that each experiment can separately say what it sees. That could take until the end of next year, but we will certainly have more clues at the end of this year. If the Standard Model is out, then we cannot be sure that some heavier Higgs is not another possibility. It just wont be the SM Higgs.

SUSY predicts a light Higgs but all the searches for missing energy events predicted by SUSY have been negative so far. Does this mean SUSY is dead? Of course it doesn’t. Some of the simpler SUSY models such as MSSM are looking very shaky, but there are other variants.

The increasingly high mass exclusion range for the lightest supersymmetric particles, the hints that there may be more than three generations of neutrinos, and the weakened need for SUSY to explain CP violations and coupling constant unification if there are more than three generations of fermions, all also weaken the theoretical motivations for SUSY.

This doesn't mean that the Standard Model itself is a bust. The Higgs boson is a mathematical gimmick to impart mass to particles in a theory that has no other means of doing so. It has the rather ugly feature of providing a source of inertial mass that is distinct from the almost Standard Model way of deriving gravity (the graviton), when general relativity suggests a much deeper connection between the two. It appears that this particular mathematical gimmick is the wrong one. Perhaps loop quantum gravity models will provide some insight into this issue.

Another problem with the Standard Model Higgs mechanism is that it doesn't provide any underlying reasons that particles have the masses (i.e. Higgs field coupling constants) that they do. It simply leaves those masses as experimentally determined constants that have no underlying source in the theory. Yet, there is clearly some rhyme and reason to the particle masses that we observe, but we haven't cracked that code yet. A more satisfactory elaboration of the Standard Model should be able to explain why particles have the masses that they do from a smaller number of more fundamental constants and gravity at a quantum level. The failure to find the most familiar versions of the Higgs boson puts the pressure on theorists to explore new ways to address this problem that most of the theoretical physics community had complacently assumed had been solved and just had to be confirmed by experiment. This pressure may bring results. We'll see what happens next.

UPDATED July 28, 2011:

Slightly modified analysis of the combined exclusion plots here.

If you compare this with my previous Standard Model Killer plot you will see that the black line is slightly lower at the minimum point because of the marginally less restrictive Tevatron combination. The combination uncertainty now added in grey shows that the Δχ² could go as low as 2.5. Although this is not as dangerous for the Standard Model as before it still corresponds to a 90% or better exclusion for all Standard Model Higgs masses.
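As a rough sketch of how a Δχ² value maps onto an exclusion confidence level: for one degree of freedom (my assumption here, not something stated in the plots), the confidence level is just the chi-squared CDF, which has a simple closed form via the error function.

```python
import math

# Chi-squared CDF for one degree of freedom: P(chi2 <= x) = erf(sqrt(x/2)).
# Treating the fit as one degree of freedom is an assumption on my part.
def exclusion_cl(delta_chi2):
    return math.erf(math.sqrt(delta_chi2 / 2.0))

print(round(exclusion_cl(2.5), 3))   # ~0.886: just under the 90% level
print(round(exclusion_cl(2.71), 3))  # ~0.900: the conventional 90% CL threshold
```

So a minimum Δχ² of 2.5 corresponds to roughly an 89% exclusion under this assumption, consistent with the "90% or better" reading above for the rest of the mass range.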

Some of the updated SUSY model fits only manage an 85% exclusion, and other less restricted supersymmetry models would surely have a better chance. I think it is therefore reasonable to claim on this basis that supersymmetry is in better shape than the Standard Model Higgs. This is contrary to the slant from the media and some other blogs, which suggest that the excesses at 140 GeV are hints of the Higgs boson while supersymmetry is in more trouble.

Monday, July 25, 2011

Gibbs Bearish On Standard Model Higgs Boson Existence

Philip Gibbs, proprietor of viXra log, notes that there is still a mass range from about 114 GeV to 130 GeV where a Higgs boson could be found.

So there is hope that some kind of Higgs particle is lurking in that region, but the signal is not strong. Some modified form of Higgs mechanism with multiplets may be a better fit to the data. If my predicted full combinations are correct a standard Higgs may already be all but ruled out even at low mass. A SUSY multiplet can still work but searches for MSSM signals have excluded the best parts of the SUSY spectrum. There is certainly a big conundrum here. Theorists may be sent back to the drawing board, but it is too early to say.

SUSY models don't play out the same way that the Standard Model does, so a failure to find a Higgs boson has different implications for them than for the Standard Model. But high mass exclusion ranges for the lightest supersymmetric particle, coupled with the absence of a Higgs boson below the mass range excluded by direct searches, are a real problem for SUSY as well.

A blanket failure of SUSY models to fit the data has profound implications for the theoretical physics community because SUSY is a necessary (but not sufficient) component of string theory. If SUSY is ruled out, so is string theory. Woit, a long-time SUSY critic, chronicles the disappointments that LHC is meting out to SUSY theorists. As physics blogger Clifford Johnson puts it: "Wouldn't it be interesting if both the Standard Model Higgs and the simplest models of Supersymmetry were ruled out? (I'm not saying that they are – it's all too soon to tell – but it is a possible outcome.)"

My personal prediction at this point, admittedly from someone who is no more than an educated layman, is that:

1. The Standard Model Higgs will be ruled out in the next six months to a year by LHC.

2. SUSY will effectively be ruled out in most of its permutations, dealing a deep blow to string theory. Other precision measurements over the next half decade or so will confirm that conclusion.

3. Strong indications of a fourth generation of fermions at high masses that have only a modest impact on the CKM matrix will be discovered. The fourth generation of fermions will account for essentially all of the unexpected results in the Standard Model except for the missing Higgs boson.

4. Precision astronomy observations of low brightness objects, together with improved theoretical calculations using the exact equations of general relativity rather than a Newtonian approximation in the weak field for typical galactic and galactic cluster structures, will greatly reduce the inferred proportion of dark matter in the universe, but will not eliminate it. Some effects previously attributed to dark matter or MOND will be found to flow from the non-Newtonian components of gravity in general relativity.

5. A search for a stable, massive, electrically neutral, non-baryonic particle that does not interact through the strong force and perhaps not through the weak force either (along the line of a right handed neutrino) has a reasonable chance of success.

6. Low energy QCD will provide a much deeper understanding of the strong force, driven by computationally intensive lattice method simulations that are ultimately confirmed by experiments.

7. In two to ten years, someone will come up with a satisfying alternative to the Higgs model for mass generation, probably drawing on work being done in loop quantum gravity and QCD today. It will not employ extra Kaluza-Klein dimensions or branes, or predict the existence of a multitude of new particles. While "walking technicolor" is probably the most viable Higgsless model out there today, I suspect that it will be ruled out in the next decade or sooner by LHC results and will not be the theory that resolves the Standard Model's missing Higgs problem.

Thursday, July 21, 2011

Major Physics Conference In Progress

The Europhysics Conference in progress right now in France has dumped a vast number of experimental physics papers with the latest results from greatly expanded data sets at LHC and all of the other major experiments going on in the world onto the Internet. viXra log is live blogging some of the results.

Some highlights:

* Previously measured mass differences between top quarks and antitop quarks were probably experimental error. Current results are consistent with no mass difference, a result that is strongly preferred theoretically.

* Muon magnetic anomaly measurements may also have been a result of poor theoretical estimates and experimental error, although this is a weaker finding, as only one or two papers, neither definitive, addressed this point.

* There are hints of possible new particles at 327 GeV of mass or higher. But the data is thin. The statistical significance is high, but we are talking about fewer than a dozen observations out of billions, and we can't fit the data to a particular model. For example, the results don't seem to look like the decays of a next generation top quark. The results also have not clearly been replicated.

* The constraints on the masses of SUSY particles have greatly increased, to ca. 800 GeV or more. In SUSY, the heavier the lightest supersymmetric particle is, the lighter the Higgs boson must be, so the new results disfavor the entire SUSY enterprise if a Higgs boson is not discovered at the very low end of the mass range not yet excluded by experiment.

* Dozens of different studies have been conducted from every angle to find a Higgs boson, and none has hit paydirt. While the Higgs isn't excluded either, there is no strengthening signal in the data as the data set gets larger and larger. The biggest bumps in the data suggesting a Higgs boson might exist at a particular mass are no bigger than they were when the data sets were much smaller than they are at this conference. The Standard Model may soon have to find a way to go Higgsless, although this could still come out either way for another six months or so.

UPDATE: A press release, presentation timing, and rumors suggest an announcement that the Higgs mass is most likely in the narrow range of 114 GeV to 137 GeV (and if that is the highlight of the conference, as it appears it will be, presumably not a confirmed sighting of the elusive boson). This is based to a fair extent on precision indirect data on top quark mass and other precision electroweak measurements, and apparently on excluding areas not previously ruled out rather than affirmatively seeing anything. In other words, the Higgs is either cornered or absent. The bottom of the range is unchanged (and that is where SUSY fans hope to find it, since the lower it is, the heavier and hence more elusive other SUSY particles can be, which would save the theory). The top of the range has been creeping down from 158 GeV, with most effort since earlier this year already focused on the 140 GeV and under range. In other words, the announced rumor, if correct, is only a slight narrowing of the target range from other announcements in the last year and would not be a huge discovery.

* Exclusion ranges for a fourth generation quark have increased to above 400 GeV, but fourth generation fermion models are looking increasingly attractive. B meson and other precision experiments are showing that the CKM matrix, which governs the probability that quarks turn into other kinds of quarks in weak force interactions, is overconstrained in a three generation Standard Model fermion scenario. Moreover, the changes to the Standard Model involved in adding a fourth generation of fermions would not be radical, and a fourth generation also seems to be favored by some neutrino studies.

The apparent impossibility of consistently fitting the experimental data to the CKM matrix is arguably the biggest beyond-the-Standard-Model experimental conclusion of the conference that is likely to endure for any length of time (also here: "We present updated results for the CKM matrix elements from a global fit to Flavour Physics data within the Standard Model theoretical context. We describe some current discrepancies, established or advocated, between the available observables. These discrepancies are further examined in the light of New Physics scenarii."). The abstract of one of these papers says, in part:

We present the summer 2011 update of the Unitarity Triangle (UT) analysis performed by the UTfit Collaboration within the Standard Model (SM) and beyond. Within the SM, combining the direct measurements on sides and angles, the UT is over-constrained allowing for the most accurate SM predictions and for investigation on the tensions due to the most recent updates from experiments and theory.

Some past findings that spurred beyond-the-Standard-Model theory, like the top-antitop asymmetry and the muon g-2 anomaly, as well as numerous "bumps" in single experiment runs at below "discovery" level (4 or 5 sigma) significance that weren't replicated, all seem to be going away, and the new "bumps" at this conference have not convinced me that they have much staying power yet. But the CKM matrix has looked at risk of being overconstrained for some time, and its basic structure makes it a good place to look for hints of beyond-the-Standard-Model behavior: if it turns out to be incomplete and to omit some quark types, it would be like a partial periodic table. Moreover, if the CKM matrix is broken, the strong experimental constraints on the values in the matrix that can be determined with high accuracy provide considerable insight into what the missing values of an expanded CKM matrix must be, and hence which fermions may yet be out there to be discovered and what they might be like.

* There are only a handful of new cosmology and gravity papers, but they seem to support some major refinements to prevailing theories about large scale structure formation in the universe and other big picture issues.

Monday, July 18, 2011

Non-African X Chromosome Heavy In Neanderthal DNA

A previous direct comparison of recovered ancient Neanderthal DNA to the whole human genome found an average of 1.5% to 4% Neanderthal DNA in all non-Africans, with an overall average of 2.5% and no clear regional preferences. A new study (discussed here) finds an average of 9% Neanderthal DNA in non-African X chromosomes with a similar geographic distribution, using a much larger sample size but without direct comparison to the Neanderthal DNA sample, instead using another statistical method to identify the ancient X chromosome haplotype.

There is no evidence that any modern humans have either a Neanderthal patrilineal ancestor or a Neanderthal matrilineal ancestor, lineages that are tracked with non-recombining Y-DNA and mitochondrial DNA respectively.

Ancient Denisovan DNA, from hominins of unknown affiliation who yielded an ancient DNA sample in an Altai Mountain cave from 38,000 years ago or so, showed, when compared to modern human populations, Denisovan admixture in Melanesians (who also have Neanderthal DNA components) and populations admixed with them, but not in other modern humans.

Friday, July 15, 2011

Genetic Clues Regarding Ancient Intercontinental Admixture

[T]he (preliminary) 1000 Genomes paper came out last fall. . . almost all the variants that have reached fixation in different groups are (or will soon be) known. . . . there are . . . many more differences in fixed variants (72) between E. Asians (CHB+JPT) and Yorubans (YRI) than between any other pair of groups: only 2 between E. Asians and Europeans (CEU) and 4 between CEU and YRI. This means that there are a bunch of variants that are fixed differently in YRI and CHB+JPT but both versions remain in CEU.

From here.

I suggest a cause for this in a comment to the post:

There is a thin but measurable level of admixture between Africa and Europe, probably from the Neolithic, possibly as late as the Roman Empire, the Moorish presence in Spain, and the Ottoman presence in the Balkans. There was probably European introgression into East Asia in the early Bronze Age in Northeast Asia and later via the Silk Road. There are a handful of East Asians buried in Rome from classical times, and there would have been admixture via circumpolar societies (e.g. Uralic), the Turks, the Mongols, and the Silk Road. There was no comparable admixture between Africa and East Asia.

Of course, there are all sorts of genes that have reached fixation (i.e., everyone has the same single variant of the gene) in all modern humans, and there are a fair number of genes that have not reached fixation (i.e., there are multiple variants floating around out there in every population).

Selective effects can lead to fixation, but so can founder effects and a certain amount of random chance in small populations. Rate of population change is also important. Expanding populations tend not to lose variants of a gene. Contracting populations do tend to lose low frequency gene variants. Human population history shows strong signs of serial founder effects with fairly low effective population sizes at various points along the way.
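Drift-driven fixation in small populations is easy to illustrate with a toy Wright-Fisher simulation (a standard population-genetics model; the population size and starting frequency below are made up for illustration, not taken from any study mentioned here):

```python
import random

def simulate_fixation(pop_size, start_freq, seed=None, max_gens=100000):
    """Wright-Fisher drift for a single neutral allele: each generation the
    allele count is redrawn binomially from the previous generation's
    frequency. Returns True if the allele fixes, False if it is lost."""
    rng = random.Random(seed)
    count = int(start_freq * pop_size)
    for _ in range(max_gens):
        if count == 0:
            return False
        if count == pop_size:
            return True
        p = count / pop_size
        # Binomial draw as a sum of Bernoulli trials (fine for small pop_size).
        count = sum(1 for _ in range(pop_size) if rng.random() < p)
    return False  # unresolved after max_gens (effectively never for small N)

# A neutral allele's chance of eventually fixing equals its starting
# frequency, so at 20% it should fix in roughly a fifth of the runs.
runs = [simulate_fixation(50, 0.2, seed=i) for i in range(500)]
frac = sum(runs) / len(runs)
print(frac)
```

The point of the sketch is that no selection is needed: in a population of 50, a variant at 20% frequency routinely drifts all the way to fixation, which is why serial founder effects with small effective population sizes matter so much.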

It is also worth noting that the 1000 Genomes project still misses quite a few rare variants, which, incidentally, have surprisingly little overlap between major continental populations (suggesting low levels of post-divergence admixture).

Thursday, July 14, 2011

Historical Tsunamis

The Tsunami Alarm System has a really beautiful and comprehensive database of historical tsunamis, listing, for example, the place, wave height, and time of all known catastrophic historical tsunamis in the Mediterranean, going back thousands of years, all the way back to the famed tsunami that struck Santorini around 1628 BCE, with the highest waves probably reaching 60 meters. The risk is ongoing, and there have been a number of major 20th century tsunamis in the Mediterranean.

There are only two known catastrophic tsunamis in the Atlantic Ocean: one in Puerto Rico in 1918 that killed 116 people, and one in Lisbon, Portugal in 1755 that killed 60,000 people. The greatest modern risk is the possibility that an earthquake could trigger the collapse of one of the Canary Islands (off the coast of Morocco), which would produce a tsunami with a peak wave height of 156 meters (high enough to drown every structure in downtown Denver) and waves that would still be 13 meters high by the time they reached Boston.

300 000 years ago . . . a part of the island El Hierro slid into the sea, triggering a mega-tsunami which carried rocks as high as a house for many hundreds of metres into the interior of the east coast of what is today the USA. The danger of a similar island collapse is seen by scientists particularly at the island of La Palma in the Canaries. Here, following a volcanic eruption in 1949 almost half of the mountain range of 20 km moved westwards towards the sea, leaving a large tear in the volcanic basalt. In the event of a fresh eruption, a huge part of the volcano could loosen itself due to differences in the types of rock and diverse water deposits within the now active volcano. As a result, the densely populated east coast of America would be massively threatened.

From the point of view of personal safety, the main warning sign of a tsunami, which may appear even if no earthquake is felt, is that the sea recedes rapidly. If you see this, immediately move as quickly as you can away from the sea or ocean to the highest point possible. Many people who die in tsunamis could have saved themselves had they run for high ground instead of trying to collect stranded fish, trying to surf the wave, or just staying where they were when it happened.

Wednesday, July 13, 2011

Is Dark Matter Overestimated?

Late last year, a study revealed that the amount of ordinary matter in elliptical galaxies has been greatly underestimated. The implication was that the amount of dark matter inferred from lensing effects and galactic rotation curves, adjusted for mass attributable to luminous matter, was greatly overestimated. Rather than dark matter being much more common than ordinary matter, this data point alone implied roughly equal amounts of visible and dark matter in the universe.

But, this isn't the only development casting doubt on the dark matter paradigm. A number of physicists, most prominently F.I. Cooperstock and S. Tieu at the University of Victoria in British Columbia, in a series of papers including this one, but also C. F. Gallo and James Q. Feng (outside academia), and German academic scientists Aleksandar Rakic and Dominik J. Schwarz, have suggested that traditional estimates of the amount of dark matter necessary to produce observed galactic dynamics based on a Newtonian gravitational paradigm are overestimates that ignore significant relativistic effects in rotating galaxies.

Researchers like Tobias Zingg, Andreas Aste, and Dirk Trautmann have criticized the models of Cooperstock and Tieu, as have others (to which they in turn have prepared responses), but acknowledge that the models used to date to estimate dark matter amounts and distributions in galaxies have been inadequate to provide sound answers either, as a result of their failure to consider relativistic effects. A 2009 paper by H. Balasin and D. Grumiller suggests that properly considering relativistic effects reduces the amount of dark matter necessary to explain galactic rotation curves by 30%.

Meanwhile, the cold dark matter paradigm has proven inadequate to fit large scale galactic structure observations and requires a cuspy halo distribution for dark matter that does not arise naturally with any of the prevailing cold dark matter candidates (for example, WIMPs). Direct WIMP (weakly interacting massive particle) searches like XENON100 are also increasingly developing the experimental power to rule out the existence of WIMPs with a wide variety of proposed properties.

This doesn't mean that the dark matter paradigm is dead. Observations like the bullet cluster collision have ruled out simple versions of modified gravity theories in which gravity tracks luminous matter, but have also put limits on the cross-section of interaction for dark matter that rule out a typical cold dark matter hypothesis. Theorists seem to be migrating to a "warm dark matter" hypothesis, in which dark matter is both weakly interacting and moving at marginally relativistic speeds, or mixed dark matter models, but still have no good candidate particles that have been shown to really exist.

But, I have yet to see any papers that really integrate these new developments into a comprehensive whole. If the amount of ordinary matter in the universe was previously greatly underestimated, and the amount of dark matter necessary to produce observed galactic rotation effects in galaxies has been underestimated by some amount due to a failure to consider significant relativistic effects in galactic dynamics, then ordinary matter must actually account for most of the matter in the galaxy rather than a mere minority portion of it, as is commonly asserted, even before anyone embarks on new physics. Moreover, the residual distribution of dark matter given these findings should be quite different than in "old school" cold dark matter models that are starting to lose credibility.

More refined estimates of how much dark matter is out there, how it is distributed, and what its cross-section of interaction must be from multiple sources, in turn, will narrow the parameter space for potential dark matter candidates (which are also, one by one, being ruled out by LHC results). And, shrinking estimates of the percentage of matter that is dark, particularly as they fall below 50%, are increasingly hard to fit to theories that naturally include stable, very weakly interacting particles that aren't too massive individually, which are dark matter candidates.

Yet another part of this stew is a conclusion from gamma ray astronomy that predicted quantum effects at the Planck scale, reflecting quanta of length, have not appeared, suggesting that many string theory based and loop quantum gravity based theories of quantum gravity may be flawed. Polarization effects expected if there were quanta of gravity in very high energy gamma ray bursts were not observed (citing P. Laurent, D. Götz, P. Binétruy, S. Covino, A. Fernandez-Soto. Constraints on Lorentz Invariance Violation using integral/IBIS observations of GRB041219A. Physical Review D, 2011; 83 (12) DOI: 10.1103/PhysRevD.83.121301). This impacts dark matter theories because gravity modifications relative to general relativity in the weak field, if they exist, would likely be a result of quantum gravity effects associated with quanta of length in the fabric of space-time. Continuity at a level below the Planck scale casts doubt on the entire concept of discrete units of space-time, because there is no other natural scale at which such discontinuities should exist.

Tuesday, July 12, 2011

Missing Species Probably Mostly In Biodiversity Hotspots

Six regions already identified by conservation scientists as hotspots -- Mexico to Panama; Colombia; Ecuador to Peru; Paraguay and Chile southward; southern Africa; and Australia -- were estimated . . . to contain 70 percent of all predicted missing species.

From here, citing Lucas N. Joppa, David L. Roberts, Norman Myers, Stuart L. Pimm. Biodiversity hotspots house most undiscovered plant species. Proceedings of the National Academy of Sciences, 2011; DOI: 10.1073/pnas.1109389108.

I'm somewhat surprised that such a large swath of Latin America figures so prominently, or that southern Africa and Australia, which have had a British colonial presence for so long, would still be expected to have so many undiscovered species. I would have expected more undiscovered species in Southeast Asia and the Congo basin.

Monday, July 11, 2011

Dark Matter/Dark Energy/MOND/Inflation Musings

Dark matter effects look as if, below a certain gravitational field strength, the gravitational force shifts from a 1/r^2 dependence to a 1/r dependence. This effect, which is empirically valid and has produced useful predictions, is called "MOND" (for Modified Newtonian Dynamics).
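The practical consequence of that shift is a flat rotation curve, which is easy to sketch numerically. Below is a minimal comparison of the Newtonian circular velocity with the deep-MOND limit, assuming the conventional acceleration scale a0 ≈ 1.2e-10 m/s² and a rough galactic mass of 10^41 kg (both values are my assumptions for illustration, not from this post):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10    # MOND acceleration scale, m/s^2 (assumed conventional value)
M_GALAXY = 1e41 # rough galactic mass, kg (assumed for illustration)

def v_newton(r):
    """Newtonian circular velocity: v^2/r = GM/r^2, so v falls as 1/sqrt(r)."""
    return math.sqrt(G * M_GALAXY / r)

def v_mond_deep(r):
    """Deep-MOND limit: effective acceleration sqrt(g_N * a0), which gives
    v^4 = G*M*a0 -- a radius-independent (flat) rotation velocity."""
    return (G * M_GALAXY * A0) ** 0.25

for r in (1e20, 1e21, 1e22):  # roughly 3 to 300 kiloparsecs
    print(f"r={r:.0e} m  Newton: {v_newton(r):.0f} m/s  MOND: {v_mond_deep(r):.0f} m/s")
```

The Newtonian velocity keeps dropping with radius, while the deep-MOND velocity comes out near 170 km/s at every radius for this assumed mass, which is the flat-rotation-curve behavior actually observed in spiral galaxies.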

What could be related to this?

1. It could be a holographic effect, perhaps related to the size of the universe, since the cutoff acceleration is roughly equal to the speed of light squared divided by the size of the observable universe (equivalently, roughly the speed of light times the Hubble constant).

2. It could be related to self-interaction of the gravitational field with gravitational potential energy, which falls off as 1/r, as this becomes the primary source of matter-energy in regions with weak gravitational fields. A back-of-napkin calculation ought to be able to determine if this is at all sensible.

3. It could be a function of relativistic gravitational effects due to angular momentum that express themselves only in the plane in which the momentum is present, rather than spherically. This could also explain why there are no MOND effects in the solar system (where 99.8% of mass is concentrated in the sun), modest amounts in galaxies (where there is a central black hole and mass bulge, but significant amounts of matter beyond the core), and bigger than expected MOND effects in galactic clusters (which may have no center and have massive galaxies circling a center of mass of the system with nothing actually in that center). Some back-of-napkin calculations ought to be able to see if this makes sense.
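Two of the numbered speculations above invite quick back-of-napkin checks. The values below (Hubble constant, a rough galactic mass and radius) are standard textbook-scale assumptions of mine, not numbers from this post. The first check confirms the numerical coincidence behind option 1: the MOND scale a0 sits within a factor of about 2π of c times the Hubble constant. The second bears on option 2: the gravitational self-energy of a typical galaxy, |U| ~ GM²/R, is a tiny fraction of its rest mass-energy Mc².

```python
import math

C = 2.998e8    # speed of light, m/s
H0 = 2.27e-18  # Hubble constant, 1/s (~70 km/s/Mpc; assumed value)
A0 = 1.2e-10   # conventional MOND acceleration scale, m/s^2 (assumed)
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 1e41       # rough galactic mass, kg (assumed)
R = 1e21       # rough galactic radius, m (assumed)

# Check 1: a0 versus c*H0 -- the coincidence behind the holographic idea.
c_h0 = C * H0
print(c_h0, c_h0 / (2 * math.pi))  # ~6.8e-10 and ~1.1e-10 m/s^2, near a0

# Check 2: gravitational self-energy |U| ~ G*M^2/R as a fraction of M*c^2.
self_energy_fraction = (G * M / R) / C**2
print(self_energy_fraction)        # ~7e-8: a very small fraction
```

On these rough numbers, the first coincidence holds up well, while the second check suggests the self-energy of a galaxy's gravitational field is some seven orders of magnitude too small to be "the primary source of matter-energy" anywhere, which is worth knowing before pursuing option 2 further.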

Cold dark matter, proposed to explain the situation, has a seemingly intractable problem: it requires cuspy dark matter halos, for which no mechanism has been proposed, to produce the observed effects, in addition to calling for undiscovered types of stable matter that are presumably very weakly interacting. Warm dark matter (basically heavy or numerous objects that would behave like right handed neutrinos) might be a better fit, but also lacks a basis in known processes of baryogenesis or leptogenesis.

Along the same lines, and relevant to both dark energy and dark matter, I wonder if there is adequate accounting in composition of the mass-energy balance in the universe of translational motion, angular momentum, pressure, photons in transit, cosmic background radiation, and the energy content of any other long distance fields of forces (e.g. gravitons, if they exist), stellar gases, and low luminosity baryonic mass (like red dwarf stars) in these calculations.

The Composition of Ordinary Matter In The Universe

The universe (based on the composition of matter in stars and gas giants, which seem to make up the vast share of all matter) is about 90% bare hydrogen, and about 10% composed of elements with equal shares of protons and neutrons, such as helium, neon, carbon, nitrogen and oxygen. The proportion of the universe made up of elements with more neutrons than protons is tiny, and in many of these elements the excess of neutrons over protons is modest. Many of the heaviest isotopes of elements are not found in nature because they are naturally radioactive. So far as we know, there is no sign of large scale charged regions of space, so the number of protons and the number of electrons is extremely evenly balanced on a very fine grained basis over the entire universe.

Hence, we have a universe in which there are about 19 protons and 19 electrons for every neutron, with essentially all neutrons contained in atomic nuclei and presumably almost no free neutrons, since free neutrons decay very quickly in unbound states and there is no known process that produces them in unbound states fast enough to make up the deficit beyond a very small number. All other mesons, and second or greater generation fundamental particles, also decay rapidly, and by hypothesis in QCD there are no free quarks.

If protons and electrons were generated from the beta decay of neutrons, then one would expect 19 neutrinos in this proportion of matter in the universe (making up much less than 1% of all matter), but perhaps neutrinos, which oscillate between generations but are otherwise apparently stable, are generated in other processes that are significant. Neutron stars are presumed to be generated by reverse beta decay (i.e., a proton plus an electron plus energy produces a neutron and a neutrino). Neutrino mass may also be underestimated, however, if neutrinos are typically moving at relativistic speeds with lots of linear momentum that has a gravitational effect.
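The 19-to-1 bookkeeping above can be checked explicitly; it works out if the 90%/10% hydrogen split is read as a split by mass (my assumption):

```python
# Per 100 mass units of ordinary matter, assuming the 90/10 split is by mass:
#   90 units of bare hydrogen             -> 90 protons, 0 neutrons
#   10 units with equal protons/neutrons  ->  5 protons, 5 neutrons
protons = 90 + 10 / 2
neutrons = 10 / 2
print(protons, neutrons, protons / neutrons)  # 95.0 5.0 19.0
```

So per neutron there are about 19 protons (and, by charge balance, about 19 electrons). Note that a 90/10 split by number of atoms instead would give a different ratio, closer to 5 or 6 protons per neutron.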

Even with recent mass accounting errors that suggest that scientists previously underestimated the amount of ordinary matter in elliptical galaxies so much that ordinary matter and dark matter proportions are actually roughly equal, we still lack good candidates to fill the void. There are indications that dark matter at the supergalactic scale is organized in filaments, but we really have no clue what those filaments are made of.

Potential Energy

Could the combined total of dark matter and dark energy be constant (at least once most baryonic matter forms), with the former gradually converting into the latter? If dark energy is proportionate to the size of the universe at a constant density per volume, it would be ever growing, while the matter-energy attributable to dark matter would be ever shrinking. What would a running dark energy amount look like (perhaps via a cosmological constant), and how would corresponding dark matter effect scale shifts impact structure formation in the universe?
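The scaling behind that question can be made concrete with a toy calculation (arbitrary units, assumed purely for illustration): if dark energy has a constant density per unit volume while the total amount of matter is fixed, dark energy's total grows as the cube of the scale factor, so its share of the energy budget inevitably climbs as the universe expands.

```python
# Arbitrary units; this illustrates scaling only, not real cosmological numbers.
rho_lambda = 1.0      # dark energy density, constant per unit volume
matter_total = 100.0  # total matter-energy locked up in matter, fixed

dark_energy_share = []
for a in (1, 2, 4, 8):               # scale factor of the universe
    de_total = rho_lambda * a ** 3   # dark energy total grows with volume
    dark_energy_share.append(de_total / (de_total + matter_total))

print(dark_energy_share)  # dark energy's share climbs toward 1 as the universe grows
```

With these made-up numbers the dark energy share climbs from about 1% at a = 1 to over 80% by a = 8, which is the qualitative story of a cosmological-constant-dominated future.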


Cosmology points towards inflation, which is hard to explain: a tiny fraction of the first second after the big bang witnessing massive expansion.

Penrose says that this hypothesis flows from inferences from the implied excessively high uniformity of the universe if one tracks it back that far in time from presently observed states. He suggests, on Second Law of Thermodynamics grounds, that the Big Bang should be extremely low in entropy and hence prone to extreme uniformity that does not require a thermal process to reach equilibrium, and that the Big Bang should thus be unlike high entropy black holes. Hence, in his view, inflation is not a necessary assumption in Big Bang cosmology. He would rather tinker with the initial conditions of the universe at the dawn of the Big Bang than with the laws of nature, as inflation seems to do.

One stray thought about MOND and inflation is that, up to a certain point in the universe's expansion, there would be no place in the universe where the gravitational field was weak enough for MOND effects to be present. Inflation could be the period until that point was reached. This would take a bit longer to have the same effect, but it would be interesting to model.

I'd also be curious to know, in concrete terms, how many meters across the universe would be before and after inflation under this hypothesis. If it is expanding at the speed of light from 10^-35 seconds (approaching a Planck time unit) to 10^-15 seconds or so, the distances involved are quite small: at the speed of light (186,000 miles per second), 10^-10 seconds corresponds to about three centimeters, a bit over an inch. At 10^-15 seconds the universe should be 100,000 times smaller than that, and at 10^-35 seconds, it should be 100,000,000,000,000,000,000 times smaller than that. It seems that the initial blob doesn't have to be uniform until a very large size at all (all of the matter and energy in the universe condensed into a space much smaller than a grain of sand) to dispense with inflation entirely, and presuming to make meaningful statements that reach back into the first fraction of a second of the Big Bang without more direct evidence seems quite presumptuous. Is it any more of a leap in logic to assume that the Big Bang started from a homogeneous matter-energy spot the size of a tiny grain of sand than to assume that it started from an absolute singularity?
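These light-travel distances are easy to check concretely, and worth double-checking, since the powers of ten are easy to slip on:

```python
C = 2.998e8  # speed of light, m/s

# Distance light travels in each of the times discussed above.
distances = {t: C * t for t in (1e-10, 1e-15, 1e-35)}
for t, d in distances.items():
    print(f"t = {t:.0e} s  ->  c*t = {d:.3e} m")

# 1e-10 s gives ~3 cm (about an inch); 1e-15 s gives ~0.3 micrometers;
# 1e-35 s gives ~3e-27 m -- far smaller than a grain of sand in every case.
```

Even the largest of these, a few centimeters, is tiny on cosmological terms, which is the point of the paragraph above: the pre-inflation patch that must be uniform is minuscule.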

John Hawks on the European Middle Pleistocene hinge point

John Hawks made the following series of tweets from the Altai region of Russia on July 8 (links inserted editorially):

The hinge point in paleoanthropology right now is the European Middle Pleistocene. Neandergenes don't fit fossil record. That is, Neandergene analysis seems to rule out substantial Neandertal ancestry from Atapuerca et al. Instead, Neandergenes appear to derive from Africa after 250-400 kya. Is Atapuerca/Petralona/Arago a dead end? Or can we find a model that fits data and allows some substantially deeper Neandertal local ancestry? And while we're at it, can we get any Denisovan ancestry to be consistent with Asian Homo erectus?

The Denisovan genome analysis seems to rule out any substantial mixture with Neandertals...but Okladnikov is literally 3 days' walk. There's simply no biogeographic barrier. The populations need not have been here at same time, but if not where were they? If we can't resolve the European Mid Pleistocene problem, fossils may never help with Denisovan problem.

Basically, Hawks is pointing out that three groups of fossil remains of ancient hominins, (1) the non-Neanderthal archaic homo of Europe in Atapuerca (Spain), Petralona (Greece), and Arago (France) from ca. 300,000 years ago to 200,000 years ago (I call them archaic homo out of respect for Hawks, who states in one post: "The "Homo heidelbergensis" model is in such utter disarray right now, I'm not sure many paleoanthropologists have realized the full extent of the problems. You should know that I don't believe in Homo heidelbergensis, never have."), (2) the Neanderthals, whose European centered range extended as far east as a few days' walk from Denisova in Siberia in a find from 38,000 to 30,000 years ago, and (3) the Denisovan remains, whose affiliation is unclear but genetically disjoint from Neanderthals according to the ancient DNA, don't seem to have had any known ancestral or admixture links to each other, except a possible link of Denisovans to Asian Homo erectus (or something else, such as population (1) in this list?). As he noted earlier the same day in a tweet: "With Ngandong date change last week, no H. erectus fossil is late enough to be part of a Denisovan population."

The main traces of the Denisovan genome in modern populations appear to be mostly restricted to Melanesia and populations with Melanesian admixture, rather than having a global distribution, and seem to be absent in North and Central Asia, the region that produced the Denisovan fossils from which the Denisovan genome was sequenced. Those fossils aren't extensive enough to allow for any serious reconstruction of the Denisovans' large scale appearance. (Hawks is skeptical of claims that immune system genes bearing similarity to the Denisovan genome are really Denisovan in origin.)

Dating of fossils from India has also muddied the story of Eurasian hominins in the early and middle Paleolithic.

Friday, July 8, 2011

Lost and Found

People like me sometimes misplace their car keys or cell phone or glasses. Every once in a while, at a major festival or sporting event or large mall, it will even take me a little time to locate my car.

But, I clearly have nothing on the Bulgarians. They misplaced a "monastery church, a small cemetery chapel, and a feudal castle" that were only about six hundred years old and were built by a well-attested historical dynasty in a medium-sized city, until archaeologists rediscovered them this year.

Thursday, July 7, 2011

Why Are Lesbians Lesbians and What Does It Mean To Be Lesbian?

A new open access British twin study at PLoS One that also contains a fairly comprehensive review of the prior literature on the subject examines the genetic and environmental contributions to female sexual orientation and also examines the extent to which childhood gender typicality and adult gender identity assessment are related to sexual orientation.

This fills a major gap in the study of sexual orientation, because lesbians have been the subject of less intensive research than gay men, despite indications that the genetic and environmental factors, and the nature of the sexual orientation phenotype itself, may not be strictly parallel or identical in men and women. The sample consisted of 4,066 women who were British twins.

* The study finds that a common genetic source is involved in both childhood gender typicality and sexual orientation (as measured by attraction), with a hereditary component of about 32% of the variation in childhood gender identity and 24% of the variation in sexual orientation (each of which the study views as a continuous rather than a yes-or-no variable). Some other studies using different measurements to identify the phenotype have found up to twice as large a hereditary component (up to 60%), while others have found less (as little as 17%), but all serious studies have found a substantial hereditary component to female sexual orientation. The stronger genetic component of childhood gender identity suggests that this phenotype might be more useful than actual self-understood sexual orientation in identifying one or more "lesbian genes".

* Childhood gender identity and sexual orientation as measured by attraction as an adult are strongly correlated with each other due to both shared genetic factors and shared non-genetic factors (including, but not limited to, in utero androgen exposure).  They appear to be basically different manifestations of the same thing.

* The genetic effect appears to be additive rather than involving a dominant gene pattern.

* The study finds few shared environmental effects on sexual orientation or childhood gender typicality: "shared factors such as home environment and parenting styles have little impact on human sexual orientation."

* The study reviews the literature regarding in utero androgen exposure which it finds persuasive but not definitive. Thus, exposure to male hormones in the womb may have an important effect on sexual orientation in women. (This is an example of a cause which would be congenital, i.e. permanently present from birth, but not hereditary.)

* The instrument used in the study to measure "adult gender identity", which measures self-perceptions of masculinity or femininity, is found to be only weakly linked to the genetic factor, to sexual orientation, and to childhood gender typicality, suggesting that other instruments may be better. This instrument also had lower internal consistency than the other measures. Put another way, sexual orientation and childhood gender identity are not strongly associated with a butch versus femme self-identification as an adult.

* The study finds evidence of "sex-atypicality" as a possible intermediate phenotype in women between a strictly heterosexual phenotype and a strictly homosexual phenotype, which is probably more driven by in utero androgen exposure than by genetics.

* The study does not appear to have really evaluated bisexuality as a potentially separate construct from a lesbian sexual orientation and "sex-atypicality" despite at least one prior study showing this sexual orientation to be stable and distinct from a lesbian sexual orientation (in a larger share of women who are not clearly heterosexual than in men). A bisexual individual in this study would probably be treated as more homosexual than heterosexual but not completely homosexual.
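The kind of variance decomposition behind these heritability figures can be sketched with the classic Falconer formulas for twin studies. The twin correlations below are hypothetical values I've chosen to reproduce a heritability of roughly the reported magnitude (~24% for attraction-based orientation); they are not the study's actual figures.

```python
# ACE decomposition via Falconer's formulas, a rough sketch.
# The twin correlations are hypothetical, chosen only to reproduce a
# heritability of about the magnitude this study reports.

def ace_from_twin_correlations(r_mz, r_dz):
    """Falconer-style variance decomposition from twin correlations.

    A (additive genetic)   = 2 * (r_mz - r_dz)
    C (shared environment) = 2 * r_dz - r_mz
    E (unique environment) = 1 - r_mz
    """
    a2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return a2, c2, e2

# Hypothetical correlations for an attraction-based orientation measure
a2, c2, e2 = ace_from_twin_correlations(r_mz=0.24, r_dz=0.12)
print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")
```

With these inputs, A comes out at 0.24 and C at essentially zero: all of the twin resemblance is attributed to additive genes, matching the study's twin findings of a roughly 24% heritable component and little shared environment, with the remainder attributed to non-shared environment (which would include in utero androgen exposure).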

When Is a Genetic Condition Not Hereditary?

Normally, people think of genetic conditions and hereditary conditions as identical. But, this isn't necessarily so. There are two main ways that someone can end up with genes that do not come from their parents.

First, there can be a mutation in a germline cell (i.e. a sperm or egg) that is not present in the rest of the genome of the parent. This is the mechanism suspected in almost all congenital conditions that are associated with advanced paternal age (i.e. with old fathers).

Second, a retrovirus can infect a person and change their genome during their life. Such viruses are rare, but not unknown, and their existence is fundamental to almost all proposed gene therapies, something done routinely in lab rats but only a few times on a therapeutic clinical basis in human subjects.

The distinction is important in interpreting a recent study of autistic twins in California who were receiving developmental disability assistance from the state, which purports to show that about 55% of autism is attributable to shared environment, rather than to hereditary causes or non-shared environment. Previous studies had suggested that autism was 90%+ genetic.

If there really is a large shared environment effect, it is probably a pre-natal or neo-natal environmental exposure situation, perhaps, for example, due mostly to pregnant women taking SSRI drugs. But genetic factors like first generation germline cell mutations, attributable to advanced paternal age and possibly also to environmental exposures that the father has received, would look mostly like a shared environment effect in the simple heredity analysis used in twin studies.

This method of analysis ignores first generation germline mutations, which is often a sensible thing to do, but is probably not appropriate in the case of conditions where there is an epidemiology that shows a strong advanced paternal age effect. Most serious dominant gene developmental disorders are probably predominantly due to first generation germline mutations, so excluding that possibility in a twin study analyzing autism causation is probably not reasonable.
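The confound described above can be illustrated with some variance-component arithmetic, under the simplifying (and hypothetical) assumption that a paternal-age mutation effect raises liability similarly for both members of a twin pair whether MZ or DZ, since both twins share a father of the same age. All numbers are made up for the demonstration.

```python
# Illustrative sketch: a genetic-in-origin risk factor that is shared
# within twin pairs regardless of zygosity gets booked as "shared
# environment" by a naive Falconer-style twin analysis.

def falconer(r_mz, r_dz):
    """Falconer-style estimates from twin correlations."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance
    c2 = 2 * r_dz - r_mz     # shared environment variance
    return a2, c2

# Baseline: purely additive genetic liability (A = 0.9, E = 0.1)
a2, c2 = falconer(r_mz=0.90, r_dz=0.45)
print(f"baseline: A = {a2:.2f}, C = {c2:.2f}")   # A = 0.90, C = 0.00

# Add a paternal-age mutation-risk component M = 0.3 that, by
# assumption, is shared within pairs whether MZ or DZ. Correlations
# are recomputed over the enlarged total variance 1 + M.
M = 0.3
r_mz2 = (0.90 + M) / (1 + M)   # ~0.92
r_dz2 = (0.45 + M) / (1 + M)   # ~0.58
a2, c2 = falconer(r_mz2, r_dz2)
print(f"with paternal-age effect: A = {a2:.2f}, C = {c2:.2f}")
```

In this toy example A drops to about 0.69 and C rises to about 0.23: variance that is genetic in origin (de novo germline mutations) is misread as shared environment, which is exactly why excluding first generation mutations is dubious for a condition with a strong advanced paternal age effect.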

The emerging consensus model of autism causation sees this syndrome as being caused by mutations in any of a very large number of genes (perhaps hundreds) that must all work in harmony to carry out the part of brain function that is atypical in autism, with effects that can be mitigated if one or more "protective" genes are present (possibly an X-chromosome linked gene, which could account for the differing rates of autism in boys and girls, since boys are less likely to have at least one copy of a protective X-linked gene).

In this model, most autism cases arise from first generation dominant gene germline mutations, and a minority of cases arise from inheritance from a parent who may be a carrier due to the presence of a protective gene that silences or mitigates the effect of the germline mutation that person received. The relative number of inherited and first generation mutations can be inferred by the extent to which advanced paternal age is a risk factor in autism.

Wednesday, July 6, 2011

Altai Once Warmer

Denisova, a cave in the Altai region of Russia, is famous as the source of non-Neanderthal, non-modern human bones that left genetic traces in Melanesians. Much later, the surrounding region was a cradle of major language families, including those that contain Mongolian, Turkish, and Manchurian, and perhaps Korean and ancient Japanese.

According to John Hawks, who is there right now and tweeting from his Kindle 3G, it is also a rich source of new paleoclimatic data. The region appears to have had hominin occupation from about 250,000 years ago, and to have been much warmer than modern Siberia for most of the period prior to the last glacial maximum around 20,000 years ago.

This is quite surprising, since the conventional account is that Siberia was mostly uninhabited until relatively late after the Out of Africa period and made only a minority contribution to the modern populations of the Americas and East Asia, which show strong genetic indications of Southern Coastal Route origins. Indeed, some genetic haplotypes appear to have made their way along the Eurasian coast to North Asia and back to Northern Europe again via a basically circumpolar route. Megafauna extinction also seems to come fairly late to Siberia.

UPDATED in response to comment (since the comment function seems to be cranky this morning):

Until the Denisova site was discovered, there was no evidence of Neanderthals or anything similar much further east than the Caucasus and Persia.

I'm not aware of any Homo erectus sites in mainland Asia dated between 400 kya and the present, with the possible exception of one from about 100 kya that might actually be a very early AMH or a hybrid individual. Their continued presence has been inferred from their documented presence from 1.8 mya to 400 kya and from the absence of anything else we knew of until AMHs arrived, but there was very little evidence of pre-AMH hominins in mainland Asia for the period from ca. 400 kya to 50 kya.

There was some thin evidence of AMHs in Siberia pre-LGM, but IIRC this isn't reliably earlier than about 30 kya in Eastern Siberia, when a megafauna extinction makes their presence known. As I understand it, the conventional wisdom is that most of Siberia, like Northern Europe, was abandoned in favor of refugia during the most recent major ice age, and then repopulated from those refugia. (It is not entirely obvious whether proto-Americans entered Beringia just before or just after the LGM.)

The Denisova find, of course, changes that picture immensely, both by providing evidence of continuous occupation from the early Paleolithic to the early Upper Paleolithic and by providing DNA whose legacy has turned up in Melanesia and, to a lesser degree, in populations admixed with Melanesians. Homo floresiensis also provides archaic hominin evidence in the Middle to Upper Paleolithic era in Asia (if not mainland Asia), again corroborating the inference that archaic hominins were probably not absent from Asia for a 350,000 year period.

The sense that I get from John Hawks' account is that Denisova's paleoclimate evidence also supports an Altai that was much warmer than it is today in the period from ca. 250 kya to 30 kya (i.e. pre-ice age). Naively, we would have expected pre-ice age temperatures to be similar to what they are now, post-ice age. I'm not aware of any obvious mechanism that could make the Altai that warm from 250 kya to 30 kya, but nature didn't ask me what it should do back then, and we'll have to figure it out.

Friday, July 1, 2011

More CP Violations In B Meson Decays That Produce Muons

When you smash a bunch of particles together, you sometimes produce two-quark particles containing bottom quarks, called B mesons, that in turn decay to produce two muons (heavy electrons). Sometimes the muons will be negatively charged, just like electrons. Sometimes they will be positively charged, just like positrons. Other things produced in these collisions, like kaons (another kind of two-quark particle), also produce two-muon decays.

The Standard Model predicts that negatively charged muons should outnumber positively charged muons, due to known CP violations in the weak force, by a factor of just 0.01%. The data from the almost-full run of the D0 Tevatron experiment show a 0.787% excess of negative muons with a standard deviation of about 0.2%, implying a deviation of about four sigma from the Standard Model expectation. Thus, experiment implies a 95% confidence interval of something like 39 to 118 times too many excess negatively charged muons.
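The significance arithmetic here is easy to check on the back of an envelope. The inputs below are the rounded figures quoted in this post, not D0's exact published numbers.

```python
# Rough significance check using the rounded figures quoted above.
predicted = 0.01   # Standard Model asymmetry prediction, in percent
observed = 0.787   # D0 measured asymmetry, in percent
sigma = 0.2        # quoted uncertainty, in percent

# Deviation from the Standard Model expectation, in standard deviations
z = (observed - predicted) / sigma
print(f"deviation: {z:.1f} sigma")   # ~3.9, i.e. about four sigma

# 95% confidence interval for the asymmetry, expressed as a multiple
# of the Standard Model prediction
low = (observed - 1.96 * sigma) / predicted    # ~39.5x
high = (observed + 1.96 * sigma) / predicted   # ~117.9x
print(f"excess factor: {low:.1f}x to {high:.1f}x")
```

So the measured asymmetry sits roughly four standard deviations above the prediction, with a 95% confidence band running from about forty to well over a hundred times the Standard Model value.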

Further analysis of the data reveals that B-meson decay, rather than other possible decay pathways, seem to be behind the anomalous results.

Of course, in this case, as in so much of the high energy physics field, a deviation of less than 1% from the predicted value, in a calculation with many steps that depend on QCD calculations, imperfectly known experimental estimates of quantum physics constants, and what have you, could easily arise from very subtle issues in generating the theoretically expected value, either of the end result or of the "background" that is subtracted from the total result for processes we understand in order to isolate the part of the result attributable to the interesting process.

The four standard deviations of variance from the expected result consider only differences from the theoretical prediction that arise from fundamental quantum randomness. The error estimate likewise considers only known levels of uncertainty in the inputs to the theoretical calculations and in the current experimental process, not analytical errors or systematic problems with the laboratory setup that no one had considered when doing the calculations.

But, excess CP violations in heavy meson weak force decays, beyond those predicted by the Standard Model, are something we've observed repeatedly before (that's the main reason physicists knew to put any CP violations into the Standard Model at all), and by five sigma, observed effects tend not to vanish when efforts are made to replicate them, so there is good reason to believe that this is the real deal, the holy grail of HEP: New Physics!

Now, there are more theories out there to explain where this excess CP violation could come from than there are different kinds of hats in a Georgia church on a Sunday in June. But, for the most part, you can't fit the Standard Model to the data simply by tweaking the values of a few constants in the CKM matrix. Most of the proposals involve new particles that require a deep conceptual break with the structure of the Standard Model's chart of fermions and bosons. It is the moral equivalent of trying to put together some some-assembly-required furniture with an ordinary screwdriver and hex wrench, only to discover, as you try to assemble the final critical joint to complete the job, that you also need a three foot long magnetized titanium corkscrew to connect parts you hadn't previously noticed were even in the box.

For my druthers, the most attractive extension of the Standard Model that could resolve the conflict between experiment and theory, because it is the least radical in its implications, is the possibility that there are four rather than three generations of quarks and leptons, such as in this proposed Standard Model extension. A model with at least four generations of fermions could also explain data pointing to the possibility that there are more than three kinds of neutrinos. It would also make it possible to unify the gauge couplings without supersymmetry.