Wednesday, September 28, 2011

String Theory Taxonomy, Musing On SUSY and Neutrino Condensates

String Theory Taxonomy

Lubos has a helpful and somewhat lengthy taxonomy of the main subtypes of string theories, explaining in broad general terms how they can be seen as manifestations of the same thing and which of the various types are most attractive for fitting to the Standard Model and General Relativity, although even his explanation is a bit heavy on abstract algebra and topology jargon for the average lay reader.

Ultimately, the bottom line is that the versions of string theory that make any sense imply some sort of maximal supersymmetry, with supergravity as an approximation in reasonably observable situations.

General Observations About String Theory

My gut instinct is to ask whether one really needs a 10-11 dimensional manifold with six or seven compactified dimensions, and a group theory that predicts twice as many particles as we've ever had any experimental indication actually exist (sometimes with more bells and whistles), to replicate the Standard Model in four dimensions and four dimensional general relativity, with its ten element stress-energy tensor that captures all of the different ways that a point in a mass-energy field may acquire mass-energy from non-gravitational sources.

String theory introduces a lot of subtle moving parts that may not have any obvious experimental justification. The elaborate topologies needed to make all those dimensions work in anything approaching a real world scenario are anything but natural.

Also, after a while, it becomes clear that the main impetus for the very elaborate many dimensional space-time structure that string theory proposes is basically to make gravity, which operates in the whole of this elaborate space-time, weak relative to the Standard Model forces.

String theory also seems to unduly reify the stress-energy tensor of general relativity, which is really just an algebraic accounting tool for keeping track of all of the mass and energy contributions to gravity at a point, with gravity then responding to the type of these mass-energy contributions as well as to their absolute value. Space-time in general relativity twists and turns like the matter-energy fluctuations that give rise to it, rather than simply attracting with a scalar field proportional to the absolute value of the mass-energy stuff at a point, as Newtonian gravity does. But it seems like we ought to be looking for a model in Einsteinian four dimensional space (in which the special relativistic boost factors of momentum are represented by three components of their own in general relativity) that recognizes that all the elements of the stress-energy tensor flow out of different actions in four dimensional space, rather than arbitrarily compactifying extra dimensions.

Surely there must be some way to simply state a general relativistically consistent variation of the Standard Model to embed in ordinary four dimensional General Relativity, perhaps formulated in a more parallel way, since many string theories seem to confine the Standard Model portion of the theory to a four dimensional brane or manifold or extended set of dimensions that it can't depart from anyway.

In other words, while String Theory is attractive largely as a means to unify fundamental physics, the glue holding the parts together shows if you look closely, in addition to which the theory predicts esoterica that we don't see and doesn't predict anything new that we do see.

String Theory Is Fundamentally A Generalization Of SUSY

It simply isn't obvious to me that we need to resort to SO(32) Lie algebras and groups in eleven dimensions in order to figure out how the Standard Model and/or General Relativity need to be reformulated to make the combined theory consistent.

I suspect that the problems with the Standard Model which string theory solves with supersymmetry have another solution that is different in kind and flows from some subtle point in the Standard Model that is not quite right in the way we have formulated it so far.

We've probably got something not quite right in our equations, or in the way that we are manipulating them, that makes terms that actually cancel out seem like they don't, because they are mere approximations that don't hold too far from the numerical values from which they were derived in the first place.

Put another way, historically supersymmetry came first, and string theory was invented to reveal deeper connections and implications resting in SUSY, which is all well and good if SUSY is solving the problems of the Standard Model in the right way. But a theory of everything built to integrate SUSY with General Relativity makes no sense if SUSY is the wrong solution to the problems with the Standard Model that it addresses.

Why SUSY (Or Technicolor)?

What are those motivations for SUSY?

In a nutshell, per Wikipedia:

It is motivated by possible solutions to several theoretical problems. . . .

If supersymmetry exists close to the TeV energy scale, it allows for a solution of the hierarchy problem of the Standard Model, i.e., the fact that the Higgs boson mass is subject to quantum corrections which — barring extremely fine-tuned cancellations among independent contributions — would make it so large as to undermine the internal consistency of the theory. . . . Other attractive features of TeV-scale supersymmetry are the fact that it allows for the high-energy unification of the weak interactions, the strong interactions and electromagnetism, and the fact that it provides a candidate for Dark Matter and a natural mechanism for electroweak symmetry breaking. . . . Another theoretically appealing property of supersymmetry is that it offers the only "loophole" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories like the Standard Model under very general assumptions. . . . In general, supersymmetric quantum field theory is often much easier to work with, as many more problems become exactly solvable.
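To put a rough number on the fine-tuning described in the quoted passage, here is a minimal back-of-the-envelope sketch, assuming the standard one-loop top quark contribution to the Higgs mass-squared and an illustrative cutoff of 10 TeV (both the formula choice and the cutoff are my own assumptions for illustration, not something asserted in the quote):

```python
import math

# One-loop top quark correction to the Higgs mass squared (standard estimate):
# delta_mH^2 ~ -(3 * y_t^2 / (8 * pi^2)) * Lambda^2
y_t = 1.0          # top Yukawa coupling, roughly 1
cutoff = 10e3      # assumed cutoff Lambda in GeV (10 TeV, illustrative)

delta_mH_sq = -(3 * y_t**2 / (8 * math.pi**2)) * cutoff**2
print(f"one-loop correction to mH^2: {delta_mH_sq:.3e} GeV^2")
print(f"square root of its magnitude: {abs(delta_mH_sq)**0.5:.0f} GeV")
# Roughly 1,950 GeV, versus an expected physical Higgs mass of order 100-200 GeV,
# so independent contributions would have to cancel to roughly one part in a few
# hundred even for this modest cutoff, and far more finely for higher cutoffs.
```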

Technicolor, too (at this point "walking technicolor," since trivial versions of the theory have been ruled out by experiment), seems to be trying to solve the same problems as SUSY, rather than considering that the problems themselves may be artificial consequences of a not quite right formulation of the forces and mass generation mechanisms whose apparent flaws motivate it.

Reasons To Doubt SUSY and Technicolor

But for the TeV scale pathologies of the Standard Model equations as currently formulated, nobody would consider SUSY or Technicolor and we would be left with a far less elaborate theory to figure out how to embed in general relativistic gravity.

Some of these theoretical motivations are looking increasingly less convincing.

Warm dark matter theory, which appears to work better than cold dark matter, calls for a dark matter candidate with properties, like a keV scale mass, that supersymmetry does not naturally provide. The LENS experiment that is in the process of being set up is designed to directly detect dark matter if it is in this mass range (perhaps, for example, as a composite particle made up of neutrinos).

The high energy unification of the three forces of the Standard Model may be a category error (i.e., the three forces may not operate at energy ranges great enough to allow them to converge), or may be a product of something as subtle and technical as the proper non-linear formulation of one or more of the three beta functions that describe how those coupling constants run with energy.
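For a sense of what "not quite coinciding" looks like, here is a minimal sketch of the standard one-loop running of the three inverse gauge couplings in the plain (non-supersymmetric) Standard Model; the starting values and one-loop coefficients are the usual textbook ones, and the precise numbers should be treated as illustrative:

```python
import math

# One-loop running: 1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i / (2*pi)) * ln(mu/MZ)
MZ = 91.19                        # GeV
inv_alpha_MZ = [59.0, 29.6, 8.5]  # approx. 1/alpha_1 (GUT-normalized), 1/alpha_2, 1/alpha_3 at MZ
b = [41 / 10, -19 / 6, -7]        # one-loop Standard Model beta coefficients

def inv_alpha(mu):
    t = math.log(mu / MZ)
    return [a0 - bi / (2 * math.pi) * t for a0, bi in zip(inv_alpha_MZ, b)]

for mu in (1e3, 1e10, 1e13, 1e15, 1e16):
    print(f"mu = {mu:8.1e} GeV  1/alpha = " +
          ", ".join(f"{x:5.1f}" for x in inv_alpha(mu)))
# The three lines approach each other around 10^13-10^16 GeV but never quite
# meet at a single point, which is the near-miss unification referred to above.
```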

Our difficulty in solving the existing equations may mean that we lack a helpful component in our mathematical tool kit to work with the equations that we have, rather than that the Standard Model is fundamentally flawed. Some mathematician may come up with the next Fourier transform or Laplacian equation or Hamiltonian or finite formula equivalent to some class of infinite series, and with this one new trick we may be able to calculate with these equations much better.

After all, we've seen this happen before. The practical possibility of using QED and the unified electroweak force equations relies heavily on the mathematical trick of renormalization, which Feynman, one of its creators, believed to his dying day contained some subtle lack of rigor that made it not perfectly valid. If some mathematician could figure out in what way renormalization is not rigorous and slightly tweak it to resolve that issue, our calculation problems and TeV instabilities might be promptly resolved, and the new formulation of the renormalization process might even provide us with some deeper insights. This is not necessarily a hopeless effort.

For example, numerical approximations have confirmed in the last few years that a non-perturbative solution to the QED equations does not contain the Landau pole that appears at very high energies in the ordinary renormalized calculations. See, e.g., here ("In the case of the asymptotic behavior β(g) ~ g, the Landau pole is absent and internal limitations on the applicability of the Standard Model implied in this estimate really do not exist.") and here.
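For context, the Landau pole in question is the scale at which the textbook one-loop (perturbative) running of the QED coupling formally blows up. A minimal sketch of that standard estimate (purely the perturbative formula; it says nothing about the non-perturbative behavior discussed in the linked papers):

```python
import math

# One-loop QED running: alpha(mu) = alpha / (1 - (2*alpha/(3*pi)) * ln(mu/m_e))
# The denominator vanishes (the "Landau pole") at ln(mu/m_e) = 3*pi/(2*alpha).
alpha = 1 / 137.035999   # fine structure constant at low energy
m_e = 0.511e-3           # electron mass in GeV

log_ratio = 3 * math.pi / (2 * alpha)
mu_pole = m_e * math.exp(log_ratio)
print(f"one-loop Landau pole at roughly {mu_pole:.1e} GeV")
# Around 10^277 GeV, absurdly far above any physical scale; the point in the
# text is that this pole appears to be an artifact of the perturbative
# expansion and is absent in non-perturbative treatments.
```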

The fine tuning of the Higgs boson mass that is present in the current equations may exist because we have the wrong formula for calculating the mass of a particle we haven't even observed yet, or because the Higgs mechanism is not how mass is generated, not because the theory lacks supersymmetric particles.

LHC experiments have put increasingly high minimum mass limits on particles predicted by SUSY and Technicolor, which pushes these theories into less natural parameter spaces.

My suspicion is that a lot of the pathologies that SUSY and Technicolor are trying to solve, fundamentally, have something to do with the formulation of the Standard Model, or perhaps the formulation of General Relativity, not being quite right in a way that has few phenomenological implications in practical settings but has theoretical rigor issues that prevent an elegant unification.

For example, it may very well be that there is a non-perturbative way to do QED without renormalization that we have not yet discovered that rids it of its theoretical pathologies and insecurities and also happens to be more easily formulated in a general relativistic rather than Minkowski special relativistic background.

Doubling the number of particles and adding a large number of free parameters to those found in the Standard Model seems like a very high price to pay to prop up an afterthought method for generating particle mass whose formula may simply not balance properly, and to break an electroweak symmetry that is put into the unification "by hand" in a not entirely elegant or natural way in the first place. Maybe electromagnetism and the weak force shouldn't be unified at all, and it is merely possible to do so because of some similarity among all three forces that would be understood at a deeper level.

What The Loop Quantum Gravity Program May Reveal

Attempts to embed the Standard Model into loop quantum gravity, which provides an emergently four dimensional space-time background, look like a more fruitful avenue to explore than String Theory.

It could be that space-time really is analog rather than digital, and that loop quantum gravity is therefore simply wrong. But even if space-time is continuous rather than discrete, if loop quantum gravity motivates even a non-unified, discrete-space-time-consistent variant of the Standard Model, that formulation may make clear in what ways some parts of the existing Standard Model formulation are flawed, and the new and improved formulation that is consistent with General Relativity in the first place may be easier to unify into a grand unified theory.

Resolving these subtle issues may also resolve existing issues in the Standard Model which leave too many unexplained coincidences and question marks, like the almost pattern fitting mass matrix, the not quite coinciding running gauge coupling constants of the three Standard Model forces, quark-lepton complementarity, the prospect of unpredictable high energy electroweak behavior due to theoretical instabilities, and the unexpectedly small number of parameters needed to specify the CKM and PMNS matrixes.

In other words, we should fix the theoretical pathologies in the Standard Model to the extent possible before we try to embark on grand unification or a theory of everything, and the loop quantum gravity program seems to be advancing the cause of fixing those pathologies.

We Don't Understand Mass Well

Let's face it. We really don't understand the source of mass in the Standard Model well at all.

We are just beginning to come to terms with the way in which the Yang-Mills equations for QCD turn bare quarks and gluons, components that individually have almost no mass, into protons and neutrons and other hadrons in which the interactions give rise to something like 90% to 99% of the baryonic mass in the universe. But we are almost certain that this mass arises from the QCD equations, not primarily from a Higgs mechanism. A similar mechanism for mass generation in neutrinos, which might be superluminal, was explored in an October 2010 paper.

We don't have a good explanation for the reality that all weak force interacting particles have mass while all particles that don't interact through the weak force lack mass.

We can't explain why the fermions of the Standard Model show the mass relationships to each other that they do, despite the fact that there is clearly some sort of pattern involved, and we can even use Koide's formula to show what appears to be an exact theoretical relationship among the three charged lepton masses.
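For readers who want to check the Koide relation for themselves, here is a minimal sketch using approximate charged lepton masses (the 2/3 target is exact in Koide's formula; the mass values are rounded, so treat the output as illustrative):

```python
from math import sqrt

# Koide's formula: Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2
# The empirical claim is that Q is almost exactly 2/3 for the charged leptons.
m_e, m_mu, m_tau = 0.510999, 105.6584, 1776.86  # MeV, approximate

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(f"Q = {Q:.6f}  (2/3 = {2/3:.6f})")
# Prints Q ~ 0.666661, strikingly close to 2/3.
```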

There is a striking apparent connection between the Einstein-Hilbert general relativity equations in certain formulations and a certain formulation of the square of a Yang-Mills equation that seems to parallel seeming relationships between the square root of the fermion mass matrix and the CKM and PMNS matrixes.

We can't definitively resolve the question of whether dark matter effects arise from the highly predictive modified gravity theories, or from warm dark matter particles of keV masses with neutrino-like properties that aren't a match to any theoretically and experimentally well supported particle from high energy physics (new experimental evidence strongly disfavors the traditional cold dark matter theory relative to either of the other options). Likewise, there is more than one mechanism that can explain dark energy.

New Neutrino Physics Discoveries

We just discovered pretty definitively that the neutrino has mass. We are still working hard to determine whether neutrinos acquire mass by a different mechanism than the Higgs mechanism we've tried to use to explain it in other fermions. We have also determined, as one would expect in a massive neutrino scenario, that there are probably at least three different neutrino masses, one for each of the three generations. But some of our data show closer fits to there being four or five experimentally detectable neutrino types, which is not a good fit to the canonical "periodic table" of fundamental fermions at all. Even problems deeply intertwined with mass in general relativity and special relativity, like the speed of light relative to the maximum speed of a massive neutrino, have suddenly started looking like islands of mystery instead of the settled points of certainty that they seemed to be just a year ago.

All of these new developments in neutrino physics within the last few years create new problems for the Standard Model that make the aesthetic problems that motivate SUSY look trivial. Most of these new developments are deeply intertwined with what we don't understand about mass generation in the Standard Model and the Higgs mechanism as currently formulated in the Standard Model doesn't really squarely address these questions.

Saving Physics With Neutrino Condensates

One suggestion for several of the problems that are addressed by SUSY and Technicolor is that the much sought after Higgs boson, electroweak symmetry breaking, and perhaps dark matter as well, can be described by adding right handed neutrinos to the Standard Model and assuming composite mass entities made up of neutrinos in a superfluid state called a neutrino condensate. Adding right handed neutrinos to the Standard Model is a well motivated extension that does little to upset its basic structure (although we have yet to see any hard evidence that these particles even exist), and condensed matter physics provides ample tools to analyze these states.

It might even be possible for these condensates to arise due to ordinary gravity. Neutrino condensate theory also suggests that neutrinos may uniquely lead to Lorentz symmetry violations. It has even been suggested that gravity itself could arise from weak force interactions of relic neutrino condensates left over after the Big Bang that also dynamically give rise to gravitons as Goldstone bosons. Versatile creatures that neutrino condensates are, they can also be used to explain dark energy and the Hubble constant from first principles, or to provide a cause for cosmological inflation and explain why we have more matter than antimatter in the universe. Neutrino condensates could also help explain the internal workings of neutron stars, neutral kaon decays, and top quark decays.

Some of the neutrino condensate predictions associated with these theories might be possible to examine experimentally with precise measurements of low energy beta decays and of neutron star behavior. While hardly cheap, either kind of experiment would probably be much less expensive to build and operate than a successor to the LHC. Like efforts to observe neutrinoless double beta decay in radioactive elements, this is medium budget physics instead of big budget physics.

My point is not necessarily to argue that neutrino condensates are the answer to every unsolved physics question out there. Instead, it is to illustrate that it is conceptually possible for a relatively minor tweak to the Standard Model (adding just three fermions that already seem to be "missing" in the current fundamental fermion chart, and possibly no new fundamental bosons, although some versions of these theories do require a new neutrino specific interaction), when supported by a bit of analysis based on existing facts we know about condensed matter physics, to conceivably solve the Standard Model pathologies that motivated SUSY in a far more parsimonious manner.

Even if neutrino condensates are experimentally proven not to exist, or not to have the effects predicted, the fact that a tweak of this kind could solve many Standard Model pathologies suggests that physicists who limit their set of potential solutions to variants on SUSY and Technicolor have their blinders on too tightly and need to be more open to other potential remedies to these issues.

Tuesday, September 27, 2011

Latest Maybe Higgs Bumps

There are bumps in the LHC diphoton data that could be a Higgs (under 3 sigma) at 118-120 GeV mass and around 140 GeV in the latest data released today. Nothing really striking, but for the physics groupies out there who are starving for a fix of some information about the main event in the world's biggest atom smasher and repeatedly finding themselves disappointed, that is the latest. Patient readers can wait another few months to a year and get far more definitive results, but then you won't be the first on your block to know.

The 119 GeV bump is the more interesting one, because the experiment is incapable of providing a very strong signal there right now, even if there is a particle at that mass that has not yet been detected. The similarly mild 140 GeV bump is less impressive because the experiment has far more power in that region so a ho-hum signal where there should be a screaming clear indication is more likely to be a fluke or miscalculated bit of background noise.

Monday, September 26, 2011

Waves Of Asian Migration and Elsewhere

Recent insights into autosomal DNA lineages with Denisovian admixture have renewed discussions about multiple wave migrations into Asia. John Hawks sums up this model:

This model depicts (a) an early divergence of an African (represented by Yoruba) and Asian/Australasian populations. These mix with first Neandertals and then (for the Australian/New Guinea/Mamanwa populations) with Denisova-like people. Later (b), after the initial habitation of the Philippines by the ancestors of Mamanwa, a population like Andamanese Onge pushes into the islands, mixing with the ancestors of New Guinea and Australian populations. Later still (c), a population ancestral to today's Chinese people mixes with Philippines and other Southeast Asian people.

This is not really particularly new, however. A 2006 analysis of mtDNA in Malaysia concluded that: "Phylogeographic analysis suggests at least 4 detectable colonization events . . .respectively dated to over 50,000 years ago, ∼10,000 years ago, the middle Holocene, and the late Holocene." Similarly, a 2008 paper on Y-DNA lineages in Asia also suggested multiple waves of migration.

What the Denisovian ancestry data most help establish is the order of the layers and their geographic extent. This data set suggests that the wave associated with Papuans, Australians, and the earliest Philippine Negritos preceded the wave that brought the ancestors of the Andamanese, Malaysian Negritos, and probably the Ainu as well (associated in part with Y-DNA haplogroup D). At least one other major wave of migration followed these two waves in Asia prior to the Neolithic era.

The first wave in Asia is distinctively non-African. The second, associated as it is to some extent with Y-DNA haplogroup D, might be more closely related to the dominant Y-DNA haplogroup of much of Africa, which is E, than the first wave is. The third pre-Neolithic wave in Asia, however, which is the greatest ancestral contribution to the Chinese, is also distinctly non-African.

Genetics points to both Neanderthal admixture at a point in time shared by all non-Africans, and subsequent Denisovian admixture with first wave Asians - although it isn't entirely clear which extinct species of hominin to associate with the Denisovians. And the exact place of Homo floresiensis, a hominin species that co-existed with modern humans until 18,000 years ago or perhaps even more recently in Flores, Indonesia, is not entirely clear. Recent inferences from African genetics, meanwhile, are suggestive of admixture of probably two archaic African populations with the Khoisan and Pygmy peoples in Africa, quite possibly as recently as the dawn of the Holocene. So there may be evidence for as many as five different populations of archaic hominins co-existing with and admixing with modern humans in the last 100,000 years or so.

The Major Continental Divisions

European population genetics shows a narrower range of lineages than is seen in Asia, reflecting a subset of ancestral South Asian genetic diversity. Europe has two of the three main non-African Y-DNA types and one of the two main non-African mtDNA types.

The relatively full fledged mix of Y-DNA and mtDNA haplogroups that made it to Australia and Papua also suggests considerable time for the development of this DNA structure, either in India or elsewhere in Eurasia, after the population emerged from a common L3* group and before dispersing widely, during which time Neanderthal genes were also acquired.

The Major Divisions of mtDNA

All ancestral non-Africans have mtDNA from macrohaplogroups M or N (which includes R) which are likely to have originated in South Asia, and both of which derive from African haplogroup L3*, with roots probably in East Africa.

But, all of the West Eurasian mtDNA haplogroups (with the exception of a couple like M1 that show signs of early Holocene or later back migration) derive only from mtDNA macrohaplogroup N which is found in both Europe and Asia.

The second wave of Asian migration, associated with the Andamanese, Malaysian Negritos, Paleo-Tibetans and Ainu, appears from strong circumstantial evidence to have been composed predominantly or exclusively of people belonging to mtDNA macrohaplogroup M.

The largely private mtDNA haplogroups of Africa are lumped into macrohaplogroup L. L2 and L3 are the predominant mtDNA haplogroups of black Africans, with L2 having a more West African, and L3 a more East African, orientation. L0 is roughly associated with the Khoisan, and L1 is roughly associated with the Pygmies. L4 has a strongly East African affinity. L5, which is a lineage more basal than L2 but less so than L1, is particular to minorities in Sudan, Ethiopia, and the Congo. L4, a sister clade to L3, has an East African distribution from Sudan to a maximum concentration in Angola.

The Major Y-DNA Divisions

Y-DNA macrohaplogroups C and D are likewise found in Asia, but not in West Eurasian populations, where all Y-DNA haplogroups (with exceptions for relatively recent migration) seem to derive from Y-DNA macrohaplogroup F. Macrohaplogroup F probably originated in South Asia and is found in both West and East Eurasia, with Europe a "receiver" of its lineages. Haplogroup C probably originated somewhere in Asia. Haplogroup D's place of origin, given its erratic Asian distribution, is more obscure, as its sister haplogroup E shows strong signs of African origins.

As noted before, C and F are distinctively non-African lineages somewhat more closely related to each other than they are to D and E; D is Asian and E is African, and D is associated with a second wave of Asian migration. A few subtypes of Y-DNA haplogroup E, which is found across Africa and is dominant in much of sub-Saharan Africa, particularly in West African and Bantu populations, are also found in Southwest Asia and Southern Europe, but the phylogeny clearly indicates that these are isolated migrants to neighboring regions.

Macrohaplogroups A and B, the most basal Y-DNA types, are associated with the preagricultural peopling of sub-Saharan Africa according to a 2011 paper. The C, D, E and F lineages all break off from the B branch of Y-DNA phylogeny.

[W]e carried out a phylogeographic analysis of haplogroups A and B in a broad data set of sub-Saharan populations. These two lineages are particularly suitable for this objective because they are the two most deeply rooted branches of the Y chromosome genealogy. Their distribution is almost exclusively restricted to sub-Saharan Africa where their frequency peaks at 65% in groups of foragers. . . . [T]heir subclades reveals strong geographic and population structure for both haplogroups. This has allowed us to identify specific lineages related to regional preagricultural dynamics in different areas of sub-Saharan Africa. In addition, we observed signatures of relatively recent contact, both among Pygmies and between them and Khoisan speaker groups from southern Africa, thus contributing to the understanding of the complex evolutionary relationships among African hunter-gatherers. Finally, by revising the phylogeography of the very early human Y chromosome lineages, we have obtained support for the role of southern Africa as a sink, rather than a source, of the first migrations of modern humans from eastern and central parts of the continent.

The Americas, of course, were peopled from Asia via the Bering Strait ca. 17,000 years ago when there was a land bridge, possibly by a population with an initial effective size as small as 70 reproducing adults, who may have spent a period of time isolated in Beringia from later arrivals. This group's origins were mostly from the general direction of China, although a minority of Paleosiberian ancestry was also present. There were at least a couple of subsequent waves of major population upheaval in the circumpolar area of the Americas after the initial peopling, and genetic diversity was considerably reduced in what would become Latin America relative to North America by serial founder effects.

All of this, moreover, fails to reflect the massive reshufflings that would take place in the Neolithic.

The Search For The Denisovians

New data on traces of Denisovian DNA in Asia make the effort to link that pattern to the hominin fossils in the region more interesting.

The Denisova cave DNA is dated to ca. 40,000 years ago, and admixture with hominins with similar DNA is thought to have occurred ca. 75,000-45,000 years ago in Southeast Asia. The earliest hominins in Asia, Homo erectus, date to not quite 2,000,000 years ago. There is some evidence for intermediate types in Indonesia and China, and the presence of Homo floresiensis from ca. 100,000 to 18,000 years ago in the same general region of Indonesia where Denisovian ancestry is found argues that Homo floresiensis may have been a relict dwarf population of Denisovians, as they are an archaic hominin group in the right place at the right time.

Estimates from genetics are that Neanderthals and Denisovians had a common ancestor ca. 1,000,000 years ago, around the time that intermediate species between Neanderthals and Homo erectus emerge, such as Homo antecessor, Homo heidelbergensis, and Homo rhodesiensis - the extent to which these archaic hominins are different species from each other or from Homo erectus is disputable. If these intermediate hominins in (mostly) Europe are the predecessor species for Neanderthals, and there was a single Homo erectus species before that point, the emergence of these intermediate species could be the moment of the Denisovian-Neanderthal split.

The closest in time, arguably intermediate hominin fossil in Asia other than Homo floresiensis is the Dali fossil of China. Ngandong 7 in Indonesia, the Hexian fossil from China, Sangiran 17 in Indonesia, Peking Man, and Java Man all have dates consistent with potentially being Denisovians (1,000,000 to 40,000 years ago), or at least ancestors of Denisovians. The later specimens in Indonesia and China, particularly those with characteristics somewhat diverged from the earlier Homo erectus fossils, are the logical matches to the Denisovians.

Non-Commutative Gravity And The Special Relativity Speed Limit

Lubos has an interesting post on how a 1999 paper on non-commutative geometry could produce results like those seen in the OPERA neutrino speed experiment.

The essence of the argument is that different kinds of particles have different speed limits. The fastest would be the graviton, whose speed limit would be the "c" of special relativity (since gravitons are massless and don't have strong, weak or electromagnetic interactions). Neutrinos and photons would have different speed limits that might be a hair smaller, due to the interaction of a coupling constant and a "B" field of a particular strength that is present in what would otherwise be entirely empty space-time. At an appropriate B field strength, roughly 0.001 in the appropriate units, the peak speed of the photon would be reduced by more than the peak speed of the neutrino, since in this theory a slightly modified Lagrangian has a term that reduces particle speed by a factor of the square of (2*pi*coupling constant*B field strength).

This is essentially a more sophisticated version of the idea that I explored previously: that the interactions of an average photon with the local electromagnetic fields from charged matter and photons in open space (the Earth's magnetic field, radio waves, etc.) could increase the length or time of an average photon's travel over a distance and thereby reduce its effective peak speed. Since the neutrino interacts more weakly than a photon, it would have fewer interactions and a higher effective speed, despite having a mass.

In the same vein, it is notable that the observed deviation is on the order of the square of the anomalous electron magnetic dipole moment, which is, at a first order approximation, the electromagnetic coupling constant (alpha) divided by 2*pi. Thus, (2*pi*alpha)^2 would be just about the right factor.

Of course, if the true value of c for special relativity and general relativity is a bit higher than the measured speed of light due to this correction, then all of the horrible problems that tachyons could cause in physics go away.

I'm not convinced that non-commutative gravity itself is the key, but the notion that a non-true vacuum (due to matter or fields) could slow down photons enough to make them slower than highly energetic neutrinos, at a rate independent of photon energy, does make sense.

OPERA is a pretty simple experiment. If they are wrong, they are wrong about the distance, wrong about the duration of the trip, they have the wrong value of "c", or there are tachyons. Any problem with the first three has to be quite subtle. Any problem with the last is a big problem theoretically.

I'm inclined to think that the actual measurements of distance are right, with the possible exception of problems in the GPS distance formula, particularly the possibility that time dilation effects associated with distance from the center of the Earth are not properly considered. I'm inclined to think that similar time dilation effects could impact the clock synchronization, or that a variety of other effects in the timing of the movement of electricity through the equipment could be at fault.

Lubos sketches out in a post a list of candidates for problems that I largely agree are the most likely suspects for conceptual or experimental problems in the result, although the speed of the signals within the electronic equipment at either end causing systemic issues with the timing is one he discounts more than I would:

•inconsistencies in the whole GPS methodology of measuring space and time coordinates . . .
•subtle old-fashioned physics issues neglected by GPS measurements: the index of refraction of the troposphere and (even more importantly) ionosphere that slows down and distorts the path of GPS signals; confusing spherical Earth and geoid; neglecting gravitational effects of the Alps; neglecting magnetic fields at CERN that distort things; and so on
•forgetting that 2 milliseconds isn't zero and things change (e.g. satellites move) during this short period, too
•subtle special relativistic effects neglected in the GPS calculations
•subtle general relativistic effects neglected in the GPS calculations
•wrong model of where and when the neutrinos are actually created on the Swiss side . . .


This is just a partial list but I feel that most people who have tried to find a mistake will prefer and focus on one of the categories above. Recall that to find a mistake in the Opera paper, you need to find a discrepancy comparable to their signal of 18 meters (60 nanoseconds times c). Some of the proposed mistakes lead to much too big effects relatively to 18 meters and it's therefore clear that Opera hasn't made those errors; on the other hand, some errors and subtle effects are much smaller than 18 meters and may be ignored.

I have completely omitted the technicalities of their timing systems (their local, "lab" properties) because even if they're wrong about them, they're vastly more professional in dealing with them than all of us and we're unlikely to find a mistake here.

He also appropriately notices that systemic errors in GPS which influence accuracy but not precision could be adapted to by ordinary people in a wide range of contexts, much as one might adapt to a slight redefinition of the units that one is using.

But I think that there is a quite decent chance that the error is in the value of "c" used for special relativity limit purposes, a value that prior measurements based on measuring the speed of light failed to capture because the speed of light differs systematically from "c" in the places where we measure it. The canonical value of "c" is here.

Defining Modern Humanity Ecologically

Hominins have been around for millions of years, and for something close to two million years in parts of Eurasia. Modern humans have been around for something like a quarter of a million years, and for something close to 100,000 years in parts of Eurasia (and beyond).

What distinguishes modern humans from archaic hominins? More than anything else, the defining big picture issue has been ecological impact. Modern humans rapidly exterminated megafauna everywhere they spread (African megafauna were presumably not exterminated to the same extent because they co-evolved with modern humans giving them time to adapt to them).

Archaic hominins, in contrast, don't seem to have done more than tweak the mix of top predators in the ecosystem.

Something about the modern human lifestyle's impact on the ecological balance crossed a tipping point that the lifestyles of Homo erectus, the Denisovians, Homo floresiensis, and the Neanderthals had not. Perhaps it was our more advanced tool kit. Perhaps it was our improved hunting tactics and strategies. Perhaps it was our greater level of group cooperation. Whatever the reason, modern humans killed far more big animals than any of our predecessors.

Our inability to stay in ecological balance with megafauna, as Neanderthals apparently did, may also have created the necessity that made a wider range of food sources a survival advantage for modern humans in a way that it was not for archaic hominins.

Sunday, September 25, 2011

Heavy Higgs?

Cea and Cosmai argue in a recent paper entitled "The trivial Higgs boson: first evidences from LHC" that the low expected Higgs boson mass of most current theories is a product of flawed perturbative approximations, and that a non-perturbative approach yields an expectation that the Higgs boson would have a mass of approximately pi times the vacuum expectation value of the Higgs field (246 GeV), suggesting a Higgs mass of about 754 GeV +/- 40 GeV, and that this resonance would have a (very large) 320 GeV width. They argue that this result is not inconsistent with ATLAS results to date from the LHC.
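A quick arithmetic check of the quoted numbers, taking them at face value (this is just the multiplication in the sentence above, not a statement about the paper's actual derivation):

```python
import math

v = 246.0                           # Higgs field vacuum expectation value in GeV
m_low, m_high = 754 - 40, 754 + 40  # quoted mass band in GeV

m_estimate = math.pi * v
print(f"pi * v = {m_estimate:.1f} GeV")                        # about 772.8 GeV
print(f"inside quoted band? {m_low <= m_estimate <= m_high}")  # True: the 714-794 GeV band
```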

Hat Tip to this post at the Gauge Connection blog of Marco Frasca.

Frasca himself, and some of his close colleagues, are looking at the interactions of magnetic fields with low energy chiral QCD behavior using lattice methods. He also notes that sophisticated QCD lattice work is becoming much more affordable as the price of heavy duty computational power falls.

There remain rivalries between different ways to turn the QCD equations into workable computational approximations, so it is hard to know if the predictions from the computer models match the physical reality. But so far as I can tell, often enough the different methods produce the same predictions, and when they do not, the experimental evidence that would distinguish the models is not only hard to come by, but presents questions that are hard even in principle to settle with experiments approaching what is possible, since the interesting results arise in low energy QCD, where confinement obscures our ability to directly observe the predicted behavior.

Saturday, September 24, 2011

Lots of interesting new prehistoric DNA data points

It is becoming more and more clear that there have been multiple episodes of profound population genetic shifts driven by migration in human prehistory. Ancient DNA analysis and more sophisticated techniques for analyzing the entire genomes of large populations have made it possible to get some sense of the broad outline of these migrations, although the data are fragmentary, so it is a big, incomplete puzzle to fit together in time and space, according to rules that we are only beginning to grasp.

* A comparison of the Denisovian ancient genome from Siberia to various world populations confirms the hypothesis that Papuans have the highest proportion of Denisovian ancestry (probably about 7% before admixture with outsiders), that Australian aboriginal populations have the same amount of Denisovian ancestry as Papuans, and that the proportion of Denisovian admixture in other populations is quite tightly aligned with their proportion of Papuan and Australian admixture.

Some Philippine Negrito populations also have elevated proportions of Denisovian admixture (60% of that found in Papuans) relative to their proportion of Papuan and Australian admixture, and some other Filipinos show elevated Denisovian admixture probably attributable to admixture with those Philippine Negrito populations.

Other Asian Negrito, Asian Aboriginal and Indonesian populations have slightly elevated levels of Denisovian admixture, but in these populations the proportion of Denisovian admixture doesn't exceed 0.8% and averages something on the order of half of that percentage or less.

There is little or no Denisovian admixture in other Asian populations, which suggests that there was a wave of modern humans in or around Southeast Asia that acquired Denisovian admixture, and then a much larger and later population wave that did not, from which most modern mainland Asian populations overwhelmingly descend.

* An Australian aboriginal whole genome has been published. "We show that Aboriginal Australians are descendants of an early human dispersal into eastern Asia, possibly 62,000 to 75,000 years ago. This dispersal is separate from the one that gave rise to modern Asians 25,000 to 38,000 years ago. We also find evidence of gene flow between populations of the two dispersal waves prior to the divergence of Native Americans from modern Asian ancestors. Our findings support the hypothesis that present-day Aboriginal Australians descend from the earliest humans to occupy Australia, likely representing one of the oldest continuous populations outside Africa."

The age estimates are a tad old compared to estimates from non-genetic means, which put the first modern humans in Australia and Papua New Guinea ca. 45,000-50,000 years ago; a 62,000-75,000 years ago wave, if there is one, makes sense only on the assumption that it traces back to a predecessor population in South Asia, where one sees modern human traces that old. I'm not convinced that there wasn't a thin wave of modern humans into Asia prior to the Australian/Papuan one, although I'm not convinced that there was a prior wave either.

* Ancient DNA from "the Lower Xiajiadian culture (LXC) population, a main bronze culture branch in northern China dated 4500–3500 years ago, . . . from northern Asia who had lived in this region since the Neolithic period, as well as genetic evidence of immigration from the Central Plain.

Later in the Bronze Age, part of the population migrated to the south away from a cooler climate, which ultimately influenced the gene pool in the Central Plain. Thus, climate change is an important factor, which drove the population migration during the Bronze Age in northern China. Based on these results, the local genetic continuity did not seem to be affected by outward migration[.]"

Query whether this population is better described as "proto-Altaic" than as "proto-Chinese." There is circumstantial evidence that a lot of Bronze Age technology was added to the indigenous Northeast Asian Neolithic technological mix via the Silk Road from Indo-Europeans who were basically European genetically. But this evidence suggests that the technology transfer was largely cultural rather than demic (i.e. technology transfer was not due to population replacement or conquest by an elite minority population).

* Ancient DNA from Neolithic Hungary (i.e. pre-Bronze Age, post-farming and herding) shows a surprisingly large East Asian influence and surprisingly little continuity with modern Hungarian population genetics. As Dienekes comments:

We have genetic discontinuity between Paleolithic and Neolithic, and between Neolithic and present, and, apparently, discontinuity between Neolithic cultures themselves, and wholly unexpected links to East Asia all the way to Central Europe.

When faced with data such as this, one can only say: what the hell happened during European prehistory?

Surprising East Asian genetic links are also found in the Ukraine, as Dienekes discussed in a post on Neolithic and Bronze Age Ukraine.

In truth, the Paleolithic/Neolithic genetic discontinuity in much of Europe (although there is great regional variation within Europe in the extent of this discontinuity) has been apparent for some time from early ancient DNA results. But evidence of a major population shift between the Neolithic and the present, and of East Asian genetic influences in the Neolithic as far west as Hungary, is surprising. The East Asian genetic influence in Europe corroborates the physical anthropology of the late Paleolithic and early Neolithic in the area, where in the "northwest of Eastern Europe Mongoloid component is detected 10000–8000 years ago; in Dnepr–Donetsk tribes, 7000–6000 years ago, and on the territory of Ivanovo oblast (Sakhtysh), 6000–5000 years ago."

Some Analysis

In my view, the post-Neolithic discontinuity and the East Asian influences are both signs of an Indo-European expansion that pushed East Asian influence from Hungary all the way back to the Altai and Mongolia and greatly reshaped the genetic makeup of much of what is now Indo-European Europe, probably mostly in the Bronze Age (perhaps as far back as the Copper Age in core proto-Indo-European areas) and Iron Age. The discontinuities we see in Europe between different areas reflect Bronze Age communities that had not yet succumbed to this wave of expansion, which was just beginning around that point in time.

East Asian components of modern European populations are pretty much limited to Uralic language populations in what look like traces of circumpolar population interactions - and all but a very small number of those components are almost entirely absent from European populations. But this new genetic evidence adds a new twist to the already confusing history of Hungary, which is widely thought to have adopted the Uralic Hungarian language in the Middle Ages as a result of conquering invaders from the east (fleeing forces pushing them out of their homes there) whose ancient DNA has little or no surviving remnant in Hungary today.

Siberia, Eastern Europe and Central Europe seem to be a region particularly impacted by an East to West seesaw. In the Upper Paleolithic, one sees European influences to the West, and also Paleo-Siberian populations ancestral to Native Americans with stronger East Asian affinities that have left just the slightest Yeniseian traces, populations that were probably mostly replaced after the Last Glacial Maximum from both West and East in largely non-overlapping areas. Just before and after the Neolithic revolution arrives in this region, there is a substantial East Asian component in ancient DNA that must have resulted from a population expansion at least 10,000 years ago. The Indo-Europeans seem to push back from the West starting perhaps 6,000 years ago, the Turks push back in the first millennium of the current era, there is a little European pushback with the Slavic expansion, only to be overwhelmed (or at least halted) by the expanding Mongol Empire, and after the Mongol empire collapsed, the Russians expand from West to East again - while the Chinese eventually strengthen their grip over their interior territory, leaving us with the current status quo.

Friday, September 23, 2011

Neutrino Speed and CP Violation

I've suggested in previous posts at this blog that the lack of CP violation in the electromagnetic and strong forces may be fundamentally due to the fact that their force carrying bosons, the photon and the gluon respectively, have no mass and therefore travel in a vacuum at the speed of light, at which speed they do not experience time and hence should not experience an arrow of time.

Neutrinos have mass, but in practice, the mass is so low that neutrinos of any decent energy travel at something extremely close to the speed of light and hence experience the passage of time very, very slowly. This implies, if my hypothesis is correct, that the CP violating phase in the PMNS matrix ought to be very, very small in magnitude, probably to the point of being non-observable. The lack of charge in neutrinos also means that any CP violation is indistinguishable from a P violation by itself or a T violation by itself.

Also, if QCD lattice computations are correct in concluding that gluons have a momentum dependent mass in the infrared, then even though high energy QCD may not be CP violating at an observable level, there might be CP violation in low energy QCD that is extremely hard to observe due to quark confinement.

An approximation of the PMNS matrix as lacking a CP violating phase entirely, whether or not it is truly exact, may therefore be very serviceable. This, in turn, would mean that the matrix should be capable of being approximated accurately with three rather than four parameters, and if the real theta angles in the PMNS and CKM matrixes respectively form unitary triangles, then you can actually describe the CKM matrix with three parameters (two of three thetas and a CP violating phase) and the PMNS matrix with two (two of three thetas and no CP violating phase). Assuming that quark-lepton complementarity is accurate, moreover, both of the PMNS matrix parameters can be precisely determined from the three CKM matrix parameters alone. Thus, a theoretically possible eight parameters in the PMNS and CKM matrixes may actually be describable with just three parameters due to the interrelationships of the matrixes.
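One common statement of quark-lepton complementarity is that the corresponding CKM and PMNS mixing angles sum to roughly 45 degrees. Here is a minimal sketch of the kind of relationship being invoked, using rough CKM angle values; both the simple 45 degree rule and the specific numbers are illustrative assumptions rather than anything established above:

```python
# Quark-lepton complementarity (illustrative form): theta_ij(CKM) + theta_ij(PMNS) ~ 45 deg
ckm_angles = {"theta12": 13.0, "theta23": 2.4, "theta13": 0.2}  # degrees, approximate

pmns_predicted = {name: 45.0 - angle for name, angle in ckm_angles.items()
                  if name != "theta13"}  # the rule is usually stated for 1-2 and 2-3 mixing
print(pmns_predicted)
# {'theta12': 32.0, 'theta23': 42.6} -- in the same ballpark as the measured
# solar (~34 deg) and atmospheric (~45 deg) PMNS angles, which is the sense in
# which the PMNS parameters might be derivable from the CKM parameters.
```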

The even bigger step is that there is good reason to think that the mass ratios of the fermions may have a functional relationship to the elements of the CKM and PMNS matrixes, which would suggest that one could describe the 12 fermion mass parameters and 8 CKM/PMNS matrix parameters in the Standard Model with just four parameters (two of three CKM theta angles, one CP violation phase in the CKM matrix, and one mass parameter from which the others are derived; the W/Z boson masses can be derived from these other parameters). Of course, since the functions are related to each other, there are multiple possible parameterizations with that many degrees of freedom. Still, eliminating 11 experimentally determined fermion mass parameters and five CKM/PMNS parameters from the Standard Model, if it could be accomplished, would be a profound leap and would have the practical effect of making it possible to use theory to exactly determine parameters that can only be estimated roughly right now. This is particularly an issue in QCD, because the lighter quark mass estimates are very rough, and this leaves QCD with a far less solid foundation upon which to do calculations with the equations that have so far proved to be accurate.

One also needs three coupling constants in the Standard Model (for the electromagnetic, strong and weak forces), bringing you to seven parameters that must be experimentally fitted, and one probably needs to have at least one experimental fit parameter for each of the beta functions for the running of those coupling constants, although fundamental explanations for those functions have been suggested.

On the other hand, truly extreme CP violation in neutrinos, which is hard to detect due to their lack of charge, could look like no CP violation at all, but could perhaps lead to superluminal outcomes.

More on Superluminal Neutrinos

The vixra blog has the most comprehensive update on today's official announcement and scholarly pre-print from the OPERA experiment that their data supports an inference that neutrinos are moving at speeds faster than the speed of light.

The Effect Observed

The news is now officially out with a CERN press release and an arxiv submission at http://arxiv.org/abs/1109.4897. The result they have obtained is that the neutrinos arrive ahead of time by an amount 60.7 ns ± 6.9 ns (statistical) ± 7.4 ns (systematic). On the face of it this is a pretty convincing result for faster than light travel, but such a conclusion is so radical that higher than usual standards of scrutiny are required.

The deviation for the speed of light in relative terms is (v-c)/c = (2.48 ± 0.28 ± 0.30) x 10^-5 for neutrinos with an average energy of 28.1 GeV. The neutrino energy was in fact variable and they also split the sample into two bins for energies above and below 20 GeV to get two results.

13.9 GeV: (v-c)/c = (2.16 ± 0.76 ± 0.30) x 10^-5

42.9 GeV: (v-c)/c = (2.74 ± 0.74 ± 0.30) x 10^-5

These can be compared with the independent result from MINOS, a similar experiment in the US with a baseline of almost exactly the same length but lower energy beams.

3 GeV: (v-c)/c = (5.1 ± 2.9) x 10^-5

. . .

We also have a constraint from supernova SN1987A where measurement of neutrino arrival times compared to optical observation sets |v-c|/c < 2 x 10^-9 for neutrino energies in the order of 10 MeV.
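As a quick sanity check on the quoted numbers (a sketch assuming a baseline of roughly 730 km, which is approximately the published CERN to Gran Sasso distance), the 60.7 ns early arrival, the quoted (v-c)/c, and the "18 meters" figure that comes up in discussions of possible errors are all consistent with each other:

```python
c = 299_792_458.0        # m/s
baseline = 730_000.0     # m, approximate CERN to Gran Sasso distance
early_arrival = 60.7e-9  # s, reported early arrival of the neutrinos

flight_time = baseline / c
print(f"light flight time: {flight_time*1e3:.3f} ms")         # about 2.435 ms
print(f"(v-c)/c ~ {early_arrival / flight_time:.2e}")         # about 2.5e-5
print(f"distance equivalent: {early_arrival * c:.1f} m")      # about 18.2 m
```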

Note that an electron neutrino at rest is believed to have a rest mass of not more than about 1 eV. So the supernova neutrinos have a relativistic kinetic energy to rest mass ratio of about 10,000,000 to 1, while the Earth based experiment neutrinos have kinetic energy to rest mass ratios ranging from about 3,000,000,000 to 1 up to about 43,000,000,000 to 1.

Kinetic energy in special relativity is mc^2*((1 - v^2/c^2)^(-1/2) - 1), where m is rest mass, c is the idealized speed of light and v is velocity. So the differences in speed between kinetic energy to rest mass ratios of 10,000,000 to 1, 3,000,000,000 to 1, and 43,000,000,000 to 1 correspond to far smaller differences in velocity. In general, the greater the energy, the smaller the difference in velocity between one energy level and a higher one.
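To make that concrete, here is a minimal numerical sketch of how small the expected velocity deficit is for the energy to rest mass ratios just mentioned, using the ultra-relativistic approximation 1 - v/c ≈ 1/(2*gamma^2), where gamma is essentially the kinetic energy to rest mass ratio at these energies (the 1 eV rest mass is the rough figure assumed above):

```python
# For a particle with Lorentz factor gamma, v/c = sqrt(1 - 1/gamma^2), and for
# gamma >> 1 this gives 1 - v/c ~ 1/(2*gamma^2).  The approximation is used
# because the exact expression underflows ordinary double precision here.
ratios = {
    "supernova ~10 MeV vs 1 eV": 1e7,
    "MINOS-like ~3 GeV vs 1 eV": 3e9,
    "OPERA-like ~43 GeV vs 1 eV": 4.3e10,
}
for label, gamma in ratios.items():
    deficit = 1 / (2 * gamma**2)
    print(f"{label:28s} 1 - v/c ~ {deficit:.1e}")
# Roughly 5e-15, 6e-20 and 3e-22 respectively: all vastly smaller in magnitude
# than the reported 2.5e-5 excess, which is why an effect of that size cannot
# be a simple consequence of a small neutrino rest mass.
```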

As a comment at the post notes: "For CNGS neutrino energies, ⟨E⟩ = 17 GeV, the relative deviation from the speed of light c of the neutrino velocity due to its finite rest mass is expected to be smaller than 10^-19, even assuming the mass of the heaviest neutrino eigenstate to be as large as 2 eV [4]. Ch. Weinheimer et al., Phys. Lett. B 460 (1999) 219; but this one is old, used here in this new paper?" Thus, we do not naively expect to see any measurable deviation from "c" in neutrino speed for neutrinos of this energy at all, in this experimental setup, and probably don't even expect to see a difference that is measurable for the 10 MeV neutrinos of the supernova observation.

If c is truly about 1+3*10^-5 times the value for c used in these calculations, and if the very high energy earth based measurements are only infinitesimally and experimentally invisibly different from c across that entire energy range, while the lower energy supernova based measurement is a notch lower than c, then one could in principle infer the mass of the electron neutrino from the difference, and that inferred mass is about right given measurements based on other methodologies.

Theoretical Analysis

If we believe in a tachyonic theory, with neutrinos of imaginary mass the value of (v-c)/c would decrease in inverse square of the energy. This is inconsistent with the results above where the velocity excess is more consistent with a constant independent of energy, or a slower variation. . . . For smaller energies we should expect a more significant anomaly . . . perhaps the energy dependence is very different from this expectation.

So if this is a real effect it has to be something that does not affect the cosmic neutrinos in the same way. For example it may only happen over short distances or in the presence . . . a gravitational field. It would still be a strong violation of Lorentz invariance of a type for which we do not really have an adequate theory. . . .

The most striking thing for me was the lack of any energy dependence in the result, a confirmation of what I noted this morning. The energy of the neutrinos have a fairly wide spread. If these were massive particles or light being refracted by a medium there would be a very distinct dependence between the speed and the energy of the particles but no such dependency was observed. . . .

Most physical effects you could imagine would have an energy dependence of some sort. A weak energy dependence is possible in the data but that would still be hard to explain. On the other hand, any systematic error in the measurement of the time or distance would be distinguished by just such a lack of energy dependence.

The only physical idea that would correspond to a lack of energy dependence would be if the universe had two separate fixed speeds, one for neutrinos and one for photons. I don’t think such a theory could be made to work, and even if it did you would have to explain why the SN1987A neutrinos were not affected. I think the conclusion has to be that there is no new physical effect, just a systematic error that the collaboration needs to find.

There are several theoretical concepts that make the most sense to me, if the effect is real:

(1) the notion that non-speed-of-light paths (both faster and slower than light), whose importance in the photon propagator is proportional to the inverse of "the Interval" (i.e. the deviation of the squared space distance less the squared time distance), could carry over to the neutrino's quantum mechanical amplitude to appear at new locations (this could flow from inherent uncertainty in time and space, or from not quite perfect locality in time and space - we might be learning that there are, on average, 20 meters worth of "wormholes" over a ~730 km path, and the law of averages would make the long range deviation much smaller than the short range deviation),

The problem with that idea, however, is that this effect should influence only the statistical variability of the observed speed, not the average speed observed.
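A toy Monte Carlo of that caveat (the 20 meter segment size and the size of the per-segment fluctuation are made-up illustrative numbers, not anything derived from the experiment):

```python
import random
import statistics

# Toy Monte Carlo of that caveat: if each short segment of the path had a
# small, symmetric, zero-mean fractional speed fluctuation, the
# path-averaged excess would have mean ~0 and a spread shrinking like
# 1/sqrt(N).  The 20 m segment size and the per-segment fluctuation are
# made-up illustrative numbers.

random.seed(0)
n_segments = 730_000 // 20   # hypothetical ~20 m segments over ~730 km
sigma = 1e-3                 # hypothetical per-segment fractional fluctuation

trials = []
for _ in range(100):
    mean_fluctuation = sum(random.gauss(0.0, sigma)
                           for _ in range(n_segments)) / n_segments
    trials.append(mean_fluctuation)

print("mean path-averaged excess    :", statistics.mean(trials))
print("spread of that excess        :", statistics.stdev(trials))
print("expected spread sigma/sqrt(N):", sigma / n_segments ** 0.5)
```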

(2) the possible effects of a gravity field that insert general relativity effects into the mix. In general relativity, time moves more slowly the deeper you are in a gravity well. The neutrinos are traveling through that gravity well below the Earth's surface. The GPS synchronization signals and the precision distance measurements are made by light in a shallower part of the Earth's gravity well, where time passes somewhat faster. Depending on the specifics of the synchronization and precision distance measurement layout, one can imagine that general relativistic differences in the rate at which time passes, due to gravitational field strength, cause a systematic underestimate of distance and of elapsed time in the relevant reference frames. While the calculations aren't quite back-of-the-napkin, an order of magnitude estimate of this effect should be possible in a quite short academic paper and could bear materially on the result.

I don't have a good intuition about how strong this effect could be in the scenarios where we have data. But, gravity well effects on the rate at which time passes are observable directly with portable atomic clocks over height differences on the order of tens of meters, and given the accuracy of these clocks described above, gravity well effects might, at least in principle, be large enough that they need to be accounted for expressly in this experiment.
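A crude order-of-magnitude sketch along those lines, assuming for illustration a height difference of about a kilometer between the surface-level clocks and the underground beam path (a made-up round number, not the actual geometry of the setup):

```python
# Crude order-of-magnitude sketch for scenario (2): in the weak-field
# limit, the fractional clock-rate difference between two points separated
# in height by dh near Earth's surface is roughly g*dh/c^2.  The ~1 km
# height difference is a made-up round number, not the actual geometry of
# the CERN to Gran Sasso setup.

g = 9.81           # m/s^2
c = 2.998e8        # m/s
dh = 1_000.0       # m, assumed height difference (illustrative)
baseline = 7.3e5   # m, ~730 km

rate_difference = g * dh / c ** 2   # fractional difference in clock rates
flight_time = baseline / c          # ~2.4 ms time of flight
print(f"fractional clock-rate difference   ~ {rate_difference:.1e}")
print(f"offset accumulated over one flight ~ {rate_difference * flight_time:.1e} s")
```

On this naive accounting the accumulated offset is many orders of magnitude smaller than the roughly 60 ns anomaly, although a careful treatment of how the GPS synchronization actually propagates down into the underground labs would be needed before drawing any firm conclusion.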

(3) The platonic ideal physical constant "c", called the speed of light, differs from the actual speed of photons in the real world when the photons pass through some medium other than a pure vacuum, when electromagnetic fields have an impact, and perhaps when an adjustment is needed for the gravitational well time dilation effect if there are discrepancies between the measuring reference frame and the calculational reference frames. It could be that the experimental measurements of "c" used in these calculations, by neglecting these effects, are actually measuring photons travelling at something slightly less than "c", and that at the ~730 km scale, high energy neutrinos are not as strongly influenced by these effects.

For example, experimental measurements of "c" may not account for very small but measurable impacts on the measured speed of a photon from Earth's magnetic field, or from interactions with the electromagnetic fields of protons and neutrons that have magnetic dipole moments, effects that do not touch electrically neutral neutrinos.

Perhaps in Earth's magnetic fields and cluttered mass fields that are locally not electromagnetically neutral (and in similar fields found in supernovae), extremely high energy neutrinos actually do travel faster than photons: like the tortoise and the hare, the theoretically faster photons get bogged down in "conversations" with other photons and electrons in the vicinity of their path, while slightly slower high energy neutrinos are not diverted by collisions with other matter, even though in a mass free, charge free vacuum photons actually travel slightly faster than high energy neutrinos.

In this case, the issue is not that neutrinos travel faster than "c", which is a huge theoretical quandary, but that photons in real life settings that are less than ideal matter free vacuums frequently travel more slowly than "c", which doesn't pose the same deep theoretical issues. After all, we already know that photons travel at speeds slower than "c" in all sorts of non-vacuum media.

Since most speed of light experiments involve photons in less than idealized conditions, and not every experiment may adequately adjust for these effects, estimates of "c" derived from photons may systematically show greater consistency when an adjusted, rather than the true, value of "c" is used in engineering applications.

If this is what is happening, the smaller supernova effect can be explained by only a very brief part of the neutrino's trip (at the beginning, in the star where it originates, and at the end, in the immediate vicinity of the Earth) taking place in less than idealized conditions where neutrinos move faster than photons because photons interact more with the things around them, while the vast majority of the neutrino's trip takes place in a nearly ideal mass free, electromagnetic field free vacuum, where high energy neutrinos travel at a velocity only infinitesimally different from that of photons.

NOTE: The more I think about it, the more I like theoretical scenario (3).
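As a rough back-of-the-envelope on scenario (3), one can ask how large an effective slow-down of the calibration photons would be needed to mimic the reported excess; this is a sketch of the arithmetic only, not a claim about how "c" is actually measured in practice:

```python
# Back-of-the-envelope for scenario (3): how large an effective "slow down"
# of the calibration photons would be needed to mimic the reported excess
# of roughly 60 ns over roughly 730 km?  A sketch of the arithmetic only,
# not a claim about how "c" is actually measured in practice.

c = 2.998e8          # m/s
baseline = 7.3e5     # m, ~730 km
excess_time = 60e-9  # s, reported early arrival

fractional_excess = excess_time * c / baseline
n_eff = 1.0 + fractional_excess
print(f"fractional speed excess ~ {fractional_excess:.2e}")
print(f"required effective photon index n_eff ~ {n_eff:.7f}")
```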

Indeed, theoretical scenario (3) could also explain "the Interval" effect incorporated in QED not as something truly fundamental reflecting the non-locality of space-time, but as reflecting the fact that, for QED applications, the "effective" value of "c" in ordinary Earth-vicinity settings is slightly lower than the true "c" and varies randomly around that effective value due to slight differences in photon and charged matter density. Real world QED applications don't involve true vacuums, and in deep space astronomy observations the scales are so great that the Interval effect, which is relevant only at small distances, disappears from all observables at the level of accuracy possible in those observations.

Absent this term in the QED propagator, more and more evidence seems to point to spacetime not being discrete at even a scale as fine as the Planck scale, although a "point-like" fundamental particle still creates general relativity contradictions.

In favor of the pro-"new physics" conclusion (maybe), another comment to the blog post proposes an exact value for this ratio for the three neutrino flavors: (v-c)/c = 3 × exp(-(1/α)^(1/2)) = 2.47 × 10^-5, where α is the electromagnetic coupling constant. But, a "slow photons in low density photon and matter fields" scenario also makes an effect with some functional relationship to the electromagnetic coupling constant plausible, even without new physics, although the relationship would not be so clean and exact in that scenario.
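For what it is worth, the commenter's arithmetic checks out numerically:

```python
import math

# Numerical check of the relation quoted from the blog comment:
# (v - c)/c = 3 * exp(-sqrt(1/alpha)), with alpha the fine structure
# constant (~1/137.036).

alpha = 1.0 / 137.035999
value = 3.0 * math.exp(-math.sqrt(1.0 / alpha))
print(f"3 * exp(-sqrt(1/alpha)) = {value:.3e}")  # ~2.47e-5, as claimed
```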

Error Source Analysis

So obviously there could be some error in the experiment, but where?

The distances have been measured to 20 cm accuracy and even earthquakes during the course of the experiment can only account for 7 cm variations. The Earth moves about 1 m around its axis in the time the neutrinos travel, but this should not need to be taken into account in the reference frame fixed to the Earth. The excess distance by which the neutrinos are ahead of where they should be is on the order of 20 meters, so distance measurements are unlikely to be a source of significant error.

Timing is more difficult. You might think that it is easy to synchronise clocks by sending radio waves back and forth and taking half the two way travel time to synchronise, but these experiments are underground and radio waves from the ground would have to bounce off the upper atmosphere or be relayed by a series of transceivers. . . . the best atomic clocks lose or gain about 20 picoseconds per day, but portable atomic clocks at best lose a few nanoseconds in the time it would take to get them from one end to the other. . . . the best way to synchronise clocks over such distances is to use GPS, which sends signals from satellites in medium Earth orbit. Each satellite has four atomic clocks which are constantly checked against better ground-based clocks. The ground positions are measured very accurately with the same GPS and in this way a synchronisation of about 0.1 ns accuracy can be obtained at ground level. The communication between ground and experiment adds delay and uncertainty, but this part has been checked several times over the course of the experiment with portable atomic clocks and is good to within a couple of nanoseconds.

The largest timing uncertainties come from the electronic systems that are timing the pulses of neutrinos from the source at CERN. The overall systematic error is the quoted 6.9 ns, well below the 60 nanosecond deviation observed. Unless a really bad error has been made in the calculations, these timings must be good enough.

The rest of the error is statistical [and the variation does not obviously suggest an error]. . . .

[Background sources:] The speaker showed how the form of the pulse detected by OPERA matched very nicely the form measured at CERN. If there was any kind of spread in the speed of the neutrinos this shape would be blurred a little, and this is not seen.

The error source analysis is sufficiently convincing to suggest that the effect observed may not derive from errors in distance measurement, synchronization of clocks, equipment timing, background sources of neutrinos, or statistical variation. But, as the comment below from the blog post notes, precision and accuracy are not necessarily the same thing, and a bad distance formula could lead to this result:

According to the paper the distance measurement procedure uses the geodetic distance in the ETRF2000 (ITRF2000) system as given by some standard routine. The European GPS ITRF2000 system is used for geodesy, navigation, et cetera and is conveniently based on the geoid.

I get the difference between measuring distance along an Earth radius perfect sphere (roughly the geoid) and measuring the distance of travel, for neutrinos the chord through the Earth, as 22 m over 730 km. A near light speed beam would appear to arrive ~ 60 ns early, give or take.

Also, as I noted above, the lack of experimental error does not necessarily imply that we truly have tachyons, as opposed to some other, less theoretically interesting effect.

Wednesday, September 21, 2011

Another W Boson v. Z Boson distinction

Only the charged weak force bosons, W+ and W-, give rise to inelastic neutrino scattering as well as to elastic scattering of electron neutrinos. Inelastic scattering is not created by the electrically neutral Z boson, which produces only elastic neutrino scattering, continuing the basic picture in which the W bosons do all manner of things that nothing else does, while the Z boson's effects are pretty ordinary. Z bosons also manifest phenomenologically in electroweak interference and in the proper calculation of anomalous magnetic moments.

Also on the subject of the weak force, it turns out to be horribly difficult to locate easy to understand descriptions of the "force" aspect of the weak force, i.e. its capacity to move mass and impart momentum, in layman's terms. With some real effort you can dig through some of the more technical literature to identify the charged and neutral current components of the electroweak Lagrangian, and with a bit more effort you can dig up the potential function of the weak force field, which in practice describes a short range force that is approximately equal in strength to the electromagnetic force at 10^-18 m, "but at distances of around 3×10^-17 m the weak interaction is 10,000 times weaker than the electromagnetic." In general, the weak force is stronger at shorter ranges and weaker at longer ranges. More generally:

Due to their large mass (approximately 90 GeV/c^2) these carrier particles, termed the W and Z bosons, are short-lived: they have a lifetime of under 1×10^−24 seconds. The weak interaction has a coupling constant (an indicator of interaction strength) of between 10^−7 and 10^−6, compared to the strong interaction's coupling constant of about 1; consequently the weak interaction is weak in terms of strength. The weak interaction has a very short range (around 10^−17–10^−16 m). . .

The weak interaction affects all the fermions of the Standard Model, as well as the hypothetical Higgs boson; neutrinos interact through the weak interaction only. The weak interaction does not produce bound states (nor does it involve binding energy) – something that gravity does on an astronomical scale, that the electromagnetic force does at the atomic level, and that the strong nuclear force does inside nuclei.
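As a sanity check on the quoted range and mass figures, the standard estimate of the range of a force carried by a massive boson is its reduced Compton wavelength, which for the measured W and Z masses comes out at a few times 10^-18 m:

```python
# Sanity check on the quoted range and mass figures: the range of a force
# carried by a massive boson is of order its reduced Compton wavelength,
# hbar*c / (M c^2), with hbar*c ~ 197.3 MeV*fm and 1 fm = 1e-15 m.

hbar_c_mev_fm = 197.327  # MeV * fm
for label, mass_gev in (("W boson", 80.4), ("Z boson", 91.2)):
    range_fm = hbar_c_mev_fm / (mass_gev * 1000.0)
    print(f"{label}: range ~ {range_fm:.2e} fm = {range_fm * 1e-15:.1e} m")
```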

All told, the weak force has a rather modest impact on the way the universe behaves at the macrolevel.

It isn't clear to me if the lack of weak interaction bound states is a theoretical result, or an absence of empirical evidence, or both.

At the distance scale at which the nuclear binding force (mediated by pions and derivative of the strong force within protons and neutrons) operates, the weak force is a quantitatively negligible factor, much weaker than either the strong force or the electromagnetic force.

But, I have yet to see anything really credible that does more than hint that the weak force is generally repulsive rather than attractive, and those hints rest on amateur authors' assumption that it acts in opposition to the generally attractive strong force (although, of course, the strong force switches between repulsive and attractive regimes with distance). I don't have enough confidence that I understand the normal numerical values and sign conventions of the terms in the Lagrangian to say that I fully understand how they play out. It also seems, from the neutral current portion of the Lagrangian, that local electromagnetic field strength interacts to some extent with neutral current Z boson activity.

Musing On Composite Neutrino Dark Matter

What follows is pure speculation and conjecture.

As noted in the previous post, astronomy suggests that dark matter ought to have a mass on the order of 1 keV to 13 keV, which is less than the mass of an electron and more than the likely mass of the neutrinos (there should be about one per cubic centimeter of space in our galaxy).

Yet, the evidence from weak force interactions is that all particles of less than 45 GeV (half the mass of a Z boson) are fully accounted for in the Standard Model, unless the W and Z bosons don't decay into them, violating the "democratic principle" that seems to apply to all other fundamental particles.

Dark matter ought to be stable and experience no significant decay. It ought to have a neutral electromagnetic charge.

But, there is really no experimental limitation that says that dark matter has to be a fundamental Standard Model particle. And, it would hardly be remarkable if a composite particle had a mass greater than the sum of the masses of its constituent fermions, something that is true of every meson and baryon made of quarks. Indeed, more than 90% of the baryonic mass in the universe is attributable to the binding energy carried by gluons in protons and neutrons. Protons are stable, even in isolation, so it would hardly be surprising if some other particle could also be stable, even in isolation.

Imagine a force, purely attractive with no repulsive component, that couples only to fundamental particles with neutral electromagnetic charge. It would be simpler than electromagnetism, or the weak force, or the strong force, or gravity in general relativity. Lacking electrical charge, photons would not interact with it. Lacking color charge, gluons would not interact with it.

This force would be strong, but wouldn't have to be confining or chiral, unlike the strong force. This force would operate only at short ranges, so the boson that carried it could be massive, perhaps on the same order of magnitude in mass as the W and Z bosons. Indeed, perhaps this boson, which could be called the X boson, could "eat" the fourth Higgs boson, just as the other three Higgs bosons are "eaten" by the W+, W- and Z bosons, eliminating the need for a fundamental particle with a spin other than one or one-half. This would be in accord with electroweak fits for the Higgs boson suggested by LEP that put it at masses below those already excluded by collider experiments.

If an X boson had a mass very similar to that of the Z boson, the decays of the two electromagnetically neutral bosons might be indistinguishable in collider experiments.

I don't know precisely what this means in terms of group theory representations or the spin of the carrier boson, although I am tempted to imagine that it might correspond to the seemingly trivial SU(1) or U(1) group, making for a neat SU(3) x SU(2) x SU(1) (or U(1)) x U(1) combination. This would leave four categories of bosons (photon, W/Z, gluon, and X), just as there are four categories of fermions (up, down, electron, neutrino). Alternately, perhaps it would be another manifestation of the weak force that would make the notion that the photon and Z boson are linear combinations of a W0 and B boson, as proposed by electroweak unification theory, unnecessary. The W0 could be the Z and the X could be the B.

The deficit of neutrinos that would arise from neutrinos binding into composite dark matter particles held together by X bosons would be negligible enough to escape notice in cosmological efforts to account for missing mass. Indeed, since the most recent census of baryonic matter suggests that it makes up closer to 50% of all non-dark energy in the universe, rather than the lion's share, and neither electrons nor neutrinos in isolation make up much of the mass of the universe, composite neutrino dark matter bound by X bosons might mean that quarks and leptons would contribute roughly equal amounts to the total quantity of mass in the universe.

Since an electron neutrino and an electron antineutrino ought to annihilate if they were in close proximity, and hence would not be stable, composite neutrino dark matter might be made of either two electron neutrinos or two electron antineutrinos, which given the mass of the composite particle ought to make it possible to infer the coupling constant of the neutrino binding force carried by the X boson. Perhaps it would "coincidentally" be identical to the coupling constant of the Z boson with neutrinos. Indeed, perhaps the X boson equations would simply be a degenerate form of the equations that describe the W and Z bosons.

In this scenario, we would already have discovered all of the fundamental fermions and all but one of the fundamental bosons, and there would be no free Higgs boson to discover, although we would be gaining one more fundamental force to the extent that it was not considered unified with the weak force. Since the weak force would cease to be even superficially unified with the electromagnetic force, the need for electroweak symmetry breaking through mechanisms such as those found in supersymmetry and technicolor theories would be less pressing. There would be no need for right handed sterile neutrinos either, since the composite particles could serve their role. Leptoquarks would also be unnecessary.

One of the problems with sterile neutrinos, particularly when there is just one kind of them, is where they fit in the Standard Model chart of fermions while remaining stable. If there is more than one generation of them, and they do not interact with the weak force, that presents its own issues. All other higher generation fermions are unstable and rapidly decay, and neutrinos oscillate incessantly. The odd keV sized composite dark matter particle would also be hard to distinguish experimentally from a plain old high energy neutrino.

One vaguely similar idea in the literature is found in this 2011 paper and here (also in 2011). The characteristic energy scale of the composite neutrinos in that model is also suggestively close at ca. 300 GeV to the vacuum expectation value of the Higgs field of 246 GeV.

Other non-neutrino composite dark matter proposals are found here and here and here.

Tuesday, September 20, 2011

Dark Energy, Black Holes, Dark Matter, Neutrinos and More

* Dark energy data are still consistent with a cosmological constant and a topologically flat universe.

* In galaxies, "black hole mass is not directly related to the mass of the dark matter halo but rather seems to be determined by the formation of the galaxy bulge."

* Black holes don't get bigger than about 10 billion times the mass of the sun. "One possible explanation . . . is that the black holes eventually reach the point when they radiate so much energy as they consume their surroundings that they end up interfering with the very gas supply that feeds them, which may interrupt nearby star formation. The new findings have implications for the future study of galaxy formation, since many of the largest galaxies in the Universe appear to co-evolve along with the black holes at their centres. . . . 'Evidence has been mounting for the key role that black holes play in the process of galaxy formation . . . But it now appears that they are likely the prima donnas of this space opera.'"

* Excessively high dark matter halo densities near black holes lead to results inconsistent with experiment, constraining the range of possible dark matter distributions and properties.

* "[G]alaxies are more clustered into groups than previously believed. The amount of galaxy clustering depends on the amount of dark matter." It takes about 300 billion suns of dark matter to form a single star-forming galaxy according to a survey of galaxies and cosmic background radiation patterns published in February of 2011.

* Newtonian approximations of astronomy scale many body problems aren't as horrible as one might suspect compared to a pure general relativity approach according to this analysis. It isn't entirely clear to me that its assumption that almost everything is moving at speeds much lower than the speed of light is accurate, or that this analysis properly accounts for angular momentum effects related to general relativity in galaxy scale systems.

* Experimental data rule out much of the dark matter parameter space.

* Warm dark matter models fit reality better than cold dark matter models:

Warm Dark Matter (WDM) research is progressing fast, the subject is new and WDM essentially works, naturally reproducing the astronomical observations over all scales: small (galactic) and large (cosmological) scales (LambdaWDM). Evidence that Cold Dark Matter (LambdaCDM) and its proposed tailored cures do not work at small scales is staggering. . . LambdaWDM simulations with keV particles remarkably reproduce the observations, small and large structures and velocity functions. Cored DM halos and WDM are clearly determined from theory and astronomical observations, they naturally produce the observed structures at all scales. keV sterile neutrinos are the leading candidates, they naturally appear extensions of the standard model of particle physics. Astrophysical constraints including Lyman alpha bounds put its mass in the range 1 < m < 13 keV.

For comparison's sake, an electron has a mass of about 511 keV. So, a keV scale sterile neutrino would be not more than about 3% of the mass of an electron, and possibly closer to 0.2% of the mass of an electron, but would have a mass on the order of 1,000 to 50,000 times that of an ordinary electron neutrino, or more. Warm dark matter candidates have much greater velocities than cold dark matter candidates.
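The same comparison in code, using 0.25 eV (the low end of the electron neutrino mass estimate mentioned in another item below) as an illustrative light neutrino mass scale:

```python
# The comparison above, made explicit.  The 0.25 eV neutrino mass is just
# the low end of the electron neutrino mass estimate mentioned in another
# item below, used here as an illustrative scale.

electron_kev = 511.0
neutrino_ev = 0.25
for dm_kev in (1.0, 13.0):
    print(f"{dm_kev:4.0f} keV particle: {dm_kev / electron_kev:.2%} of an electron, "
          f"~{dm_kev * 1000.0 / neutrino_ev:,.0f} x a {neutrino_ev} eV neutrino")
```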

[T]here are many signs that the simple picture of CDM in galaxies is not working. The most troubling signs of the failure of the CDM paradigm have to do with the tight coupling between baryonic matter and the dynamical signatures of DM in galaxies, e.g. the Tully-Fisher relation, the stellar disc-halo conspiracy, the maximum disc phenomenon, the MOdified Newtonian Dynamics (MOND) phenomenon, the baryonic Tully-Fisher relation, the baryonic mass discrepancy-acceleration relation, the 1-parameter dimensionality of galaxies, and the presence of both a DM and a baryonic mean surface density.

The strangest of these relations is the “Bosma effect”: the centripetal contribution of the dynamically insignificant interstellar medium (ISM) in spiral galaxies is directly proportional to that of the dominant DM. The constant of proportionality has been determined for about 100 galaxies, with dwarf galaxies showing a smaller and late-type spirals showing a larger factor.

Hoekstra, van Albada & Sancisi set out to test the Bosma effect, showed that it indeed allowed a very detailed fit to the rotation curves of many well-studied galaxies, but concluded that it was not real. Reviewing their arguments, it is clear that their negative judgement was very conservative - by the normal standards of rotation curves, the results were, in fact, very convincing. Since their fits were performed by hand and not compared with the corresponding CDM model fits, it was not possible to make any formal conclusions and certainly not possible to reject the effect as non-physical.

Using the much better data made available by the Spitzer Infrared Nearby Galaxy Survey, The HI Nearby Galaxy Survey, and the analyses by de Block et al., we have tested the Bosma effect and compared the results against standard CDM models. The use of infrared photometry and colours permits formal fits to the stellar components nearly independent of extinction corrections and with reasonably reliable mass-to-light estimates. In addition to the standard bulge, stellar disk, and visible HI components, we fitted the rotation curves with the addition of either one or two Bosma components, using either the HI disc (so-called “simple Bosma” models) or both the stellar and the HI discs (“classic Bosma” models) as proxies, where the stellar disc is used as a proxy for the molecular gas obviously present in regions of previous and current star formation. For comparison, we also fit the data with self-consistent NFW models, where the compactness of the halo is a function of the halo mass or with the Burkert halo mass profiles used in the “Universal Rotation Curve” model. The “simple” Bosma models using only the HI as a proxy are remarkably good in the outer discs, as shown by Bosma, independent of the shape of the rotation curve. However, the inner disks are not well-fit by pure “HI-scaling”: the saturation of the HI surface densities above levels around 10 M⊙/pc^2 results in centripetal contributions which are clearly too small. This problem is not present in “classic” Bosma models, since the saturation of the HI profiles occurs exactly where the stellar component starts to dominate, permitting a perfect compensation.

Furthermore, hot dark matter and cold dark matter produce results contrary to the observed large scale structure of the universe (same source):

WDM refers to keV scale DM particles. This is not Hot DM (HDM). (HDM refers to eV scale DM particles, which are already ruled out). CDM refers to heavy DM particles (so called wimps of GeV scale or any scale larger than keV).

It should be recalled that the connection between small scale structure features and the mass of the DM particle follows mainly from the value of the free-streaming length lfs. Structures smaller than lfs are erased by free-streaming. WDM particles with mass in the keV scale produce lfs ∼ 100 kpc while 100 GeV CDM particles produce an extremely small lfs ∼ 0.1 pc. While the keV WDM lfs ∼ 100 kpc is in nice agreement with the astronomical observations, the GeV CDM lfs is a million times smaller and produces the existence of too many small scale structures till distances of the size of the Oort’s cloud in the solar system. No structures of such type have ever been observed. Also, the name CDM precisely refers to simulations with heavy DM particles in the GeV scale. Most of the literature on CDM simulations do not make explicit the relevant ingredient which is the mass of the DM particle (GeV scale wimps in the CDM case).

The mass of the DM particle with the free-streaming length naturally enters in the initial power spectrum used in the N-body simulations and in the initial velocity. The power spectrum for large scales beyond 100 kpc is identical for WDM and CDM particles, while the WDM spectrum is naturally cut off at scales below 100 kpc, corresponding to the keV particle mass free-streaming length. In contrast, the CDM spectrum smoothly continues for smaller and smaller scales till ∼ 0.1 pc, which gives rise to the overabundance of predicted CDM structures at such scales.

CDM particles are always non-relativistic, the initial velocities are taken zero in CDM simulations, (and phase space density is unrealistically infinity in CDM simulations), while all this is not so for WDM.

Since keV scale DM particles are non relativistic for z < 10^6 they could also deserve the name of cold dark matter, although for historical reasons the name WDM is used. Overall, seen in perspective today, the reasons why CDM does not work are simple: the heavy wimps are excessively non-relativistic (too heavy, too cold, too slow), and thus frozen, which preclude them to erase the structures below the kpc scale, while the eV particles (HDM) are excessively relativistic, too light and fast, (its free streaming length is too large), which erase all structures below the Mpc scale; in between, WDM keV particles produce the right answer.


The big picture evidence for warm dark matter as opposed to cold dark matter also suggests that the recent experimental signs of 10 GeV dark matter (contradicted by other experiments that should have validity in that mass range) are probably wrong. But, plain old electron neutrinos are too light (comparable to hot dark matter models), and the tau neutrinos that would seem to be closest to the right mass range aren't believed to be very stable.

* It may be possible to detect traces of a sterile neutrino warm dark matter candidate in beta decays of radioactive elements like Rhenium 187 and Tritium.

* The cosmic microwave background data appear to be inconsistent with a cold dark matter model.

* Some decent empirical fits to observed dark matter profiles are obtained with the Einasto halo model (which has two continuous parameters and one low whole number parameter): "This is not in agreement with the predictions from ΛCDM simulations."

* New data disfavor previous results that suggested that there were five generations of neutrinos, but can fit a 3+1 generation neutrino model with a fourth generation sterile neutrino as heavy as 5.6 eV (which is still lighter than one might hope to fit warm dark matter models).

* An electron neutrino mass estimate of 0.25-1 eV is obtained based upon consideration of the observed diffraction behavior of neutrinos. This approach "resolves anomalies of LSND and two neutrino experiment."

* A four generation extension of the Standard Model (SM4) with two Higgs doublets can produce a sterile neutrino that could account for 1% of cold dark matter.

* A seesaw mass model for neutrinos is explored in a noncommutative geometric context.

* Distinctions between what a PMNS matrix for neutrinos should look like in the cases of Dirac and Majorana masses are compared.

* Dynamical mass generation in gluons and its link to chiral symmetry breaking is explored.

* An effort to compute fundamental particle masses from a preon model is set forth.

* An ad hoc effort to look at gravitational effects on gauge couplings suggests that there is "a nontrivial gravitational contribution to the gauge coupling constant with an asymptotic free power-law running." Thus, one reason for the "running coupling constants" of electroweak theory and QCD may be quantum gravity effects. The running of coupling constants could also be caused by a fractal structure of spacetime.

* Quantum entanglement could disfavor the existence of a "long range force" that operates on scales greater than the strong and weak forces but not greater than the micrometer scale.

Monday, September 19, 2011

Tachyons At Opera?

Rumor has it (from an anonymous source on September 15 here) that the OPERA neutrino detector at CERN is seeing a six sigma signal (above the usual rule of thumb standard for "discovery" in physics) of neutrinos traveling at more than the speed of light, which would make them tachyons, the name for particles that travel faster than light. The experiment's main purpose was to detect tau neutrinos. Data on the speed at which neutrinos travel is simply a bonus (although neutrino speed, correlated with missing energy in the source reaction, in theory ought to provide some direct constraints on neutrino mass).

Tommaso Dorigo, who blogged this rumor, and Lubos, who blogged Dorigo's gossip, like all good physicists, are deeply skeptical of the result on the theory that extraordinary claims require extraordinary proof. This is only the second credible experiment (the other looked at neutrinos leaving a 1987 supernova, which has many more uncertainties involved, for example, concerning the distance and the nature of the underlying source), ever, in the history of the universe, to see anything that might be a tachyon at statistically significant levels. There are good theoretical reasons to doubt the result, despite the fact that some quantum mechanical formulas, like the Feynman photon propagator in QED, incorporate a slight statistical probability of particles traveling at more than the speed of light over short distances, which is interpreted by some as evidence of a slight amount of non-locality in space-time.

A delay in publication of the results or a press release is no doubt due to the frantic efforts of CERN scientists to find some other explanation of the results (the fluke could be as simple as a slight mismeasurement of the 732 km distance from the neutrino source to the point at which they were detected, or a clock miscalibration in measuring travel times of roughly 2.4 milliseconds, or an underestimate of statistical uncertainty) so that they don't look stupid when someone else discovers one. But, the experimental setup is generally sound and much of the theoretical analysis is relatively straightforward, since it does not involve the arcane and complex QCD background calculations that go into making sense of the debris of high energy proton-proton collisions, so this is not an easy result to hand-wave away.
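The basic arithmetic behind those mundane explanations, with nominal numbers only: the time of flight over the 732 km baseline, and how a baseline mismeasurement would translate into an apparent timing shift, including the roughly 18 meters that would correspond to a 60 ns effect of the sort discussed above:

```python
# Basic arithmetic behind the mundane explanations: the nominal time of
# flight over the 732 km baseline, and how a baseline mismeasurement maps
# onto an apparent timing shift (about 3.3 ns per meter), including the
# ~18 m that would correspond to a 60 ns effect.

c = 2.998e8         # m/s
baseline = 7.32e5   # m

flight_time = baseline / c
print(f"nominal time of flight ~ {flight_time * 1e3:.2f} ms")
for distance_error_m in (0.2, 1.0, 18.0):
    print(f"{distance_error_m:5.1f} m baseline error <-> "
          f"{distance_error_m / c * 1e9:5.1f} ns")
```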

The most plausible answer would be that the neutrino impacts that look superluminal didn't actually come from the neutrino generating source that was inferred, and instead came from nuclear reactions either in the Sun, or from radioactive isotopes in the Earth (perhaps related to the geological activity of volcanoes in central Italy dredging up an excess of radioactive isotopes from deep in the Earth along the line of the neutrino beam), or from some unaccounted for man made source like a nearby smoke alarm with a decaying radioactive isotope in it. A failure to properly account for these kinds of neutrino backgrounds has already been suggested as an explanation for the direct detection dark matter experiments conducted so far that show a positive detection.

But, information on the direction in which the debris from the neutrino impact travels, the momentum of that debris, and statistical tests ought to be able to rule out those kinds of neutrino backgrounds very effectively in a series of results numerous enough to provide a six sigma signal.

UPDATE: The comments at Resonaances and post at Dorigo's blog regarding the rumored results have both been taken down.