
Tuesday, February 28, 2012

Progress In Loop Quantum Gravity

Over at Physics Forums, a discussion thread notes some important technical progress toward formulating a model of quantum gravity within the spin foam formulation of loop quantum gravity, in which gravity arises from the structure of a discrete space-time.

For those familiar with the topic, the key developments are these:

He shows that the method used by EPRL and FK is not sufficient to suppress the quantum fluctuations related to these constraints and that their method does not lead to the (correct) Crane-Yetter model! In addition he shows that the Immirzi parameter drops out in the final theory and that effects regarding its quantization are artificial.


EPRL is the closest thing that loop quantum gravity has to a leading formulation of the theory within the spin foam approach to quantum gravity, so pointing out a defect in the quantization method that it uses is unwelcome news for the program.

The ability of a purportedly correct approach to formulating the equations to cause the Immirzi parameter, the characteristic physical constant of loop quantum gravity theories, to drop out of the equations altogether is unexpected and also encouraging, as it narrows the opportunities for fine tuning in a quantum gravity theory formulated in this way. The Immirzi parameter in this model looks a bit like the renormalization scale used in QED, which is necessary for intermediate calculations even though the final outcome is independent of the value used. In a related point, the author argues that this is suggestive of the possibility that this solution is the unique way to correctly fit the equations of general relativity to a spin foam model.

The remark that "effects regarding its quantization are artificial" seems clear at first read but grows rather cryptic when one tries to articulate exactly what it means. It seems to be a statement about phenomenology, but an examination of the pre-print itself suggests that at this stage there are few phenomenological implications, and that a true final spin foam theory, even in the absence of a matter field, is still a step or two away from being achieved even with this substantial advance.

Forum moderator Marcus provides a review of the prior literature related to this mathematical approach to spin foam models that is, in some respects, more complete than the citations in the pre-print itself.

The pre-print is: Sergei Alexandrov, "Degenerate Plebanski Sector and its Spin Foam Quantization" (22 Feb 2012).

The paper is also exceptionally well written for a paper in this field, although at the bottom of the first page of body text the word "affordable" is used where "amenable" would be the correct usage. Similarly, it uses the non-idiomatic phrase "we went through a long way to get the partition function" at the start of Section 3.5, when the intended sense, phrased to avoid the technical meanings of the word "path" in this field, is that "we took a long road to get the partition function." These nits are not exhaustive, but aside from a few lapses in idiomatic English usage, the paper is very well organized and laid out.

To back up and contextualize just a little more, there are about half a dozen different, possibly equivalent or nearly equivalent, ways to formulate gravity in terms of the structure of space-time the way that Einstein's classical general relativity theory does, but with a discrete space-time structure rather than a continuous one. Spin foams are one of the more well established approaches to doing so.

Loop quantum gravity is only a theory of quantum gravity, and not a grand unified theory or theory of everything that describes the fundamental forces of physics found in the Standard Model of particle physics. But a discrete space-time formulation of general relativity at a quantum scale is not "allergic" to the mathematics of quantum mechanics reformulated on this discrete space-time the way that quantum mechanics is to a formulation in classical general relativity. The same basic quantum concepts apply to both.

Assuming one can iron out the remaining mathematical challenges associated with loop quantum gravity with matter fields, the path to formulating the Standard Model equations of particle physics within the four dimensional discrete space-time of loop quantum gravity is considerably more straightforward than the task of reconciling the Standard Model to general relativity, even though neither project has been successfully completed. One would hope, in turn, that the steps necessary to unify the Standard Model forces, and the high energy modifications to the effective low energy theory that is the Standard Model, might be more obvious in an LQG formulation. Preliminary suggestions have been made regarding how to proceed towards this end. If one leaps over that hurdle, moreover, one really does have something closely approximating a "theory of everything" applicable in all observable conditions. This would indeed be truly huge.

Monday, February 27, 2012

Quote of the Day

[W]e can paraphrase the Italian geneticist Guido Barbujani: imagine that at some time in the future Indian astronauts colonise Mars, and geneticists then type their Y chromosomes. We may well find that their lineages date back to 9,000–20,000 years ago. But we would not be wise to infer that they have been living on Mars for 9,000 years.


From Denise R. Carvalho-Silva, Tatiana Zerjal, and Chris Tyler-Smith, "Ancient Indian Roots?" J. Biosci. 31(1), March 2006.

Galactic Structure Caused By Black Hole Barf

The behavior of stars in the central bulge of a galaxy is closely related to the size of its central black hole, despite the fact that gravity alone is too weak by six orders of magnitude to explain the range of the effect. Why is this the case?

It looks increasingly plausible that the link between central black hole size and the behavior of stars in the central bulge of those galaxies may be due to flows of matter away from the central black holes called "ultra-fast outflows" (consider the acronym). These outflows are distinct from the narrow particle jets that shoot out in both directions along the axis of a central black hole, are more aptly described as black hole barf, and are expelled in amounts roughly comparable to the amount of new matter that these black holes gobble up each year.

It turns out that the black holes at the center of galaxies often seem to have bulimia, and are not simply gluttons.

Near the inner edge of the disk, a fraction of the matter orbiting a black hole often is redirected into an outward particle jet. Although these jets can hurl matter at half the speed of light, computer simulations show that they remain narrow and deposit most of their energy far beyond the galaxy's star-forming regions. . . .

At the centers of some active galaxies, X-ray observations at wavelengths corresponding to those of fluorescent iron show that this radiation is being absorbed. This means that clouds of cooler gas must lie in front of the X-ray source. What's more, these absorbed spectral lines are displaced from their normal positions to shorter wavelengths -- that is, blueshifted, which indicates that the clouds are moving toward us. . . .

The outflows turned up in 40 percent of the sample, which suggests that they're common features of black-hole-powered galaxies. On average, the distance between the clouds and the central black hole is less than one-tenth of a light-year. Their average velocity is about 14 percent the speed of light, or about 94 million mph, and the team estimates that the amount of matter required to sustain the outflow is close to one solar mass per year -- comparable to the accretion rate of these black holes. . . .

By removing mass that would otherwise fall into a supermassive black hole, ultra-fast outflows may put the brakes on its growth. At the same time, UFOs may strip gas from star-forming regions in the galaxy's bulge, slowing or even shutting down star formation there by sweeping away the gas clouds that represent the raw material for new stars. Such a scenario would naturally explain the observed connection between an active galaxy's black hole and its bulge stars.


Also, any time we understand more about what is causing the structure we observe in galaxies, our improved understanding tends, before long, to reduce the need for black box alternatives, like exotic dark matter, to explain it. It is possible that a meaningful share of dark matter halo effects may actually consist of a halo of ordinary matter barfed out by central black holes in galaxies at unexpectedly high speeds and in unexpectedly high volumes, material that is sparse at the galactic fringe and was not accounted for in previous matter census efforts made before ultra-fast outflows were known to exist. This would also explain why this potential contributor to dark matter has been so hard to directly observe on Earth.

Higgs Prediction Revisited

Last year, I predicted at this blog that a Standard Model Higgs boson would have been discovered or ruled out by the time that the Academy Awards were awarded in 2012, which happened yesterday. Overall, I give myself credit for being more right than wrong. While we don't have a five sigma "discovery" in hand, the evidence at the end of last year, which approaches 4 sigma when updated with data from early this year, makes quite a strong case that a Higgs boson of approximately 125 GeV exists, and strongly disfavors a Higgsless model. Moreover, the details of that evidence support the notion that this particle lacks a net electromagnetic charge and tend to show that it has spin zero.

The discovery to date doesn't definitively rule out the possibility that there may be additional Higgs bosons out there, as in SUSY models, or that the discovered particle is actually composite, but there is no evidence strongly pointing towards the resonance seen having any characteristics inconsistent with a Standard Model Higgs boson.

Friday, February 24, 2012

W Boson Mass Refined

One of the parting boons provided by the Tevatron experiments (now shut down) to the world is an updated and more precise measurement of the mass of the W boson. The Tevatron result, combined with past measurements, leads to the conclusion that "the new world average for the W boson mass is . . . 80390 +/- 16 MeV." Before this result, the precision was +/- 23 MeV.
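For readers curious how adding one new measurement tightens a world average, the sketch below applies standard inverse-variance (least squares) combination to the two error bars quoted above; the implied precision of the new Tevatron input is back-calculated from those numbers rather than taken from the measurement itself, so treat it as a rough illustration.

```python
import math

# Inverse-variance (least squares) combination of independent measurements:
# the combined uncertainty satisfies 1/sigma_combined^2 = sum_i 1/sigma_i^2.

sigma_old_average = 23.0   # MeV, precision of the prior world average (from the post)
sigma_new_average = 16.0   # MeV, precision of the updated world average (from the post)

# Back out how precise the new Tevatron input would have to be on its own
# to pull the world average from +/- 23 MeV down to +/- 16 MeV.
sigma_new_input = math.sqrt(1.0 / (sigma_new_average**-2 - sigma_old_average**-2))
print(f"Implied precision of the new input alone: +/- {sigma_new_input:.0f} MeV")
# Prints roughly +/- 22 MeV, i.e. a single result comparable in weight to
# everything that had been combined into the world average before it.
```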

Neutrinos Don't Move Superluminally After All

Since September, scientists have been scratching their head over results that appear to show neutrinos traveling between Switzerland and Italy faster than light would. . . . there was a good reason the measurements and reality weren't lining up: a loose fiber optic cable was causing one of the atomic clocks used to time the neutrinos' flight to produce spurious results. If the report is confirmed (right now, there's only one source), then it provides a simple explanation for the fascinating-yet-difficult-to-accept results. According to the new report, researchers are preparing to gather new data with the clocks properly hooked into computers, which should definitively indicate whether the loose connection was at fault.

From here via Gauge Connection.

Just to recall the situation, the initial report showed neutrinos travelling at speeds that exceeded the best measured value of the speed of light by a tiny but detectable fraction, when special relativity and what we know about the kinetic energy of the neutrinos and their rest masses would have led us to expect them to travel below the speed of light by an amount far too small to be experimentally detectable.
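To put rough numbers on that, here is a back-of-the-envelope sketch using round figures that were widely reported at the time (an early arrival of about 60 nanoseconds over the roughly 730 km CERN-to-Gran Sasso baseline, with a mean beam energy of about 17 GeV); the 0.1 eV rest mass is purely an illustrative assumption.

```python
# Compare the claimed OPERA excess over the speed of light with the shortfall
# below the speed of light that special relativity would lead us to expect.
c = 299_792_458.0          # speed of light in m/s
baseline_m = 730_000.0     # ~730 km CERN -> Gran Sasso baseline (round number)
early_arrival_s = 60e-9    # ~60 ns early arrival claimed by OPERA (round number)

light_time_s = baseline_m / c
claimed_excess = early_arrival_s / light_time_s
print(f"Claimed fractional excess over c: {claimed_excess:.1e}")       # ~2.5e-5

# Expected shortfall for an ultra-relativistic neutrino: (c - v)/c ~ m^2 / (2 E^2)
# with mass and energy both expressed in eV.
energy_eV = 17e9           # ~17 GeV mean beam energy (round number)
mass_eV = 0.1              # illustrative rest mass assumption, not a measured value
expected_deficit = mass_eV**2 / (2.0 * energy_eV**2)
print(f"Expected fractional deficit below c: {expected_deficit:.1e}")  # ~1.7e-23
```

The mismatch of roughly eighteen orders of magnitude between the claimed excess and the expected deficit is why the result was so hard to accept at face value.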

Nearly everyone in the physics community had suspected experimental error as the most likely explanation, which now appears to have been the cause of the superluminal speeds detected, but until now, no one had been able to point to a source of experimental error that would create a discrepancy large enough to explain the result.

A variety of theoretical explanations involving variations on the theory of special relativity had been advanced to explain it (I myself viewed the possibility that the measured speed of light is actually slightly lower than the true value of "c" in the equations of general and special relativity, due to previously unaccounted-for interactions with electromagnetic phenomena in the vicinity of massive bodies of ordinary matter, i.e. an error in past rather than current experiments, as most likely if the experimental result from OPERA were confirmed). But, it turns out that those theoretical explanations will end up being disfavored going forward, since their prediction that OPERA could indeed be seeing superluminal neutrinos didn't turn out to be accurate.

Thursday, February 23, 2012

Drought Doomed The Mayans

A prolonged decline in the frequency of summer storms that produced a drought in Central America (and beyond) led to the demise of Mayan civilization around 800-950 CE. Their demise, in turn, cleared the way for the ascendancy of the subsequent Aztec civilization, which started to develop in what is now Southern Mexico in the late 1200s CE and was in place when Columbus arrived in the New World (in 1492 CE), until the Aztec empire of Moctezuma II was toppled by Conquistador Hernán Cortés and collapsed by 1521 CE. In between, in what is known as the "Post Classical" period, the Mayan empire fragmented into successor city-states that gradually recovered and were consolidated into the renewed Aztec empire. The Mayans were successors to the Olmec civilization, which lasted from ca. 2000 BCE to 400 BCE.

Tuesday, February 21, 2012

Predictors of Phoneme Inventory

Why do different languages have different numbers and kinds of vowels and consonants?

The data point to latitude and the number of people who speak a language as having statistically significant impacts on consonant inventories, and to word length as related to vowel inventories (multiple citations are omitted from the blockquote below without editorial indication):

[C]onsonants are more likely to be found further away from the equator and . . . more vowels are associated with shorter word lengths. . . .

The clustering of a large number of consonants north and south of the equator might be the result of climatic zones. Cold-climate languages, for instance, are found to have a high frequency of consonants at a very low frequency (obstruents). By contrast, languages in warm climatic zones possess sound classes exhibiting moderate to high levels of sonority (rhotic consonants, laterals, nasals and vowels).

As for word length: It is widely acknowledged in the literature that vowels and consonants subtend to different linguistic functions. Consonants are often associated with word identification, whereas vowels contribute to grammar and to prosody. Vowel quality is also implicated in regular sound change, which results in paradigmatic changes, whereas consonants are frequently cited as being changed through lexical diffusion. Lastly, consonant systems expand with “minimum articulatory cost”, whereas vowel systems appear to follow pressure for maximal perceptual differences.

All of this, in turn, is part of a larger slew of recent rebuttals of Atkinson's 2011 paper purporting, quite unconvincingly with statistical tools and a popular online atlas of language features, to show serial founder effects in the global patterns of phoneme inventories, which would suggest that as people moved further from Africa they lost sounds from their languages without gaining new ones.

Another key point made in the whole post, but not really captured in the quote above, is that the data aren't as independent as they seem, because phonemes are embedded in an interdependent set of language features such that particular language features bias languages towards having many other complementary features, and when one of these features changes, other features typically change in response to them. Put another way, most language features are functional rather than neutral, even though they show great diversity from language to language.

I Come To Praise SUSY, Not To Bury It.

Another summary of the latest non-detection and expanded exclusion range for stops (supersymmetric partners of top quarks) at Resonaances spells out the situation:

To a casual observer, the seminar presenting the results may have resembled a reenactment of the Saint Valentine's day massacre. Nevertheless I will argue here -- more out of contrariness than conviction -- that SUSY may be battered, bruised, covered in slime, and badly bleeding, but formally she is not yet dead. For most particle theorists, the death will be pronounced [when] scalar partners of the top and bottom quarks are excluded up to ~500 GeV, the precise threshold depending on taste, education, and diet. That's because excluding light stops and sbottoms is going to squash any remaining hopes that supersymmetry can address the naturalness problem of electroweak symmetry breaking, for which purpose it was originally invented. However, the LHC collaborations have yet not presented any robust limits on the stop masses. Instead, here's what the LHC have taught us about SUSY so far:

* Since summer 2011 we know that in a generic case when all colored superpartners including stops have comparable masses and decay to the lightest supersymmetric particle (LSP) producing a considerable amount of missing energy then stops have to be heavier than about 1 TeV.

* Since last Tuesday we know if the only squarks below TeV are those of the 3rd generation then gluinos have to be heavier than about 600-900 GeV, depending on the details of the supersymmetric spectrum. . . .

To summarize, the year 2012 will surely go down in history as the Higgs year. But among more studious future historians of science it may also be remembered as the year when SUSY, at least in its natural form, was finally laid to grave...

Another major report on experimental support for SUSY is due in a couple of weeks.

An experimental exclusion of SUSY entirely is quite a bit more profound than it seems at first glance. Nobody but particle physics geeks knows much about SUSY. But, string theory necessarily implies SUSY, so an experimental exclusion of SUSY is also an experimental exclusion of string theory. And, string theory is a beyond the Standard Model theory with a much better P.R. agent than SUSY.

Let me repeat this simply so that those of you who aren't paying attention can get it:

The Large Hadron Collider is on the verge of experimentally ruling out all of the most plausible versions of string theory.

Why is this huge?

Probably about half of the theoretical physicists in academic positions in the United States right now are string theorists. Most of the beyond the Standard Model theories being tested at the LHC are predictions of SUSY or String Theory. String theory, despite its waning prominence, has far more published work and far more adherents among professional physicists than any other beyond the standard model theory that could unify the three Standard Model forces into a "Grand Unified Theory" (i.e. GUT) or unify these with gravity as well into a "Theory of Everything" (i.e. TOE).

At one point people thought that there were several genuinely distinct versions of String Theory, but then it was realized that all of the different versions are just different ways of describing a single underlying framework called "M theory".

Sometimes people talk about Loop Quantum Gravity (LQG) as a competitor to String Theory, and as a quantum gravity theory, it is a competitor. But, LQG doesn't even aspire to be a GUT or a TOE.

Various versions of a beyond the Standard Model theory called "Technicolor" are on life support too, because, fundamentally, Technicolor is a theory designed to solve the theoretical problems that arise if it turns out that there is no Higgs boson with a mass on the same order of magnitude as the W and Z bosons. But the likely discovery of a roughly 125 GeV Higgs boson pretty much makes Technicolor obsolete. The failure of experiments to reveal magnetic monopoles or proton decay has also culled a lot of naively attractive GUT models other than string theory.

Thus, disproving SUSY, together with other recent developments, basically brings theoretical physicists back to square one in the errand of trying to unify all of the fundamental forces found in physics and explain why we have the particles that we do from a more fundamental basis.

Monday, February 20, 2012

Neanderthal Social Structure Summarized

A new open access paper summarizes what we can infer about the social structure of the Neanderthals, which looks a lot like the social structure of modern human hunter-gatherers.

I would suggest: 1) that the 5 km radius from which the vast bulk of raw material was obtained corresponds to the normal subsistence exploitation range for bands while they occupied a given site, as is consistent with ethnographic observations of hunter/gatherers' usual foraging ranges of up to 5 km (per Higgs and Vita-Finzi 1972; Hayden 1981, 379–81); 2) I would interpret the tools and blanks (generally without bulk raw lithic material) from the 5–20 km radius (and possibly up to 50 km – as seems to be the pattern in Figure 7) as curated tools carried from site to site by individuals belonging to a single local band and travelling within the full band's normal subsistence territory (i.e. c.1250–2800 sq km); 3) I would view the very small but consistent number of finished lithics derived from more than 50 km, even up to 300+ km, as most likely representing curated tools transported, used and discarded in the course of episodic interactions between bands, e.g. during multi-band aggregations or alliance visits. Band ranges of 1250–2800 sq km within a 13,000 sq km interaction network would result in 5–10 or more small local bands forming a larger ‘macroband’ or mating network as previously discussed. . . . there were linguistically distinctive Neandertal groups of about 200–500 that preferentially interacted and intermarried with each other and maintained some sort of group identity, which also distinguished them from enemies whom they occasionally killed and cannibalized. . . .

A number of sites from Europe, the Near East and the Ukraine all exhibit occupation floor areas and hearth patterns that indicate the existence of local bands as the basic, year-round social units with about 12–25 members, including children and the aged, probably organized into nuclear family groups. The use of some sites by smaller hunting or task groups, or even by occasional individual families foraging temporarily on their own, is probably also represented in the archaeological record. . . .

[V]isiting between bands, fluidity in band membership between allied bands, preferential intermarriage between allied bands, and periodic aggregations of a number of allied bands probably involving ritual sanctifications, all make good sense in the European Middle Palaeolithic. The transport of stone tools beyond 30–50 km from sources appears to reflect periodic visiting with other bands or the aggregation of several allied bands. Kill sites such as Mauran, representing over 1000 tons of meat butchered over a few centuries or millennia, and ritual sites such as Bruniquel may reflect such aggregation and social bonding events between local bands or their representatives.

There is clear evidence of enemy relationships among Neandertals in terms of cannibalism, and it seems most likely that there were conscious social distinctions between allied local bands and enemy bands, probably also expressed in terms of dialectical or linguistic differences, similar to those exhibited among the low-density ethnographic populations in Australia and Boreal North America. Thus, there are compelling reasons to conclude that there were, indeed, ethnic identities among Neandertals. . . .

There are also indications of the elevated status of some individuals in Neandertal communities, including preferential treatment in life of some aged or infirm individuals, preferential burial treatments, skull deformation, skull removal, special clothing or painted body designs, personal adornments or prestige objects, and the use of small exclusive ritual spaces. . . . status was probably . . . linked to one or more other domains such as ritual, war, kinship or intergroup relations.

While some researchers have questioned whether Neandertals had a significant sexual division of labour, there are good reasons for assuming that such divisions were just as strongly developed, if not more so, as those among ethnographic hunter/gatherers.

Cemeteries, although rare, seem to have existed in the most productive environments and may reflect corporate kinship groups that owned specific resource locations, or, more conservatively, may have simply symbolically expressed membership in a consciously recognized social group such as the local band.

Overall, the cognitive and social differences between Neandertals and anatomically modern humans that are highly touted by some researchers seem relatively insignificant, if they existed at all, at the level of basic cognitive and sociopolitical faculties. . . .

The main differences that are apparent between Middle and Upper Palaeolithic groups seem to be related to the development of complex hunter/gatherer social organization and economies in some areas of the Upper Palaeolithic versus the simpler hunter/gatherer economies and societies of the earlier Palaeolithic (aspects of which continued to characterize ethnographic hunter/gatherers in resource-poor environments).

Tuesday, February 14, 2012

Italians Detect 1.4 MeV Solar Neutrinos

Italian scientists have detected solar neutrinos with an energy of 1.4 MeV (million electron volts), which had been predicted to arise from the proton-electron-proton interactions that give rise to deuterium in the sun.

About 1 in 400 solar deuterium atoms are made this way, rather than through the usual proton-proton fusion process, which produces lower energy neutrinos. Neutrinos arising from interactions at particle colliders are far more energetic (on the order of 140 MeV). About three such neutrinos are detected each day at their lab, placed deep underground to filter out interference from sources other than neutrinos, which interact very weakly with ordinary matter, confirming our understanding of the nuclear physics going on in the sun.

High energy neutrinos move so close to the speed of light that it is virtually impossible to make meaningful determinations of neutrino rest mass from their speed, since their relativistic kinetic energy so profoundly dwarfs their very small mass. In principle, measurements of the speed of low energy solar neutrinos traveling distinguishably slower than the speed-of-light limit of special relativity could make it possible to directly infer the rest mass of a neutrino with much greater accuracy.

The mass of an electron is about 0.5 MeV/c^2, and the masses of the up and down quarks are each on the order of a few MeV/c^2. The various varieties of ordinary neutrinos are believed to have rest masses of no more than a few eV/c^2, and quite possibly well under 1 eV/c^2, closer in mass-energy to (massless) photons than to the other fermions.
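As a rough illustration of why lower energies help, the sketch below evaluates the special-relativistic speed deficit for a 1.4 MeV neutrino under a few rest mass assumptions; the mass values are purely illustrative, not measurements.

```python
# Fractional shortfall below the speed of light for an ultra-relativistic particle:
#   (c - v)/c  ~=  m^2 / (2 E^2)   when the rest mass m is much smaller than the energy E
# (both expressed in eV here).

def speed_deficit(mass_eV, energy_eV):
    """Approximate (c - v)/c for the given rest mass and total energy in eV."""
    return mass_eV**2 / (2.0 * energy_eV**2)

energy = 1.4e6  # a 1.4 MeV pep solar neutrino, expressed in eV
for mass in (1.0, 0.1, 0.01):  # illustrative rest mass assumptions in eV
    print(f"m = {mass:>5} eV  ->  (c - v)/c ~ {speed_deficit(mass, energy):.1e}")

# Even for a 1 eV rest mass the deficit is only ~2.6e-13, which shows how hard a
# direct time-of-flight mass measurement would be -- but that deficit is still
# many orders of magnitude larger than it is for GeV-scale accelerator
# neutrinos of the same rest mass, which is the advantage of low energies.
```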

SUSY Rumors A Bust

Woit explains at his blog that rumors of a major experimental indication of supersymmetric particles at the Large Hadron Collider, which would have been announced today, turned out to be unfounded. The latest LHC data simply increased the mass thresholds below which a couple of types of proposed superpartners, stops (the supersymmetric partners of top quarks) and gluinos (the supersymmetric partners of gluons), have been ruled out experimentally.

South Asian Paleoclimate Documented

How was the South Asian paleoclimate data gathered?

A new study provides about ten thousand years of continuous climate data for South Asia.

This research, led by researchers from the Woods Hole Oceanographic Institution, analyzed evidence from a Bay of Bengal sediment core, which captures discharges from the large Godavari river system. The core data come from carbon isotopes of leaf waxes, reflecting the amount of arid-adapted/savannah vegetation in the Godavari catchment, and from oxygen isotopes in a marine microfossil that record salinity. This points to a general aridification trend over the course of the middle and late Holocene, supporting what we would already infer from pollen data in Rajasthan or monsoon proxies in the Arabian Sea, but this time providing more direct evidence from South India.


Given the strong role of seasonal monsoons in the climate of South India, a factor lacking in the Mediterranean basin and West Eurasia, it wouldn't have been surprising if South India's climate had been quite distinct, even at the level of general trends, from the climate of places further to the west.

What does the study tell us about South Asian pre-history?

The two particularly big events in South Asian pre-history whose relationship to paleoclimate one would like to establish are the collapse of Harappan civilization in the Indus River Valley that preceded the Indo-Aryan expansion from the same vicinity (ca. 1500 BCE, give or take), and the rise of agriculture in South India (ca. 2500 BCE, give or take), which is likely to be associated with the rise of the culture that led to the expansion of the Dravidian languages. Towards this end, the conclusion of the study states that:

The significant aridification recorded after ca. 4,000 years ago may have spurred the widespread adoption of sedentary agriculture in central and south India capable of providing surplus food in a less secure hydroclimate. Archaeological site numbers and the summed probability distributions of calibrated radiocarbon dates from archaeological sites, which serve as proxies of agricultural population, increase markedly after 4,000 BP in peninsular India. . .

In contrast, the same process of drying elicited the opposite response in the already arid northwestern region of the subcontinent along the Indus River. From 3,900 to 3,200 years BP, the urban Harappan civilization entered a phase of protracted collapse. Late Harappan rural settlements became instead more numerous in the rainier regions at the foothills of the Himalaya and in the Ganges watershed.

There are some indications, supported by Rig Vedic legend, the distribution of archaeological ruins, and satellite imagery, that the event in the Indus River watershed may have concluded more dramatically, with one of the major tributaries of the Indus River shifting to a new course and leaving a huge expanse of the old Hakra river basin that was once at the heart of Harappan civilization suddenly dry, in what is now the Cholistan Desert. For example, a recent search of sites in this desert found an Indus Valley Civilization seal from the golden age of that culture.

The dates mentioned by the paper (ca. 2000 BCE for the South Indian Neolithic and ca. 1700-1200 BCE for the demise of Harappan civilization) are a bit later than the dates I have seen based on the oldest known remains in each region. I've seen estimates dating the oldest South Indian Neolithic sites to 2500 BCE, there are traces of Indo-Aryan influence in the Cemetery H culture back to about 2000 BCE in Northwest India/Northern Pakistan, and 1900 BCE is frequently used as an end point for Harappan civilization. But the differences aren't huge and seem to be fairly consistent in magnitude and direction. They could simply reflect the difference between dates that try to capture the very earliest point at which a new civilization appears and dates that mark the point at which it really starts to thrive.

Were there major droughts in South India, or was the trend a gradual one?

One curious point that isn't mentioned in the blog post on this study (but may be hidden in the data) is that in the Fertile Crescent region there were two very distinct severe droughts in the same general time period that punctuated the general trend towards aridity in the Holocene there.

One happened at roughly 2000 BCE and is associated with the collapse of the Akkadian Empire in Mesopotamia, the First Intermediate Period in Egypt, and the beginning of the rise of Hittite power in Anatolia. The other, at roughly 1200 BCE, is associated with the sweep of significant historical events across the ancient West Eurasian world known as Bronze Age Collapse. It remains unclear whether these droughts extended as far east as South Asia, and this data set would be the obvious place to look for such punctuated periods of drought as opposed to a more gradual trend towards aridity.

We also know from North American pre-historic paleoclimate correlations with archaeology that prolonged droughts are the sorts of things that can cause civilizations to fall and lead to dramatic upheavals in human affairs.

The answer to this question calls for a closer analysis of the data than I can do right now, and may require access to non-open access sources. Insights in the comments on this point would be greatly appreciated.

An Early Human Role In Climate Change?

The data from South India are also relevant to the extent to which Holocene climate change may have been driven by human activity. The new data tend to argue for human activity changes as an effect rather than a cause of early Neolithic era climate change.

The trend towards aridity in the Holocene corresponds with the rise of the Neolithic era when humans started farming and herding, activities that had significant ecological effects locally. But, the question of cause and effect arises.

Did farming and herding arise and start to become more important out of necessity, because increased aridity made hunting and gathering lifestyles less viable, and perhaps favored agriculture in other ways (for example, by producing fewer deluge storms that could destroy crops)?

Or did farming and herding disrupt ecosystems that were critical to maintaining regional homeostasis and stabilizing weather patterns, for example, by preventing soil from turning to dust and entering the atmosphere? This would be early, human-activity-driven climate change, and in West Eurasia, where the prevailing winds can carry Fertile Crescent climate effects across the rest of the region, a causal connection isn't implausible.

But South India developed agriculture about four or five thousand years after it appeared in the Indus River Valley and five or six thousand years after it arose in the Fertile Crescent, and aridity in South India is mostly driven by monsoon rains from the Southeast. So human activity in the Fertile Crescent and Indus River Valley shouldn't have had much impact on its aridity, and agricultural activity in China (both North and South) would have been sufficiently remote that it would be surprising to see early agriculture there influencing climate in South India, whose weather patterns are more tied to conditions in Indonesia and Southeast Asia than to China.

If South India was seeing the same kinds of climate change trend towards aridity as West Eurasia at about the same times, the case that the rising trend towards aridity in West Eurasia was the product of human activity is weakened as a hypothesis.

Note: Minor updates to language and links added on February 20, 2012.

Monday, February 13, 2012

Evidence Mounts For Massive Gluons

Quantum Chromodynamics (QCD) is the study of how quarks and gluons interact. There is a great deal of experimental evidence and consensus about how they behave in high energy (aka ultraviolet) settings, but less of a consensus about low energy (aka infrared) settings, where quarks and gluons are confined within composite particles whose internal activity is harder to probe experimentally without increasing the energy of the system to UV levels. We have QCD equations from the Standard Model that make it possible to calculate this behavior in principle, but the math is very, very hard to compute with, even with supercomputers.

Conventionally, we talk about gluons as having zero rest mass, but that analysis is model dependent (the model assumes a constant mass subject only to Lorentz adjustments) and based on a fit exclusively to UV range data.

The emerging consensus is that the effective gluon mass is momentum dependent: gluons at high momenta approach zero mass, while gluons at low momenta approach a finite mass on the order of 600 MeV +/- about 15%.

The interpretation of the Landau gauge lattice gluon propagator as a massive-type bosonic propagator is investigated. Three different scenarios are discussed: (i) an infrared constant gluon mass; (ii) an ultraviolet constant gluon mass; (iii) a momentum-dependent mass. We find that the infrared data can be associated with a massive propagator up to momenta ~500 MeV, with a constant gluon mass of 723(11) MeV, if one excludes the zero momentum gluon propagator from the analysis, or 648(7) MeV, if the zero momentum gluon propagator is included in the data sets. The ultraviolet lattice data are not compatible with a massive-type propagator with a constant mass.

The scenario of a momentum-dependent gluon mass gives a decreasing mass with the momentum, which vanishes in the deep ultraviolet region. Furthermore, we show that the functional forms used to describe the decoupling-like solution of the Dyson–Schwinger equations are compatible with the lattice data with similar mass scales.


From O. Oliveira and P. Bicudo, "Running gluon mass from a Landau gauge lattice QCD propagator," J. Phys. G: Nucl. Part. Phys. 38, 045003 (2011), doi:10.1088/0954-3899/38/4/045003.

At any rate, that is my take on the current state of the research, which is often stated in terms that are less direct about the quantities and properties of QCD entities familiar to non-specialists than the abstract of the 2011 paper quoted above.

This "decoupling solution" to the QCD equations appears to yield a stable result which should be confirmed by physical results if they can ever be made, while the alternative, a "scaling solution," in which a zero momentum gluon is massless, is apparently unstable and hence unlikely to have a physical manifestation.

The implications of the decoupling solution are weird. In special relativity, a massive particle gains effective mass as it goes faster. In the QCD decoupling solution, a massive particle loses effective mass as it goes faster. Special relativity "feels" a bit like air resistance. QCD "feels" a bit like the relationship between the friction experienced by something moving along a snow covered surface and its speed.

But, there are elements of a massive gluon approach that are quite attractive and intuitive in light of the other data and constraints. It helps explain why the mass of ordinary protons and neutrons and also more exotic combinations of quarks seems to come mostly from the gluons and not the quarks. It fits the intuition that massive bosons should be associated with short range forces, while massless bosons should be associated with long range forces. It fits the notion that the strong force (which gets more insurmountable with distance and nearly vanishes at short range) looks a bit like the inverse of gravity (which gets weak at long range and strong at shorter ranges). It provides a way to reconcile disparate data from low energy and high energy contexts that appear to be inconsistent with a constant gluon mass or a massless gluon.

Not every little glitch in the theory has been neatly ironed out and presented definitively, but momentum towards this interpretation becoming dominant seems to be building.

Friday, February 10, 2012

Descendants Of Confucius

Confucius lived in China about 2,500 years ago. How many of the 1.2 billion living people in China have him somewhere in their family tree today?

Probably, almost all of them.

A couple of days ago, Victor Mair wrote about some provocative behavior on the part of "Kŏng Qìngdōng 孔庆东, associate professor in the Chinese Department at Peking University, who also just happens to be the 73rd generation descendant of Confucius (Kǒng Fūzǐ 孔夫子 ; Kǒng Qiū 孔丘), or at least he claims to be a descendant of Confucius." . . .

[In numerical geneological models there was] a threshold, let us say Un generations ago, before which ancestry of the present-day population was an all or nothing affair. That is, each individual living at least Un generations ago was either a common ancestor of all of today's humans or an ancestor of no human alive today. Thus, among all individuals living at least Un generations ago, each present-day human has exactly the same set of ancestors. We refer to this point in time as the identical ancestors (IA) point. . . .

Within China, there's been more than enough mixing to ensure that by now, if anyone is a descendent of Confucius, everyone is.

From here.

The date for a most recent common ancestor of all members of a population by any line of descent is generally (and logically) considerably more recent than the identical ancestor point.

Also, realistically, there is enough non-randomness and structure in human reproductive choices that "almost everyone is," to a very high percentage (perhaps 99.999% or more), is probably quite a bit more likely to be the case than literally "every single person is." There might be only 12,000 people or fewer in China who are not descended from Confucius at all, but in real life, there probably are some people who are not.

The kind of assumptions one has to make to reach these conclusions is explored in the linked post. Suffice it to say that the results are quite robust: a range of numerical values for the population model assumptions that is much broader than intuition would suggest produces similar historical points at which, respectively, all persons in a population share a common ancestor and have an identical set of ancestors (not necessarily in the same proportions).
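A toy simulation of the underlying random-mating genealogical model (along the lines of the Chang and Rohde style models the linked post relies on) is sketched below; the population size, the uniform random choice of parents, and the generation cap are all illustrative simplifications, not an attempt to model China realistically.

```python
import random

# Toy version of the random-mating genealogical model behind the "identical
# ancestors" (IA) argument: every individual draws two parents uniformly at
# random from the previous generation. All numbers are illustrative.

N = 1_000          # toy population size per generation (illustrative)
MAX_GEN = 60       # how many generations back to simulate
random.seed(1)

FULL = (1 << N) - 1
# descendants[i] = bitmask of present-day individuals descended from individual i
descendants = [1 << i for i in range(N)]   # generation 0 is the present

mrca_gen = ia_gen = None
for g in range(1, MAX_GEN + 1):
    parents = [0] * N
    for child_mask in descendants:
        for parent in random.sample(range(N), 2):   # two distinct random parents
            parents[parent] |= child_mask
    descendants = parents
    if mrca_gen is None and any(d == FULL for d in descendants):
        mrca_gen = g   # someone in this generation is an ancestor of everyone now alive
    if ia_gen is None and all(d == FULL or d == 0 for d in descendants):
        ia_gen = g     # everyone in this generation is an ancestor of all present people, or of none
        break

print(f"Most recent common ancestor of everyone: ~{mrca_gen} generations back")
print(f"Identical ancestors point: ~{ia_gen} generations back")
# With N = 1,000 both milestones arrive within a few tens of generations, and
# they grow only logarithmically with population size, which is why even very
# large, loosely mixing populations reach them surprisingly quickly.
```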

I have not quoted the estimate in the original post regarding the likelihood that someone in China has at least one of their genes derived from Confucius because, as pointed out in the comments to that post, the estimate is profoundly wrong.

The actual probability that a Han Chinese person has a gene derived from Confucius through some line of descent is on the order of 2.5% (assuming that there are about 25,000 genes in humans), not "3 chances in a quintillion" as the original post claims, because the original post ignores the fact that most people will have multiple lines of descent to the same historical person. But the underlying point, that having someone in your genealogy does not necessarily mean that you have any genetic descent from them several dozen generations later, is a valid one.
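The arithmetic behind that correction can be sketched as follows. The accounting below is phrased in terms of recombination segments rather than the 25,000 genes used in the linked post's comments, and every specific number (the segment count, the size of the contributing ancestral pool) is an illustrative assumption, so the output should be read as an order-of-magnitude argument rather than a precise figure.

```python
import math

# Why "3 chances in a quintillion" is the wrong order of magnitude: it prices a
# single line of descent, while a present-day person has an astronomical number
# of lines of descent back to anyone living ~73 generations ago.

generations = 73                    # generations separating Confucius from the present (from the post)
segments = 22 + 33 * generations    # rough count of chunks the autosomes break into after that many meioses
slots = 2 ** generations            # genealogical ancestor slots that many generations back

# Probability that ONE specific line of descent carries any DNA from its ancestor:
p_single_path = segments / slots
print(f"Per-path probability: ~{p_single_path:.1e}")   # ~3e-19, the ballpark of the 'quintillion' figure

# Those ~9e21 slots were filled by a finite ancestral population, so any actual
# ancestor of that era occupies an enormous number of them. Suppose, purely for
# illustration, that a fraction 1/pool of the modern gene pool traces back to
# Confucius specifically:
for pool in (100_000, 1_000_000):   # illustrative effective sizes of the contributing ancestral pool
    expected_segments = segments / pool
    p_any_dna = 1.0 - math.exp(-expected_segments)   # Poisson approximation
    print(f"pool = {pool:>9,}:  P(any DNA from that one ancestor) ~ {p_any_dna:.2%}")
# The answer lands at a fraction of a percent to a few percent depending on the
# assumed pool size -- some sixteen orders of magnitude larger than the
# single-path figure, which is the structural point being made above.
```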

As a footnote, the increasingly strong possibility that most people in a large population will have multiple lines of descent (indeed, a great many lines of descent) to most or all of the total set of people who are ancestral to the current population is very much the same kind of observation that makes it heuristically very likely that Goldbach's Conjecture is true for all very large numbers.

The identical ancestors point has some very practical implications for population genetics. For example, the closed set of ancestors implies that, with the exception of mutations that have happened since the identical ancestors point and managed to remain in the gene pool, the set of gene variants present in anyone in the population is closed as of the IA date.

The way the numbers work out, a population that isn't fully isolated from another population is typically going to have an IA date that is almost certainly within the Holocene era (i.e. about twelve thousand years ago) and typically somewhere within the historic era (i.e. about six thousand years ago). Of course, sustained periods of complete isolation of different populations from each other push the IA date back to sometime before the populations were separated. But even slight amounts of population exchange over a surprisingly small number of generations can dramatically reduce the time frame in which everyone in two populations has a most recent common ancestor or identical ancestors.

Wednesday, February 8, 2012

Tree Ring Data Underestimates Temperature Declines Due To Volcanos

You can use tree rings to estimate climate cooling due to volcanic eruptions. But this method of estimating paleoclimate has limitations, especially in cases of rapid and severe volcanic-eruption-driven climate change, because all cooling below a given threshold is invisible: tree rings can get no smaller, or aren't formed at all in a given year, if it is too cold.

Rigor In Quantum Field Theory

Axiomatic QFT is an attempt to make everything absolutely perfectly mathematically rigorous. It is severely handicapped by the fact that it is nearly impossible to get results in QFT that are both interesting and rigorous. Heuristic QFT, on the other hand, is what the vast majority of working field theorists actually do — putting aside delicate questions of whether series converge and integrals are well defined, and instead leaping forward and attempting to match predictions to the data. Philosophers like things to be well-defined, so it’s not surprising that many of them are sympathetic to the axiomatic QFT program, tangible results be damned.

The question of whether or not the interesting parts of QFT can be made rigorous is a good one, but not one that keeps many physicists awake at night. All of the difficulty in making QFT rigorous can be traced to what happens at very short distances and very high energies. And that’s certainly important to understand. But the great insight of Ken Wilson and the effective field theory approach is that, as far as particle physics is concerned, it just doesn’t matter. Many different things can happen at high energies, and we can still get the same low-energy physics at the end of the day. So putting great intellectual effort into “doing things right” at high energies might be misplaced, at least until we actually have some data about what is going on there.

From Sean Carroll at Cosmic Variance (emphasis added).

The kind of rigor that axiomatic QFT theorists are concerned about is mostly taught in the upper division undergraduate mathematics course called "Real Analysis," which I skipped in favor of applied subjects myself, betraying my biases on this issue.

Nobel prize winning physicist Richard Feynman publicly worried quite a bit about the rigor of quantum field theory, and the convergence of its infinite series in particular, but probably didn't lose much sleep over it. If you don't lose sleep over helping to invent the nuclear bomb, infinite series that might not converge are probably not going to do you in either.

The emphasized sentence is really the key one from a practical perspective. This is the line between what we can say that we know as a result of the Standard Model with considerable confidence, and what we recognize that we don't know with current quantum mechanical equations.

The very short distance issue, at its root, boils down to assumptions in quantum mechanics that (1) particles are point-like, (2) space-time is continuous rather than discrete, (3) locality in space-time is always well defined, and (4) the effects of general relativity (as distinct from the effects specific to special relativity, which are well accounted for in quantum mechanics) on how systems behave are modest. The tools of "analysis" in mathematics (i.e. advanced calculus) are pretty much useless in situations where the first three assumptions do not hold. To do interesting mathematics without those three assumptions, you pretty much have to use "numerical methods," which means that you can basically only do calculations by having computers conduct myriad subcalculations that would be impracticable to do by hand (which helps explain why research programs that abandoned those assumptions never really got very far until powerful computers were available).

The fourth assumption (the absence of extreme general relativity effects) is different in kind and more fundamental. General relativity inherently gives rise to all sorts of singularities, which mathematicians abhor, in the "classical" (meaning non-quantum mechanical) formulation of its equations, and any mathematically rigorous reformulation of general relativity on a quantum mechanical basis needs some sort of asymptotic limit that intrinsically prevents the equations from blowing up in the wrong ways as one comes close to what would be the singularities in the normal formulation of general relativity.

Loop quantum gravity is one fairly successful research program (so far) at developing a way of addressing these issues of rigor: assumptions (2), (3) and (4) are dispensed with in favor of a discrete space-time in which locality is only an imperfect and emergent implication of the theory at super-Planckian scales, and in which the effects of general relativity are fully accounted for by the theory. LQG is a work in progress, but it hasn't yet hit the kind of seemingly insurmountable theoretical roadblocks that have cropped up in String Theory, the other main approach in theoretical physics that is trying to formulate a rigorous treatment of gravity at a quantum scale.

The high energy issue also implicates the effects of general relativity in quantum systems, and recognizes that there may be beyond the Standard Model physics in which the forces are unified, or new particles or forces appear, at high energies, rather than displaying the force specific symmetries observed at experimentally accessible energy scales.

The loop quantum gravity program has very little to say about these issues of high energy physics, but these issues are at the heart of String Theory.

The point about the limited utility of theorizing without sufficient data to provide much guidance is an astute one, and the story of String Theory is one of a theory with so many degrees of freedom that no one can find a unique version of it that can be derived from experimental observables and tested with current technology, or even technology conceivable in the next few decades.

Arcadian Pseudofunctor Ends

For reasons not disclosed, Kea has discontinued her Arcadian Pseudofunctor blog, a theoretical math and physics blog, which has appeared in the sidebar here for some time. Kea is a female theoretical physicist from New Zealand who has lately been in the frustrating situation of having the appropriate academic training to do the work, but not the academic affiliation necessary to have proposed publications taken seriously, get invited to the relevant academic conferences, and so on. I've appreciated her many interesting posts.

I wish her the best and hope, as seems to be the case, that she will continue to be a presence in the physics blogosphere through comments on blogs, academic papers posted at ViXra (of which she has several), and so on.

Maintaining a blog is a time consuming labor of love that, for a physicist, can eat up time better spent doing research, and that doesn't necessarily always reach the intended audience, so I can understand that there are a multitude of good reasons why one might decide to close one.

Since this blog is no longer active, I have removed it from the sidebar.

Tuesday, February 7, 2012

Pre-Out Of Africa Population Sizes and Densities


Dienekes notes the latest estimate of the population size of modern humans ca. 130,000 years ago, which would be just prior to the Out of Africa event, citing and quoting from Per Sjödin, Agnès E. Sjöstrand, Mattias Jakobsson and Michael G. B. Blum, "Resequencing data provide no evidence for a human bottleneck in Africa during the penultimate glacial period," Mol Biol Evol (2012), doi: 10.1093/molbev/mss061.

Using the estimates of autosomal mutation rates derived from actual direct measurement in mother-father-child trios, which I believe to be more accurate than the mutation rate inferred from human-chimpanzee divergence, the effective population size was about 12,000 (95% confidence interval 9,000-15,000), implying a "census" population of about 90,000-150,000 modern humans who are ancestral to people who remain in the gene pool today in Sub-Saharan Africa.

The researchers note that:

Assuming that the range of humans extends over all the 24 millions km2 of Sub-Saharan Africa, the density of humans at that time would have been extremely low between 0.5 and 1.4 individual per 100 km2, which is even lower than the lowest recorded hunter gatherer density of 2 individuals per 100 km2 reported for the !Kung (Kelly 1995) and the density of 3 individuals per 100 km2 estimated for Middle Paleolithic people (Hassan 1981). However, this discrepancy disappears if humans were restricted to an area some 3-6 times smaller than the entire Sub-Saharan Africa.

However, as the modern population densities of Sub-Saharan Africa make clear, an assumption of a uniform population density in Middle Paleolithic Africa makes no sense. Some habitats are better for modern human hunter-gatherers than others, and the !Kung continue to survive as a hunter-gatherer and herder society precisely because they live in a place with some of the worst conditions in which it is possible for modern humans to survive. In contrast, Middle Paleolithic modern human hunter-gatherers would have lived in the optimal environments for their lifestyle, the environments in which they evolved in the first place.

A better estimate of Middle Paleolithic population densities would be to assume that the 120,000 modern humans of 130,000 years ago were distributed in a way roughly proportional to modern population densities (which still have to reflect ecological habitability, given the lack of economic development), with areas that could not sustain a floor population density of at least the !Kung's 2 per 100 km2 being effectively uninhabitable by sustainable modern human populations, and with the population densities of modern humans in prime territory like the African Rift Valley being much greater, approaching the peak population densities seen in hunter-gatherer societies based on fishing in places like the Pacific Northwest during the Pre-Columbian era. A population density of 10 per 100 km2, with a much smaller percentage of Sub-Saharan Africa within the modern human range at that point (perhaps just 5% to 10% of the total area), seems like a more plausible assumption.
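A quick arithmetic sketch of how these density assumptions trade off against occupied area is below; the census figure and total area are the ones from the post and the quoted paper, and the alternative scenarios are the illustrative ones discussed above.

```python
# How census population, occupied area, and population density trade off.

census_population = 120_000     # rough midpoint of the 90,000-150,000 census estimate above
total_area_km2 = 24_000_000     # all of Sub-Saharan Africa, per the quoted paper

def density_per_100km2(population, area_km2):
    return population / area_km2 * 100.0

def area_needed_km2(population, density_per_100):
    return population / density_per_100 * 100.0

# Scenario 1: spread uniformly over all of Sub-Saharan Africa (the paper's baseline).
print(f"Uniform over all of SSA: {density_per_100km2(census_population, total_area_km2):.1f} per 100 km2")

# Scenario 2: everyone living at the !Kung floor density of 2 per 100 km2.
area2 = area_needed_km2(census_population, 2.0)
print(f"At 2 per 100 km2: {area2 / 1e6:.1f} million km2 occupied ({area2 / total_area_km2:.0%} of SSA)")

# Scenario 3: the post's preferred guess of ~10 per 100 km2 in prime habitat only.
area10 = area_needed_km2(census_population, 10.0)
print(f"At 10 per 100 km2: {area10 / 1e6:.1f} million km2 occupied ({area10 / total_area_km2:.0%} of SSA)")
# The last line reproduces the ~5% of Sub-Saharan Africa mentioned above.
```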

Friday, February 3, 2012

Archaic Hominins Had Alzheimer's Disease Risk Genes

John Hawks notes that both Neanderthals and Denisovans, where ancient DNA provides any sign at all, had genetic markers that are associated with an enhanced risk of Alzheimer's Disease in Europeans today. It isn't clear what to make of this fact, as he explains with a few disclaimers, although it isn't terribly surprising that any gene related to inferior intellectual function is less common in modern humans than it was in archaic hominins.

The Roots Of The Na-Dene Languages Are In Alaska

Alaska is the presumed starting point for (at least) three very important migrations that defined the cultural history of the entire Western Hemisphere, but so far the archaeological record within the state has shed virtually no light on two of them, and relatively little on the third. . . .

The first of these migration is, of course, the initial peopling of the Americas in the Late Pleistocene. . . . Recent research in various places has increasingly indicated that the Clovis culture of around 13,000 years ago was not the direct result of the earliest migration into the Americas, but it is still the case that any migrations during the Pleistocene (and it’s increasingly looking like there were at least two) almost certainly would have had to go through Alaska. Unfortunately, despite several decades of looking, no sites have yet been found in Alaska itself that can plausibly be taken to reflect the first immigrants into North America from Asia. An increasing number of early sites have been identified in the past twenty years, but these are all still too late to represent a population ancestral to Clovis or any of the other early cultures found further south. Part of the problem here is that preservation conditions for archaeological sites in most of Alaska are atrocious, and in many areas even finding early sites is extremely difficult. The fact that the state is huge and sparsely populated also means that very little of it has even been surveyed for sites[.]. . .

The second of the migrations I mentioned above is that of speakers of Athapaskan languages [Ed. also known as the Na-Dene languages] to the south, ultimately as far as the Southwestern US and the extreme north of Mexico. As I’ve mentioned before, it’s long been quite obvious that Navajo and the various Apache languages, as well as several languages of California and Oregon coasts, are closely related to a larger number of languages in Alaska and northwestern Canada. The distribution of the languages, as well as some internal evidence in the southern branch, strongly suggests that the direction of the migration that led to this situation was north-to-south, and similar evidence similarly suggests that the start point was somewhere in what is now Alaska. Despite the enormous distance over which Athapaskan languages are now spread, the greatest diversity of the languages grammatically is actually found within Alaska. That is, some Alaskan languages are more closely related to Navajo than they are to other Athapaskan languages in Alaska. While this is all clear linguistically, tracing the actual migration archaeologically has been enormously difficult at both ends. Athapaskan archaeology in Alaska in particular is remarkably poorly understood compared to the archaeology of Eskimo groups, due in part to the fact that Athapaskans have mostly occupied the interior areas that are harder to investigate than the primarily Eskimo coastal areas. . . .

The third migration, and by far the best understood, is that of so-called Thule peoples from northwestern Alaska eastward across the Arctic as far as Greenland.

From here (emphasis and editorial comment added).

While the genetic distinctiveness of the Na-Dene compared to other Native Americans already favored a distinct Na-Dene migration wave to North America from Siberia, and the fact that the Navajo and Apache languages have their roots in northwestern North America was widely understood, the evidence that the maximum linguistic diversity is found in Alaska, rather than, for example, the Pacific Northwest of the United States, is still significant.

The linguistic evidence tends to corroborate and enhance the genetic evidence by independently disfavoring the possibility that the Na-Dene have origins in a population expansion within some enclave of the pre-existing Native American population that made some economic breakthrough (perhaps some new fishing or hunting technique) and had rare genes that were lost due to serial founder effects in other parts of the New World. It also tends to validate proposed linguistic connections between the Na-Dene and the Yeniseians (e.g. the Ket people of Central Siberia) by putting the source population for the North American Na-Dene people about as close to their Old World origins as it could possibly be.

Linguistic evidence of Alaskan origins for the Na-Dene also tells us that the oldest Na-Dene archaeological remains in the interior of Alaska ought to be older than the oldest Na-Dene archaeological remains in contexts where the conditions for preserving such relics are better. Combined with our knowledge that the Na-Dene were likely post-Clovis, this helps pinpoint places in Alaska (and particular sedimentary layers at particular digs within Alaska) where it makes sense to look for early Na-Dene archaeology. Archaeologists need to be looking for sites centuries or even millennia earlier than the earliest known Na-Dene sites anywhere else in North America.

But, perhaps most tempting of all is the observation that "some Alaskan languages are more closely related to Navajo than they are to other Athapaskan languages in Alaska." This suggests that it may be possible in the case of these languages, as it was in the cases of the Bantu-derived languages of Madagascar and the languages of the European Gypsies (aka Roma), to trace the source of an exceptionally long distance prehistoric migration not just to the region where the source language family as a whole was spoken, but to one very geographically specific place within that region that was the home of the particular group of long distance migrants who gave rise to the expansion.

These examples also cross-validate each other. They suggest that a long distance migration often involves an isolated, "heroic" journey of a single community of people with origins in a single place, no doubt led by some visionary leader (who may have seemed crazy at the time), to a new homeland whose links to the larger linguistic and cultural grouping in which it has its origins are filtered entirely through the way that grouping manifests in this particular community. This stands in opposition to migration models that posit bland, almost random walk, diffusions of peoples experiencing population expansions.

Narratives like the Biblical Exodus story, rather than being implausibly far-fetched, seem to be almost paradigmatic of how a long distance migration of an ethnically distinct population to a new homeland happens. The push and pull factors that motivated these long range migrations at the time, and the events that were critical in making it possible for the new arrivals to displace or culturally dominate the pre-existing residents of their new homes, may be forever lost to history, but these kinds of migrations necessarily must have had those aspects to them.

Actually, it isn't quite that simple. Viewed more closely, the linguistic evidence suggests that the Eastern Athapaskan languages arose from two distinct waves of migration with origins in or around Southern Alberta, while the Western Athapaskan languages, including Navajo, have roots in a different migration (although, in fairness, the three separate branches of migration seem to have been reasonably close in time).

The same author expands upon his conclusions in a follow up post (which happens to reference a Wikipedia page I've put a great deal of work into developing).

[I]t’s generally thought that the Athapaskan migrations which eventually led to the entrance of the Navajos and Apaches into the Southwest began in Alaska. The northern Athapaskan languages are actually spoken over a very large area of northwestern Canada as well, but the linguistic evidence clearly points to Alaska as the original place where Proto-Athapaskan was spoken at the last point before it splintered into the various Athapaskan languages. That is, the Urheimat of the Athapaskans seems to have been somewhere in Alaska.

There are two main pieces of evidence pointing to this conclusion. One is the fact that it has been quite well established at this point that the Athapaskan language family as a whole is related to the recently-extinct language Eyak, which was spoken in south-central Alaska. Eyak was clearly not an Athapaskan language itself, but it had sufficient similarities to reconstructed Proto-Athapaskan to establish a genetic relationship. Since Eyak was spoken in Alaska, it therefore seems most probable that the most recent common ancestor of both Eyak and the Athapaskan languages (Proto-Athapaskan-Eyak) was also spoken in Alaska. . . .

A stronger piece of evidence is the internal diversity of the Athapaskan languages themselves. A general principle in historical linguistics is that the Urheimat of a language family is likely to be found where there is maximal diversity among the languages in the family. . . . When it comes to Athapaskan, this condition obtains most strongly in Alaska. The languages in Canada and the Lower 48 are all relatively closely related to each other within the family as compared to some of the languages in Alaska. Although interior Alaska is overwhelmingly dominated by Athapaskan groups, the linguistic boundaries among these various groups, even those adjacent to each other, are often extremely sharp.

This is particularly the case when it comes to the most divergent of all the Athapaskan languages: Dena’ina. (. . . in many publications this term is spelled “Tanaina,” . . . ”Dena’ina” is the generally preferred form these days[.] . . .) This is the language traditionally spoken around Cook Inlet in south-central Alaska, including the Anchorage area. While it’s clearly Athapaskan, it’s very weird as Athapaskan languages go. It is not mutually intelligible with any other Athapaskan language, although it borders several of them, and it is in turn divided into several internal dialects that are strikingly diverse. . . . there are two main dialects, Upper Inlet and Lower Inlet, and that Lower Inlet is further subdivided into two or three subdialects: Outer Inlet, Inland, and Iliamna. . . the Lower Inlet dialect is more conservative than the Upper Inlet one, which shows extensive influence from the neighboring Ahtna language, which is also Athapaskan but not very similar to Dena’ina. Within the Lower Inlet dialect, the Inland dialect is the most conservative . . . presumably due to the relative isolation of this dialect, which is spoken in the Lake Clark area and further north in Lime Village. . . . this the most likely homeland of Dena’ina speakers . . . [who] moved from the interior to the coast relatively recently. . . .

Despite the relative conservatism of the Lower Inlet dialect, however, all its subdialects do show a certain amount of influence from Yup’ik Eskimo (particularly in the development of the Proto-Athapaskan vowel system). This is unsurprising, as these dialects lie on the boundary of the Yup’ik area to the west and south, and the Dena’ina groups in these areas show extensive Eskimo influence in many aspects of their traditional culture. . . . the main distinctions among the Dena’ina groups were economic, having to do with their subsistence systems, while other social systems were pretty similar across the various groups. The Lower Inlet groups, especially those in the Seldovia area on Kachemak Bay near the outlet of Cook Inlet, showed a much heavier dependence on hunting sea mammals and a correspondingly heavier influence from nearby Yup’ik Eskimo groups with a similar adaptation than their compatriots further north who had a more typically Athapaskan lifestyle based on salmon fishing and hunting of terrestrial animals.

The fact that Dena’ina, the most divergent of the Athapaskan languages and therefore the one that most likely split earliest from Proto-Athapaskan, is spoken in Alaska makes it very likely that Proto-Athapaskan was spoken in Alaska as well. Indeed, if . . . the Lake Clark area was the original homeland of the Dena’ina, this potentially places Proto-Athapaskan quite far west within Alaska and quite close to areas traditionally occupied by Eskimo-speakers. . . . however. . . it’s still very unclear when the breakup of Proto-Athapaskan occurred and who occupied which parts of Alaska at that time.

Thus, the Urheimat of the Athapaskan aka Na-Dene languages in North America was probably somewhere west of Anchorage, in areas that are now occupied by Eskimos but wouldn't have been prior to the arrival of the Thule sometime within the last fifteen hundred years, who apparently displaced the Athapaskans to inland locations. Also, while the author doesn't expressly address the point, the arrival of the Thule in North America took place close in time to the Na-Dene migration to the Southwestern United States, which suggests that this Navajo and Apache exodus story may have its origins in flight from the ancestors of modern Eskimos who displaced them from their previous homeland. It is also roughly coincident with the rise of the Cahokia-based Mississippian culture (which despite its name apparently reached at least as far as the Atlantic Coast of North Carolina), with the Viking Vinland colony in far northeastern North America, and with the Chaco culture of the American Southwest. The interactions between the basically hunter-gatherer Athapaskan cultures and the agriculturalist Pueblo culture are explored here.

The author of the excerpts above provides further detail in a third post on the subject, which he concludes with the following observations:

[I]t definitely seems that the Dena’ina most likely spread from west to east, and the Ahtna may have been spreading in the other direction at approximately the same time. Archaeological evidence suggests that the Dena’ina spread into the Kenai peninsula (across the inlet from their apparent homeland) took place less than a thousand years ago. Since it seems very clear that the Athapaskans had been in Alaska for a very long time before that, certainly long enough for Dena’ina in the southwest to diverge markedly from the various languages that form a large dialect continuum in the Tanana and Yukon valleys, this suggests that the story of Athapaskan prehistory is both very complicated and very long-term.

More Of BSM Parameter Space Ruled Out

The Large Hadron Collider relentlessly expands the envelope of energy levels in which beyond the Standard Model physics can't be lurking. As of January 31, 2012, some of those new limits on beyond the Standard Model physics from the LHC include the following:

Fourth Generation Top Quarks

[I]n a large class of little Higgs and composite Higgs models the fermionic partner of the top quark decays as t' → b W about half of the time. The current limit on the t' mass assuming 100% branching fraction for the t' → b W decay is 525 GeV. For little Higgs et al. the limit is slightly weaker, slightly above 400 GeV (due to the smaller branching fraction) but that is also beginning to feel uncomfortable from the point of view of naturalness of these models.

This result is entirely expected, because a fourth generation charged lepton (the analog of the electron) would be strongly expected to be lighter than a fourth generation top quark, just as the up-type quark in each of the three known generations is heavier than the corresponding charged lepton. And, while neutrinos are almost impossible to see directly in a detector, and heavy quark decay products can form electrically neutral hadrons that are somewhat hard to see, or can create jets of decay products that are hard to distinguish from known heavy particle decays (because so many particle detections have to be combined and because there are many possible scenarios that generate heavy particle decay jets), a very heavy fourth generation charged lepton would generate a very distinctive signal in the data. So it would be surprising if a fourth generation top quark were the first fourth generation particle to be detected.
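To see the pattern being invoked, consider the approximate masses of the up-type quark and the charged lepton in each known generation (the numbers below are rounded Particle Data Group values, and the snippet is my own illustration rather than anything from the linked source):

    # Approximate masses in MeV (rounded PDG values).
    generations = {
        "first":  {"up quark":    2.2,       "electron": 0.511},
        "second": {"charm quark": 1_270.0,   "muon":     105.7},
        "third":  {"top quark":   173_000.0, "tau":      1_777.0},
    }

    for name, masses in generations.items():
        quark_mass, lepton_mass = masses.values()
        # In every generation the up-type quark outweighs the charged lepton,
        # by a factor that grows from ~4 to ~100 across the generations.
        print(f"{name}: quark/lepton mass ratio ~ {quark_mass / lepton_mass:.0f}")

If a fourth generation followed the same pattern, a t' at 525 GeV or more would imply a fourth generation charged lepton light enough that, per the reasoning above, it should already have produced a conspicuous signal.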

W' Bosons

What about a particle that behaves more or less like a W boson but is heavier? The bounds on this are also getting tighter, but with a twinge of hope from a single outlier event at the TeV scale.

[T]here is no compelling theoretical reasons for such a creature to exist. However they represent a characteristic and clean signature . . . an energetic electron or muon accompanied by missing energy from a neutrino. To tell W' from the ordinary W boson one looks for events with a large transverse mass . . . . Intriguingly, in the muon channel an outlier event with a very large transverse mass of 2.4 TeV is observed in the data. Of course, most likely it's just a fluke[.]
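For readers unfamiliar with the variable, the transverse mass used in these lepton plus missing energy searches is conventionally defined (this is the standard collider physics definition, not something taken from the quoted post) as

    m_T = \sqrt{2\, p_T^{\ell}\, E_T^{\mathrm{miss}}\, (1 - \cos\Delta\phi)}

where the first factor inside the square root is the transverse momentum of the charged lepton, the second is the missing transverse energy attributed to the neutrino, and Δφ is the azimuthal angle between them. For a genuine W' → lepton plus neutrino decay, the m_T distribution has a kinematic edge at the W' mass, which is why a single event at 2.4 TeV is intriguing rather than meaningless.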

Heavy top-antitop quark pair decays

This search targets heavy (more than 1 TeV) particles decaying to a pair of top quarks, a signature very common in models with a new strongly interacting sector, like composite Higgs or the Randall-Sundrum model. . . . this search relies on fancy modern techniques of studying substructure of jets, in order to identify closely packed jets that could originate from a fast moving top quark. No resonance is observed in the t-tbar spectrum. . . . the LHC sensitivity now reaches the cross sections predicted by popular versions of the Randall-Sundrum model, excluding Kaluza-Klein gluons lighter than about 1.5 TeV.

This roughly 1.5 TeV exclusion is about fifty percent greater than the corresponding limits prior to the LHC.
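The reason that jet substructure techniques are needed here follows from a standard rule of thumb: the decay products of a particle with mass m and transverse momentum pT are collimated into a cone of angular size roughly 2m/pT, so the tops from a TeV-scale resonance arrive so tightly packed that their decay products merge into a single "fat" jet. A quick illustrative calculation (my own, with round numbers):

    # Rule-of-thumb collimation of boosted top decay products: dR ~ 2*m/pT.
    m_top_gev = 173.0
    for pt_gev in (200.0, 500.0, 1000.0):
        dr = 2 * m_top_gev / pt_gev
        print(f"pT = {pt_gev:>6.0f} GeV -> decay products within dR ~ {dr:.2f}")
    # At pT ~ 1 TeV the cone (~0.35) is comparable to or smaller than a typical
    # jet radius (0.4-0.5), so the b quark and the W decay products end up in a
    # single jet and have to be disentangled by studying its substructure.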

What is a Randall-Sundrum model?

Randall–Sundrum models . . . imagine that the real world is . . . a five-dimensional anti de Sitter space and the elementary particles except for the graviton are localized on a (3 + 1)-dimensional brane or branes.

The models were proposed in 1999 by Lisa Randall and Raman Sundrum because they were dissatisfied with the universal extra dimensional models then in vogue. Such models require two fine tunings; one for the value of the bulk cosmological constant and the other for the brane tensions. Later, while studying RS models in the context of the AdS/CFT correspondence, they showed how it can be dual to technicolor models.

There are two popular models. The first, called RS1, has a finite size for the extra dimension with two branes, one at each end. The second, RS2, is similar to the first, but one brane has been placed infinitely far away, so that there is only one brane left in the model. . . . It involves a finite five-dimensional bulk that is extremely warped and contains two branes: the Planckbrane (where gravity is a relatively strong force; also called "Gravitybrane") and the Tevbrane (our home with the Standard Model particles; also called "Weakbrane"). In this model, the two branes are separated in the not-necessarily large fifth dimension by approximately 16 units (the units based on the brane and bulk energies). The Planckbrane has positive brane energy, and the Tevbrane has negative brane energy. These energies are the cause of the extremely warped spacetime.
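The "extremely warped" geometry described above is usually written (this is the standard form of the RS1 metric, included here for context rather than drawn from the quoted text) as

    ds^2 = e^{-2 k r_c |\phi|}\, \eta_{\mu\nu}\, dx^{\mu} dx^{\nu} + r_c^2\, d\phi^2, \qquad \eta_{\mu\nu} = \mathrm{diag}(-1,+1,+1,+1)

where k is the curvature scale of the five-dimensional bulk, r_c is the size of the fifth dimension, and φ runs from -π to π, with the Planckbrane at φ = 0 and the Tevbrane at φ = π. Mass scales on the Tevbrane are suppressed relative to the Planckbrane by the warp factor e^{-k r_c π}, so the sixteen orders of magnitude between the weak scale and the Planck scale are reproduced for k r_c of roughly 11 to 12; those sixteen orders of magnitude are presumably the "approximately 16 units" of separation mentioned above.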

A study in 2007 showed that this class of models could be proven or disproven based on searches for "Kaluza-Klein gluons" at the LHC.

SUSY

Based on the latest results, the minimum mass for gluinos, a particle predicted by SUSY, is "600-900 GeV depending on how squeezed is the SUSY spectrum."