
Friday, September 30, 2016

Pre-Industrial Societies Reward High Status Men With More Children

From the hunter-gatherer era and on into societies based on herding and farming, high social status men had significantly more children than low social status men. Then, in the industrial era, that relationship was inverted. That is the conclusion of a new meta-analysis of 33 pre-industrial societies.

I've seen a study along the same lines from a University of Michigan scholar a decade or two ago that looked at historical and legendary documents to show the gradually decreasing number of mates and children of ultra high status men from the Bronze Age through the present, with significant changes continuing even from the Victorian era to the 20th century. The drivers of this change aren't entirely clear.

The study and its abstract are as follows:
Social status motivates much of human behavior. However, status may have been a relatively weak target of selection for much of human evolution if ancestral foragers tended to be more egalitarian. We test the “egalitarianism hypothesis” that status has a significantly smaller effect on reproductive success (RS) in foragers compared with nonforagers. We also test between alternative male reproductive strategies, in particular whether reproductive benefits of status are due to lower offspring mortality (parental investment) or increased fertility (mating effort). We performed a phylogenetic multilevel metaanalysis of 288 statistical associations between measures of male status (physical formidability, hunting ability, material wealth, political influence) and RS (mating success, wife quality, fertility, offspring mortality, and number of surviving offspring) from 46 studies in 33 nonindustrial societies. We found a significant overall effect of status on RS (r = 0.19), though this effect was significantly lower than for nonhuman primates (r = 0.80). There was substantial variation due to marriage system and measure of RS, in particular status associated with offspring mortality only in polygynous societies (r = −0.08), and with wife quality only in monogamous societies (r = 0.15). However, the effects of status on RS did not differ significantly by status measure or subsistence type: foraging, horticulture, pastoralism, and agriculture. These results suggest that traits that facilitate status acquisition were not subject to substantially greater selection with domestication of plants and animals, and are part of reproductive strategies that enhance fertility more than offspring well-being.
Christopher R. von Rueden, Adrian V. Jaeggi. "Men’s status and reproductive success in 33 nonindustrial societies: Effects of subsistence, marriage system, and reproductive strategy." 113 (39) Proceedings of the National Academy of Sciences 10824 (2016).

Hominin Evolution Triggered By Climate Change

What caused hominins like humans and their archaic hominin ancestors to evolve?

One important factor was a major wave of global cooling, beginning about 7 million years ago, that created the savanna ecosystems that our ancestors evolved to fill.
Around 7 million years ago, landscapes and ecosystems across the world began changing dramatically. Subtropical regions dried out and the Sahara Desert formed in Africa. Rain forests receded and were replaced by the vast savannas and grasslands that persist today in North and South America, Africa and Asia.

Up to now, these events have generally been explained by separate tectonic events -- the uplift of mountain ranges or the alteration of ocean basins -- causing discrete and local changes in climate. But in a new study, a team of researchers has shown that these environmental changes coincided with a previously undocumented period of global cooling, which was likely driven by a sharp reduction in atmospheric carbon dioxide.
The time period during which this happened is known as the late Miocene Epoch.

The paper spelling all of this out is:

Timothy D. Herbert, Kira T. Lawrence, Alexandrina Tzanova, Laura Cleaveland Peterson, Rocio Caballero-Gill, Christopher S. Kelly. "Late Miocene global cooling and the rise of modern ecosystems." Nature Geoscience (2016)

Gamma Ray Bursts Probably Won't Wipe Us Out, But They Could

Every once in a while a huge gamma ray burst from space strikes a planet. If the planet is like Earth and the gamma ray burst is severe enough, it destroys the ozone layer from the outside and everything dies, because the biosphere on the planet is no longer shielded from ultraviolet radiation.

Fortunately, about 65% of gamma ray bursts are survivable for planets like Earth. Also, they aren't terribly likely to hit planets way out on the fringe of a galaxy, like ours, as opposed to planets in the inner parts of galaxies, where the higher density of stars and a complicated confluence of other considerations related to the age and metal composition of the stars involved make gamma ray bursts more common.

So, bottom line: 

The bad news is that I have just informed you of a new existential threat to all life on Earth that you'd probably never considered or worried about until now. But, the good news is that this particular threat is probably less likely to kill us all than all sorts of other potential threats from space like large heavy objects crashing into the planet, or the Sun expanding and frying the planet, or space aliens invading us. So, really, it's no big thing.

The abstract and citation for the paper that gave us this news is as follows:
A planet having protective ozone within the collimated beam of a Gamma Ray Burst (GRB) may suffer ozone depletion, potentially causing a mass extinction event to existing life on a planet's surface and oceans. We model the dangers of long GRBs to planets in the Milky Way and utilize a static statistical model of the Galaxy that matches major observable properties, such as the inside-out star formation history, metallicity evolution, and 3-dimensional stellar number density distribution. The GRB formation rate is a function of both the star formation history and metallicity; however, the extent to which chemical evolution reduces the GRB rate over time in the Milky Way is still an open question. Therefore, we compare the damaging effects of GRBs to biospheres in the Milky Way using two models. One model generates GRBs as a function of the inside-out star formation history. The other model follows the star formation history, but generates GRB progenitors as a function of metallicity, thereby favoring metal-poor host regions of the Galaxy over time. If the GRB rate only follows the star formation history, the majority of the GRBs occur in the inner Galaxy. However, if GRB progenitors are constrained to low metallicity environments, then GRBs only form in the metal-poor outskirts at recent epochs. Interestingly, over the past 1 Gyr, the surface density of stars (and their corresponding planets) that survive a GRB is still greatest in the inner galaxy in both models. The present day danger of long GRBs to life at the solar radius (R⊙=8 kpc) is low. We find that at least ∼65% of stars survive a GRB over the past 1 Gyr. Furthermore, when the GRB rate was expected to have been enhanced at higher redshifts, such as z≳0.5, our results suggest that a large fraction of planets would have survived these lethal GRB events.
Michael G. Gowanlock, "Astrobiological Effects of Gamma-Ray Bursts in the Milky Way Galaxy" (29 September 2016)

The First Farmers Colonized Europe; The Indo-Europeans Conquered It

Razib Khan sums up a recent paper comparing the gender composition of newcomers to Europe, first among the first wave of farming in the Neolithic era, and then in the major demic upheaval that accompanied the arrival of Indo-Europeans from the Steppe into Europe in the Bronze Age.

The Neolithic populations were gender balanced and probably migrated as families, colonizing virgin farmland starting in Anatolia, then in Southeast Europe ca. 8000 years ago, and from there spreading west and north, after a formative period during which a Near Eastern population mixed with European hunter-gatherers. But they didn't admix much with local populations for many centuries, until farming hit its first major widespread, but temporary, collapse. The gene pools were enriched with remaining hunter-gatherer populations during this bottleneck.

The Indo-European migrants from the steppe, ca. 5000 years ago, in contrast, probably came in armed war bands on horses with men outnumbering women in ratios somewhere between 14 to 1 and 5 to 1. They took local wives at the expense of local men, who were squeezed out of the gene pool by death or simply by being denied local women to marry. This process continued for multiple generations, rather than occurring in a single pulse.

Further Back In Time

There were also several waves of migration before the Neolithic Revolution.

Neanderthals were the dominant hominins of Europe from more than 200,000 years ago until about 40,000 years ago (in round numbers). They probably evolved locally from more archaic hominins who had migrated to Europe from Africa. Their time period is known as the Middle Paleolithic era.

In round numbers, ca. 40,000 years ago, early modern human hunter-gatherers called Cro-Magnons swept into Europe from the Southeast and largely replaced the Neanderthals, who had lived in Europe in smaller numbers with a hunting style focused more exclusively on large game than that of the new Cro-Magnon population. This was near the beginning of an era known as the Upper Paleolithic. At the time the Cro-Magnon people migrated to Europe, modern humans had already been outside Africa for many thousands of years, but presumably because that territory was already taken by the Neanderthals, it took modern humans longer to penetrate Europe.

The Cro-Magnon people had it good for a while, but eventually the climate cooled and glaciers came to cover most of Northern Europe, banishing them to three main refugia where temperatures were tolerable: the Franco-Cantabrian refugium, one in Italy, and one in the mountains of Southeastern Europe. This ice age was at its peak roughly 20,000 years ago.

Several thousand years later, the glaciers retreated, the Cro-Magnon people in the refugia repopulated Europe, and they were joined by new people from the Near East and North Africa during a time period known as the Mesolithic era. This was the population of hunter-gatherers who were in Europe when the first farmers of the Neolithic Revolution arrived.

1000 Posts

This is the 1000th post at Dispatches From Turtle Island, which has maintained its rather quirky mix of about 50% physics (508 posts out of 1000) and about 50% anthropology and genetics for the roughly five years that it has been in existence.

As of the time I am writing this post, there have been 1876 published comments (excluding deleted comments), which is an average of almost two comments per substantive post.

There have been 391,729 page views of this blog since its inception, an average of 392 page views per post, although that average is highly skewed by a few posts that have received very high traffic, such as the all time most visited post on this blog about "Pre-Out of Africa Population Sizes and Densities", which has received 12,707 page views.

This blog gets less traffic than its sister blog, Wash Park Prophet, from which it was split off. But, the quality and sophistication of the readership is quite impressive.

Elk Migrations To North America

I missed this interesting study at the time it was originally released. It uses multiple methods to infer the arrival of elk in North America via Beringia, which generally coincides with the human migration to North America via essentially the same route at about the same time.
Human colonization of the New World is generally believed to have entailed migrations from Siberia across the Bering isthmus. However, the limited archaeological record of these migrations means that details of the timing, cause and rate remain cryptic. 
Here, we have used a combination of ancient DNA, 14C dating, hydrogen and oxygen isotopes, and collagen sequencing to explore the colonization history of one of the few other large mammals to have successfully migrated into the Americas at this time: the North American elk (Cervus elaphus canadensis), also known as wapiti. 
We identify a long-term occupation of northeast Siberia, far beyond the species’s current Old World distribution. Migration into North America occurred at the end of the last glaciation, while the northeast Siberian source population became extinct only within the last 500 years. This finding is congruent with a similar proposed delay in human colonization, inferred from modern human mitochondrial DNA, and suggestions that the Bering isthmus was not traversable during parts of the Late Pleistocene. Our data imply a fundamental constraint in crossing Beringia, placing limits on the age and mode of human settlement in the Americas, and further establish the utility of ancient DNA in palaeontological investigations of species histories.
Meirav Meiri, et al., "Faunal record identifies Bering isthmus conditions as constraint to end-Pleistocene migration to the New World", Proceedings of the Royal Society B: Biological Sciences (December 11, 2013) (Hat tip to Linear Population Model).

Thursday, September 29, 2016

What Did Neanderthal Speech Sound Like?

Based upon the physical size and shape of the Neanderthal voice box, nasal cavity, rib cage and thick heavy skull, we can infer that Neanderthal speech was probably loud, and considerably more high pitched and nasal sounding than one might expect, compared to modern humans alive today.

Wednesday, September 28, 2016

Calibrating Ancient Egyptian Chronologies

The Thera Eruption Linked Climate Event

An Egyptian inscription made during the reign of the pharaoh Ahmose, the first pharaoh of the 18th Dynasty and the New Kingdom in Egypt that followed the Second Intermediate Period of Canaanite, linguistically Semitic Hyksos rule in Egypt, describes rain, darkness and "the sky being in storm without cessation, louder than the cries of the masses."

There is good reason to believe that this was "the result of a massive volcano explosion at Thera, the present-day island of Santorini in the Mediterranean Sea" that was decisive in bringing Minoan civilization to an end. (This is one of the two leading candidates for the source of the Atlantis myth, with the other in Southern Iberia.)

Equally important, radiocarbon dating of wood from an olive tree found in the ashes of that eruption provided a reliable date for that eruption: 1621-1605 B.C.E.

With this critical calibration point, it was possible to date Ahmose's reign to considerably earlier than the 1550 B.C.E. date previously associated with the start of his reign. And, since Egyptian documents often reckon dates in terms of the reigning monarch, this shifts a huge chunk of the Egyptian historical record 50 or more years earlier, with the events recounted themselves taking place shortly before the beginning of Ahmose's reign.

What does this imply?
Until now, the archeological evidence for the date of the Thera eruption seemed at odds with the radiocarbon dating, explained Oriental Institute postdoctoral scholar Felix Hoeflmayer, who has studied the chronological implications related to the eruption. However, if the date of Ahmose's reign is earlier than previously believed, the resulting shift in chronology "might solve the whole problem," Hoeflmayer said. 
The revised dating of Ahmose's reign could mean the dates of other events in the ancient Near East fit together more logically, scholars said. For example, it realigns the dates of important events such as the fall of the power of the Canaanites and the collapse of the Babylonian Empire, said David Schloen, associate professor in the Oriental Institute and Near Eastern Languages & Civilizations, who works on ancient cultures in the Middle East. 
"This new information would provide a better understanding of the role of the environment in the development and destruction of empires in the ancient Middle East," he said. 
For example, the new chronology helps to explain how Ahmose rose to power and supplanted the Canaanite rulers of Egypt—the Hyksos—according to Schloen. The Thera eruption and resulting tsunami would have destroyed the Hyksos' ports and significantly weakened their sea power. 
In addition, the disruption to trade and agriculture caused by the eruption would have undermined the power of the Babylonian Empire and could explain why the Babylonians were unable to fend off an invasion of the Hittites, another ancient culture that flourished in what is now Turkey.
The short term havoc wreaked by this eruption, while it may have weakened the Hyksos rulers of Egypt of the Second Intermediate Period (previously dated from 1650 BCE to 1550 BCE, but now calibrated to 1700 BCE to 1600 BCE) and other regimes in the region, pales next to two major arid periods in the region that bookended this event. 

The 4.2 Kiloyear Event

The first, which preceded the Thera eruption and lasted from around 2200 BCE to 2000 BCE, is called the 4.2 kiloyear event. (This was good news for California, which received a two century break from its 6,000 year old megadrought at this time.)

This led to the collapse of Harappan civilization in South Asia (and the disappearance of the Sarasvati River that figures prominently in the Rig Vedic epics), the collapse of the Akkadian Empire in what is now Iraq, and Egypt's First Intermediate Period (2231 BCE to 2105 BCE). The First Intermediate Period began with the collapse of Egypt's Old Kingdom (2736 BCE to 2231 BCE), which in turn was preceded by Egypt's Early Dynastic period that started around 3150 BCE according to the new calibration, at the dawn of the Copper Age and some of the earliest moments of written history.

This also weakened the existing Hattic regime in Anatolia, clearing the way for the rise of the Indo-European Hittites, and coincides with the appearance of the first Mycenean Greeks (also Indo-Europeans) in mainland Greece, and the migration of the Tocharians to the Tarim Basin.

Bronze Age Collapse

The second, which took place after the Thera eruption, started around 1200 BCE and is commonly known as Bronze Age Collapse.

This brought down the last round of successors to the Bell Beaker culture in Western, Central and Northern Europe. It brought the ethnically Mycenean Greek Philistine Sea Peoples mentioned in the Bible to the Gaza Strip. It led to the fall of the Hittite Empire, and to the Trojan War fought on the western coast of Anatolia. Egypt's Third Intermediate Period, starting at the recalibrated date of 1119 BCE, also comes swiftly on the heels of Bronze Age Collapse, although Egypt was not the first to fall.

Out of the ashes of the Third Intermediate Period comes ancient Egypt's Late Period, as an Egyptian dynasty of pharaohs reestablishes itself at a recalibrated date of about 714 BCE (and endures until a recalibrated date of about 382 BCE), right around the time that Rome and Classical Greece began to establish themselves as Iron Age civilizations.

There Are Three And Only Three Generations Of Standard Model Fermions

Predicted decays of the Standard Model Higgs boson as a function of Higgs boson mass, via Matt Strassler's blog. The relevant percentages from the cross-section of that chart at the now known mass of the Higgs boson are as follows:
Specifically, the expected decays of a 125 GeV Standard Model Higgs with 100% constituting all of the Standard Model Higgs boson decays are as follows. . . : 
60% of such particles would decay to bottom (b) quark/antiquark pairs
21% would decay to W particles
9% would decay to two gluons (g)
5% would decay to tau (τ) lepton/antilepton pairs
2.5% would decay to charm (c) quark/antiquark pairs
2.5% would decay to Z particles
0.2% would decay to two photons (γ)
0.15% would decay to a photon and a Z particle. 
Other even more rare decays of Higgs bosons are predicted [in the Standard Model] but happen too rarely to have an expectation that they could be detected at the LHC at this point.
The total above slightly exceeds 100% due to rounding, and leaves no room for new decay modes.
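A trivial check of that rounding point, simply summing the rounded percentages quoted above (a minimal Python sketch):

# Rounded Standard Model branching fractions for a 125 GeV Higgs, as quoted above (in percent).
branching_percent = {
    "b quark pairs": 60.0,
    "W particles": 21.0,
    "gluon pairs": 9.0,
    "tau lepton pairs": 5.0,
    "charm quark pairs": 2.5,
    "Z particles": 2.5,
    "photon pairs": 0.2,
    "photon + Z": 0.15,
}

total = sum(branching_percent.values())
print(f"Sum of rounded branching fractions: {total:.2f}%")  # 100.35%, slightly over 100% from rounding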

All decays except the gluon channel, charm channel and mixed photon/Z channel have been observed at the LHC. The missing channels are difficult to detect, partially because they are smaller and partially due to large backgrounds that confound efforts to attribute observed decays to Higgs boson decays.

All Particles That Interact Weakly Per The Standard Model Up To 45 GeV Are Known

We know that the Standard Model's portfolio of fundamental fermions that interact via the weak force is complete up to masses of 45 GeV/c^2 (half the Z boson mass) because we have many decades of voluminous data from multiple colliders regarding W and Z boson decays, and no particle that interacts via the weak force in accordance with the Standard Model (such as a fourth generation fermion) has been detected in them.

Therefore, any new particle that interacts in a Standard Model manner via the weak force must have more than 45 GeV of mass.
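To put numbers on these kinematic limits, here is a minimal Python sketch computing the pair-production threshold from the Z boson mass, along with the corresponding threshold from the Higgs boson mass that comes up later in this post:

# On-shell decay of a boson to a fermion-antifermion pair requires m_fermion < m_boson / 2.
M_Z = 91.19      # Z boson mass in GeV/c^2
M_HIGGS = 125.0  # Higgs boson mass in GeV/c^2 (approximate)

z_threshold = M_Z / 2          # ~45.6 GeV: mass reach of Z boson decay searches
higgs_threshold = M_HIGGS / 2  # ~62.5 GeV: mass reach of Higgs boson decay searches

print(f"Z decay threshold:     {z_threshold:.1f} GeV")
print(f"Higgs decay threshold: {higgs_threshold:.1f} GeV")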

Standard Model Fermions Come In Groups Of Four

Each generation of fermions in the Standard Model must have one up-like quark, one down-like quark, one charged lepton, and one neutrino, for reasons of mathematical consistency.
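One conventional way to state that consistency requirement is anomaly cancellation: among other conditions, the electric charges of the fermions in a complete generation, counting three colors for each quark, must sum to zero. A minimal sketch of that arithmetic:

from fractions import Fraction

# Electric charges of one Standard Model generation; each quark comes in three colors.
N_COLORS = 3
generation = [
    ("up-like quark",   Fraction(2, 3),  N_COLORS),
    ("down-like quark", Fraction(-1, 3), N_COLORS),
    ("charged lepton",  Fraction(-1),    1),
    ("neutrino",        Fraction(0),     1),
]

total_charge = sum(charge * multiplicity for _, charge, multiplicity in generation)
print(f"Sum of electric charges over a full generation: {total_charge}")  # 0 -- the anomaly cancels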

How Does The Standard Model Higgs Bosons Decay?

With one exception (photon-Z boson decays), Higgs bosons, like the Z boson, decay either to particle-antiparticle pairs of fundamental fermions with masses up to half of the Higgs boson mass (roughly 62.5 GeV), or to complementary pairs of Standard Model gauge bosons.

Setting aside the bosonic decays of the Higgs boson for a moment, the heavier a fundamental fermion pair is, so long as its production is not barred by mass-energy conservation (as a top quark and top antiquark pair is, with a combined mass of roughly 346 GeV, far more than the roughly 125 GeV mass of the Higgs boson), the more likely it is to be produced in Higgs boson decays.

Thus, bottom quark pair decays are most common, then tau lepton pairs, then charm quark pairs, with much lighter fundamental fermion pairs (strange quarks, muons, down quarks, up quarks, electrons, and neutrinos, in approximately that order) all having vanishingly small branching ratios in the decays of the 125 GeV Higgs boson.
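As a rough sketch of why that ordering comes out the way it does: at tree level, the Higgs partial width to a fermion pair scales as the color factor times the square of the fermion mass, with quark masses evaluated (run) near the Higgs scale, which is why charm falls below tau despite its larger pole mass. The running masses below are approximate assumed values for illustration, and the weights are relative to these fermionic channels only, not full branching ratios:

# Tree-level Gamma(H -> f fbar) is proportional to N_c * m_f^2 (for m_f << m_H/2).
# Quark masses are rough MS-bar running masses near the Higgs scale (assumed approximate values).
fermions = {
    # name: (color factor N_c, approximate mass at the Higgs scale in GeV)
    "bottom quark": (3, 2.8),
    "tau lepton":   (1, 1.78),
    "charm quark":  (3, 0.62),
    "muon":         (1, 0.106),
}

weights = {name: n_c * m ** 2 for name, (n_c, m) in fermions.items()}
total = sum(weights.values())
for name, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name:13s} relative weight {w / total:6.1%}")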

So far, the branching ratios of Higgs boson decays that have been observed are consistent with the Standard Model Higgs boson's decays up to reasonable margins of error. But, we don't have a long enough track record with Higgs boson decays to have accurately measured all of its possible decay modes or to rule out new decay modes that aren't too common or that involve relatively light particles.

What If There Were Additional Weakly Interacting Standard Model Fermions That Can Be Produced In Higgs Boson Decays?

For a new fundamental fermion that interacts weakly to be produced in Higgs boson decays, it must have a mass of more than 45 GeV, but less than 62.5 GeV.

If such a fermion existed, decays of the Higgs boson to pairs of these particles would be far more common than bottom quark decays, because it would be the heaviest fermion pair decay available to the Higgs boson. While we struggle to see tau pair and charm pair decays with existing data, there is no possible way that the scientists at the LHC could have missed Higgs boson decays into 45 GeV to 62.5 GeV fundamental fermions.

We have no reason to expect new quarks in that mass range, because the top quark is already heavier than any quark that could be seen in this mass range and, historically, new quarks have always been heavier than the already known ones. If they weren't, existing quarks would decay to the new quark flavor, and we know that this has never happened. So, it would be very surprising to see a new down-type quark that did not have a mass much greater than the 174 GeV mass of the top quark, which is well out of the allowed range for Higgs boson decays. Direct searches for fourth generation quarks at the LHC and prior experiments are more stringent still, ruling out a fourth generation b quark with a mass of less than 675 GeV, and a fourth generation t quark with a mass of less than 782 GeV.

The LHC data should rule out, however, a fourth generation charged lepton up to 62.5 GeV, which would have been obvious in the data by now if the Higgs boson had such decays. Koide's rule would suggest a fourth generation charged lepton mass of 43.2 GeV, which is already ruled out by Z boson decays; with a modest overshoot of that prediction, however, such a lepton could not be produced in Z boson decays but would show up in Higgs boson decays. So, we can have considerably more confidence than we did previously that the failure to see fourth generation charged leptons to date in Higgs boson decays rules them out.

Direct searches for heavy charged leptons, however, already exclude them up to 100.8 GeV according to the Particle Data Group, which is quite far above the 1.776 GeV mass of the tau lepton and well in excess of naive expectations regarding a fourth generation charged lepton mass.

Now, the heaviest active neutrino has a mass of not more than about 0.1 eV (we know this from cosmology data, and it is not inconsistent with neutrino oscillation data), and all three of the Standard Model neutrinos interact weakly, so we know that there is no Standard Model active neutrino with a mass of 45 GeV or less. (The Particle Data Group conservatively puts the limit at only 39.5 GeV.)

So, if the Higgs boson can decay to neutrino pairs, then the limit on the mass of a fourth generation Standard Model active neutrino would rise from 45 GeV (at least 450 billion times the mass of the next heaviest neutrino) to 62.5 GeV (at least 625 billion times the mass of the next heaviest neutrino).  

But, this additional exclusion is not as clear cut as the charged lepton exclusion would be. This is because, while everyone agrees that neutrinos interact via the weak force, there is no agreement or experimental evidence to tell us whether Standard Model active neutrinos have a coupling to the Higgs boson and acquire their masses by that means. Even if they did, the likelihood of a Standard Model Higgs boson with a 125 GeV mass decaying to any of the known neutrino-antineutrino pairs is so low that we would be unlikely to see one, and if we did, the only signal would be missing transverse energy, which could be attributed to any of a myriad of BSM particles in addition to a Standard Model active neutrino. However, given that the mass gap between any fourth generation neutrino and the next lightest neutrino established by Z boson decays is so huge, it hardly matters.

Of course, if one can rule out fourth generation leptons in the Standard Model, that also rules out fourth generation quarks at any mass, and it also rules out any fifth or higher generation of Standard Model fermions.

Therefore, SM4+ models, with four or more generations of Standard Model fermions, are powerfully disfavored in a way that LHC Higgs boson data has significantly bolstered. So, there is powerfully suggestive evidence that Nature really does have three and only three generations of Standard Model fundamental fermions.

Other BSM particles

As you may have noticed in the analysis above, we actually don't gain any additional power to exclude fourth generation Standard Model fermions from the observed Higgs boson decays. But, that doesn't mean that our Higgs boson decay information hasn't provided new information.

In principle, there could be particles that are over 45 GeV, interact with the weak force, couple to the Higgs boson and do not have analogs in the Standard Model.  Any such particles are now ruled out up to 62.5 GeV of mass, rather than 45 GeV prior to the discovery of the Higgs boson.

Likewise, in principle, there could be beyond the Standard Model particles that couple to the Higgs boson, but do not interact via the weak force, and hence wouldn't show up in W or Z boson decays. These are now ruled out, at least in the mass range from about 4 GeV (at or near the bottom quark mass) to 62.5 GeV (half of the Higgs boson mass).

These constraints effectively mean that BSM particles below the 62.5 GeV scale, if they exist, must not have weak force interactions, and, if they have Higgs boson interactions, must be significantly lighter than a bottom quark (a low energy region where countless searches at multiple colliders over many decades have failed to show any sign of a BSM resonance, unless some resonance currently classified as a hadron with not fully established properties is really a misclassified fundamental particle, which is quite unlikely).

In other words, if there are any BSM particles, they live in a parameter space world pretty much completely divorced from the parameter space world of the Standard Model. There is no overlap between the circumstances where BSM particles could be found and those where Standard Model particles have been found.

I'm also pretty convinced that there is no way that any electrically charged particle weighing less than a bottom quark could possibly have been overlooked.

So, if BSM particles exist anywhere they are either electrically neutral light particles that don't interact via the weak force or derive mass from the Higgs mechanism (and realistically, probably can't have strong force interactions either, as it would also be virtually impossible for light, strongly interacting fundamental particles to be missed), or they are much more massive than the top quark.

The former category of BSM particles is basically sterile neutrino-like fermions and non-SM force carrying bosons (e.g. the hypothetical graviton and hypothetical particles such as axions or the self-interaction bosons of dark matter, if it exists). The latter encompasses everything else.

Tuesday, September 27, 2016

Ashkenazi Jewish Ancestry

According to Davidski at the Eurogenes Blog, using new data points from an open access Estonian genome database, ancestry in the Ashkenazi Jewish (i.e. non-Spanish European Jewish) gene pool breaks down as genetically most similar to the following modern populations in the following proportions (rounded to the nearest percentage point):

* 50% to two populations (34% Samaritan and 16% Arab) from present day Israel
* 8% to Anatolians (i.e. populations from modern day Turkey)
* 30% to Tuscan Italians
* 12% to Polish

Two components with only trace admixture (less than 1%) are omitted.

These are listed in the likely order of admixture.  

The Levantine component is presumably the source population, the Anatolian admixture probably arose en route to Europe, Tuscan admixture probably preceded a bottleneck event in the Ashkenazi Jewish population in the Middle Ages, and Polish admixture probably immediately followed that bottleneck.

Of course, the results of any such model depend upon the details of the methods used to assign ancestry and the source populations available to the model. But, the results generally conform to the leading historical accounts and to other efforts to identify the sources of the Ashkenazi Jewish gene pool. See, for example, a paper earlier this year on the subject, and a historical analysis released in 2013.

Friday, September 23, 2016

Some Back Of Napkin Observations About The Universe

Selected Facts

The universe is now about 13.8 billion years old and has a volume on the order of 4*10^80 m^3, and a radius of about 46.6 billion light years, which is about 4*10^26 meters. A light year is about 9*10^15 meters.

In general, the relationship of a Schwarzschild radius to mass is r=2GM/c^2.

The total ordinary matter of the universe has an estimated mass of 10^53 kg.  The Schwarzschild radius of this mass is 1.485*10^26 meters.

Add in dark matter and dark energy and you are still no more than about 10^55 kg. The Schwarzschild radius of this mass is 1.485*10^28 meters.
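A minimal back-of-the-napkin check of those two Schwarzschild radii, using the r = 2GM/c^2 relationship quoted above:

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius r = 2GM/c^2 in meters."""
    return 2 * G * mass_kg / C ** 2

for label, mass in [("ordinary matter (~1e53 kg)", 1e53),
                    ("everything (~1e55 kg)", 1e55)]:
    print(f"{label}: r = {schwarzschild_radius(mass):.3e} m")  # ~1.5e26 m and ~1.5e28 m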

The most dense objects in the universe have a density on the order of 6*10^17 kg/m^3.

The volume of the ordinary matter in the universe at that density is about 10^35 cubic meters, and the volume of the ordinary matter, dark matter and dark energy in the universe at that density is about 10^37 cubic meters.

This implies a scale of 10^12 meters to 10^13 meters for an object containing all of the matter and energy in the universe at neutron star/smallest stellar black hole level densities, which is far smaller than the size of the universe after inflation, or the universe's Schwarzschild radius.
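And a sketch of where that order of magnitude comes from: pack the mass figures above into a sphere at neutron star density and back out the radius, which comes out in the rough neighborhood of 10^12 meters:

import math

NEUTRON_STAR_DENSITY = 6e17  # kg/m^3, the densest matter figure quoted above

def radius_at_density(mass_kg, density):
    """Radius of a sphere of the given mass at the given uniform density."""
    volume = mass_kg / density
    return (3 * volume / (4 * math.pi)) ** (1 / 3)

print(f"Ordinary matter (1e53 kg): r ~ {radius_at_density(1e53, NEUTRON_STAR_DENSITY):.2e} m")
print(f"All mass-energy (1e55 kg): r ~ {radius_at_density(1e55, NEUTRON_STAR_DENSITY):.2e} m")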

After cosmological inflation, the universe allegedly had a scale of about 10^24 meters after just 10^-32 seconds. Before inflation, the universe allegedly had a scale smaller than an atom. A hydrogen atom has a volume on the order of 6*10^-31 cubic meters.

Analysis

Thus, if the universe is squeezed below the threshold of a radius somewhere in the range of 10^12 m to 10^13 m, protons and neutrons can no longer be the primary source of matter in the universe. And, in a universe that small, it isn't obvious that it is possible for particles to have kinetic energy either, because there is no room for them to move. 

So, at that point there are basically two options. Protons and neutrons can be replaced by more massive hadrons with some second and/or third generation quarks in them, and/or fermions can be replaced by bosons, so that more than one particle can be in the same place at the same time.

The heaviest possible baryon not involving a top quark (which ordinarily decays to a bottom quark before it can hadronize) is one made of three bottom quarks with a mass roughly 15 times that of a proton or neutron, which would allow for a radius about 2.5 times smaller, so as little as 4*10^11 m.

If hadrons made of three top quarks were possible in these extreme circumstances, they would have a density of roughly 520 times that of protons and neutrons, which would allow for a radius about 8 times smaller than the proton and neutron based estimate above, so about 1.25*10^11 meters (roughly the distance from the Earth to the Sun).
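The scale factors in the last two paragraphs follow from the cube root relationship between density and radius at fixed mass; a minimal sketch, using the roughly 15x and roughly 520x mass ratios from above and the order-of-magnitude 10^12 meter baseline:

BASELINE_RADIUS_M = 1e12  # order-of-magnitude radius at proton/neutron density (from above)

for label, mass_ratio in [("triple-bottom baryons (~15x nucleon mass)", 15),
                          ("hypothetical triple-top hadrons (~520x)", 520)]:
    shrink = mass_ratio ** (1 / 3)  # radius shrinks as the cube root of the density increase
    print(f"{label}: radius ~ {BASELINE_RADIUS_M / shrink:.2e} m (factor {shrink:.1f} smaller)")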

Below that threshold, the universe could not be made up predominantly of fermionic matter.  Only bosons would be possible below that threshold. This could presumably include gravitons, photons, gluons, weak force bosons, Higgs bosons, and mesons.

The Canonical Chronology of the Universe

Honestly, the first ten seconds or so of the canonical chronology of the universe is all pretty speculative in my opinion. 

These include, in order:

* Planck epoch (10^-43 seconds) (10^19 GeV a.k.a. 10^32 K) Quantum gravity dominates.

* GUT epoch (10^-36 seconds) (10^16 GeV) The Standard Model forces are merged.

* Electroweak and Inflationary epoch and Baryogenesis (inflation from 10^-33 seconds to 10^-32 seconds, the rest through 10^-12 seconds) (10^28 K to 10^22 K a.k.a. 10^15 GeV to 10^9 GeV) The electroweak force grows distinct from the strong force; inflation causes space-time to surge from smaller than an atom to 100 million light years; quarks come into existence, and perhaps leptons too.

* Electroweak symmetry breaking and the Quark Epoch (10^-12 seconds to 10^-6 seconds) (10^12 K). The electromagnetic force and weak force become distinct, the Higgs mechanism starts to function more or less as it does now, and quarks that have not hadronized exist in a quark-gluon plasma. This is the first era in which the universe was as cool as the highest energies that have been observed at the Large Hadron Collider (or in any other experiment ever done on Earth), so this era, whenever it occurred, is roughly at the limit of the energies where the Standard Model is experimentally confirmed.

All of the events above purportedly take place in the first 10^-6 seconds (i.e. one millionth of a second) of the universe.

The remainder of the first second of the universe in this chronology (starting at 10^-6 seconds after the Big Bang) is called the Hadron epoch (10^11 K to 10^9 K) during which quarks transform from a quark-gluon plasma to ordinary hadrons, and anti-hadrons are annihilated in collisions with ordinary matter hadrons, giving rise to the existing matter-antimatter asymmetry in quarks.

At the commencement of the next era, called the Lepton epoch, when the temperature of the universe is about 10 billion kelvins, which translates into an energy scale of about 1 MeV, neutrinos "decouple" and cease to interact with ordinary matter. The Lepton epoch allegedly lasts nine seconds, and during this era new lepton-antilepton pairs are created in abundance, but ultimately energy levels fall to a point where new pairs are no longer created and the leftover lepton-antilepton pairs annihilate.

At 10 seconds to 1000 seconds after the Big Bang, the connection with reality starts to kick in as this is when Big Bang Nucleosynthesis allegedly occurs at energies from 10 MeV to 100 keV (temperatures of 10^11 K to 10^9 K).  This is when protons and neutrons bind themselves into primordial atomic nuclei. And, the Big Bang Nucleosynthesis hypothesis is quite precisely supported empirically by the relative frequencies of chemical elements in the universe.
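The temperature-to-energy conversions used throughout this chronology are just E = kT; a minimal sketch of the conversions quoted above:

BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def kelvin_to_ev(temp_k):
    """Characteristic thermal energy k_B * T in eV."""
    return BOLTZMANN_EV_PER_K * temp_k

for label, temp in [("Lepton epoch onset", 1e10),   # ~1 MeV
                    ("BBN start", 1.2e11),          # ~10 MeV
                    ("BBN end", 1.2e9)]:            # ~100 keV
    print(f"{label}: {temp:.1e} K  ->  {kelvin_to_ev(temp):.2e} eV")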

10 seconds after the Big Bang in this chronology is also the start of the Photon Epoch, during which the temperature of the universe falls from 10^9 K to 10^3 K, and which lasts until 10^13 seconds (about 380,000 years). During this time period the universe consists of "a plasma of nuclei, electrons and photons; temperatures remain too high for the binding of electrons to nuclei."

Starting during the Photon Epoch, at 47,000 years after the Big Bang in the chronology (starting at a 10,000 K temperature of the universe) is the Matter Dominated Era when the energy density of matter dominates both the energy density of radiation and dark energy, slowing the expansion of space, which continues until 10 billion years after the Big Bang.

At a moment about 380,000 years after the Big Bang in this chronology, is a moment called Recombination (4,000 K) when "Electrons and atomic nuclei first become bound to form neutral atoms. Photons are no longer in thermal equilibrium with matter and the universe first becomes transparent. The photons of the cosmic microwave background radiation originate at this time."

This is followed by the Dark Ages, from about 380,000 years after the Big Bang until 150,000,000 years after the Big Bang (a.k.a. redshift 20), during which the temperature of the universe falls from 4,000 K to 60 K: "The time between recombination and the formation of the first stars. During this time, the only radiation emitted was the hydrogen line." At the end of the Dark Ages, in a phase called Reionization beginning around 150,000,000 years after the Big Bang, the first stars form. The oldest observed object in space is galaxy GN-z11, at a redshift of 11.09. By 1,000,000,000 years after the Big Bang (redshift 6), the temperature of the universe has fallen to 19 K, and galaxies and the first "proto-clusters" start to form in earnest.

The temperature of the universe falls to 4 K and Dark Energy begins to dominate at 10,000,000,000 years after the Big Bang (redshift 0.4), which causes the expansion of the universe to accelerate.
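The temperatures attached to these later milestones follow from the fact that the background radiation temperature scales with redshift as (1 + z) times today's 2.7 K; a minimal sketch:

T_CMB_TODAY_K = 2.725  # present-day background radiation temperature

def cmb_temperature(redshift):
    """Background radiation temperature at a given redshift: T = T_today * (1 + z)."""
    return T_CMB_TODAY_K * (1 + redshift)

for label, z in [("End of the Dark Ages", 20),
                 ("Early galaxy formation", 6),
                 ("Dark energy domination", 0.4)]:
    print(f"{label} (z = {z}): T ~ {cmb_temperature(z):.1f} K")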

The temperature of the universe is now 2.7 K at 13.8 billion years after the Big Bang.

Commentary on The Chronology

The canonical chronology of the universe is pretty well supported, at least in its sequence and in our understanding of the physical laws that applied while each part of the sequence was happening, from around the time of the Quark Epoch onward, that is, after Electroweak Symmetry Breaking, canonically commencing 10^-12 seconds after the beginning of the universe.

Everything happening before then in the canonical chronology of the universe has not been experimentally probed and is rather speculative. There is no solid evidence that at some energy scale above the Large Hadron Collider's scope but below the GUT scale, electromagnetism and the weak force were really unified, or that at the GUT scale, all three Standard Model forces were unified. While the phenomena attributed to cosmic inflation are definitely real, there is no solid evidence that they were actually caused by cosmic inflation, or of the nature of the inflation phenomenon, if it did occur. I am hardly a voice alone in the wilderness in being skeptical of a cosmic inflation hypothesis when it takes a book length physics article just to describe the variations on that hypothesis that have been seriously proposed. And, nobody really knows what physics looks like at the Planck scale and beyond.

Indeed, while we can comfortably say that the universe expanded from a size of about 100 million light years in radius, at which point it was extremely homogeneous and had temperatures of not less than 10^12 K, to its present size and temperature of 2.7 K over about 13.8 billion years, during which the mass-energy of the universe has been conserved in a manner consistent with the Standard Model of Particle Physics and more or less consistent with general relativity and the predictions of the lambdaCDM Standard Model of Cosmology, we can't really extrapolate with any confidence much further back to a true Big Bang singularity before that point. We have no way to confirm that the classical formulation of General Relativity or the Standard Model are reliable in those circumstances, or the nature of any "new physics" that might arise at such high energies.

The largest particle masses that the LHC can directly probe are in the hundreds of GeV to a few TeV (i.e. on the order of 10^3 GeV). This is more than twelve orders of magnitude smaller than the GUT scale and more than fifteen orders of magnitude smaller than the Planck scale.

An ability to probe the physics of the Quark Epoch experimentally is impressive. But, no colliders that humans will ever be able to construct, and not even the most epic astrophysical events like large supernova and colliding large black holes give rise to any material number of interactions at energies at the GUT scale or beyond. The most energetic gamma rays or cosmic rays emitted from the biggest supernovas have energies on the order of 1.6 TeV, roughly comparable to the highest energies that will be probed at the LHC by the time it is complete or if not at the LHC at the very next generation of higher energy colliders, which could take place in my lifetime.

Fortunately, beyond intellectual curiosity and the hints that it might provide about the deeper structure of the laws of physics which we have observed and confirmed, it is not terribly important to know how the laws of physics act in circumstances that we will never be able to observe by any but the most indirect and inconclusive means, and that we will certainly never encounter.

We can, however, say with some confidence that Standard Model physics as we have experimentally tested it has prevailed for roughly the last 13.8 billion years, and that the era during which physics at energies higher than we have tested prevailed was comparatively brief, a mere 10^-12 of a second in the canonical chronology of the universe and not more than 0.1 billion years even in the absence of inflation (the temperature of the universe is largely a function of its energy density, so the volume of the universe is a more robust measure of when "new physics" might appear than the time elapsed since the Big Bang).

The absolute amounts of time elapsed after the Big Bang are also speculative, and I am skeptical of the accuracy of the duration of all of the Epochs through at least Big Bang Nucleosynthesis. Fortunately, in practice, neither the absolute ages after the Big Bang for any of the Epochs, or the duration of the earliest Epochs, is very material to what we observe today.

For the most part, it really makes no great difference if Big Bang Nucleosynthesis took 990 seconds or 99,900 years, and it makes no great difference if Big Bang Nucleosynthesis started 10 seconds after the Big Bang or 100,000,000 years after the Big Bang (about the length of time that it would take the universe to reach the size it is at that point in the chronology in the absence of cosmic inflation which would make the universe about 1% older than in the canonical chronology of the universe). As long as the initial conditions at the time that Big Bang Nucleosynthesis begins are the same and Big Bang Nucleosynthesis has run its course before the Photon Epoch (a.k.a. Radiation Era) is over, and the outcome of Big Bang Nucleosynthesis however long it occurs is the same, it really doesn't matter.

We have exhaustively studied the cosmic background radiation, which we believe derives from about 380,000 years after the Big Bang (roughly 13.8 billion years ago), long before the first star was born, when the average temperature in the universe was an intolerably hot 4,000 K. Indeed, the Planck experiment was so impressive that it pretty much captured all information about the cosmic background radiation that it is even theoretically possible to obtain with solar system based instruments. There is basically nothing for telescopes to see from before this era, so the best we can do is rule out phenomena that would have left a detectable signature if they had occurred before then and did not.

Gravity Modification Still Works To Describe Galaxy Dynamics

We report a correlation between the radial acceleration traced by rotation curves and that predicted by the observed distribution of baryons. The same relation is followed by 2693 points in 153 galaxies with very different morphologies, masses, sizes, and gas fractions. The correlation persists even when dark matter dominates. Consequently, the dark matter contribution is fully specified by that of the baryons. The observed scatter is small and largely dominated by observational uncertainties. This radial acceleration relation is tantamount to a natural law for rotating galaxies.
Stacy McGaugh, Federico Lelli, Jim Schombert, "The Radial Acceleration Relation in Rotationally Supported Galaxies" (September 19, 2016).

The dynamics of galaxies of almost every kind can be fully explained by the distribution of ordinary matter in those galaxies.

Specifically, the observed radial gravitational acceleration (gobs) is a function of the acceleration due to the baryonic (ordinary) matter alone (gbar) of the form:
gobs = gbar/(1 - e^(-sqrt(gbar/g†)))

where g† is an acceleration scale physical constant with a value of g† = 1.20 ± 0.02 (random) ± 0.24 (systematic) × 10^-10 m s^-2. The random error is a 1σ value, while the systematic uncertainty represents the 20% normalization uncertainty in the mass-to-light ratio Υ. A discussion in the paper explains that much of the error involves uncertainty in accurately determining the ordinary matter mass of a galaxy and accurately measuring rotation speeds.

Stated slightly differently, the gravitational acceleration attributable to dark matter is gDM = gobs - gbar, and this in turn is given by the formula:

gDM = gbar/(e^(sqrt(gbar/g†)) - 1)
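A minimal Python sketch of both relations, using the best-fit acceleration scale quoted above; the sample gbar values are purely illustrative and are not data points from the paper:

import math

G_DAGGER = 1.20e-10  # best-fit acceleration scale in m/s^2 (from the paper, quoted above)

def g_obs(g_bar):
    """Observed radial acceleration from the baryonic acceleration g_bar (both in m/s^2)."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / G_DAGGER)))

def g_dark(g_bar):
    """The part of the acceleration conventionally attributed to dark matter: g_obs - g_bar."""
    return g_bar / (math.exp(math.sqrt(g_bar / G_DAGGER)) - 1.0)

# Illustrative accelerations spanning the high-acceleration (Newtonian) and low-acceleration regimes.
for g_bar in [1e-8, 1e-10, 1e-12]:
    print(f"g_bar = {g_bar:.0e}  ->  g_obs = {g_obs(g_bar):.2e}, g_DM = {g_dark(g_bar):.2e} m/s^2")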

As a result, one of two things must be true. Either General Relativity is not an accurate description of weak gravitational fields and needs to be modified in some fashion to reflect this reality, or there is some mechanism by which the distribution of dark matter in a galaxy and the distribution of ordinary matter in a galaxy are rigidly intertwined. The paper expresses the idea in this way (emphasis added, citations omitted):
Possible interpretations for the radial acceleration relation fall into three broad categories.
1. It represents the end product of galaxy formation.
2. It represents new dark sector physics that leads to the observed coupling.
3. It is the result of new dynamical laws rather than dark matter.
None of these options are entirely satisfactory. 
In the standard cosmological paradigm, galaxies form within dark matter halos. Simulations of this process do not naturally lead to realistic galaxies. Complicated accessory effects (“feedback”) must be invoked to remodel simulated galaxies into something more akin to observations. Whether such processes can satisfactorily explain the radial acceleration relation and its small scatter remains to be demonstrated. 
Another possibility is new “dark sector” physics. The dark matter needs to respond to the distribution of baryons (or vice-versa) in order to give the observed relation. This is not trivial to achieve, but the observed phenomenology might emerge if dark matter behaves as a fluid or is subject to gravitational polarization. 
Thirdly, the one-to-one correspondence between gbar and gobs suggests that the baryons are the source of the gravitational potential. In this case, one might alter the laws of dynamics rather than invoke dark matter. Indeed, our results were anticipated over three decades ago by MOND. Whether this is a situation in which it would be necessary to invent MOND if it did not already exist is worthy of contemplation. 
In MOND, eq. 4 [ed. the first equation above] is related to the MOND interpolation function. However, we should be careful not to confuse data with theory. Equation 4 provides a convenient description of the data irrespective of MOND. 
Regardless of its theoretical basis, the radial acceleration relation exists as an empirical relation. The acceleration scale g† is in the data. The observed coupling between gobs and gbar demands a satisfactory explanation. The radial acceleration relation appears to be a law of nature, a sort of Kepler’s law for rotating galaxies
Nothing axiomatic about dark matter theories explains why there is such a tight relationship between the distribution of ordinary matter and the distribution of dark matter in a galaxy, although a wide variety of dark matter theories approximately reproduce observed galactic behavior. But, this could simply reflect our ignorance of an emergent property of what is naively a pretty simple dark matter model.

In this paper, McGaugh, a leading physicist who advocates for a gravity modification solution to dark matter phenomena in other publications, doesn't resolve that question.

In particular, what his paper silently suggests is that if you want to test a dark matter model, you shouldn't just do simulations. You should be able to reproduce the simple and tight analytical relationship that McGaugh derives from observation from your reasoning about how dark matter halos form in your model in the presence of ordinary matter. If that isn't possible, your dark matter model is probably wrong.

It also silently makes the point that if this empirical formula can describe reality with a single parameter within its domain of applicability, then any correct dark matter theory ought to be able to either do the same, or get a much tighter fit to the data with each additional parameter (which is almost impossible at this time because the magnitude of the observational uncertainty about the data points in this study is large enough to account for essentially all of the uncertainty and scatter in the final result).

An article discussing the paper is here, and the main fault in the article is the assumption that this research is new or groundbreaking, when it really just sums up conclusions that have been widely discussed ever since Dr. Milgrom published his first paper on the topic, dubbing the theory MOND, thirty-four years ago in 1982.

Since 1982, MOND has on more than one occasion accurately predicted the dynamics of new kinds of gravitational systems that had not previously been observed, while dark matter theories did not.

MOND in its original conception was not relativistic; it was a modification of Newtonian gravity, which we have known to be flawed for a century, even though it is used for many practical purposes in terrestrial, solar system, and galactic scale calculations. 

Fortunately, new data will allow the discussion to continue on a genuine scientific footing. It also helps that there is no singular consensus on exactly how gravity should be modified or exactly what parameters the dark matter theory should have.

Consider for example, a point made by McGaugh in a 2010 conference presentation:
Scientists believe that all ordinary matter, the protons & neutrons that make up people, planets, stars and all that we can see, are a mere fraction -- some 17 percent -- of the total matter in the Universe. The protons and neutrons of ordinary matter are referred to as baryons in particle physics and cosmology. 
The remaining 83 percent apparently is the mysterious "dark matter," the existence of which is inferred largely from its gravitational pull on visible matter. Dark matter, explains McGaugh "is presumed to be some new form of non-baryonic particle - the stuff scientists hope the Large Hadron Collider in CERN will create in high energy collisions between protons." 
McGaugh and his colleagues posed the question of whether the "universal" ratio of baryonic matter to dark matter holds on the scales of individual structures like galaxies.
"One would expect galaxies and clusters of galaxies to be made of the same stuff as the universe as a whole, so if you make an accounting of the normal matter in each object, and its total mass, you ought to get the same 17 percent fraction," he says. "However, our work shows that individual objects have less ordinary matter, relative to dark matter, than you would expect from the cosmic mix; sometimes a lot less!" 
Just how much less depends systematically on scale, according to the researchers. The smaller an object the further its ratio of ordinary matter to dark matter is from the cosmic mix. McGaugh says their work indicates that the largest bound structures, rich clusters of galaxies, have 14 percent of ordinary baryonic matter, close to expected 17 percent. 
"As we looked at smaller objects - individual galaxies and satellite galaxies, the normal matter content gets steadily less," he says. "By the time we reach the smallest dwarf satellite galaxies, the content of normal matter is only ~1percent of what it should be. (Such galaxies' baryon content is ~0.2 percent instead of 17 percent). The variation of the baryon content is very systematic with scale. The smaller the galaxy, the smaller is its ratio of normal matter to dark matter. Put another way, the smallest galaxies are very dark matter dominated.
In contrast, elliptical galaxies have far less dark matter relative to their ordinary matter content. 

BLAM

This post is about BLAM - Baryon number, Lepton number, Antimatter and Matter (n.b. the punctuation convention in physics is to omit periods from acronyms, probably because physicists use them so often and it takes longer to type extra characters).

In the Standard Model, baryon number, a.k.a. B (the number of quarks minus the number of anti-quarks, divided by three), is conserved, and lepton number, a.k.a. L (the number of leptons minus the number of anti-leptons), is separately conserved.

Thus, every interaction that creates a quark must also create an anti-quark, every interaction that destroys a quark must also destroy an anti-quark, every interaction that creates a lepton must create an anti-lepton, and every interaction that destroys a lepton must destroy an anti-lepton.

There is an exception to this rule in the Standard Model, called a sphaleron process, which conserves B-L, but not B and L separately.
A sphaleron (Greek: σφαλερός "slippery") is a static (time-independent) solution to the electroweak field equations of the Standard Model of particle physics, and it is involved in processes that violate baryon and lepton numbers. Such processes cannot be represented by Feynman diagrams, and are therefore called non-perturbative. Geometrically, a sphaleron is simply a saddle point of the electroweak potential energy (in the infinite-dimensional field space), much like the saddle point of the surface z(x,y)=x2−y2 in three-dimensional analytic geometry. 
In the standard model, processes violating baryon number convert three baryons to three antileptons, and related processes. This violates conservation of baryon number and lepton number, but the difference B−L is conserved. In fact, a sphaleron may convert baryons to anti-leptons and anti-baryons to leptons, and hence a quark may be converted to 2 anti-quarks and an anti-lepton, and an anti-quark may be converted to 2 quarks and a lepton. A sphaleron is similar to the midpoint (τ=0) of the instanton, so it is non-perturbative. This means that under normal conditions sphalerons are unobservably rare. However, they would have been more common at the higher temperatures of the early universe.
The Standard Model also conserves electromagnetic charge.

At the scale of the universe, there are strong indications that the global electric charge of the universe is zero.

But, baryon number is not zero or anywhere close. The baryon number of the universe is approximately the combined total number of protons and neutrons in the universe, increased slightly by unstable baryons with mean lifetimes of less than a millionth of a second, and decreased slightly by the number of anti-baryons in the universe at any given moment. So, the baryon number of the universe is roughly the total ordinary matter mass of the universe measured in units of GeV/c^2 (often abbreviated to GeV based on the convention that in "natural units" c^2 is one and because physicists are just plain lazy sometimes), since nearly all of that mass is in protons and neutrons, each with a mass just under 1 GeV.

From the perspective of someone trying to trace the history of the universe back to the Big Bang, this is problematic, because if this law of physics holds all the way back to the beginning of the universe, then the net baryon number at the moment of the Big Bang was the same as it is now. And, if the universe started as "pure energy" (a near universal assumption of cosmologists and theoretical physicists), then we have no known process except the sphaleron to make that transition.

On the other hand, given a positive net baryon number, a matter dominated universe makes perfect sense, and we have a process to explain it (the same applies to an antimatter dominated universe, but if we lived in an antimatter dominated universe we would call it "matter" and would call matter "antimatter"):

* Protons, neutrons, antiprotons and antineutrons are the only remotely stable baryons in the universe (mesons aren't stable either, but they are irrelevant here because they have a baryon number of zero, since they contain equal numbers of quarks and antiquarks).
* A proton that encounters an antiproton will annihilate into energy, as will a neutron that encounters an antineutron.
* Therefore, if the net baryon number in the universe is positive, sooner or later almost every antibaryon in the universe will be annihilated by a baryon, leaving only baryons.

The story on the lepton side is a bit more complex. There is only one stable charged lepton, the electron. But, there are three kinds of neutrinos (the electron neutrino, the muon neutrino and the tau neutrino, because somebody ran out of creativity when coming up with names for them), all of which are stable.

If an electron encounters a positron (a.k.a. an anti-electron), they annihilate. So, it makes sense that there are far more electrons in the universe than there are positrons, because they annihilate one to one until only the more numerous kind is left. The positrons we do observe in the universe are probably overwhelmingly created by recent processes and simply haven't met their nemesis yet, rather than being primordial.

But, the kicker is that it is not true in general that a lepton that encounters an antilepton will annihilate. An electron or positron that encounters a neutrino of any kind will not annihilate, because that interaction would violate conservation of charge (although an electron and an antineutrino could merge to form a W- boson, and a positron and a neutrino could merge to form a W+ boson).

Even less widely known is the fact that a neutrino that encounters an antineutrino of the same type will not annihilate into a photon, the way charged particles and their antiparticles do, because neutrinos don't couple to the electromagnetic force that photons mediate. Conservation of charge also prevents them from forming W bosons. And, since they carry no QCD color charge, they can't couple to gluons and so can't annihilate to gluons either.

A neutrino and an antineutrino of the same type can couple to form a Z boson, and a Z boson could decay to a particle-antiparticle pair of quarks or charged leptons, and these in turn could annihilate.

But, a W boson has a mass of about 80 GeV and a Z boson has a mass of about 90 GeV, while an electron-antineutrino pair has a combined mass of about 0.00051 GeV and a neutrino-antineutrino pair has a combined mass of no more than a few billionths of a GeV. So, these interactions can take place only through virtual W or Z bosons (a quantum tunneling-like route whose probability is highly suppressed by the mass difference unless the particles have relativistic kinetic energies), or when the particles have so much kinetic energy that a real W or Z boson can be produced.
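
To put rough numbers on that suppression, here is a back-of-the-envelope sketch (the boson and electron masses are rounded published values; the neutrino mass is an order-of-magnitude placeholder of my own, since only upper bounds are actually known) comparing the rest mass of each lepton pair to the mass of the weak boson it would need to produce.

# Rough rest masses in GeV.
m_W = 80.4
m_Z = 91.2
m_electron = 0.000511
m_neutrino = 1e-9  # ~1 eV: an assumed placeholder, not a measured value

electron_antineutrino_pair = m_electron + m_neutrino
neutrino_antineutrino_pair = 2 * m_neutrino

# How far each pair's rest mass falls short of the real boson it would need to make:
print(electron_antineutrino_pair / m_W)   # ~6e-6
print(neutrino_antineutrino_pair / m_Z)   # ~2e-11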

Also, mergers of a charged lepton and an antineutrino into a W- boson (and the antimatter equivalent) are commonly believed to require matching flavors: for example, an electron must couple to an electron antineutrino, rather than to a muon antineutrino or a tau antineutrino. Likewise, a neutrino-antineutrino coupling to form a Z boson is commonly believed to require that, for example, a muon neutrino couple only to a muon antineutrino.

This isn't totally certain, because the big experimental discovery of the last twenty years or so in neutrino physics has been that neutrinos emphatically do not seem to conserve lepton flavor. A muon neutrino, for example, can, with a well defined probability, oscillate into either an electron neutrino or a tau neutrino, which violates neutrino flavor conservation, but does not violate lepton number conservation.  

(I state this with less than perfect certainty because one of the common ways that neutrino oscillation is described is as a blending of the weak force neutrino flavors with the three neutrino mass eigenstates. So it may actually be the case that a muon neutrino that shifts from mass eigenstate two to mass eigenstate one is just a lighter than usual muon neutrino, while a muon neutrino that shifts from mass eigenstate one to mass eigenstate two is just a heavier than usual muon neutrino.)

To make a long story short, however, the bottom line is that while there are very efficient processes that we would expect to remove almost all antiquarks and charged antileptons from the universe when quarks outnumber antiquarks and charged leptons outnumber charged antileptons, there is no similarly efficient process by which antineutrinos are removed from the universe.

We also know that the number of neutrinos in the universe (including antineutrinos) profoundly exceeds the baryon number of the universe and the number of charged leptons in the universe combined (for baryons and charged leptons, the gross numbers and the numbers net of antiparticles are almost the same).

Since virtually all stable positively charged particles in the universe are protons, and virtually all stable negatively charged particles in the universe are electrons, and the net charge of the universe appears to be zero, the number of protons and the number of electrons must be very nearly equal. The protons' contribution to B and the electrons' contribution to L therefore nearly cancel in the quantity B-L, and the only other non-neutrino contribution of any size comes from neutrons, which carry baryon number but no lepton number and which are vastly outnumbered by the neutrinos left over from the Big Bang.

This implies that if B-L for the universe as a whole is zero, then the number of neutrinos and the number of antineutrinos in the universe must be equal to within a tiny fraction of their total number, that is, very nearly balanced.
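
A few round numbers (order-of-magnitude assumptions of my own for the observable universe, not figures from any particular source) show why the neutron contribution does not change this conclusion.

# Rough counts for the observable universe; all values are order-of-magnitude assumptions.
n_baryons = 1e80           # rough total baryon count
proton_fraction = 0.87     # roughly 87% of nucleons are protons (hydrogen plus helium)
n_protons = proton_fraction * n_baryons
n_neutrons = (1 - proton_fraction) * n_baryons
n_electrons = n_protons    # forced by overall charge neutrality
n_neutrinos = 1e89         # relic neutrinos plus antineutrinos, roughly a billion per baryon

B_minus_L_without_neutrinos = (n_protons + n_neutrons) - n_electrons
print(B_minus_L_without_neutrinos)                # ~1e79, i.e. just the neutron count
print(B_minus_L_without_neutrinos / n_neutrinos)  # ~1e-10 fractional neutrino asymmetry needed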

Now, another way to deal with this issue would be for neutrinos to be Majorana particles that are their own antiparticles, and hence, for them to have a net lepton number of zero. But, this is profoundly problematic because, if that were the case, then the conservation of lepton number principle which was used to predict the existence of neutrinos in the first place wouldn't hold, when there is ample experimental evidence that neutrinos do indeed appear in decays when and only when they are needed to maintain the correct lepton number. This is one of the main reasons that I greatly doubt the proposition that neutrinos have Majorana mass and that they are Majorana particles. If neutrinos could oscillate into antineutrinos and back, even if doing so took immense amounts of energy, lots of lepton number violating processes would be observable experimentally.

What implications does this leave us with?

Either the numbers of neutrinos and antineutrinos are exactly equal, or the universe does not respect B-L conservation, or the initial value of B-L in the universe is not zero.

Now, nobody has yet measured the ratio of neutrinos to antineutrinos in the universe, and getting a representative sample may be difficult. But, the task is simplified by the fact that neutrinos and antineutrinos interact only feebly with other kinds of matter, and by the fact that almost all neutrinos observed in nature seem to have relativistic kinetic energies relative to their rest masses. So, there is no especially strong reason for the ratio of neutrinos to antineutrinos in the vicinity of the solar system to be much different from that in other places in the universe.

In particular, if we managed to measure this ratio at some point and found, for example, that 66% +/- 6.6% of the neutrinos and antineutrinos were antineutrinos, we could say with a fairly high degree of confidence that we do not live in a universe in which the initial value of B-L was zero and the quantity B-L is conserved.
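
As a quick sanity check on how decisive such a result would be, here is a trivial significance calculation using the made-up numbers above.

# How many standard deviations is the hypothetical measurement from an even split?
observed = 0.66
uncertainty = 0.066
even_split = 0.50

z = (observed - even_split) / uncertainty
print(z)  # ~2.4 sigma, roughly the 99% confidence level on a one-sided test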

It is hard enough to measure neutrinos at all, and even harder to distinguish experimentally between a neutrino and an antineutrino, but this kind of measurement is not impossible in principle with technology not much more advanced than what we already have, and with resources that the human race could collectively spare for a project like this. And, the smart money is on an excess of antineutrinos over neutrinos.

Now, it is hard enough to come up with an extension of the Standard Model that is consistent with experimental evidence and that violates B and L separately while preserving B-L. Proton decay experiments, neutrinoless double beta decay experiments, and collider searches for flavor changing neutral currents, for example, have all placed incredibly severe constraints on B and L violations, and neither the Standard Model nor almost any common GUT theory proposes violations of the separate B or L conservation laws that do not respect B-L conservation.

So, if the mix of neutrinos and antineutrinos in the universe is not statistically consistent with 50% neutrinos, there are basically only four possibilities:

1. The Big Bang had zero B and zero L, but some process in the very early universe not yet conceived violated not just B conservation and not just L conservation, but also B-L conservation.

Coming up with a process that violates B, L, and B-L conservation, but only violates any of them at extremely high Big Bang energies is a real challenge, but perhaps someone is up to it.

2. The Big Bang had non-zero B and/or L.  This is ugly, but it is the default answer under the Standard Model.  It is the default because the only B and L violating process in the Standard Model, the sphaleron, does not appear to be capable of creating a large enough imbalance, in a short enough amount of time, and specifically in the direction of a large positive baryon number, to account for what is observed.

3. The universe conserves B-L, and there is a missing pool of leptons or antileptons (as the neutrino ratio indicates) hiding somewhere.

4. The universe conserves B and L, and there is a missing pool of antibaryons and a missing pool of leptons or antileptons (as the neutrino ratio indicates) hiding somewhere.

Where could missing B or L hide in scenarios 3 or 4 above?

* One possibility is that one or more kinds of dark matter carry B and/or L numbers that balance things out, but cannot annihilate with Standard Model matter.

There is estimated to be something on the order of 9 times as much dark matter mass in the universe as there is ordinary matter.  And, a dark sector could easily have enough particles (if they had masses of 100 MeV each or less, and each carried a unit of baryon number) to bring the universe's aggregate baryon number to zero.
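
As a rough feasibility check (all of the inputs below are round-number assumptions of mine, and I assume each dark particle carries one unit of negative baryon number), the mass budget works out comfortably.

# Could a dark sector nine times heavier than ordinary matter carry away a baryon
# number of order 1e80? Compute the heaviest per-particle mass that still works.
nucleon_mass_gev = 0.94
n_baryons = 1e80                                    # rough baryon number to cancel
ordinary_mass_gev = n_baryons * nucleon_mass_gev    # ~9e79 GeV in nucleons
dark_mass_gev = 9 * ordinary_mass_gev               # dark matter ~9x ordinary matter

max_particle_mass_gev = dark_mass_gev / n_baryons
print(max_particle_mass_gev)  # ~8.5 GeV: anything lighter, such as 0.1 GeV, leaves particles to spare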

But, to bring lepton number to zero, dark matter particles with lepton number would have to have very tiny masses, far lighter, for example, than the keV scale sterile neutrinos proposed as warm dark matter.  And, such tiny masses are inconsistent with thermal relic dark matter.
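
The same arithmetic shows why the lepton side is so much harder (again, round-number assumptions of mine, with the lepton imbalance taken to be of order the relic neutrino count).

# If the same nine-times-ordinary-matter mass budget had to cancel a lepton number
# comparable to the ~1e89 relic neutrinos, each dark particle would have to be very light.
ordinary_mass_gev = 1e80 * 0.94          # ~1e80 nucleons at ~0.94 GeV each
dark_mass_gev = 9 * ordinary_mass_gev
lepton_number_to_cancel = 1e89           # assumed to be of order the relic neutrino count

max_particle_mass_gev = dark_mass_gev / lepton_number_to_cancel
print(max_particle_mass_gev)  # ~8e-9 GeV, i.e. a few eV, far below the keV scale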

Within the realm of beyond the Standard Model theories that have any meaningful popularity, scenario 3 with axion dark matter that carries lepton number is probably the only plausible fit to these constraints.

But, there are also good reasons to doubt that non-thermal relic axion dark matter carrying lepton number is the solution, and you would still need a very efficient process, confined to very high energies, that violates B and L while conserving B-L in order to give rise to a positive baryon number for the universe very early on.

* A second possibility is that there are regions of the universe that are predominantly made of antibaryons and the missing leptons or antileptons (as the neutrino ratio indicates). But, this possibility has been examined rather carefully by physicists and has been pretty much ruled out, essentially because there is no mechanism to segregate the two parts of the universe and because the boundary region would be obvious as matter-antimatter annihilations flared constantly.

* Another possibility is that missing antibaryons and missing leptons or antileptons (as the neutrino ratio indicates) are hiding outside the observable universe while still being within the light cone of the Big Bang.  This leaves essentially two possibilities (which are not mutually exclusive).  

One is that the missing particles have been preferentially gobbled up by black holes, whose contents are not observationally accessible because they are separated from the rest of the universe by an event horizon. But, there is no obvious reason that antibaryons and the wrong kind of leptons should be preferentially absorbed by black holes when the attraction is seemingly all gravitational.

The other is that the missing particles are out there but are hiding earlier in time than the Big Bang. There are some heuristic motivations to think that this could lead to segregation, because antimatter involves a flip of CPT relative to normal matter and one of the ways that flip can happen is in time. But it is still a bit of a sketchy concept, because it basically requires the second law of thermodynamics, one of the main arrows of time, to run in the opposite direction before the Big Bang.

For now, we won't get many hints until we can measure the universe's neutrino-antineutrino ratio, so we'll have to just ponder and guess.


Tuesday, September 20, 2016

Reframing Fundamental Physics

Even though we have the Standard Model of physics, a lot of it basically isn't used for any applied purpose other than analyzing the debris created when we hurl protons or electrons or atoms at each other at very high speeds in particle accelerators.

The Limited Practical Applications Of QCD

In particular, while we have a full set of equations for quantum chromodynamics (the physics of the strong force), almost no applications outside the collider context actually use QCD calculations. Instead, nuclear engineers and physicists other than high energy collider physicists use tables of the experimentally measured properties of various hadrons and atomic nuclei, because experimental measurements of all commonly encountered hadrons and atomic nuclei are profoundly more precise than first principles QCD calculations in the comparatively low energy circumstances in which we usually encounter them in nature and in practical applications of nuclear physics.

QCD does provide us with a list of essentially all possible pseudo-scalar mesons, vector mesons, baryons, tetraquarks and pentaquarks, but even in that role it fails to straightforwardly explain the observed spectrum of scalar mesons and axial vector mesons, and likewise fails to straightforwardly explain why we don't observe pure glueballs with the properties that one calculates from QCD. QCD certainly helps us distinguish between plausible ways of explaining what we see and implausible ones.  But, it doesn't even definitively provide us with a complete list of all possible hadrons.  Ultimately, in the year 2016, we rely on experimental results and not QCD theoretical considerations to determine that.

The Limited Practical Application Of Standard Model Weak Force Calculations And Antimatter

Similarly, tables are generally used, rather than first principles calculations, to predict the weak force decays of hadrons that appear outside colliders (top quarks, which don't hadronize, pretty much only appear in the collider context anyway). Indeed, for the most part, the only weak force interactions that ever matter for practical purposes are the weak force decay of down quarks to up quarks in neutrons, also known as "beta decay", and the weak force decay of muons to electrons.

Likewise, while we have an excellent experimental and theoretical understanding of antimatter, in practice, pretty much the only kinds of antimatter encountered outside the collider context are the electron antineutrino produced in beta decay, the muon antineutrino produced in the decay of muons to electrons, and the tau antineutrino produced when an electron or muon antineutrino oscillates into a tau antineutrino. And, it takes truly extraordinary instrumentation to detect any kind of neutrino (apart from what can be inferred from the missing energy and momentum in an interaction that produces them). To the best of my knowledge, currently existing instrumentation cannot even distinguish between neutrinos and antineutrinos when one is directly detected.

Furthermore, while scientists occasionally encounter muons (and, even more rarely, positrons in cosmic rays) in nature, weak force decays of other fundamental or composite particles, such as hadrons other than neutrons, are quite rare outside the collider context. Probably the only ones encountered with any frequency are pions and kaons, which are again well described based upon experimental evidence.

Indeed, in most contexts, experimental data collapses the weak and strong forces into simple descriptions of the decays of various kinds of fundamental and composite particles.

The Limited Practical Application Of Higgs Physics

And, while the Higgs boson is important from a fundamental theoretical perspective for explaining the nature of the masses of the fundamental particles of the Standard Model, in practical applications the mass arising from the Higgs mechanism in a hadron is so muddied by the mass arising from the gluon fields that it is irrelevant for hadrons. In the case of charged leptons, the experimentally measured masses are known precisely, and there is no practical need to know how these masses actually arise.

Quantum Electrodynamics (QED) Is Used Every Day

Of course, this doesn't address photons (which are described by the part of the Standard Model called QED), which are understood for all practical purposes exactly, or the quantum mechanical motion of individual particles, like an electron, in the absence of interactions with force carrying bosons.

Conclusion

In summary, high energy physics and a large share of the Standard Model, while providing a deeper understanding of the universe and the means, in principle, to explain a book full of experimentally measured physical constants, are for the most part used in no context other than explaining phenomena that pretty much never happen outside of colliders.

Counterargument

While the high energies reached in colliders arise in only very isolated non-collider circumstances in nature today, this doesn't necessarily mean that high energy physics has never been, or is never, relevant.

First, learning about this greater complexity provides hints toward understanding the aspects of fundamental physics we still do not understand.

Second, in the very early universe, shortly after the Big Bang, there were energies as high as, and indeed, well in excess of, those found in colliders. Understanding high energy physics may shed light on cosmology and astronomy as a result.

Third, there may be high energy physics at work in isolated circumstances in nature today, such as near black hole event horizons and in supernovas.