Saturday, December 15, 2018

Alternative Facts Strike The Scientific Establishment

Davidski at Eurogenes is more than a little appalled, and rightly so, that the seemingly reputable Max-Planck-Institut für Menschheitsgeschichte, a linguistics research center in Germany, is still circulating a flashy animated presentation claiming that the Indo-European languages made their way to South Asia, Western Europe and Eastern Europe as separate spokes from a common Armenian hub around 8,000 years ago. 

This claim, as Davidski correctly points out with solid, published research support, is contrary to overwhelming evidence from modern and ancient DNA, and from the historical accounts needed to place that DNA evidence in a linguistic context.

At this point, it should be very hard for any legitimate peer reviewed publication to accept a paper proposing that hypothesis, since it simply doesn't hold water. There are certainly some respects in which the orthodox paradigm in the field of Indo-European linguistic origins could be wrong. I even support some of those heterodox hypotheses myself. But, this is not one of them.

Honestly, it is a little hard to figure out why an institution like that could support a position that rings of a Trump-like belief in "alternative facts". But, inertia is a powerful thing and old scholars can be very slow to acknowledge that their old hypotheses have been obviously disproven.

Thursday, December 13, 2018

Ancient Iron Age DNA From Normandy Shows Continuity With The Bronze Age

The uniparental genetics of 39 people buried in a Celtic Iron Age cemetery in Normandy, in what is now France, show substantial continuity with the Bronze Age populations of the region, slightly shuffled by the exchange of people and genes up and down the Atlantic coast of Europe from Spain to Great Britain.

This is in accord with numerous other genetic studies which collectively tend to show that in most of Europe, something approximating the modern population genetic mix was established in the Bronze Age.

How Did Our Species Emerge Within Africa?

This quite non-technical paper argues for a model in which human origins are older than conventionally assumed and are the product of population structure and hybridization, both between structured branches within our species and with archaic hominins who were contemporaneous with them, in a process that may have extended across all of Africa and into West Asia. The paper explores lots of ideas and raises many questions, but reaches few conclusions. It is a good introduction to some of the leading questions in the field of pre-Out of Africa human origins.
We challenge the view that our species, Homo sapiens, evolved within a single population and/or region of Africa. The chronology and physical diversity of Pleistocene human fossils suggest that morphologically varied populations pertaining to the H. sapiens clade lived throughout Africa. Similarly, the African archaeological record demonstrates the polycentric origin and persistence of regionally distinct Pleistocene material culture in a variety of paleoecological settings. Genetic studies also indicate that present-day population structure within Africa extends to deep times, paralleling a paleoenvironmental record of shifting and fractured habitable zones. We argue that these fields support an emerging view of a highly structured African prehistory that should be considered in human evolutionary inferences, prompting new interpretations, questions, and interdisciplinary research directions.
Eleanor M.L. Scerri, et al., "Did Our Species Evolve in Subdivided Populations across Africa, and Why Does It Matter?" 33(8) Trends Ecol Evol. 582 (August 2018) (open access).

Wednesday, December 12, 2018

From The Lab: The Sterile Neutrino Controversy Continues, Still No Sign Of SUSY, And CKM Fits

* A recent review paper below suggests that there is strong evidence for a 1 eV sterile neutrino.

I am far more skeptical, because this estimate omits experimental searches with negative results, such as the MINOS, MINOS+, Daya Bay and JUNO experiments. There are reactor experiments that have not found the 1 eV sterile neutrino anomaly, or that can explain it some other way (unlike the cherry picked examples of those that have found it). Aggregating positive anomaly results without including the negative ones is a gross example of a look elsewhere effect that can't be ignored, and accounting for it properly would greatly reduce the true statistical significance of these observations.

Cosmology data also disfavor a light sterile neutrino. Measurements of the effective number of neutrino types with masses under 10 eV (Neff) show it to be 3 rather than 4 (after a radiation adjustment of about 0.046 to each number) at six sigma of confidence, when it should be about 4.046 with a 1 eV sterile neutrino. Specifically, Neff = 2.99 ± 0.17, and the sum of the neutrino masses is tightly constrained to ∑mν < 0.12 eV. A 1 eV sterile neutrino that oscillates with the active neutrinos would also drive the cosmologically measured sum of the neutrino masses well above that constraint: even dividing by four and multiplying by three to renormalize for the additional neutrino type, and giving the active neutrinos their minimum combined mass, implies a sum of neutrino masses of about 0.8 eV, roughly six times the current limit. You need not just sterile but "secret" neutrinos to overcome these cosmology data problems.
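The renormalization argument above is simple arithmetic, sketched here in Python (the 0.06 eV floor on the sum of the active masses is the standard normal-ordering minimum; all other inputs are the rough values quoted above, so this is a back-of-envelope check rather than a rigorous calculation):

```python
n_active = 3
n_with_sterile = 4
m_sterile = 1.0          # eV, the anomaly-motivated sterile neutrino mass
m_active_min = 0.06      # eV, minimum sum of active masses (normal ordering)

# Sum of masses with a fully thermalized 1 eV sterile neutrino...
total = m_sterile + m_active_min
# ...renormalized back down to three neutrino species, as in the text.
renormalized = total * n_active / n_with_sterile

limit = 0.12             # eV, the quoted cosmological bound on the sum
print(renormalized)          # ~0.8 eV
print(renormalized / limit)  # ~6-7 times over the limit
```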

Also, a 1 eV sterile neutrino would be "hot" dark matter, rather than "cold" (ca. GeV mass scale) or "warm" (keV mass scale) dark matter, so it couldn't provide a viable dark matter candidate either. And, there is no astronomy evidence for "hot dark matter", which would suppress large scale structure at the galactic level in the universe.

The hypothetical sterile neutrino in the phenomenological model being fit to the observed anomalies is also not a good fit in the overall scheme of Standard Model fundamental particles. An additional active neutrino type, for example, would be contrary to W and Z boson decay data and would require a fourth generation of fundamental fermions, so that can be ruled out. But oscillations between the active neutrinos and a sterile neutrino that does not share their weak isospin quantum number would also seem problematic, and should lead to effects in processes beyond tree level that are not observed.

In short, despite the high claimed significance of this result, I am quite comfortable that these results are due to systematic measurement errors not accounted for in a certain type of reactor experiment, rather than being true indications of a new fundamental particle: a sterile neutrino that oscillates at low frequencies with the three active neutrino types and has a mass on the order of 1 eV.
For a long time there were 3 main experimental indications in favor of the existence of sterile neutrinos: ν̄e appearance in the ν̄μ beam in the LSND experiment, ν̄e flux deficit in comparison with theoretical expectations in reactor experiments, and νe deficit in calibration runs with radioactive sources in the Ga solar neutrino experiments SAGE and GALLEX. All three problems can be explained by the existence of sterile neutrinos with the mass square difference in the ballpark of 1 eV². Recently the MiniBooNE collaboration observed electron (anti)neutrino appearance in the muon (anti)neutrino beams. The significance of the effect reaches 6.0σ level when combined with the LSND result. Even more recently the NEUTRINO-4 collaboration claimed the observation of ν̄e oscillations to sterile neutrinos with a significance slightly higher than 3σ. If these results are confirmed, New Physics beyond the Standard Model would be required. More than 10 experiments are devoted to searches of sterile neutrinos. Six very short baseline reactor experiments are taking data just now. We review the present results and perspectives of these experiments.

The introduction to this paper notes that:
Oscillations of the three neutrino flavors are well established. Two mass differences and three angles describing such oscillations have been measured [1]. Additional light active neutrinos are excluded by the measurements of the Z boson decay width [2]. 
Nevertheless, existence of additional sterile neutrinos is not excluded. Moreover, several effects observed with about 3σ significance level can be explained by active-sterile neutrino oscillations. 
The GALLEX and SAGE Gallium experiments performed calibrations with radioactive sources and reported the ratio of numbers of observed to predicted events of 0.88 ± 0.05 [3]. This deficit is the so called “Gallium anomaly” (GA). 
Mueller et al. [4] made new estimates of the reactor ν̄e flux which is about 6% higher than experimental measurements at small distances. This deficit is the so called “Reactor antineutrino anomaly” (RAA). 
Both anomalies can be explained by active-sterile neutrino oscillations at Very Short Baselines (VSBL) requiring a mass-squared difference of the order of 1 eV² [5]. 
The LSND collaboration reported observation of ν̄μ → ν̄e mixing with the mass-squared difference bigger than ∼0.1 eV² [6]. The initial results of the MiniBooNE tests of this signal were inconclusive and probably indicated additional effects [7]. However, in May, 2018 the MiniBooNE collaboration presented the 4.7σ evidence for electron (anti)neutrino appearance in the muon (anti)neutrino beams [8]. The effect significance reaches 6.0σ when the MiniBooNE and LSND results are combined. The MiniBooNE and LSND data are consistent, however the energy spectrum of the excess does not agree too well with the sterile neutrino explanation. 
The best point in the sterile neutrino parameter space corresponds to a very large mixing (sin²2θ = 0.92) and a small mass square difference of Δm²₁₄ = 0.041 eV² (see Figure 1). However, this region in the sterile neutrino parameter space is disfavored by other experiments and only a small area with larger mass square differences up to 2 eV² and smaller mixing is still allowed by the global fits [9, 10]. 
Very recently the NEUTRINO-4 collaboration claimed the observation of ν̄e oscillations to sterile neutrinos with a significance slightly larger than 3σ [11]. The measured sterile neutrino parameters are surprisingly large: Δm²₁₄ = 7.22 eV² and sin²(2θ₁₄) = 0.35. These values are in contradiction with the limits obtained by the reactor ν̄e flux measurements at larger distances (see, for example [12]). However, these limits depend on the phenomenological predictions of the reactor ν̄e flux which are model dependent. 
There are also cosmological constraints on the effective number of neutrinos [2, 13]. However, in several theoretical models sterile neutrinos (at least with not too large masses) are still compatible with these constraints. Details can be found in a review of sterile neutrinos [14]. 
At the DANSS detector in Russia:
The optimum point of the RAA and GA fit is clearly excluded. Figure 3 shows the 90% and 95% Confidence Level (CL) area excluded by DANSS in the (Δm²₁₄, sin²2θ₁₄) plane. The excluded area covers a large fraction of regions indicated by the GA and RAA. In particular, the most preferred point Δm²₁₄ = 2.3 eV², sin²2θ₁₄ = 0.14 [5] is excluded at more than 5σ CL.
As this summary by the author illustrates, the four different kinds of anomalies seen at several different experiments are not consistent with each other. The GA and RAA effects are in opposite directions. The failure of the anomalies to replicate across different kinds of neutrino experiments also casts doubt on the hypothesis that they can really be explained by sterile neutrinos.
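For context, all of the short-baseline anomalies above are fit with the standard two-flavor oscillation formula, P = sin²(2θ)·sin²(1.27·Δm²·L/E), with Δm² in eV², L in meters and E in MeV. A quick sketch using the RAA/GA best-fit parameters quoted above (the ~10 m baseline and ~4 MeV antineutrino energy are illustrative round numbers for a very short baseline reactor experiment, not any experiment's exact values):

```python
import math

def p_oscillation(sin2_2theta, dm2_ev2, L_m, E_MeV):
    """Two-flavor oscillation probability.

    Standard formula: P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in meters, and E in MeV.
    """
    phase = 1.27 * dm2_ev2 * L_m / E_MeV
    return sin2_2theta * math.sin(phase) ** 2

# RAA/GA best-fit point quoted in the text: dm2 = 2.3 eV^2, sin^2(2theta) = 0.14,
# evaluated at an illustrative 10 m baseline and 4 MeV antineutrino energy.
P = p_oscillation(0.14, 2.3, 10.0, 4.0)
```

The mixing angle sets the ceiling of the effect (here 14%), while Δm²·L/E sets how rapidly the disappearance probability oscillates along the detector baseline, which is why these experiments measure at several distances.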

* Meanwhile, another paper, in a seemingly endless stream of them at the LHC, looks for signs of supersymmetry (a.k.a. SUSY) and sees nothing. The latest paper includes Run-2 data at higher energies than Run-1. Supersymmetry has pretty much been ruled out at scales on the order of 1 TeV, and the data are hard to square with it up to something like 10 TeV. 

Moreover, since supersymmetry was conceived to deal with naturalness and hierarchy issues at the electroweak energy scale (ca. 0.246 TeV, the vacuum expectation value of the Higgs field), it has pretty much been ruled out as a solution to the issue it was originally devised to address.
Results of a search for supersymmetry are presented using events with a photon, an electron or muon, and large missing transverse momentum. The analysis is based on a data sample corresponding to an integrated luminosity of 35.9 fb−1 of proton-proton collisions at √s= 13 TeV, produced by the LHC and collected with the CMS detector in 2016. Theoretical models with gauge-mediated supersymmetry breaking predict events with photons in the final state, as well as electroweak gauge bosons decaying to leptons. Searches for events with a photon, a lepton, and missing transverse momentum are sensitive probes of these models. No excess of events is observed beyond expectations from standard model processes. The results of the search are interpreted in the context of simplified models inspired by gauge-mediated supersymmetry breaking. These models are used to derive upper limits on the production cross sections and set lower bounds on masses of supersymmetric particles. Gaugino masses below 930 GeV are excluded at the 95% confidence level in a simplified model with electroweak production of a neutralino and chargino. For simplified models of gluino and squark pair production, gluino masses up to 1.75 TeV and squark masses up to 1.43 TeV are excluded at the 95% confidence level.

* Finally, there is an updated summary of and global fit of measurements of the four Standard Model parameters of the CKM matrix.
[T]he results of the global fit under the SM hypothesis remain excellent: the p-value is 51%, which corresponds to 0.7σ, if all uncertainties are treated as Gaussian. . . . 
The consistent overall picture allows for a meaningful extraction of the CKM matrix elements, the extracted Wolfenstein parameters being (68% C.L. intervals) 
A = 0.8403 +0.0056 −0.0201 (2% unc.), 
λ = 0.224747 +0.000254 −0.000059 (0.07% unc.), 
ρ̄ = 0.1577 +0.0096 −0.0074 (5% unc.), and 
η̄ = 0.3493 +0.0095 −0.0071 (2% unc.).
Also, to be clear, the comparison to the Standard Model hypothesis above merely tests the relationships of the parameters to each other for consistency. There is no established theory within the Standard Model that determines what the specific absolute values of the Standard Model parameters should be.

The most powerful confirmation of the CKM matrix construct, however, is that data from myriad different processes can produce consistent measurements of each of the parameters at all, which suggests that the theoretical construct of the CKM matrix is sound.

The CKM matrix element measurements, while not supremely precise, still indirectly bound beyond the Standard Model physics, limiting the scale at which it could manifest to not lower than about 114 TeV, far beyond what any near term collider could reach, and arguably much higher than that.
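The quoted Wolfenstein parameters can be turned into an approximate CKM matrix. A minimal sketch, using the standard leading-order Wolfenstein expansion and treating the quoted ρ̄, η̄ as ρ, η (they differ at O(λ²), which this sketch ignores):

```python
import numpy as np

# Central values of the Wolfenstein parameters quoted above.
A, lam, rho, eta = 0.8403, 0.224747, 0.1577, 0.3493

# Leading-order Wolfenstein parametrization of the CKM matrix.
# (rho_bar/eta_bar differ from rho/eta at O(lambda^2); ignored here.)
V = np.array([
    [1 - lam**2 / 2,                    lam,             A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2,  A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,     1.0],
])

# Unitarity holds only up to the O(lambda^4) terms dropped by the expansion.
deviation = np.abs(V @ V.conj().T - np.eye(3)).max()

# The Jarlskog invariant, a parametrization-independent measure of CP
# violation, is approximately A^2 * lambda^6 * eta at this order.
J = A**2 * lam**6 * eta
```

The small unitarity deviation illustrates the point made above: the CKM construct only works if measurements from very different processes hang together as a single unitary matrix.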

Buoyant Force And The Weight Of Combined Liquids

If you combine two liquids in a laboratory environment, they may have a little less volume than they did when they were separate. The combined liquids therefore displace less of the surrounding air, so the buoyant force of the air on the vessel being weighed is slightly smaller. This, in turn, slightly increases the weight of the combined liquids on a scale relative to merely adding their separately measured weights, because the buoyant force of the air affects the weight registered by the scale (although not the actual mass of the liquids, which stays the same in the absence of a nuclear reaction).

This is similar to lightly pulling down on a string hanging from the ceiling while you are weighing yourself on a bathroom scale: the string's pull reduces the scale's reading, just as the buoyant force of displaced air does, and mixing the liquids slightly slackens that pull.

It turns out that the effect is tiny: on the order of 0.001% for equal masses of liquid whose combined volume is reduced by 1% through mixing, which would be typical of what you might see in a laboratory. But, it is an effect in everyday physics that I wasn't aware of until reading the linked material today, despite having had a fair amount of college physics and physical chemistry and having read a great deal about physics since then. So, I figured it deserved a post of its own.
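The order of magnitude is easy to check. A sketch, assuming water-like liquids at roughly 1,000 kg/m³ and air at about 1.2 kg/m³ (ordinary lab conditions):

```python
rho_air = 1.2        # kg/m^3, air at room conditions (assumed)
rho_liquid = 1000.0  # kg/m^3, water-like liquids (assumed)

mass_total = 2.0     # kg, e.g. 1 kg of each liquid
volume_before = mass_total / rho_liquid   # m^3, before mixing
volume_shrink = 0.01 * volume_before      # 1% contraction on mixing

# Less displaced air after mixing means less buoyant lift, so the scale
# reading changes by the weight of the air filling the lost volume.
relative_change = rho_air * volume_shrink / mass_total

print(relative_change)  # 1.2e-05, i.e. about 0.001%
```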

For many applications these kinds of fine details don't matter. The rule of thumb that I was taught in college physics is that three significant digits are usually fine for any practical application. But, in a high precision experiment, such as a measurement of an electromagnetic quantity like muon g-2, you need to take into account factors of this scale (and others, like the gravitational effect of tidal forces and the impact of slight changes in temperature on substance density) to get an accurate answer, and a tiny "unknown unknown" that you have failed to account for can result in systematic error that won't show up in your error bars.

This is one reason that a "five sigma" result that is considered a "discovery" in physics can't always be taken at face value, especially when the precision of the measurement is very high: the stated margin of error may omit some overlooked factor, and an overlooked factor is often much more probable than the statistical fluke necessary to produce a five sigma result. So, in any high precision scientific measurement, there is effectively a maximum threshold of meaningful precision, set by the likelihood that some factor is not accounted for in the experiment and its interpretation. 
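To put numbers on this: a five sigma result corresponds to a one-sided Gaussian fluke probability of roughly 3 in 10 million. A sketch (the 1% chance of an unmodeled systematic is purely illustrative):

```python
import math

def one_sided_p(sigma):
    """One-sided Gaussian tail probability for a given significance."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

p5 = one_sided_p(5.0)   # ~2.9e-7, the nominal 5-sigma fluke probability

# Even a modest 1% chance of some overlooked systematic error dwarfs the
# statistical fluke probability (the 1% figure is an assumption).
p_systematic = 0.01
print(p_systematic / p5)  # the overlooked factor is ~35,000x more likely
```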

This is why replicating results, and having more than one independent group make the same measurement even at the same facility (as was done at Tevatron and the LHC), is necessary to make the significance of the results more robust and trustworthy.

But, even multiple instances of performing the same experiment can miss these factors if everyone has the same training and makes identical omissions of potential error factors due to group think.

Monday, December 10, 2018

Does MOND Underestimate Excess Gravity For Wide Binaries?

Wide binary stars are one of the strongest arguments for modified gravity rather than dark matter as a correct solution to dark matter phenomena.

Simply put, the kind of diffuse dark matter distribution that dark matter theories propose exists in galaxies would be too slight, and would exert too homogeneous a pull, to explain the dynamics of systems of two stars at distances of, for example, 7,000 AU.

Yet, wide binary stars do display the same excessively strong mutual attraction that dark matter theories and various modified gravity theories try to explain in galaxy scale systems. 

But, the plot thickens.

According to Mike McCulloch, MOND underestimates the effect seen, which he argues that his theory (called MiHsC in February 2012 when he made this post, and now known as quantised inertia) explains better. He explains:
There has been a great observational study done recently by Hernandez et al. They have looked at wide binary stars and found that when they are separated by 7000AU or more, so that their accelerations decrease below 2*10^-10 m/s^2, then their behaviour becomes non-Newtonian, in that their orbital speeds are so large that the centrifugal (inertial) forces separating them should be greater than the gravitational pull inwards from the mass that we can see, so they should zoom off to infinity. A similar behaviour is seen in galaxy rotation curves, which deviate from Newtonian behaviour below this same acceleration. For these simple binary systems, it is hard to see how dark matter (DM) could kick in at a particular acceleration, and Newton and MoND both predict only about 1/10th of the orbital speeds seen. 
This provides an experimentum crucis, and so I have recently been testing MiHsC on these data: because of their low acceleration, MiHsC predicts a decrease in the stars’ inertial masses so they manage to orbit each other at the faster speed without inertia separating them. The orbital speed predicted by MiHsC is still only 1/2 of that seen, but this is much better than the 1/10th from Newtonian dynamics and MoND. I have just today submitted an abstract on this to the UK’s National Astronomy Meeting (NAM 2012).
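The quoted acceleration threshold is easy to verify: the Newtonian pull of one solar-mass star on another at 7,000 AU is already at the ~10⁻¹⁰ m/s² scale where MOND-like effects are said to set in. A sketch (two solar-mass stars is an illustrative assumption):

```python
G = 6.674e-11      # m^3 kg^-1 s^-2, Newton's constant
M_sun = 1.989e30   # kg, solar mass
AU = 1.496e11      # m, astronomical unit

# Newtonian gravitational acceleration of one solar-mass star toward
# its solar-mass companion at a 7,000 AU separation.
r = 7000 * AU
a = G * M_sun / r**2

a0 = 1.2e-10       # m/s^2, the canonical MOND acceleration scale
print(a)           # ~1.2e-10 m/s^2, right at the MOND scale
```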
This would be particularly notable, because somebody else also predicts an enhanced gravitational force that is greater in a two point particle system than in a spiral galaxy system, because the two body system is further from spherical symmetry, and uses that observation to explain why "dark matter phenomena" seem to be greater in galactic clusters than in spiral galaxies. That person is Alexandre Deur who, while noting the result more in passing than rigorously proving it, discussed the application of his approach to two point systems in a peer reviewed journal article in 2009, two years before Hernandez published his wide binary star observations in 2011.

Hernandez updated the wide binary study in 2014, and it continued to show non-Newtonian behavior not plausibly explained by dark matter. But, MOND seems to be a bit closer to the mark in the newer study, as noted in a later post from McCulloch.

Genetic Data Still Shows Archaic Admixture With Ghost Populations In Africa

[H]uman evolutionary models that include archaic admixture in Africa, Asia, and Europe provide a much better description of patterns of genetic diversity across the human genome. We estimate that individuals in two African populations have 6−8% ancestry through admixture from an unidentified archaic population that diverged from the ancestors of modern humans 500 thousand years ago.
The pre-print is here.

At least two prior investigations have reached essentially the same conclusions. The estimated date of admixture is rather recent. We have no really solid clues concerning which archaic populations were involved in Africa; a species ancestral to Neanderthals and Denisovans might be a plausible possibility.

From the body text:
We chose a separate population trio to validate our inference and compare levels of archaic admixture with different representative populations. This second trio consisted of the Luhya in Webuye, Kenya (LWK), Kinh in Ho Chi Minh City, Vietnam (KHV), and British in England and Scotland (GBR). We inferred the KHV and GBR populations to have experienced comparable levels of migration from the putatively Neanderthal branch. However, the LWK population exhibited lower levels of archaic admixture (∼ 6%) in comparison to YRI, suggesting population differences in archaic introgression events within the African continent[.] . . .  
We inferred an archaic population to have contributed measurably to Eurasian populations. This branch (putatively Eurasian Neanderthal) split from the branch leading to modern humans between ∼ 470 − 650 thousand years ago, and ∼ 1% of lineages in modern CEU and CHB populations were contributed by this archaic population after the out-of-Africa split. This range of divergence dates compares to previous estimates of the time of divergence between Neanderthals and human populations, estimated at ∼650 kya (Prüfer et al., 2014). The “archaic African” branch split from the modern human branch roughly 460 − 540 kya and contributed ∼ 7.5% to modern YRI in the model[.] . . .
[The] model is augmented by the inclusion of two archaic branches, putatively Neanderthal and an unknown archaic African branch. We inferred that these branches split from the branch leading to modern humans roughly 500 − 700 kya, and contributed migrants until quite recently (∼14 kya). Times reported here assume a generation time of 29 years and are calibrated by the recombination (not mutation) rate[.]
Some of the detailed conclusions are sketchy and buried in jargon and presentations of raw technical statistics.

Previous studies found particular (outlier) African populations to have particularly high levels of archaic admixture. This paper looks at only two fairly representative samples of modern African populations, and with only two populations it cannot capture more than a small fraction of the linguistic and ethnic diversity found in Africa.

Some of the estimation results are summarized in the paper, although you'll have to refer to it to know what the variables whose values are estimated mean.

Reproducing MOND with Conformal Gravity

MOND is a phenomenological toy model that reproduces the dynamical behavior of galaxies entirely from the distribution of baryonic matter in those galaxies. Its success suggests that phenomena attributed to dark matter may actually be due to a deviation of reality from the predictions of general relativity, as conventionally applied, in very weak gravitational fields. But, MOND is not itself a relativistic theory.

One possibility to explain the discrepancy is that MOND is a quantum gravity effect. And, some theories are easier to prove than others.

A new paper illustrates how one modified gravity theory, called conformal gravity, reproduces MOND's phenomenological successes relativistically.
In 2016 McGaugh, Lelli and Schombert established a universal Radial Acceleration Relation for centripetal accelerations in spiral galaxies. Their work showed a strong correlation between observed centripetal accelerations and those predicted by luminous Newtonian matter alone. Through the use of the fitting function that they introduced, mass discrepancies in spiral galaxies can be constrained in a uniform manner that is completely determined by the baryons in the galaxies. Here we present a new empirical plot of the observed centripetal accelerations and the luminous Newtonian expectations, which more than doubles the number of observed data points considered by McGaugh et al. while retaining the Radial Acceleration Relation. If this relation is not to be due to dark matter, it would then have to be due to an alternate gravitational theory that departs from Newtonian gravity in some way. In this paper we show how the candidate alternate conformal gravity theory can provide a natural description of the Radial Acceleration Relation, without any need for dark matter or its free halo parameters. We discuss how the empirical Tully-Fisher relation follows as a consequence of conformal gravity.
James G. O'Brien, et al., "Radial Acceleration and Tully-Fisher Relations in Conformal Gravity" (December 7, 2018).

The conclusion to this article states:
In McGaugh et al. [8] the RAR in galactic rotation curves was established via a set of 2693 total points. In this work we have shown that conformal gravity can universally fit the gOBS versus gNEW data in an even larger 6377 data point sample. Conformal gravity has successfully fitted over 97% of the 6377 data points across 236 galaxies without any filtering of points, fixing of mass to light ratios or modification of input parameters. Further, conformal gravity is shown to satisfy the v4∝ M relation consistently found in rotation curve studies, while also providing a derivation and extension of the TF relation. We conclude by noting that there is a great deal of universality in rotation curve data. This universality does not obviously point in favor of dark matter, and is fully accounted for by the alternate conformal gravity theory.
The raw equations used were as follows:
[C]onformal gravity is derived from an action based on the square of the Weyl tensor, 

I_W = −α_g ∫ d⁴x (−g)^(1/2) C_λμνκ C^λμνκ , (3) 

where 

C_λμνκ = R_λμνκ − (1/2)(g_λν R_μκ − g_λκ R_μν − g_μν R_λκ + g_μκ R_λν) + (1/6)(R^α_α)(g_λν g_μκ − g_λκ g_μν) (4) 

is the conformal Weyl tensor with dimensionless coupling constant α_g. The resulting field equations, 

4α_g (2∇_κ∇_λ C^μλνκ − C^μλνκ R_λκ) = T^μν , 

were solved by Mannheim and Kazanas in the region exterior to a static, spherically symmetric source [11], where it was shown that a point stellar mass produces a potential V(r) = −βc²/r + γc²r/2. To go from the single star to the prediction for rotational velocities of entire spiral galaxies, it was shown [12] that the resulting galactic velocity expectation is given by 
v_CG(R) = (v²_NEW(R) + (M/M⊙)(γc²R²/2R₀)·I₁(R/2R₀)·K₁(R/2R₀) + γ₀c²R/2 − κc²R²)^(1/2) , (5) 

where M is the mass of the galaxy in solar mass units (M⊙), R₀ is the galactic disk scale length, and v_NEW(R) is the standard Freeman formula for a Newtonian disk: 

v_NEW(R) = ((M/M⊙)(βc²R²/2R₀³)·(I₀(R/2R₀)·K₀(R/2R₀) − I₁(R/2R₀)·K₁(R/2R₀)))^(1/2) . (6) 
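Equations (5) and (6) are straightforward to evaluate numerically. A sketch in Python, using the universal parameter values quoted by Mannheim and collaborators in their fits (treated here as assumptions; the per-solar-mass γ in eq. (5) is their γ*, and scipy supplies the modified Bessel functions):

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

# Universal conformal gravity parameters from Mannheim's fits
# (assumptions for this sketch). CGS units throughout.
c = 2.998e10           # cm/s, speed of light
beta_star = 1.48e5     # cm, per solar mass (= G*M_sun/c^2)
gamma_star = 5.42e-41  # 1/cm, per solar mass
gamma0 = 3.06e-30      # 1/cm
kappa = 9.54e-54       # 1/cm^2
KPC = 3.086e21         # cm per kiloparsec

def v_newton(R, M, R0):
    """Freeman formula, eq. (6): Newtonian exponential disk, in cm/s."""
    x = R / (2.0 * R0)
    return np.sqrt(M * beta_star * c**2 * R**2 / (2.0 * R0**3)
                   * (i0(x) * k0(x) - i1(x) * k1(x)))

def v_conformal(R, M, R0):
    """Conformal gravity prediction, eq. (5), in cm/s."""
    x = R / (2.0 * R0)
    v2 = (v_newton(R, M, R0)**2
          + M * gamma_star * c**2 * R**2 / (2.0 * R0) * i1(x) * k1(x)
          + gamma0 * c**2 * R / 2.0
          - kappa * c**2 * R**2)
    return np.sqrt(v2)

# A Milky-Way-sized disk: 1e11 solar masses, 3 kpc scale length,
# evaluated at 20 kpc (illustrative numbers, not a fit to any galaxy).
M, R0, R = 1e11, 3 * KPC, 20 * KPC
v_n = v_newton(R, M, R0) / 1e5      # km/s
v_cg = v_conformal(R, M, R0) / 1e5  # km/s
```

For these inputs the Newtonian curve has already begun to fall off, while the linear γ terms keep the conformal gravity curve flat at a typical spiral-galaxy speed, which is the qualitative behavior the paper's fits exploit.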
Conformal gravity, while quite similar in outcome in most respects to Einstein gravity, is more naturally made into a quantum gravity theory. A more complete development of the theory (which is simply compared to data after a thumbnail summary in the new paper) can be found at:
We review some recent developments in the conformal gravity theory that has been advanced as a candidate alternative to standard Einstein gravity. As a quantum theory the conformal theory is both renormalizable and unitary, with unitarity being obtained because the theory is a PT symmetric rather than a Hermitian theory. We show that in the theory there can be no a priori classical curvature, with all curvature having to result from quantization. In the conformal theory gravity requires no independent quantization of its own, with it being quantized solely by virtue of its being coupled to a quantized matter source. Moreover, because it is this very coupling that fixes the strength of the gravitational field commutators, the gravity sector zero-point energy density and pressure fluctuations are then able to identically cancel the zero-point fluctuations associated with the matter sector. In addition, we show that when the conformal symmetry is spontaneously broken, the zero-point structure automatically readjusts so as to identically cancel the cosmological constant term that dynamical mass generation induces. We show that the macroscopic classical theory that results from the quantum conformal theory incorporates global physics effects that provide for a detailed accounting of a comprehensive set of 138 galactic rotation curves with no adjustable parameters other than the galactic mass to light ratios, and with the need for no dark matter whatsoever. With these global effects eliminating the need for dark matter, we see that invoking dark matter in galaxies could potentially be nothing more than an attempt to describe global physics effects in purely local galactic terms. Finally, we review some recent work by 't Hooft in which a connection between conformal gravity and Einstein gravity has been found.
Philip D. Mannheim, "Making the Case for Conformal Gravity" (October 27, 2011).

From the body of the 2011 paper:
The non-renormalizable Einstein-Hilbert action is expressly forbidden by the conformal symmetry because Newton’s constant carries an intrinsic dimension. However, as noted above, this does not prevent the theory from possessing the Schwarzschild solution and Newton’s law of gravity. In addition, the same conformal symmetry forbids the presence of any intrinsic cosmological constant term as it carries an intrinsic dimension too; with conformal invariance thus providing a very good starting point for tackling the cosmological constant problem. 
Now we recall that the fermion and gauge boson sector of the standard SU(3)×SU(2)×U(1) model of strong, electromagnetic, and weak interactions is also locally conformal invariant since all the associated coupling constants are dimensionless, and gauge bosons and fermions get masses dynamically via spontaneous symmetry breaking. Other than the Higgs sector (which we shall shortly dispense with), the standard model Lagrangian is devoid of any intrinsic mass or length scales. And with its associated energy-momentum tensor serving as the source of gravity, it is thus quite natural that gravity should be devoid of any intrinsic mass or length scales too. Our use of conformal gravity thus nicely dovetails with the standard SU(3) × SU(2) × U(1) model. To tighten the connection, we note that while the standard SU(3)×SU(2)×U(1) model is based on second-order equations of motion, an electrodynamics Lagrangian of the form F_μν ∂_α∂^α F^μν would be just as gauge and Lorentz invariant as the Maxwell action, and there is no immediate reason to leave any such type of term out. Now while an F_μν ∂_α∂^α F^μν theory would not be renormalizable, in and of itself renormalizability is not a law of nature (witness Einstein gravity). However, such a theory would not be conformal invariant. Thus if we impose local conformal invariance as a principle, we would then force the fundamental gauge theories to be second order, and thus be renormalizable after all. However, imposing the same symmetry on gravity expressly forces it to be fourth order instead, with gravity then also being renormalizable. As we see, renormalizability is thus a consequence of conformal invariance.
The approach this 2011 paper takes to the Higgs sector, written before the Higgs boson was definitively discovered, turns out to be a problem for the theory rather than a benefit, although not necessarily an intractable one. 

In Physics, Rigor Is Often Late To The Party

Physicists often move forward using equations and mathematical models before knowing with certainty that they are consistent and have all of the necessary properties.

For example, Feynman himself suspected that renormalization, the process used to make the mathematics of quantum mechanics tractable, might not be mathematically valid in a rigorous way. He was wrong on that score. But, the proof of that came only a few years ago, about fifty years after he came up with the technique.

Another property of the Standard Model that has long been assumed, but has still only been partially established in a rigorous manner, is that it is gauge invariant. Now a new paper has established that property for a substantial and important subset of Standard Model processes.
For gauge theory, the matrix element for any physical process is independent of the gauge used. Since this is a formal statement and examples are known where gauge invariance is violated, for any specific process this gauge invariance needs to be checked by explicit calculation. In this paper, gauge invariance is found to hold for a large non-trivial class of processes described by tree diagrams in the standard model -- tree diagrams with two external W bosons and any number of external Higgs bosons. This verification of gauge invariance is quite complicated, and is based on a direct study of the difference between different gauges through induction on the number of external Higgs bosons.
Tai Tsun Wu, Sau Lan Wu, Gauge invariance for a class of tree diagrams in the standard model (December 6, 2018).

The fact that the Standard Model can be rigorously proven to have the mathematical properties that it should may seem like thankless work for a model that has performed well in practical applications for half a century. But, this is still important work, because some kinds of subtle mathematical defects could go undetected in practical applications for a long time, and uncovering one could point the way toward new physics in directions that had previously been assumed to be impossible.

For example, the possibility that singularities could exist in General Relativity was identified mathematically not long after the theory was proposed, and was initially assumed to be a mere mathematical defect in the theory. Eventually, however, it turned out that singularities in General Relativity, like black holes and the Big Bang, have physical meaning and are critically important phenomena necessary to understand the universe. 

A quote from Professor Susskind about string theory is relevant:
My guess is, the theory of the real world may have things to do with string theory but it's not string theory in its formal, rigorous, mathematical sense. We know that the formal, by formal I mean mathematically rigorous, structure that string theory became. It became a mathematical structure of great rigor and consistency that it, in itself, as it is, cannot describe the real world of particles. It has to be modified, it has to be generalized, it has to be put in a slightly bigger context. The exact thing, which I call string theory, which is this mathematical structure, is not going to be able to, by itself, describe particles… 
We made great progress in understanding elementary particles for a long time, and it always progressed, though, hand-in-hand with experimental developments, big accelerators and so forth. We seem to have run out of new experimental data, even though there was a big experimental project, the LHC at CERN, whatever that is? A great big machine that produces particles and collides them. I don't want to use the word disappointingly, well, I will anyway, disappointingly, it simply didn't give any new information. Particle physics has run into, what I suspect is a temporary brick wall, it's been, basically since the early 1980s, that it hasn't changed. I don't see at the present time, for me, much profit in pursuing it.

Friday, December 7, 2018

Negative Mass Paper Seeking To Explain Dark Matter and Dark Energy Fails

Sabine Hossenfelder explains succinctly and clearly at her Backreaction blog why a new paper proposing a negative mass solution to explain dark matter and dark energy doesn't work. 

The most basic problem is that in Einstein's theory of General Relativity and pretty much any reasonable generalization of it, negative matter attracts negative matter, positive matter attracts positive matter, and negative matter repels positive matter.  But, this paper makes negative matter repel other negative matter, and also engages in baroque contortions in an effort to deal with the non-conservation of mass-energy associated with dark energy.

Ultimately, the resulting solution is worse than the cosmological constant and dark matter of the Concordance model of cosmology a.k.a. lambdaCDM, which is itself flawed.

Lone Direct Dark Matter Detection Signal Contradicted

There have been many direct dark matter detection experiments. Just one of them, DAMA, claims to have seen a signal (purportedly at 9 sigma significance) of dark matter in the form of an annual modulation of events (some background events are expected anyway, but it shouldn't vary seasonally).

A new experiment, COSINE-100, attempted to replicate DAMA's finding using the same detector materials and theoretical framework, and reported its results in the journal Nature. The result contradicts the outlier positive signal from DAMA:
We observe no excess of signal-like events above the expected background in the first 59.5 days of data from COSINE-100. Assuming the so-called standard dark-matter halo model, this result rules out WIMP–nucleon interactions as the cause of the annual modulation observed by the DAMA collaboration. The exclusion limit on the WIMP–sodium interaction cross-section is 1.14 × 10−40 cm2 for 10 GeV c−2 WIMPs at a 90% confidence level.
Once again, the dark matter particle paradigm has failed to deliver results. This outcome is a step backward for that theory, with the only positive evidence for direct dark matter detection now rejected as unsound.

How To Migrate To America Without Leaving Much Of A Genetic Trace


The only genetic trace of any population arriving in the Americas, other than the Founding Population (which had a small effective size), at any time prior to the ca. 3500 BCE to 3000 BCE time frame when the Paleo-Eskimo ancestors of the Na-Dene peoples arrived, is a smidgen of Paleo-Asian ancestry (on the order of 5% to 10%) in a couple of groups of tribes in the Amazon jungle near the base of the mountains to their west and north, whose aggregate numbers are closer to 10,000 people than 100,000 people.

The bulk of the Founding Population migration from Beringia to the rest of North America and South America began around 16,000 years ago. At that point, the Founding Population started to grow exponentially until it reached the full carrying capacity of those two continents given the technologies available to it at the time. It is possible that some members of the Founding Population jumped the gun in modest numbers, but their population growth did not explode the way it did around 16,000 years ago.
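This growth dynamic can be sketched with a simple discrete logistic model. All parameters below are purely illustrative assumptions (a founding effective population of 250, an arbitrary carrying capacity, a 30% per-generation growth rate), not estimates from the literature:

```python
def logistic_trajectory(n0, capacity, rate, generations):
    """Discrete logistic growth: each generation the population grows by
    rate * n * (1 - n / capacity), so growth is near-exponential while
    the population is far below the carrying capacity and flattens out
    as it approaches it."""
    sizes = [float(n0)]
    for _ in range(generations):
        n = sizes[-1]
        sizes.append(n + rate * n * (1 - n / capacity))
    return sizes

# A small founding population expanding toward an assumed carrying
# capacity over 80 generations.
traj = logistic_trajectory(n0=250, capacity=1_000_000, rate=0.3, generations=80)
```

The feature that matters for the argument here is the steep early phase of the curve: any lineage present during that phase is multiplied manyfold by the expansion, while a lineage that arrives after the curve flattens is not.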

What pre-Na-Dene migrations to the Americas could have happened that wouldn't leave a genetic trace that has been detected so far (considering primarily genetic evidence at this point)?

Any scenario in which there is a genetically distinct population of modern humans in the Americas other than the Founding population at any time more than 6000 years ago or so, must fit into one of the scenarios below to have escaped detection through population genetics at this point.

In General

Since DNA testing at this point relies on statistical and opportunity samples, rather than being comprehensive, anything that keeps the target DNA from dispersing into a large general population and experiencing some approximation of panmixia, and instead keeps it in clumps in a discrete population that is small enough, or hard enough to recognize as distinct, can allow it to escape inclusion in any samples examined so far. So, while a low absolute level of the target DNA in the macro-population is important, strong clustering of the target DNA, as opposed to dispersal, also matters a great deal.
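The sampling point can be made concrete with a back-of-the-envelope calculation (the numbers are purely illustrative). Under uniform random sampling, even a rare but dispersed lineage is very likely to be detected, while the same number of carriers clustered in a group the sampling never reaches is invisible no matter how many samples are taken:

```python
def detection_probability(carrier_fraction, samples):
    """P(at least one carrier appears in a uniform random sample) when
    carriers make up carrier_fraction of the sampled population."""
    return 1 - (1 - carrier_fraction) ** samples

# Carriers at 0.1% of a large population, spread evenly (panmixia):
dispersed = detection_probability(0.001, samples=2000)  # ~0.86
# The same absolute number of carriers, but clustered entirely in one
# isolated group outside the sampled population:
clustered = detection_probability(0.0, samples=2000)  # 0.0
```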

Among other things, it is critical to keep the undetected or exceedingly rare population out of any exponentially growing founding population.

Bottleneck conditions during adverse environmental circumstances or after wars are tricky. Outsiders who arrive right before a bottleneck can easily have their genetic trace lost to random drift as the population shrinks. But, if any of the outsider DNA manages to stay in the gene pool once the population rebounds after a bottleneck, that outsider DNA will be caught in a founder's effect and become widespread and common in the successor population.
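These drift and founder-effect dynamics can be illustrated with a toy Wright-Fisher simulation (the population sizes, generation counts, and copy numbers are invented for illustration, and numpy is assumed to be available):

```python
import numpy as np

def survival_fraction(n_start, n_end, generations, introgressed_copies,
                      trials=2000, seed=1):
    """Toy Wright-Fisher model: an introgressed allele drifts while the
    diploid population size moves linearly from n_start to n_end.
    Returns the fraction of trials in which the allele is still present
    at the end."""
    rng = np.random.default_rng(seed)
    # starting frequency of the introgressed allele in each trial
    freq = np.full(trials, introgressed_copies / (2 * n_start))
    for g in range(1, generations + 1):
        n = round(n_start + (n_end - n_start) * g / generations)
        copies = rng.binomial(2 * n, freq)  # resample the 2N gene copies
        freq = copies / (2 * n)
    return float(np.mean(freq > 0))

# Five introgressed gene copies entering a population that then grows
# forty-fold, versus the same five copies entering a population headed
# into a ten-fold bottleneck.
growing = survival_fraction(500, 20000, 50, introgressed_copies=5)
bottleneck = survival_fraction(500, 50, 50, introgressed_copies=5)
```

The qualitative result matches the argument above: survival is far more likely in the growing population, where early copies ride the expansion, than in the crashing one, where drift in a shrinking gene pool usually erases the introgressed lineage.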

1. Into the Americas That Failed.

It is possible that modern humans or more archaic hominins arrived in the Americas before the Founding population did, but went almost entirely extinct for some reason before the Founding population arrived. Perhaps some relict population survived in a place with effective boundaries that prevented it from expanding beyond a small population out of its refugium. Perhaps that relict population experienced genocide at the hands of the Founding population upon first contact in most locations, so that only a few members of the relict population introgressed into the Founding population gene pool, and those genes were lost to genetic drift in the first few generations. This scenario is especially plausible if the encounter with the relict population happened relatively close to the time that the Founding Population had expanded to its Malthusian limit. Otherwise, even a minor introgression of genes into the Founding Population during a period of rapid population growth would be magnified by a Founder's effect, and it would be almost impossible to prevent the introgressed genes from becoming ubiquitous in a moderately sized region of the Americas within a few generations of the initial introgression.

Examples: Modern humans had expanded beyond Africa by ca. 125,000 years ago, and had reached India before 75,000 years ago, but did not experience rapid expansion until about 75,000 to 50,000 years ago.

Madagascar had a small modern human population that did not rapidly expand or thrive economically for at least significant portions of the five hundred years or so before the Austronesians arrived. But, when the Austronesians arrived, all trace of that earlier population vanished, if it had not already gone extinct on its own before the Austronesians got there.

In the Americas, Leif Erikson's attempt to colonize North America from Iceland ca. 1000 CE, and the Roanoke colony are examples of failed attempts at colonization that left no subsequent genetic trace.

2. Migrations Of People Who Are Late To The Party And Not Conquerors.

Once the Founding population expands to its Malthusian limit, and populations stabilize or even experience occasional bottlenecks in some locations, a much larger introgression can go undetected for a long time. The genetic effect of the introgression is not amplified into a Founder effect, and the likelihood that some of the introgressed genes are lost to genetic drift over time is much greater, especially if the introgressing population has a fertility rate or population genetic fitness no higher than that of the existing population. It also helps if the population into which the introgression takes place is geographically immobile, perhaps because of geographic boundaries or because neighboring regions are populated with people who don't welcome newcomers.

Examples: Gypsies in Europe. Jews and Catholics in the Deep South of the United States or Appalachia.

3. Closely Related Populations.

The more similar a population other than the Founding population is to the Founding population genetically (which usually implies a most recent common ancestor at not too great a time depth), the harder it is to distinguish from a single-wave Founding population through genetic data, so long as the additional population also has a small effective population size. This is because a population with a large effective population size would make the Founding population's effective population size look larger, and we know, robustly through many kinds of measures, that the apparent effective size of the Founding population is very small, probably the smallest of any continent-sized population apart from the Aborigines of Australia, and indeed probably even smaller than in Australia.

Examples: Recent English immigrants to New Zealand are genetically indistinguishable from the earliest English colonists of New Zealand.

There were probably at least two waves of Indo-Aryan migration into India separated by several centuries, one of which reached the whole sub-continent, and the other of which only extended to part of the sub-continent (generally in the north). But, the two waves of migration in northern India are very hard to distinguish from a single Indo-Aryan migration wave.

4. Isolated Outsiders.

The non-Founding population is small in absolute number, is strongly endogamous for the time period until the Founding population (at least locally) is at the tail end of the logistic population growth curve, and the non-Founding population has a significantly lower population growth rate than the Founding population for some reason.

The endogamy doesn't necessarily have to be primarily a social matter, although it can be. Geographic barriers such as long distances of water that must be traversed or high mountains or deserts or jungles can also powerfully enforce endogamy.

The most extreme example of this would be a single person (think Marco Polo) or a family (think Swiss Family Robinson) that comes to the Americas and doesn't have any children after they arrive who survive to have children of their own.

Examples: Jewish communities in Africa and India. Religious minority populations in Iran. North African sailors in the British Isles. There are quite a few examples of very strong (although rarely perfectly maintained) endogamy for long periods of time (a couple of thousand years or so), such as jati endogamy for roughly the last two thousand years in India.

Different Levels Of Impact

Historically, replacement of Y-DNA in a population is most common. Replacement of mtDNA in a population is much more rare but can happen in isolated circumstances. Complete autosomal DNA replacement is much more rare than complete replacement of either kind of uniparental DNA.

This probably happened to some extent in the founding population of the Americas which has much more mtDNA diversity (even though that itself is very modest) than it does Y-DNA diversity.

Similarly, while all modern humans out of Africa have a small single digit percentage of Neanderthal ancestry, no Neanderthal Y-DNA or mtDNA survives in modern human populations. The same pattern is observed in the smaller subset of people who have Denisovan ancestry.