Friday, July 13, 2018

Higgs Data Still Consistent With Standard Model Higgs Boson

The Coupling Strengths of the Higgs Boson

A Higgs boson coupling, in the context used in this post, is the probability that a Higgs boson will decay to a particular kind of pair of fundamental particles. That probability depends upon the Higgs boson's mass, but has otherwise long been known in the Standard Model for each possible Higgs boson mass. As a refresher, the decay probabilities of a 125 GeV Higgs boson in the Standard Model are approximately as follows:

b-quark pairs, 58%
c-quark pairs, 3%
tau-lepton pairs, 6.3%
muon pairs, 0.02%
gluon pairs, 8%
photon pairs, 0.2%
W boson pairs, 21.3%
Z boson pairs, 2.8%

The total does not add up to 100% due to rounding and due to omitted low probability decays (e.g. electron pairs, s-quark pairs, etc.).

The observed rate of each kind of decay can be normalized by comparing it to the Standard Model expectation for that decay channel. Combined appropriately, these normalized decay rates yield a single measure, mu, of how well the observed decays fit the theoretical expectation in the Standard Model.
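
In the simplest approximation (independent channels with Gaussian errors), that combination is just an inverse-variance weighted average. Here is a minimal sketch in Python; the channel values are hypothetical placeholders, and the real ATLAS and CMS combinations use full likelihood fits with correlated systematic uncertainties:

```python
# Minimal sketch of combining per-channel Higgs signal strengths into a
# single mu via an inverse-variance weighted average. The numbers below are
# made-up placeholders, not actual ATLAS or CMS measurements.

channels = {
    # channel: (measured mu, one-sigma uncertainty), illustrative only
    "bb": (1.0, 0.20),
    "tautau": (1.1, 0.20),
    "WW": (1.2, 0.15),
    "ZZ": (1.1, 0.10),
    "gammagamma": (1.1, 0.10),
}

weights = {name: 1.0 / sigma ** 2 for name, (_, sigma) in channels.items()}
total_weight = sum(weights.values())
mu = sum(weights[name] * mu_ch for name, (mu_ch, _) in channels.items()) / total_weight
sigma = (1.0 / total_weight) ** 0.5
print(f"combined mu = {mu:.2f} +/- {sigma:.2f}")
```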

At last week's ICHEP 2018 conference, ATLAS and CMS released their latest results on how well the couplings of the Higgs boson observed so far fit the Standard Model predicted couplings for a Higgs boson of the measured mass. The fit is summed up in the statistic mu, for which a value of 1.0 is a perfect fit to the Standard Model prediction. This is summed up at Gauge Connection as follows:
About the Higgs particle, after the important announcement about the existence of the ttH process, both ATLAS and CMS are pursuing further their improvement of precision. About the signal strength they give the following results. For ATLAS (see here) 
μ = 1.13 ± 0.05 (stat.) ± 0.05 (exp.) +0.05/−0.04 (sig. th.) ± 0.03 (bkg. th.)
and CMS (see here) 
μ = 1.17 ± 0.06 (stat.) +0.06/−0.05 (sig. th.) ± 0.06 (other syst.).
The news is that the error is diminished and both agrees. They show a small tension, 13% and 17% respectively, but the overall result is consistent with the Standard Model.
This works out to a result that is 1.46 sigma from the Standard Model at ATLAS and 1.68 sigma from the Standard Model at CMS (symmetrizing the asymmetric error components and adding all of the components in quadrature), both easily within the two sigma benchmark for consistency of experimental results with predicted values. (In a Gaussian distribution, a measurement should fall, on average, about one sigma from the true value.) The combined value of mu is about 1.15, within 15% of the Standard Model expectation, and again consistent with it because it is less than two sigma from the expected value.
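
The arithmetic behind those sigma figures can be checked in a few lines of Python (a sketch, assuming simple quadrature combination of the quoted error components):

```python
from math import sqrt

# Sketch of the arithmetic behind the sigma figures above: symmetrize each
# asymmetric error component, add the components in quadrature, and divide
# the deviation of mu from 1.0 by the total error.

def tension(mu, error_components):
    """error_components: list of (plus, minus) one-sigma error pairs."""
    total = sqrt(sum(((up + down) / 2) ** 2 for up, down in error_components))
    return (mu - 1.0) / total

atlas = tension(1.13, [(0.05, 0.05), (0.05, 0.05), (0.05, 0.04), (0.03, 0.03)])
cms = tension(1.17, [(0.06, 0.06), (0.06, 0.05), (0.06, 0.06)])
print(f"ATLAS: {atlas:.2f} sigma, CMS: {cms:.2f} sigma")  # 1.46 and 1.68
```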

Experimental constraints on the Higgs boson self-coupling, which governs a process so rare that it has not yet been observed, only establish that it is not more than 22 times the Standard Model expectation.

Every time the uncertainty in mu gets smaller, the parameter space for beyond the Standard Model theories that change the properties of the Higgs boson relative to the Standard Model shrinks.

Other properties of the Higgs boson, like the relative strengths of the channels by which it is produced, its width, its spin, its charge, and its CP-even v. CP-odd status, likewise remain perfectly consistent with a Standard Model Higgs boson, something that we already knew quite a while ago.

The Higgs Boson Mass

The combined ATLAS and CMS Higgs boson mass measurement at the end of Run 1 was 125.09 +/- 0.24 GeV.

A year ago, the combined ATLAS and CMS Higgs boson mass measurements were 125.14 +/- 0.17 GeV, with the ATLAS value equal to 124.98 +/- 0.28 GeV and the CMS value equal to 125.26 +/- 0.22 GeV.

The current Run-2 value from CMS is 125.26 +/- 0.21 GeV, more or less unchanged from a year ago. We don't have a new measurement from ATLAS so far this year.

A Higgs boson mass of 124.65 GeV is theoretically notable because it is the value of the Higgs boson mass if the sum of the squares of the boson masses equals the sum of the squares of the fermion masses, with each sum in turn equal to one half of the square of the Higgs vacuum expectation value.

This theoretically significant value is about 2.88 sigma from last year's combined value (which so far remains unchanged), about 2.9 sigma from the CMS result, and about 1.2 sigma from the ATLAS result. Something as simple as a smaller margin of error in a new ATLAS result could shift that balance, however, bringing the combined measurement closer to this theoretically notable value.
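
A minimal sketch of the arithmetic behind both the benchmark and the tensions, assuming standard values for v, m_W, and m_Z (the output shifts slightly with the precise inputs chosen):

```python
from math import sqrt

# Quick check of the 124.65 GeV benchmark and the tensions quoted above.

v = 246.22     # Higgs vacuum expectation value, GeV
m_W = 80.379   # W boson mass, GeV
m_Z = 91.1876  # Z boson mass, GeV

# Benchmark: m_H^2 + m_W^2 + m_Z^2 = v^2 / 2 on the boson side of the ledger.
m_H = sqrt(v ** 2 / 2 - m_W ** 2 - m_Z ** 2)
print(f"benchmark m_H = {m_H:.2f} GeV")  # about 124.6 GeV

measurements = {
    "2017 combined": (125.14, 0.17),
    "CMS Run-2": (125.26, 0.21),
    "ATLAS": (124.98, 0.28),
}
for label, (mass, err) in measurements.items():
    print(f"{label}: {(mass - m_H) / err:.1f} sigma from the benchmark")
```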

So far, the Higgs boson mass has been a bit heavy relative to this theoretically significant value, while the top quark mass, which is the dominant factor on the fermion side of the ledger, has been a bit lighter than the theory would predict.

Wednesday, July 11, 2018

Does SUSY Really Lead To Gauge Unification?

There are three coupling constants in the Standard Model that govern how strong the three Standard Model forces are, and they run with energy scale in a precisely calculable way. In the Standard Model as we know it, however, all three of these constants never take the same value at the same energy scale.

It is widely claimed that in the Minimal Supersymmetric Standard Model (MSSM), one of the most studied versions of Supersymmetry (SUSY), there is an energy scale, called the GUT scale, at which all three coupling constants that govern the strength of the three forces of nature (other than gravity) do take the same (dimensionless) numerical value.
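
For orientation, the textbook version of this claim is a one-loop calculation that can be sketched in a few lines of Python; this uses rough inputs, no two-loop terms, and no sparticle thresholds, and it is precisely those refinements that the criticism below turns on:

```python
from math import exp, pi

# One-loop running of the inverse gauge couplings:
#   alpha_i^-1(mu) = alpha_i^-1(M_Z) - (b_i / 2 pi) * ln(mu / M_Z),
# with alpha_1 in the SU(5) normalization.

M_Z = 91.1876                    # GeV
alpha_inv = [59.0, 29.6, 8.47]   # approximate alpha_1,2,3 inverses at M_Z
b_mssm = [33 / 5, 1.0, -3.0]     # one-loop MSSM beta coefficients

# Scale where alpha_1 and alpha_2 meet in the MSSM:
t = (alpha_inv[0] - alpha_inv[1]) / ((b_mssm[0] - b_mssm[1]) / (2 * pi))
mu_gut = M_Z * exp(t)
alpha_gut_inv = alpha_inv[0] - b_mssm[0] / (2 * pi) * t

# "Predict" the strong coupling at M_Z by demanding alpha_3 meet them there:
alpha_s_pred = 1 / (alpha_gut_inv + b_mssm[2] / (2 * pi) * t)
print(f"MSSM unification scale ~ {mu_gut:.1e} GeV")        # ~2e16 GeV
print(f"one-loop alpha_s(M_Z) prediction ~ {alpha_s_pred:.3f}")
# At one loop this lands near the measured 0.119; per the lecture quoted
# below, the careful two-loop treatment instead yields 0.129 +/- 0.002.
```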

The presentation cited below, at pages 38-40 (see also pages 57-59), however, argues that the widely touted claim that the MSSM gives rise to gauge unification isn't accurate. As Woit comments:
The question recently came up here (see this posting) of how good the SUSY GUT coupling constant unification prediction is. At a recent summer school lecture, Ben Allanach says the prediction is off by 5 sigma, i.e. that if you try and predict the strong coupling at the Z mass this way, you get 0.129 +/- 0.002, whereas the measured value is 0.119 +/- 0.002. Someone should tell Frank Wilczek…
In terms of "beauty," gauge unification is one of the most compelling arguments for SUSY, so this is a big deal.

Indeed, at this point it may take smaller tweaks to the running of the Standard Model coupling constants for them to unify near the GUT scale than it takes for the MSSM to produce this result.

Also, any theory that results in exact gauge unification in a pure GUT model without gravity almost certainly has to be wrong, because adding quantum gravity, with its new force carrying boson, the graviton, necessarily tweaks the running of all of the other experimentally determined constants in the Standard Model or in any GUT that you want to devise.

Friday, July 6, 2018

The Younger Dryas Impact Hypothesis

Count me among those who find the evidence for the Younger Dryas Impact Hypothesis to be more convincing than the evidence marshaled by the opponents of this hypothesis, a scientific controversy recently summed up in Science News.

Previous discussion of the issue here and here. A discussion of the contemporaneous Clovis Culture is here.

Quote Of The Day

[B]eware that there are general arguments in quantum gravity, independent of string theory, that global de Sitter spacetime is inconsistent, see e.g. Rajaraman 16 and references given there. If true, this means that no quantum consistent model for observed cosmology will be totally straightforward, all of them will have to realize a de Sitter cosmology as an effective phenomenon on the backdrop of non-de Sitter cosmology.
From Urs Schreiber at the Physics Forums.

The paper referenced is:
Graviton loop corrections to observables in de Sitter space often lead to infrared divergences. We show that these infrared divergences are resolved by the spontaneous breaking of de Sitter invariance.
Arvind Rajaraman, "de Sitter Space is Unstable in Quantum Gravity" (Submitted on 25 Aug 2016 (v1), last revised 17 Sep 2016 (this version, v2)).

The conclusion of the paper states:
We have argued that de Sitter space is not a solution to gravity coupled to a cosmological constant when quantum effects are taken into account. The classical equations of motion receive quantum corrections which are singular if the solution is taken to be de Sitter. We have argued that the deviations from de Sitter are calculable, and that the true solution is deformed away from de Sitter by a parameter proportional to √κ.
The argument for this was straightforward. In the exact metric of de Sitter space, there are gravitational perturbations whose propagator is ill defined, and which caused infrared divergences. A small deviation from de Sitter parametrized by a small parameter ε allows these modes to have a well defined propagator. However, the quantum effective action computed around this new metric now generically has terms which go as 1/ε. The quantum equations of motion are singular as we take ε to zero, and cause de Sitter to not be a solution when quantum corrections are included. (Another way to say this is that the de Sitter metric has an infinite action when quantum effects are included.) The calculation in the previous section argues that these qualitative arguments can be made quantitative, and that the perturbation away from de Sitter can be computed in perturbation theory.
These arguments are related to previous arguments in the literature, for instance by Polyakov. While Polyakov has argued that scalar field theory in de Sitter (using the in-out formalism) already leads to an instability, we have shown that gravitons produce an instability in the more controlled in-in formalism. The in-in formalism is expected to asymptote to the in-out result when the intermediate time is taken to infinity; it would be interesting to see if this is the case. 
The Schwinger-Keldysh propagator for the gravitational perturbations is enhanced by a factor proportional to 1/√κ. This indicates that the gravitational perturbation series, which is normally in powers of κ, is modified. A 1-loop diagram with a Schwinger-Keldysh propagator now scales as √κ and in general, the perturbation series becomes an expansion in √κ.
We should also discuss the occasionally thorny issue of gauge invariance. It is well known that tadpoles of gravitons are not gauge invariant, and so one might wonder about the status of the tadpoles we have calculated. The resolution is that our intermediate steps have been performed in a fixed gauge, but our final result (that de Sitter is unstable) is a gauge invariant statement. It is therefore valid in any gauge. Similarly, the deformed metric is not a gauge invariant quantity, but it has been presented in a particular gauge, and can be transformed to any gauge of choice.
A more subtle issue is the question of observables in quantum gravity. It is often argued that correlation functions, even for a fixed geodesic distance, are not well defined; this is roughly because any pointlike sources are smeared into black holes. However, our corrections are of order √κ and therefore scale faster than any perturbative effect in quantum gravity, including the size of black holes. They are hence dominant at weak coupling and will not be washed out by quantum gravity effects.
Finally we note that this solution to the issue of the IR divergences may potentially lead to observable effects, at least if the Hubble scale is large enough. We leave this issue for future work.

Aggregate Lepton Number Bounds Reconsidered

Lepton number is a quantum number defined to equal the total number of leptons (electrons, muons, tau leptons, electron neutrinos, muon neutrinos, and tau neutrinos) minus the total number of anti-leptons (positrons, anti-muons, anti-taus, electron antineutrinos, muon antineutrinos, tau antineutrinos). If there are more leptons than anti-leptons (making their ratio greater than one), then lepton number is a positive integer. If there are fewer leptons than anti-leptons (making their ratio less than one), then lepton number is a negative integer.
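
As a toy illustration of that bookkeeping (the particle labels here are ad hoc strings, not any standard library's naming scheme), the following checks that ordinary muon decay conserves lepton number:

```python
# Toy lepton-number bookkeeping: L = (leptons) - (anti-leptons). The example
# checks that mu- -> e- + anti-nu_e + nu_mu conserves lepton number.

LEPTONS = {"e-", "mu-", "tau-", "nu_e", "nu_mu", "nu_tau"}
ANTILEPTONS = {"e+", "mu+", "tau+", "anti-nu_e", "anti-nu_mu", "anti-nu_tau"}

def lepton_number(particles):
    return (sum(1 for p in particles if p in LEPTONS)
            - sum(1 for p in particles if p in ANTILEPTONS))

print(lepton_number(["mu-"]))                       # 1 (initial state)
print(lepton_number(["e-", "anti-nu_e", "nu_mu"]))  # 1 (final state): conserved
```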

In the Standard Model of Particle Physics, lepton number is conserved in all interactions except one (the sphaleron interaction) that occurs only in high energy circumstances (above about 10 TeV), is predicted to be rare even then, and has never actually been observed; temperatures that high haven't occurred in nature since shortly after the Big Bang. Neutrinoless double beta decay, if it were observed, would also violate lepton number.

We don't have a direct measurement of the ratio of neutrinos to anti-neutrinos in the universe, and I have a post at Physics Stack Exchange asking about the experimental bounds on this ratio.

An anti-neutrino excess would favor an effective number of neutrino types (Neff) higher than the 3.042 predicted in the absence of a neutrino-antineutrino asymmetry, and a higher proportion of primordial helium than predicted in vanilla Big Bang Nucleosynthesis; hints of both are present in the CMB data. But, the data are also consistent with no asymmetry between neutrinos and antineutrinos.

The expected primordial proportion of helium-4 is about 25%, and observation doesn't grossly differ from that value, although it isn't a perfect fit. Per Wikipedia, citing data through 2008:
Using this value, are the BBN predictions for the abundances of light elements in agreement with the observations? 
The present measurement of helium-4 indicates good agreement, and yet better agreement for helium-3. But for lithium-7, there is a significant discrepancy between BBN and WMAP/Planck, and the abundance derived from Population II stars. The discrepancy is a factor of 2.4―4.3 below the theoretically predicted value and is considered a problem for the original models,[13] that have resulted in revised calculations of the standard BBN based on new nuclear data, and to various reevaluation proposals for primordial proton-proton nuclear reactions, especially the abundances of 7Be + n → 7Li + p, versus 7Be + 2H → 8Be + p.[14]
One paper cited is this one.

The amount of neutrino asymmetry that best fits the observed helium-4 levels (as of 2012) would predict a Neff of about 3.146, when the observed value is 3.04 ± 0.18. This would involve a neutrino chemical potential of -0.2, which would imply a considerable excess of antineutrinos over neutrinos.

In that scenario, there would be far more anti-particles than particles in the universe, aggregate lepton number would be strongly negative, and this negative amount would dwarf the positive baryon number of the universe. Even dark matter of keV mass or greater that carried lepton or baryon number would not overcome this asymmetry. For that matter, even a very modest 1% asymmetry in favor of antineutrinos would produce this result. This is because:
The number of baryons in the universe is about , and the number of neutrinos in the universe is about .
We know that the ratio of baryon antimatter to baryon matter (and the ratio of charged antileptons to charged leptons) is on the order of .
And, we know that to considerable precision there are 2 neutrons for every 14 protons in the universe (this is a confirmed prediction of Big Bang Nucleosynthesis), and that the number of charged leptons is almost identical to the number of protons in the universe.
The number of mesons and baryons other than neutrons and protons is negligible at any given time in nature since they are so short lived and generated only at high energies. (Also, the baryon number of a meson is zero.)
With an average neutrino mass of about 30 meV (which cosmology data combined with neutrino oscillation data suggest is the right order of magnitude), the ratio of nucleon mass to average neutrino mass is about 3*10^10, while the ratio of neutrinos to baryons is about 3*10^9, so the share of neutrino mass in all Standard Model matter mass is small, on the order of a few percent or less (see the sketch after this list). If neutrinos are disproportionately in the lightest neutrino mass eigenstate, the fraction of all ordinary mass that comes from neutrinos is much lower still.
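
A back-of-the-envelope check of those ratios, using the round numbers from the list above (only the order of magnitude is meaningful):

```python
# Rough consistency check of the neutrino mass-share estimate. These are the
# round numbers from the list, so only the order of magnitude matters; more
# careful inputs bring the mass share down to a few percent.

m_nucleon = 938e6    # nucleon mass, ~938 MeV, expressed in eV
m_neutrino = 0.030   # average neutrino mass, ~30 meV, expressed in eV
nu_per_baryon = 3e9  # cosmic neutrinos per baryon, rough

print(f"nucleon / neutrino mass ratio ~ {m_nucleon / m_neutrino:.0e}")  # ~3e10
print(f"neutrino share of SM matter mass ~ {nu_per_baryon * m_neutrino / m_nucleon:.0%}")
```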

The most recent paper I've located on the topic is from 2012. Its abstract states:
Recent observations of the cosmic microwave background (CMB) at smallest angular scales and updated abundances of primordial elements, indicate an increase of the energy density and the helium-4 abundance with respect to standard big bang nucleosynthesis with three neutrino flavour. This calls for a reanalysis of the observational bounds on neutrino chemical potentials, which encode the number asymmetry between cosmic neutrinos and anti-neutrinos and thus measures the lepton asymmetry of the Universe. We compare recent data with a big bang nucleosynthesis code, assuming neutrino flavour equilibration via neutrino oscillations before the onset of big bang nucleosynthesis. We find a slight preference for negative neutrino chemical potentials, which would imply an excess of anti-neutrinos and thus a negative lepton number of the Universe. This lepton asymmetry could exceed the baryon asymmetry by orders of magnitude.
Dominik J. Schwarz, Maik Stuke, "Does the CMB prefer a leptonic Universe?" (Submitted on 28 Nov 2012 (v1), last revised 11 Mar 2013 (this version, v3)). Published version here.

The body text explains:
In this work, we re-investigate the possibility of non-standard big bang nucleosynthesis, based on SPT results [3, 4], the final WMAP analysis [5], and the recent reinterpretation of the helium-4 and deuterium abundance [6, 7, 8, 9]. We use stellar observations and CMB data to constrain the influence of a possible neutrino or lepton asymmetry. To do so, we compare and combine different results for the abundance of primordial light elements with theoretical expectations including neutrino chemical potentials. We assume that neutrinos are Dirac fermions and that they are relativistic before and at the epoch of photon decoupling, i.e. mνi < 0.1 eV, i = 1, 2, 3. . . .
The difference in the energy densities is the observed extra radiation energy density, commonly expressed as additional neutrino flavour in the effective number of neutrinos 
∆Neff = (Nν − 3) + Σf [(30/7)(ξf/π)² + (15/7)(ξf/π)⁴],   (1)
with Nν = 3 for the three neutrino flavours f = e, µ, τ, and corresponding neutrino chemical potentials ξf = µνf/Tν at neutrino temperature Tν. Note that the standard model predicts Neff = 3.042, a small excess above 3 due to corrections from electron-positron annihilation (not included in (1), but taken into account in our numerical calculations below). . . .
We will concentrate here only on neutrino asymmetry induced chemical potentials. Assuming relativistic neutrinos and a lepton asymmetry much bigger than the baryon asymmetry |l| ≫ b, but still |l| ≪ 1, one can link the neutrino chemical potentials to the lepton asymmetry l [13], 
ξf = µνf/Tν = (1/2) l s/T³,   (2)
where s denotes the entropy density. A large lepton asymmetry leads also to a second effect during BBN, due to interactions of electron neutrinos with ordinary matter. While all three neutrino flavour chemical potentials affect the Hubble rate independently of their sign, the electron neutrino chemical potential influences the beta-equilibrium e + p ↔ n + νe directly. It shifts the proton-to-neutron ratio, depending on the sign of µνe , and so modifies the primordial abundances of light elements. . . .
Recent CMB data, combined with priors obtained from BAO data and measurements of H0, point to a high helium fraction compared to standard BBN and observed in HII regions. At the same time there might be some extra radiation degrees of freedom, expressed in Neff. Introducing a single additional variable to the standard model of cosmology, a non-vanishing neutrino chemical potential induced by a large lepton asymmetry, leads naturally to higher primordial helium without affecting the abundance of primordial light elements too much. Allowing for the helium fraction Yp and Neff to be free parameters in the analysis of CMB data, gives, for the combination WMAP9+ACT11+SPT11+BAO+H0, Yp = 0.278 +0.034/−0.032 and Neff = 3.55 +0.49/−0.48 [5]. Also CMB alone (SPT12+WMAP7) points to a higher Yp = 0.314 ± 0.033 and Neff = 2.60 ± 0.67 [4]. . . . [A] negative chemical potential would allow for a BBN best-fit model much closer to the maximum of the CMB posterior distribution of Yp and Neff. The chemical potential seems to do better than just adding extra degrees of freedom.
As was shown in [4] the analysis of the CMB data combined with BAO and H0 shows a preference for an extension of the ΛCDM model with Neff and massive neutrinos. The CMB alone favours the one parameter extension of including a running of the spectral index of primordial density perturbations or a two-parameter extension with Yp and Neff as free parameters. The introduction of a neutrino chemical potential has the advantage that it can improve the fit to both data sets with only a single additional parameter. 
Here we suggested that recent CMB data could provide a first hint towards a lepton asymmetry of the Universe, much larger than the baryon asymmetry of the Universe. Today this lepton asymmetry would hide in the neutrino background. This scenario would have interesting implications for the early Universe, especially at the epochs of the cosmic quark-hadron transition [13] and WIMP decoupling [14]. The largest allowed (2σ) neutrino chemical potential |ξf | = 0.45 leads to ∆Neff = 0.266, fully consistent with CMB observations. 
The helium fraction reported by CMB observations results in a negative neutrino chemical potential ξf ∼ −0.2 and ∆Neff ∼ 0.1. In that case we would live in a Universe ruled by anti-neutrinos. From our analysis we conclude that the present abundance of light elements and CMB data are not able to rule out ξf = 0, the standard scenario of BBN. However, upcoming CMB data releases and improved measurements of primordial abundances will allow us to test the idea of a leptonic Universe. 
Unfortunately, the paper doesn't directly convert the neutrino chemical potential into a neutrino-antineutrino asymmetry ratio.
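
That conversion is straightforward under standard assumptions, though. Here is a sketch (not from the paper itself) that evaluates Eq. (1) from the quote and converts a chemical potential ξ into a neutrino-antineutrino number asymmetry, assuming relativistic Fermi-Dirac distributions for each species (scipy handles the integral):

```python
from math import exp, pi
from scipy.integrate import quad

# Evaluate Eq. (1) of the quoted paper, and convert a chemical potential
# xi = mu_nu / T_nu into a neutrino/anti-neutrino number asymmetry assuming
# standard relativistic Fermi-Dirac distributions.

def delta_neff(xi, flavors=3):
    """Eq. (1) with N_nu = 3 and the same |xi| for every flavour."""
    return flavors * ((30 / 7) * (xi / pi) ** 2 + (15 / 7) * (xi / pi) ** 4)

def n(xi):
    """Relativistic Fermi-Dirac number density, in units of T^3 / (2 pi^2)."""
    return quad(lambda x: x * x / (exp(x - xi) + 1), 0, 100)[0]

print(f"Delta N_eff(0.45) = {delta_neff(0.45):.3f}")  # ~0.27, vs. 0.266 quoted

for xi in (0.45, 0.2):
    asymmetry = (n(xi) - n(-xi)) / (n(xi) + n(-xi))
    print(f"|xi| = {xi}: neutrino-antineutrino number asymmetry ~ {asymmetry:.0%}")
# |xi| = 0.45 comes out to roughly a 40% number asymmetry, and |xi| = 0.2
# to roughly 20%.
```

On those assumptions, the ξ of about −0.2 suggested by the helium data would correspond to an excess of antineutrinos over neutrinos of very roughly 20% by number, which is enormous compared to the baryon asymmetry.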

Since then, constraints on ∆Neff have tightened considerably. "[A]s of 2015, the constraint with Planck data and other data sets was [Neff is equal to] 3.04 ± 0.18." Neff is equal to 3.046 in the case of the three Standard Model neutrinos; neutrinos with masses of 10 eV or more do not count in the calculation.

But, this still doesn't rule out the kinds of neutrino-antineutrino asymmetries suggested.

There is also outdated information available to constrain this ratio, as I note at Physics Stack Exchange:
Another 2002 paper puts an upper bound on electron neutrino asymmetry at 3% of the number of electron neutrinos, and a bound on muon and tau neutrino asymmetry at 50% of the combined number of such neutrinos, but I'm not clear that this reflects current research or that its methods are sound.
This predates the latest CMB (cosmic microwave background) data by more than a decade, however, and also does not reflect the latest data on neutrino oscillations.

Thursday, July 5, 2018

Neanderthal Admixture In Europe Diluted By African Migration In Last 20,000 Years

To oversimplify, the main reason that Europeans have lower percentages of Neanderthal admixture than Asians is African migration to Europe in the last 20,000 years, and not natural selection against Neanderthal variants or "basal European" admixture from Southwest Europe. This is encouraging, as the notion of "basal Europeans" with little or no Neanderthal admixture in Southwest Europe never made much sense to me.
Several studies have suggested that introgressed Neandertal DNA was subjected to negative selection in modern humans due to deleterious alleles that had accumulated in the Neandertals after they split from the modern human lineage. A striking observation in support of this is an apparent monotonic decline in Neandertal ancestry observed in modern humans in Europe over the past 45 thousand years. Here we show that this apparent decline is an artifact caused by gene flow between West Eurasians and Africans, which is not taken into account by statistics previously used to estimate Neandertal ancestry. When applying a more robust statistic that takes advantage of two high-coverage Neandertal genomes, we find no evidence for a change in Neandertal ancestry in Western Europe over the past 45 thousand years. We use whole-genome simulations of selection and introgression to investigate a wide range of model parameters, and find that negative selection is not expected to cause a significant long- term decline in genome-wide Neandertal ancestry. Nevertheless, these models recapitulate previously observed signals of selection against Neandertal alleles, in particular a depletion of Neandertal ancestry in conserved genomic regions that are likely to be of functional importance. Thus, we find that negative selection against Neandertal ancestry has not played as strong a role in recent human evolution as had previously been assumed.
Martin Petr, Svante Pääbo, Janet Kelso, Benjamin Vernot, "The limits of long-term selection against Neandertal introgression" bioRxiv (July 4, 2018) doi: https://doi.org/10.1101/362566

UPDATE: A comment suggests that I have misinterpreted the paper somewhat and I am looking into that possibility.

Monday, July 2, 2018

There Are No Useful Wormholes

4gravitons reports from Strings 2018 on one of the less controversial points at the Conference (his reference to "from these talks" is to one afternoon of the Conference, not the Conference overall):
My main takeaway from these talks was perhaps a bit frivolous: between Maldacena’s talk (about an extremely small wormhole made from Standard Model-compatible building blocks) and Hartman’s discussion of the Average Null Energy Condition, it looks like a “useful sci-fi wormhole” (specifically, one that gets you there faster than going the normal way) has been conclusively ruled out in quantum field theory.
This is unfortunate, as such wormholes are a staple of science fiction writing, which now can't really be claimed to have even a speculative basis.

Dark Matter Has A Cluster Problem

One of the main flaws of the toy-model version of MOND is that it underestimates dark matter effects in galaxy clusters. But, dark matter models also have trouble producing the right halos for globular clusters.
We explore a scenario where metal poor globular clusters (GCs) form at the centres of their own dark matter halos in the early universe before reionization. This hypothesis leads to predictions about the abundance, distribution and kinematics of GCs today that we explore using cosmological N-body simulations and analytical modelling. We find that selecting the massive tail of collapsed objects at z≳9 as GC formation sites leads to four main predictions: i) a highly clustered population of GCs around galaxies today, ii) a natural scaling between number of GCs and halo virial mass that follows roughly the observed trend, iii) a very low number of free floating GCs outside massive halos and iv) GCs should be embedded within massive and extended dark matter (sub)halos. We find that the strongest constraint to the model is given by the combination of (i) and (ii): a mass cut to tagged GCs halos which accounts for the number density of metal poor GCs today predicts a radial distribution that is too extended compared to recent observations. On the other hand, a mass cut sufficient to match the observed half number radius could only explain 60% of the metal poor population. In all cases, observations favour early redshifts for GC formation (z≥15) placing them as contributors to the early stages of reionization.
Peter Creasey, et al., "Globular Clusters Formed within Dark Halos I: present-day abundance, distribution and kinematics" (June 28, 2018).