## Thursday, November 29, 2018

### 14,000 Year Old Fishing Village Unearthed In British Columbia

The New Archaeological Site In British Columbia

While genetics has received prominent mention in recent years, it is still possible to gain valuable insights into human prehistory by other means. Sometimes old school archaeological digs and carbon dating can still be a source of important discoveries.

An archaeological site in British Columbia sheds light on the lives of members of the Founding Population of the Americas (discussed in the background section below) at a time close to their primary expansion out of Beringia. Among other things, it corroborates the hypothesis that these people maintained relatively long term settlements in some places, and relied on a mix of fishing and terrestrial hunting and gathering for subsistence.
CTV reports that a team of students from the University of Victoria’s archeology department have uncovered the oldest settlement in North America. This ancient village was discovered when researchers were searching Triquet Island, an island located about 300 miles north of Victoria, British Columbia.
The team found ancient fish hooks and spears, as well as tools for making fires. However, they really hit the jackpot when they found an ancient cooking hearth, from which they were able to obtain flakes of charcoal burnt by prehistoric Canadians.

Using carbon dating on the charcoal flakes, the researchers were able to determine that the settlement dates back 14,000 years ago[.] . . .
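As an aside, the arithmetic behind a raw radiocarbon date is simple enough to sketch in a few lines of Python. This is a toy version: it uses the modern 5,730 year half-life and ignores the calibration against tree ring records needed to turn a raw radiocarbon age into calendar years, so the numbers are illustrative only.

```python
import math

# Radiocarbon decay: N(t) = N0 * exp(-t / tau), where tau is the mean
# lifetime, i.e. the half-life divided by ln 2.
HALF_LIFE_YEARS = 5730.0
TAU = HALF_LIFE_YEARS / math.log(2)  # ~8267 years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Age in years implied by the surviving fraction of carbon-14."""
    return -TAU * math.log(fraction_remaining)

def fraction_remaining(age_years: float) -> float:
    """Surviving carbon-14 fraction for a sample of a given age."""
    return math.exp(-age_years / TAU)

# A sample about 14,000 years old retains roughly 18% of its carbon-14,
# still comfortably within the measurable range of the method.
print(round(fraction_remaining(14_000), 3))  # ~0.184
```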
Alisha Gauvreau, a Ph.D student who helped discover this site. . . and her team began investigating the area for ancient settlements after hearing the oral history of the indigenous Heiltsuk people, which told of a sliver of land that never froze during the last ice age.
William Housty, a member of the Heiltsuk First Nation, said, “To think about how these stories survived only to be supported by this archeological evidence is just amazing. This find is very important because it reaffirms a lot of the history that our people have been talking about for thousands of years.”
But, one quote from the PhD student in the story is mostly wrong:
“What this is doing is changing our idea of the way in which North America was first peopled,” said Gauvreau.
In fact, while this find is important, it is important mostly because it confirms and corroborates the existing paradigm regarding the peopling of the Americas. It is notable not because it "is changing our idea" of how the Americas were peopled, but because it is some of the clearest and most concrete evidence to date confirming that paradigm.

But, it is understandable and forgivable that an investigator selling a story about her discovery to the press stretched the truth a little on this score. Paradigm changing discoveries are hot news. And, while this particular paradigm affirming find actually is important, paradigm affirming results are rarely news (imagine how dull the nightly news would be if it ran a big news story every time that the Large Hadron Collider had a result consistent with the Standard Model of Particle Physics).

Background: Why Does The "Founding Population" Of The Americas Matter?

When it comes to the prehistory of the Americas, one of the central questions is how people arrived in the Americas, and one of the central players in the answer is the "Founding Population" of the Americas.

The Founding Population was a group of people with a quite small effective population size (a few hundred at most) who rapidly expanded from Beringia into essentially all of the "virgin territory" of North America and South America over a period of a couple of thousand years or so, as the last great ice age (which peaked at the Last Glacial Maximum about 20,000 years ago) retreated. This expansion started more than a thousand years before the Younger Dryas climate event (ca. 12,900 to 11,700 years ago), a return to glacial conditions which temporarily reversed the gradual climatic warming underway since the Last Glacial Maximum.

There is a growing community of investigators and observers of the prehistory of the Americas who give credence to the scattered bits of evidence for one or more older hominin populations in the Americas (either modern human or archaic hominin) who migrated into the Americas from Beringia before or during the Last Glacial Maximum, rather than only after the vast North American glaciers started to melt and recede. But, we know that any earlier hominins in the Americas (modern human or otherwise) never thrived, and were either almost entirely wiped out by the later waves of modern human migration, or were so genetically similar to the founding population of the Americas that they are indistinguishable from it, because there is no distinguishable trace of them in any modern or ancient DNA samples from the Americas, with the possible exception of some minor "paleo-Asian" ancestry in a few tribes in the Amazon, whose origins are a mystery.

But, even if you find that evidence to be credible, there is overwhelming evidence from modern and ancient DNA that 99.99% or more of the ancestry of the pre-Columbian residents of the Americas is derived from a single "Founding Population" which started to expand in earnest not many centuries earlier than 14,000 years ago, subject to two exceptions: (1) Inuits in the Arctic and sub-Arctic, and (2) some select tribes in Alaska, the Pacific Northwest and the American Southwest with Na-Dene ancestry.

The Inuits derive from a migration wave from Northeast Asia within the last two thousand years and replaced earlier "paleo-Eskimo" populations in Northern Canada. The Na-Dene derive from a migration wave from Northeast Asia around the time of the European Bronze Age and then admixed with descendants of the Founding Population who were already present in North America.

But, apart from a small component of some cryptic "paleo-Asian" ancestry in a handful of hunter-gatherer tribes in jungles in the Amazon River basin near the northeastern foothills of the Andes Mountains, all other pre-Columbian genetic ancestry in the Americas derives from the Founding Population. Founding Population ancestry was the predominant source of ancestry in almost every non-Inuit indigenous person in North America and South America in 1492, and was the only source of ancestry in the lion's share of those millions of people.

So, given their central role as the primary ancestors of all of the indigenous people of the Americas, except the Inuits, knowing more about this quite small community of people from around 14,000 years ago is obviously a matter of great importance.

## Wednesday, November 28, 2018

### The Latest On Top Quark Mass And Properties

ATLAS and CMS have also come out with a combined review of top quark property measurements at the LHC. As with the Higgs boson review released yesterday, the data aren't particularly new.

In principle, the top quark is fully described in the Standard Model when you know its mass and the relevant components of the CKM matrix (which is itself a function of four parameters). All other top quark properties are predicted by the Standard Model (sometimes with the assistance of other Standard Model parameters like the strong force coupling constant), so the experimental results can be compared to Standard Model predictions to constrain extensions of the Standard Model involving "new physics" and to calibrate numerical and analytical approximation methods for ascertaining Standard Model predictions.
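For instance, in the standard Wolfenstein parameterization, the four CKM parameters fix the magnitudes of the top quark's weak couplings to the down-type quarks. A rough leading-order sketch in Python, using approximate global-fit values for the four parameters (the numbers are illustrative, not authoritative):

```python
import math

# Leading-order Wolfenstein parameterization of the CKM matrix: the four
# parameters (lambda, A, rho-bar, eta-bar) determine the third-row elements.
lam, A, rho, eta = 0.2248, 0.82, 0.14, 0.35

V_ts = A * lam**2                                     # |V_ts| ~ 0.041
V_td = A * lam**3 * math.sqrt((1 - rho)**2 + eta**2)  # |V_td| ~ 0.009
V_tb = 1 - (A**2 * lam**4) / 2                        # |V_tb| ~ 0.999

# Unitarity of the third row holds to the order of the expansion.
print(round(V_td**2 + V_ts**2 + V_tb**2, 3))  # ~1.0
```

The near-unity of |V_tb| is why the top quark decays almost exclusively to a W boson and a b quark.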
The top quark mass, mtop is a key parameter in the SM and is the major contributor to the Higgs boson mass (mH) through radiative corrections. Therefore, the accuracy on both mtop and mH measurements is quite crucial for the consistency tests of the SM. Starting from the Tevatron experiments, the top quark mass has so far been measured with increasing precision using multiple final states, as well as with different analysis techniques. Two of the most recent mtop measurements from CMS and ATLAS experiments are presented here.
ATLAS has recently performed a top quark mass measurement [21] in lepton+jets final states using a 20.2 fb−1 dataset at √s = 8 TeV. The full event reconstruction is performed using a likelihood based kinematic fitter, KLFITTER [22]. The tt̄ → lepton+jets event selection is further optimized through the usage of a boosted decision tree [23]. The top quark mass (mtop) together with the jet energy scale factor (JSF) and b-jet energy scale factor (bJSF) is then simultaneously extracted using the template fit technique. The template fit results in terms of mtop and mW are shown in Fig. 7. The measurement yields a top quark mass of 172.8 ± 0.39 (stat) ± 0.82 (syst) GeV, where the dominant uncertainties are driven by theoretical modeling and systematics.
The latest mtop measurement [25] from CMS is based on a 35.9 fb−1 dataset at √s = 13 TeV. The full tt̄ → lepton+jets event reconstruction is performed using a kinematic fit of the decay products. A 2-D ideogram fitting technique [24] is then applied to the data to measure the top quark mass simultaneously with an overall jet energy scale factor (JSF), constrained by mW (through W → qq̄′ decays); the fit results in terms of mtop and mW are shown in Fig. 8. The ideogram method measures an mtop value of 172.25 ± 0.08 (stat) ± 0.62 (syst) GeV, consistent with the Run 1 CMS measurements at √s = 7 and 8 TeV. The measurement results in a precision of ∆mtop/mtop ≈ 0.36%, where the leading uncertainties originate from MC modeling, color reconnection, parton showering, JES, etc. The most recent individual mtop measurements from the LHC experiments, along with the world average value for mtop, are summarized in Fig. 9.
The Particle Data Group reports that the global average value for the top quark mass (including measurements from Tevatron as well as the LHC and also the one CMS Run 2 result) is 173.0 ± 0.4 GeV.
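As usual, the statistical and systematic uncertainties quoted above are combined in quadrature to get each measurement's total uncertainty. A quick sanity check in Python on the two results discussed above:

```python
import math

def total_uncertainty(stat: float, syst: float) -> float:
    """Combine statistical and systematic uncertainties in quadrature."""
    return math.sqrt(stat**2 + syst**2)

# Quoted uncertainties (GeV) on the ATLAS and CMS m_top measurements above.
atlas = total_uncertainty(0.39, 0.82)  # ~0.91 GeV total
cms = total_uncertainty(0.08, 0.62)    # ~0.63 GeV total
print(round(atlas, 2), round(cms, 2))

# Relative precision of the CMS result, matching the quoted ~0.36%.
print(round(100 * cms / 172.25, 2))  # 0.36
```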

An analysis of speculative theoretical predictions of what the top quark mass should be under various assumptions can be found in a March 20, 2014 post at this blog. Some highlights:
An extended Koide's rule estimate of the top quark mass using only the electron and muon masses as inputs, predicted a top quark mass of 173,263.947 ± 0.006 MeV. . . .
The dominance of any imprecision in the top quark mass to overall model fits is further amplified in cases where the quantities compared are the square of the masses rather than the masses themselves (e.g. comparing the sum of squares of the Standard Model particle masses to the almost precisely identical square of the vacuum expectation value of the Higgs field). . . . About 72% of this imprecision is due to the top quark mass and about 99.15% of the imprecision is due to the top quark mass and Higgs boson masses combined. . . . What is the best fit value for the top quark mass? Answer: 173,112.5 ± 2.5 MeV . . . The value of the top quark mass necessary to make the sum of the squares of the fermion masses equal to the sum of the square of the boson masses would be about 174,974 MeV under the same set of assumptions[.]
That analysis assumed a 125,955.8 MeV mass for the Higgs boson, which is high (the current best estimate is 125.18 ± 0.16 GeV), so the top quark mass estimates in both cases should be higher than estimated given those assumptions. As previously noted in a December 16, 2016 blog post at this blog:
If the sum of the square of the boson masses and the sum of the square of the fermion masses each equal one half of the square of the Higgs vacuum expectation value, the implied top quark mass is 174.03 GeV if pole masses of the quarks are used, and 174.05 GeV if MS masses at typical scales are used. . . .
The expected value of the top mass from the formula that the sum of the square of each of the fundamental particle masses equals the square of the Higgs vacuum expectation value (a less stringent condition because the fermion and boson masses don't have to balance), given the global average Higgs boson mass measurement (and using a global fit value of 80.376 GeV for the W boson rather than the PDG value) is 173.73 GeV. The top quark mass can be a little lighter in this scenario because the global average measured value of the Higgs boson mass is a bit heavier than under the more stringent condition.
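These sum-of-squares conjectures are easy to check against current global average masses. A rough back-of-the-envelope version in Python, using approximate PDG values in GeV and neglecting the light fermion masses as numerically irrelevant:

```python
# Check the sum-of-squares conjectures discussed above (masses in GeV).
higgs_vev = 246.22   # Higgs vacuum expectation value
v2 = higgs_vev**2    # ~60,624 GeV^2

boson_masses = [125.18, 91.1876, 80.379]       # H, Z, W
fermion_masses = [173.0, 4.18, 1.27, 1.77686]  # t, b, c, tau (rest are tiny)

boson_sq = sum(m**2 for m in boson_masses)      # ~30,446 GeV^2
fermion_sq = sum(m**2 for m in fermion_masses)  # ~29,951 GeV^2

# Each sum comes out within about 1.2% of v^2 / 2, which is why small
# shifts in the top and Higgs masses matter so much to these fits.
print(round(boson_sq / (v2 / 2), 3), round(fermion_sq / (v2 / 2), 3))
```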
One property of the top quark predicted by the Standard Model is its "decay width" (which has a one to one correspondence with its half-life). A particle's half-life is inversely proportional to its decay width, so a particle with a very large decay width, like the top quark, has a very short half-life. The quantity αs referred to in the text is the strong force coupling constant strength at the Z boson mass.
Top Decay Width
Being quite heavy, the top quark has a large decay width (Γt). Within the SM, next-to-next-to-leading-order (NNLO) calculations predict a Γt of 1.322 GeV for a top quark mass (mtop) of 172.5 GeV and αs = 0.1189 [15].
CMS has recently utilized the tt̄ → dilepton events from 12.9 fb−1 of the Run 2 dataset (at √s = 13 TeV) to constrain the total decay width of the top quark through direct measurement. . . . the likelihood fit provides an observed (expected) bound of 0.6 < Γt < 2.5 (0.6 < Γt < 2.4) GeV at 95% confidence level [16]. [Ed. although expressed differently, this is roughly equivalent to a value of 1.5 ± 0.45 GeV, which is a smaller MOE and a mean value closer to the predicted value than the ATLAS measurement. The fit to the prediction is actually a little better than that, since the error margins are lopsided.]
ATLAS performed a more refined measurement of the top quark decay width using the tt̄ → lepton+jets events from 20.2 fb−1 of the Run 1 dataset at √s = 8 TeV. . . . the measurement yields a value of Γt = 1.76 ± 0.33 (stat) +0.79 −0.68 (syst) GeV (for mtop = 172.5 GeV) [17], in good agreement with the SM predicted value. However, the measurement is limited by the systematic uncertainties from jet energy scale/resolution and signal modeling.
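The width-lifetime correspondence mentioned above is just τ = ħ/Γ, which is a one-liner to check numerically:

```python
# Lifetime from decay width: tau = hbar / Gamma.
HBAR_GEV_S = 6.582e-25  # reduced Planck constant in GeV * s

def lifetime_seconds(width_gev: float) -> float:
    """Mean lifetime implied by a decay width given in GeV."""
    return HBAR_GEV_S / width_gev

# The NNLO predicted top width of ~1.322 GeV implies a lifetime of ~5e-25 s,
# shorter than the hadronization timescale, which is why the top quark
# decays before it can form bound states.
print(f"{lifetime_seconds(1.322):.2e}")
```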

## Tuesday, November 27, 2018

### The Latest On The Higgs Boson Mass

One of the parameters of the Standard Model that I watch very closely, because it has only been measured at all for a few years and because it is relevant for many purposes, is the Higgs boson mass. Indeed, it is the only experimentally measured parameter of the Higgs boson in the Standard Model. If you know its mass, the particle is fully described in the Standard Model.

An end of year paper speaking officially for both the ATLAS and CMS experiments at the Large Hadron Collider provides this summary of Higgs boson mass measurements at the LHC.
5.4 Higgs boson mass measurement
The Higgs boson mass can be measured using the high resolution ZZ∗ and γγ final state. Combining the measurements in these two channels from 2015-2016 data and from run 1, the ATLAS collaboration reports a value of the Higgs boson mass of 124.97±0.24 GeV [32] (with ±0.19 GeV of statistical uncertainty and ±0.13 GeV of systematic uncertainty, mainly from uncertainties in the photon energy scale). With the ZZ∗ channel from 2015-2016 data, the CMS collaboration reports a mass value of 125.26 ± 0.21 GeV [33].
At the same time, a direct upper limit on the decay width is set at 95% confidence level at 1.1 GeV. This is still far above the predicted width in the SM, which is about 4 MeV. A more model dependent constraint on the Higgs boson width can be derived by comparing the rate of gg → H(∗) → ZZ(∗) events in the on-shell and off-shell Higgs mass regions. The ATLAS analysis with 2015-2016 data sets a model-dependent limit at 14.4 MeV on the decay width, at 95% confidence level [34].
All other properties of the Higgs boson measured to date are consistent with the Standard Model predictions for it, within the limits of experimental measurement uncertainty.

The most recent combined LHC mass measurement of the Higgs boson I have seen in most sources is 125.09 ± 0.24 GeV, which is based upon all measurements in all channels at ATLAS and CMS combined, in Run 1. But, the Particle Data Group reports a more precise figure of 125.18 ± 0.16 GeV, which includes one Run 2 measurement in one channel from CMS.

A Higgs boson mass of 124.65 GeV is not yet ruled out by the data and would be interesting because that mass is one for which the sum of Yukawas for all of the fundamental bosons in the Standard Model is exactly 0.5. But, the weighted global average of the Higgs boson mass is about 125.09 GeV with a MOE of 0.24 GeV, which is 0.44 GeV higher than the notable 124.65 GeV value, a gap of a little under two sigma. So, the lower value isn't excluded experimentally, but it isn't favored either.

The gap between the ATLAS measurement and this theoretical value is 0.32 GeV, which with a MOE of 0.24 GeV is just 1.3 sigma from the expected value. But, the gap between the CMS measurement and the theoretical value is 0.61 GeV, which with a MOE of 0.21 GeV is almost three sigma.
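For what it is worth, a naive inverse-variance weighted combination of the two measurements quoted above (ignoring the correlated systematics that a proper combination would account for) reproduces these tensions:

```python
import math

# Inverse-variance weighted combination of the ATLAS and CMS Higgs mass
# values quoted above (GeV), plus the pull of each from 124.65 GeV.
# A rough sketch only: correlations between the experiments are ignored.
measurements = {"ATLAS": (124.97, 0.24), "CMS": (125.26, 0.21)}

weights = {k: 1 / s**2 for k, (m, s) in measurements.items()}
wsum = sum(weights.values())
mean = sum(weights[k] * measurements[k][0] for k in measurements) / wsum
sigma = 1 / math.sqrt(wsum)
print(round(mean, 2), round(sigma, 2))  # ~125.13 +/- 0.16

for name, (m, s) in measurements.items():
    print(name, round((m - 124.65) / s, 1))  # pulls of ~1.3 and ~2.9
```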

On the other hand, the fact that two experiments using the same collider are about 0.29 GeV apart, and that the underlying measurements that went into each experiment's average value are even further apart, makes me think that the systematic and/or theoretical error is underestimated. The individual ATLAS and CMS measurements going into the global average have a swing on the order of 1 GeV or more. Statistical error is pretty hard to get wrong (apart from look elsewhere effects, which aren't very important when there are only four or so measurements at issue), but systematic and theoretical error is inherently hard to estimate.

### Quantum Gravity v. General Relativity

A quantum gravity theory based upon a massless spin-2 graviton should, in the classical limit, reproduce general relativity (GR) (I have yet to see any really rigorous proof of this piece of folk wisdom). But, such a theory isn't, and can't be, completely identical to GR, although devising an experimental test to distinguish the two is a question that has stumped physicists so far.

There are some pretty generic qualitative differences between classical GR and any theory of gravity based upon graviton exchange. Here are fifteen of them. In a quantum gravity theory:

1. Gravitational energy is localized (this is not true in GR).

2. Gravitational energy is perfectly conserved (this is not true in most interpretations of GR).

3. Graviton self-interactions and graviton interactions with other particles would look the same mathematically. In GR, by contrast, gravitational field self-interactions do have an impact on space-time curvature, but they are treated differently mathematically: all other kinds of mass-energy enter Einstein's equations via the stress-energy tensor, while gravitational field self-interactions do not.

4. Gravitons deliver gravity in tiny lumps, while space-time curvature does so continuously; i.e. sometimes gravitons should act like particles instead of waves, while GR has only wave-like gravitational behavior.

5. Gravitons ought to be able to exhibit tunneling behavior that doesn't exist in classical GR.

6. A graviton based theory is stochastic; GR is deterministic.

7. It is much less "natural" to include the cosmological constant in a graviton theory than in GR where it is an integration constant. In a quantum gravity theory there is a tendency to decouple dark energy from other gravitational phenomena.

8. In a quantum gravity theory, gravitons couple to everything so a creation operator from a pair of high energy gravitons could give rise to almost anything (in contrast, photoproduction can give rise only to pairs of charged particles that couple to photons); likewise any two particles with opposite quantum numbers could annihilate into gravitons instead of, for example, photons. Neither creation nor annihilation operations exist in GR in quite the same way, although seemingly massive systems can be converted into high energy gravitational waves.

9. In some graviton based theories, properties of a graviton must be renormalized with energy scale like all of the SM physical constants; in others there is a cancellation or symmetry of some kind (probably a unique one) that prevents this from happening. One or the other possibility is true but we don't know which one. GR doesn't renormalize.

10. In graviton based theories lots of practical calculations require approximating infinite series that we don't know how to manage mathematically; in GR, in contrast, infinite series expressions are very uncommon and the calculations are merely wickedly difficult rather than basically impossible.

11. In GR singularities like black holes can be absolute; in a quantum gravity theory they can be only nearly "perfect" but will always leak a little, because they are discontinuous and stochastic.

12. In quantum gravity it ought to be possible to have gravitons that are entangled with each other, while in GR this doesn't happen.

13. In quantum gravity with gravitons, the paradigmatic approach is to look at the propagators of point particles; GR is conventionally formulated in a hydrodynamic form that encompasses a vast number of individual particles (although it is possible to formulate GR differently while retaining its classical character).

14. In quantum gravity, calculations for almost every other interaction of every kind need to be tweaked by considering graviton loops; in GR the gravitational sector and the fundamental particles of the Standard Model operate in separate domains. For example, even if Newton's constant does not run with energy scale due to some symmetry in a quantum gravity theory, the running for the strong force coupling constant with energy scale would be slightly different than in the SM without gravitons.

15. Adding a graviton to the mix of particles in a TOE qualitatively changes which groups can include all fundamental particles that exist and none that do not; in GR, where gravity is not based on a fundamental particle, it does not.

## Monday, November 26, 2018

### More Structure Not Predicted By The Standard Model Of Cosmology

The Standard Model of Cosmology, a.k.a. the LambdaCDM model, a.k.a. the Concordance Model of Cosmology, doesn't predict the tight relationship between the distribution of stars in galaxies and the location of dark matter inferred from the dynamics and lensing in the vicinity of those stars.

Another thing which is observed, but not predicted by the Concordance Model, is the fairly strong correlation between a galaxy's bulge size and its number of satellite galaxies. A new paper confirms that this correlation is present in the data and that the Concordance Model doesn't predict its existence.
There is a correlation between bulge mass of the three main galaxies of the Local Group (LG), i.e. M31, Milky Way (MW), and M33, and the number of their dwarf spheroidal galaxies. A similar correlation has also been reported for spiral galaxies with comparable luminosities outside the LG. These correlations do not appear to be expected in standard hierarchical galaxy formation. In this contribution, and for the first time, we present a quantitative investigation of the expectations of the standard model of cosmology for this possible relation using a galaxy catalogue based on the Millennium-II simulation. Our main sample consists of disk galaxies at the centers of halos with a range of virial masses similar to M33, MW, and M31. For this sample, we find an average trend (though with very large scatter) similar to the one observed in the LG; disk galaxies in heavier halos on average host heavier bulges and larger number of satellites. In addition, we study sub-samples of disk galaxies with very similar stellar or halo masses (but spanning a range of 2-3 orders of magnitude in bulge mass) and find no obvious trend in the number of satellites vs. bulge mass. We conclude that while for a wide galaxy mass range a relation arises (which seems to be a manifestation of the satellite number - halo mass correlation), for a narrow one there is no relation between number of satellites and bulge mass in the standard model. Further studies are needed to better understand the expectations of the standard model for this possible relation.
B. Javanmardi, M. Raouf, H. G. Khosroshahi, S. Tavasoli, O. Müller, A. Molaeinezhad, "The number of dwarf satellites of disk galaxies versus their bulge mass in the standard model of cosmology" (November 21, 2018) (accepted in The Astrophysical Journal).

This is quite powerful, despite the fairly thin data set used to establish the correlation that exists in the real world, because it is a problem with LambdaCDM that is independent of its inaccurate expectations about where dark matter is located. A new paper continuing this line of research is the following one:
Low mass galaxies are expected to be dark matter dominated even within their centrals. Recently two observations reported two dwarf galaxies in group environment with very little dark matter in their centrals. We explore the population and origins of dark matter deficit galaxies (DMDGs) in two state-of-the-art hydrodynamical simulations, the EAGLE and Illustris projects. For all satellite galaxies with M∗ > 10^9 M⊙ in groups with M200 > 10^13 M⊙, we find that about 5.0% of them in the EAGLE, and 3.2% in the Illustris are DMDGs with dark matter fractions below 50% inside two times half-stellar-mass radii. We demonstrate that DMDGs are highly tidal disrupted galaxies; and because dark matter has higher binding energy than stars, mass loss of the dark matter is much more rapid than stars in DMDGs during tidal interactions. If DMDGs were confirmed in observations, they are expected in current galaxy formation models.
Yingjie Jing, et al., "The dark matter deficit galaxies in hydrodynamical simulations" (November 22, 2018).

Another problem, somewhat related to the unexpected structure in inferred dark matter distributions, is that a very large swath of the parameter space of particles that interact with Standard Model matter non-gravitationally has been excluded experimentally, while the tight alignment of stars and inferred dark matter distributions implies that, if dark matter is real, it has to have non-trivial, non-gravitational interactions with stars and other ordinary matter. Truly "sterile" dark matter which doesn't interact with anything non-gravitationally, and which would be "collisionless" in the way that LambdaCDM assumes dark matter nearly is, has basically been ruled out experimentally.

In addition to these two relatively independent problems, LambdaCDM also has a problem with its chronology of the moderately early universe. This gives rise to the "Impossible Early Galaxies" problem, and to 21cm wavelength radiation lines that fail to show the behavior expected in a world with dark matter at roughly the end of the "radiation era".

While correlation is not causation, most strong correlations in nature have a cause of some kind. Figuring out which quantity is the cause and which is the effect can be difficult, or can even be a category error. But, there is almost always some reason for the relationship.

Because the Concordance Model fails to explain multiple independent phenomena that show correlations, it is probably wrong. Not wildly, totally wrong, because it does get lots of things right that we can confirm with astronomy at very large scales. But it is significantly, deeply flawed.

It only took the one flaw to convince me that something was amiss with the Concordance Model. But, lots of people who are less skeptical of lambdaCDM than I am, are going to start looking for alternatives as multiple, significant, seemingly independent breaks between the Concordance Model and observed reality emerge.

I, of course, think (although I can't personally rigorously prove it) that pretty much all of the flaws of LambdaCDM exist because we have misunderstood some important second and third order quantum gravitational effects that matter in very weak gravitational fields in very high mass systems. My very strong intuition is that, in reality, there is both no dark matter and no dark energy, apart from fields of Standard Model fundamental particles and gravitons. But, I don't expect that paradigm shift to spread all that quickly, unless a rising star popularizes a solution of that kind on a mass scale within the physics and physics journalism communities.

Meanwhile, another promising modified gravity theory has emerged.
We have recently shown that the baryonic Tully-Fisher (BTF) and Faber-Jackson (BFJ) relations imply that the gravitational "constant" G in the force law vary with acceleration a as 1/a. Here we derive the converse from first principles. First we obtain the gravitational potential for all accelerations and we formulate the Lagrangian for the central-force problem. Then action minimization implies the BTF/BFJ relations in the deep MOND limit as well as weak-field Weyl gravity in the Newtonian limit. The results show how we can properly formulate a nonrelativistic conformal theory of modified dynamics that reduces to MOND in its low-acceleration limit and to Weyl gravity in the opposite limit. An unavoidable conclusion is that a0, the transitional acceleration in modified dynamics, does not have a cosmological origin and it may not even be constant among galaxies and galaxy clusters.
Dimitris M. Christodoulou, Demosthenes Kazanas, "Gravitational Potential and Nonrelativistic Lagrangian in Modified Gravity with Varying G" (November 21, 2018).

Further afield and mostly unrelated is the possibility that much of the filamentary large scale structure of the universe could be driven by magnetism, which is usually assumed to be negligible and uninfluential in interstellar space. But, maybe not:
Evidence repeatedly suggests that cosmological sheets, filaments and voids may be substantially magnetised today. The origin of magnetic fields in the intergalactic medium is however currently uncertain. We discuss a magnetogenesis mechanism based on the exchange of momentum between hard photons and electrons in an inhomogeneous intergalactic medium. Operating near ionising sources during the epoch of reionisation, it is capable of generating magnetic seeds of relevant strengths over scales comparable to the distance between ionising sources. Furthermore, when the contributions of all ionising sources and the distribution of gas inhomogeneities are taken into account, it leads, by the end of reionisation, to a level of magnetisation that may account for the current magnetic fields strengths in the cosmic web.
Mathieu Langer, Jean-Baptiste Durrive "Magnetising the Cosmic Web during Reionisation" (November 22, 2018).

MORE INTERESTING PAPERS (NO TIME TO FORMAT THEM, A RICH LOAD OF PAPERS TODAY FOR SOME REASON, PERHAPS A PRE-THANKSGIVING RUSH TO WRAP STUFF UP):

arXiv:1811.09197 [pdf, other]
Large-scale redshift space distortions in modified gravity theories
César Hernández-Aguayo, Jiamin Hou, Baojiu Li, Carlton M. Baugh, Ariel G. Sánchez
Comments: 18 pages, 11 figures, submitted to MNRAS
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)

Measurements of redshift space distortions (RSD) provide a means to test models of gravity on large-scales. We use mock galaxy catalogues constructed from large N-body simulations of standard and modified gravity models to measure galaxy clustering in redshift space. We focus our attention on two of the most representative and popular families of modified gravity models: the Hu & Sawicki f(R) gravity and the normal branch of the DGP model. The galaxy catalogues are built using a halo occupation distribution (HOD) prescription with the HOD parameters in the modified gravity models tuned to match the number density and the real-space clustering of BOSS-CMASS galaxies. We employ two approaches to model RSD: the first is based on linear perturbation theory and the second models non-linear effects on small-scales by assuming standard gravity and including biasing and RSD effects. We measure the monopole to real-space correlation function ratio, the quadrupole to monopole ratio, clustering wedges and multipoles of the correlation function and use these statistics to find the constraints on the distortion parameter, β. We find that the linear model fails to reproduce the N-body simulation results and the true value of β on scales s < 40 Mpc/h, while the non-linear modelling of RSD recovers the value of β on the scales of interest for all models. RSD on large scales (s ≳ 20-40 Mpc/h) have been found to show significant deviations from the prediction of standard gravity in the DGP models. However, the potential to use RSD to constrain f(R) models is less promising, due to the different screening mechanism in this model.

arXiv:1811.09222 [pdf, ps, other]
Beyond the Standard models of particle physics and cosmology
Maxim Yu. Khlopov
Comments: Prepared for Proceedings of XXI Bled Workshop "What comes beyond the Standard models?"
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Astrophysical Phenomena (astro-ph.HE); High Energy Physics - Phenomenology (hep-ph)

The modern Standard cosmological model of the inflationary Universe and baryosynthesis deeply involves particle theory beyond the Standard model (BSM). Inevitably, models of BSM physics lead to cosmological scenarios beyond the Standard cosmological paradigm. We discuss scenarios of dark atom cosmology in the context of the puzzles of direct and indirect dark matter searches, clusters of massive primordial black holes as a source of gravitational wave signals, and an antimatter globular cluster as a source of cosmic antihelium.

arXiv:1811.09578 (cross-list from physics.gen-ph) [pdf, ps, other]
Emergent photons and gravitons
Comments: to appear in Proceedings of the 21st Bled Workshop "What Comes Beyond Standard Models"
Subjects: General Physics (physics.gen-ph); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th)
It is by now no big surprise that spontaneous Lorentz invariance violation (SLIV) can give rise to massless vector and tensor Goldstone modes, identified with the photon and the graviton. The point, however, is that this mechanism is usually considered separately for the photon and the graviton, though in reality they appear together. In this connection, we have recently developed a common emergent electrogravity model, which we would like to present here. This model incorporates ordinary QED and a tensor field gravity mimicking linearized general relativity. The SLIV is induced by length-fixing constraints put on the vector and tensor fields, A_μ² = ±M_A² and H_μν² = ±M_H² (M_A and M_H are the proposed symmetry breaking scales), which possess a much higher symmetry than the model Lagrangian itself. As a result, twelve Goldstone modes are produced in total, collected into vector and tensor field multiplets. While the photon is always a true vector Goldstone boson, the graviton contains pseudo-Goldstone modes as well. In terms of the emerging zero modes, the theory becomes essentially nonlinear and contains many Lorentz- and CPT-violating interactions. However, as argued, these do not contribute to processes that might lead to physical Lorentz violation. Nonetheless, how the emergent electrogravity theory could be observationally distinguished from conventional QED and GR is also briefly discussed.
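The count of twelve Goldstone modes quoted in the abstract can be checked with back-of-the-envelope arithmetic (my own counting, not taken from the paper): a length-fixing constraint freezes one component of each field.

```python
# Back-of-the-envelope mode counting for the "twelve Goldstone modes":
vector_components = 4          # A_mu in 4 dimensions
tensor_components = 10         # symmetric H_mu_nu in 4 dimensions
length_fixing_constraints = 2  # A_mu^2 = ±M_A^2 and H_mu_nu^2 = ±M_H^2

goldstone_modes = vector_components + tensor_components - length_fixing_constraints
print(goldstone_modes)  # 12
```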
The electron self-energy in QED at two loops revisited
Subjects: High Energy Physics - Phenomenology (hep-ph)
We reconsider the two-loop electron self-energy in quantum electrodynamics. We present a modern calculation, where all relevant two-loop integrals are expressed in terms of iterated integrals of modular forms. As boundary points of the iterated integrals we consider the four cases p² = 0, p² = m², p² = 9m², and p² = ∞. The iterated integrals have q-expansions, which can be used for the numerical evaluation. We show that a truncation of the q-series to order O(q³⁰) gives, for the finite part of the self-energy, a relative precision better than 10⁻²⁰ for all real values of p²/m².
Properties of the decay H→γγ using the approximate α_s⁴ corrections and the principle of maximum conformality
Subjects: High Energy Physics - Phenomenology (hep-ph)
The Higgs boson decay channel H→γγ is one of the most important channels for probing the properties of the Higgs boson. In this paper, we reanalyze its decay width using QCD corrections up to α_s⁴ order. The principle of maximum conformality is adopted to achieve a precise pQCD prediction free of the conventional renormalization scheme-and-scale ambiguities. Taking the Higgs mass as given by the ATLAS and CMS collaborations, i.e. M_H = 125.09 ± 0.21 ± 0.11 GeV, we obtain Γ(H→γγ)|_LHC = 9.364 (+0.076/−0.075) keV.
Lepton and Quark Masses and Mixing in a SUSY Model with Delta(384) and CP
Comments: 1+41 pages, 1 figure, 5 tables
Subjects: High Energy Physics - Phenomenology (hep-ph)
We construct a supersymmetric model for leptons and quarks with the flavor symmetry Delta(384) and CP. The peculiar features of lepton and quark mixing are accomplished by the stepwise breaking of the flavor and CP symmetry. The correct description of the lepton mixing angles requires two steps of symmetry breaking, where tri-bimaximal mixing arises after the first step. In the quark sector the Cabibbo angle satisfies sin theta_C = sin(pi/16) = 0.195 after the first step of symmetry breaking, and it is brought into full agreement with experimental data after the second step. The two remaining quark mixing angles are generated after the third step of symmetry breaking. All three leptonic CP phases are predicted: sin delta^l = -0.936, |sin alpha| = |sin beta| = 1/sqrt(2). The amount of CP violation in the quark sector turns out to be maximal at lowest order and is correctly accounted for when higher order effects are included. Charged fermion masses are reproduced with the help of operators with different numbers of flavor (and CP) symmetry breaking fields. Light neutrino masses, arising from the type-I seesaw mechanism, can accommodate both mass orderings, normal and inverted. The vacuum alignment of the flavor (and CP) symmetry breaking fields is discussed at leading and at higher order.
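The leading-order Cabibbo prediction quoted in the abstract is easy to check numerically; a quick sketch (the comparison value sin θ_C ≈ 0.225 is the well-known measured Cabibbo sine, quoted from general knowledge, not from this paper):

```python
import math

# Leading-order prediction from the abstract: sin(theta_C) = sin(pi/16)
leading_order = math.sin(math.pi / 16)
print(round(leading_order, 3))  # 0.195, as quoted

# The measured value sin(theta_C) ≈ 0.225 is then reached only after
# the second step of symmetry breaking described in the abstract.
```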
arXiv:1811.09378 [pdf, other]
Bound on the graviton mass from Chandra X-ray cluster sample
Sajal Gupta, Shantanu Desai