Pages

Saturday, June 29, 2019

Vegetarian Crocodiles Evolved On Three Independent Occasions

The New York Times has a fascinating article about the vegetarian episodes in crocodile evolution. The lede is as follows:
Imagine you’re a small mammal of the Mesozoic. Snuffling around one day, you run into a cat-size, scaly, big-eyed reptile that looks not unlike a crocodile found later in the 21st century. Spotting you, he opens his mouth wide to reveal … tiny, intricate teeth. Then he turns his head and munches on some leaves. 
Such encounters may have been common in prehistory. Research published Thursday in Current Biology suggests that vegetarianism evolved at least three separate times in ancient crocs — a conclusion reached after scientists studied the unusual teeth sported by many species, including the Simosuchus described above.
There's nothing paradigm breaking about the results, but they're still pretty neat and expand your sense of what is possible. 

Wednesday, June 26, 2019

Why Does Deur's Approach To Quantum Gravity Deserve More Attention?

In my informed, but not necessarily expert, opinion, the most promising work on quantum gravity in the world today has been done by Alexandre Deur, and I've devoted a permanent page on this blog to discussing this work. I think it is more likely than not that this approach is the correct route to explaining quantum gravity.

In a nutshell, this approach explains dark matter and dark energy as second-order quantum gravity effects, neglected in conventional applications of general relativity, that arise from proper consideration of the self-interactions of gravitons. Dark matter phenomena are viewed as an excess concentration of gravitons in the vicinity of matter, at the expense of gravitons going in other directions, due to their self-interaction. Dark energy phenomena follow from the same effect: because gravitons partially stay within gravitational systems like galaxies, the flow of gravitons between such systems is reduced, which reduces the gravitational pull of those systems on each other. Thus, dark energy phenomena arise because the gravitational pull between systems is weakened, not because something else is pulling these systems apart from each other.
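The underlying idea can be sketched in standard notation (this is a schematic paraphrase of the well-known weak-field expansion, not a quote from Deur's papers): expanding the metric around flat spacetime as g = η + √(16πG) h turns the Einstein-Hilbert Lagrangian into a series whose cubic and higher terms are graviton self-interactions, structurally analogous to the gluon self-couplings of QCD.

```latex
% Schematic weak-field expansion of the Einstein-Hilbert Lagrangian
% (indices and numerical coefficients suppressed):
\mathcal{L}_{EH} \sim [\partial h\,\partial h]
  + \sqrt{16\pi G}\,[h\,\partial h\,\partial h]
  + 16\pi G\,[h^{2}\,\partial h\,\partial h] + \cdots
```

The first term yields free gravitons (and, classically, Newtonian gravity); the self-interaction terms are what this approach argues become non-negligible for massive, non-spherically-symmetric systems such as disk galaxies.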

To be clear, I am not saying that Deur's approach to quantum gravity has been rigorously confirmed, although several of his articles have been published in peer reviewed journals. 

But, I am saying that he is doing something that no one else in the quantum gravity field is doing, and that his approach is the most promising one yet proposed, because it simultaneously addresses several of the most important unsolved questions in physics that are outstanding today.

Also, even if he isn't 100% correct, many of the ideas that make up this package deserve more attention and may provide a basis for advances even if other parts of the package don't pan out.

Why should you consider my opinion?

I read the abstracts of almost every astronomy, general relativity, high energy physics experiment, high energy physics lattice, and high energy physics phenomenology preprint on arXiv, and have done so reasonably (although not perfectly) consistently since I started this blog more than eight years ago. I go deeper and read some or all of the body text of several of those articles almost every weekday. I also follow all of the physics blogs shown in the sidebar on a regular basis, and have done additional research on things I don't understand or want to know more about from secondary sources on the web, from hard copy physics books oriented towards educated laymen, and from math and physics textbooks in hard copy (some of which go beyond what I studied as an undergraduate).

I don't have the depth of knowledge of a PhD in physics or astrophysics, but I have a very broad understanding of the latest work in high energy physics, and the latest work on dark matter and dark energy phenomena, from the perspective of all of the relevant disciplines (although I admit that high energy physics theory is not my cup of tea and that I am less up to date in that area, seeing only papers that are cross-listed, discussed at the Physics Forums, or discussed at Physics Stack Exchange, for the most part).

Why is this approach to quantum gravity so great? Here are twenty reasons:

1. It explains "dark matter" phenomena and "dark energy" phenomena without introducing new particles or new forces. 

2. It explains gravity (in principle, at least) with one fewer fundamental constant (the cosmological constant) than general relativity, and two fewer than a relativistically generalized MOND theory like TeVeS.

3. It does not require extra dimensions.

4. It does not introduce Lorentz symmetry violations.

5. It is the only gravity theory of which I am aware that explains "dark energy" phenomena in a manner that does not violate conservation of mass-energy, require new non-Standard Model/non-General Relativity particles, or attribute properties (such as inherent curvature) to the aether. It also explains the "cosmic coincidence" problem (i.e., that matter, dark matter and dark energy are present in quantities of roughly the same order of magnitude).

6. It is based upon interactions of the almost universally hypothesized particle, a graviton with zero rest mass. This means, for example, that it is consistent with the evidence regarding the speed of gravity derived from the merger of two neutron stars observed with both gravitational wave detectors and telescopes.

7. It allows gravitational mass-energy to be localized to the same extent as any Standard Model interaction, and treats graviton-graviton interactions just like all other graviton-fundamental particle interactions.

8. It derives its phenomenological conclusions from first principles at the fundamental particle and interaction level in a very conventional manner that relies on no axioms that aren't entirely mainstream in quantum gravity research.

9. Because the first principles origins of the theory are known, it is possible to determine the correct form of equation to use in a way not possible with a purely phenomenologically derived description of what is observed.

10. Most investigators in the field agree that quantum gravity should be described by a non-Abelian quantum field theory with a self-interacting carrier boson, and its notable phenomenological conclusions flow directly from this aspect of this approach.

11. The phenomenological conclusions reached have close analogs in the best understood non-Abelian quantum field theory with a self-interacting carrier boson (i.e. QCD), in line with many prior investigators who have pursued the quantum gravity as "QCD squared" approach.

12. At least at a back-of-the-napkin level, it is consistent with observed dark matter phenomena at the galaxy scale, at the galactic cluster scale, in the Bullet Cluster, and at the cosmology scale, although this has not been fleshed out and rigorously confirmed to the full extent that would be desirable.

13. The approach used to do calculations with the theory (which overcomes the common practical concern that naive quantum gravity using a massless spin-2 graviton isn't renormalizable) relies on a scalar graviton approximation, equivalent to a static equilibrium case in a tensor gravity theory, and it works. It is also possible to demonstrate that, for the non-relativistic, astronomy-scale processes it is being used to model, the theoretical error introduced by using a scalar (i.e. spin-0) graviton approximation in lieu of a complete tensor (i.e. spin-2) graviton treatment is acceptably low. By analogy, in general relativity, using a nearly Newtonian approximation in systems of these types introduces only very modest theoretical error.

14. It has predicted something (the relationship between the amount of apparent dark matter in elliptical galaxies and their shape) that was subsequently supported by the evidence.

15. It is mathematically consistent with the Standard Model and uses essentially the same mathematical framework as the Standard Model.

16. The respects in which it is contrary to conventional classical General Relativity, as commonly implemented by physicists today, give rise only to consequences that have not been tightly established by observational or experimental evidence.

17. It is reassuring that this work has been done by a professional physicist with a sophisticated mathematical background, and that it does not appeal to numerology or, for example, "Vedic physics."

18. The fact that it relies upon mathematical approaches that people with a QCD specialty, like Deur, are very familiar with, but that are less familiar to people who specialize in general relativity or astronomy, helps to explain why no one else has come up with this in the last century, as does the fact that QCD and its associated mathematical conclusions are themselves only about 50 years old.

19. The fact that this approach comes to some conclusions (e.g. gravity can be localized, gravitational fields are an input, dark energy phenomena can arise without violating conservation of mass-energy) that are contrary to conventional, widely accepted maxims of general relativity can help explain why no one else has come up with this in the last century. This factor and the previous one also explain why this approach has not received a lot of attention and support from other researchers in the field.

20. If this approach were validated, refined and widely adopted, it would pretty much be "the end of fundamental physics," as it would leave no phenomena in the universe, other than in the first microsecond after the Big Bang, not fully described by the Standard Model plus this approach to quantum gravity. The main remaining inquiry would be to look for some deeper theory to explain why the many physically measured parameters of the Standard Model and quantum gravity have the values that they do. It would, for example, end the empirical motivation to look for new types of fundamental particles (other than the graviton) not found in the Standard Model.
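On point 13 above: the QCD analogy can be made concrete with the Cornell potential of quarkonium, in which gluon self-interaction adds a linear confining term to the Coulomb-like one. The scalar-approximation results have a similar structure for massive, flattened systems (the coefficient in the second equation is schematic, a placeholder rather than a value from the papers).

```latex
% QCD: Coulomb-like term plus confining linear term (Cornell potential)
V_{\mathrm{QCD}}(r) = -\frac{4}{3}\,\frac{\alpha_s}{r} + \sigma\, r
% Gravitational analog in the static scalar approximation:
% self-interaction adds a system- and geometry-dependent linear correction
V(r) \approx -\frac{G M}{r} + c(M,\text{geometry})\, r
```

In both cases the linear term is negligible at short range and dominant at large range, which is the qualitative behavior needed to mimic "dark matter" effects in galactic outskirts.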

What Unsolved Problems Of Physics Does It Solve?

Wikipedia maintains a list of unsolved problems in physics. This approach solves (or at least addresses or renders irrelevant) many of them:


  • Quantum gravity: Can quantum mechanics and general relativity be realized as a fully consistent theory (perhaps as a quantum field theory)?[21] Is spacetime fundamentally continuous or discrete? Would a consistent theory involve a force mediated by a hypothetical graviton, or be a product of a discrete structure of spacetime itself (as in loop quantum gravity)? Are there deviations from the predictions of general relativity at very small or very large scales or in other extreme circumstances that flow from a quantum gravity theory?
  • Extra dimensions: Does nature have more than four spacetime dimensions? If so, what is their size? Are dimensions a fundamental property of the universe or an emergent result of other physical laws? Can we experimentally observe evidence of higher spatial dimensions?
  • Galaxy rotation problem: Is dark matter responsible for differences in observed and theoretical speed of stars revolving around the centre of galaxies, or is it something else?
  • Quantum chromodynamics: . . . What determines the key features of QCD, and what is their relation to the nature of gravity and spacetime?

Monday, June 24, 2019

Genetic Corroboration That Lithuanian Is Fairly Basal Within The IE Languages

This paper (hat tip: Eurogenes), in contrast to the paper discussed in the next post, has an excellent and informative abstract that provides insight into the genetic origins of Lithuanians in a historical context, and also into the proper place of the Lithuanian language in the Indo-European family tree.
When combining the new data we generated with external datasets, we confirmed that Lithuanians locate within the expected European context, even though they also present particular genetic distinctiveness when compared to neighbouring populations. In addition, the inclusion of ancient individuals from different periods across western Eurasia in the analysis allowed us to distinguish the genetic signature of three main prehistorical sources shaping the distinctiveness of present-day Lithuanians: pre-Neolithic HG groups, the Early to Middle Bronze Age Steppe pastoralists and Late Neolithic Bronze Age (LNBA) Europeans. Moreover, up to three HG populations can be inferred to contribute to the main genetic component identified the Lithuanians being the contribution of the WHG and the Scandinavian HG greater than that of the EHG. On the contrary, earlier European Neolithic movements from Levant/Anatolia known to contribute to genetically differentiated populations in Europe such as Sardinians or Basques are not especially predominant in Lithuania. 
Partial genetic isolation of the Lithuanians is a possible explanation for the structure results observed. Until the late Middle Ages, the eastern Baltic region was one of the most isolated corners of Europe [27]. Moreover, after the fall of the Roman Empire in the 5th century, the eastern Baltic region was spared by the subsequent population movements of the Migration Period [26,28], which allowed the most archaic of all the living speaking Indo-European languages [1] to survive. Thus, Lithuanians could retain their cultural identity.
Urnikyte et al., Patterns of genetic structure and adaptive positive selection in the Lithuanian population from high-density SNP data, Scientific Reports volume 9, Article number: 9163 (2019), DOI: https://doi.org/10.1038/s41598-019-45746-3

How Not To Write An Abstract

One of the minor annoyances of life, and one of my pet peeves, is when the abstract of an academic journal article announces that it has reached an important conclusion on a hot issue identified in the abstract, but doesn't tell you what conclusion it reached, even though it could be stated in a few words.

The following article fits the bill (although I do applaud its implementation of the emerging practice of listing in the author line only the corresponding lead author and the collaboration for whom the author speaks, rather than every participant).

It announces that it tested lepton universality violation in the decay of charm quarks, which is a potential violation of the Standard Model suggested by other experiments, but doesn't say in the abstract what it concluded.
Leptonic and semileptonic decays in the charm sector have been well studied in recent years. With the largest data sample near DD¯ threshold, precision measurements of leptonic and semileptonic decays of charm meson and baryon are perfromed at BESIII. Test for letpon flavor universality is also performed. Sensitivity for rare leptonic and semileptonic charm decays is significantly improved taking advantage of the huge statistics in LHCb and the B factories.
S. F. Zhang (On behalf of the BESIII Collaboration) "Experimental study for leptonic and semileptonic decays in the charm sector" (June 21, 2019).

The conclusion, however, is a notable negative result, disfavoring results that have appeared to differ from the Standard Model assumption that charged leptons are identical in all respects except mass, including weak force transition probabilities. As the concluding summary in the paper explains:
In summary, BESIII has improved the precision of decay constants, form factors and CKM matrix elements in the charm sector with recent measurements. Meanwhile, LFU test at a very high precision (1.5% for Cabbibo favoured decays and 4% for Cabbibo suppressed decays) has been performed while no evidence of violation is found. Search for charm semileptonic decays to scalar mesons were performed at BESIII and the current results are in favor of the SU(3) nonet tetraquark description of a0(980), f0(500) and f0(980). Moreover, our sensitivity to rare charm leptonic and semileptonic decays has been improved by several magnitudes with the huge statistics at LHCb, and strong constraints have been set for various new physics models with recent measurements. With more data coming from BESIII, LHCb and BelleII, experiment study of charm leptonic and semileptonic decays will be further improved in the future.
Another of my minor peeves with regard to arXiv is that it doesn't have categories that distinguish between proposed experiments and searches, such as this one related to Belle II, this one proposing an ATLAS search, and this one at a proposed LHeC experiment, and actual experimental results.

Analysis

For what it is worth, I would really like to see a good review article attempting to reconcile results like this one, which do not find LFU (lepton flavor universality) violation to a high precision, with the results that do, rather than cherry picking one or the other. I've blogged a lot of papers going each way, but haven't had the time or mental space to really try to determine if the results are simply contradictory, or if there is something special distinguishing the apparently LFU-violating cases from the LFU-conserving ones.

I am particularly critical of papers that have tried to combine multiple LFU-violating results (each at low individual significance) to get a higher combined significance, without considering either look-elsewhere effects or the experimental results that do not show LFU violation. My intuition is that the statistical significance of the LFU-violating experiments, considered in that manner, is much lower than actually claimed.
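A toy calculation illustrates the point. The numbers below are illustrative, not drawn from any actual paper: combining four independent 2-sigma anomalies naively yields a 4-sigma result, but a look-elsewhere correction for the (hypothetical) number of comparable channels one could have searched deflates it substantially.

```python
# Sketch: why naively combining several ~2 sigma anomalies can
# overstate the evidence. All numbers here are illustrative.
from math import sqrt, erf

def p_from_z(z):
    # One-sided tail probability of a standard normal distribution.
    return 0.5 * (1 - erf(z / sqrt(2)))

# Stouffer combination of k independent measurements, each a 2 sigma
# excess: z_comb = sum(z_i) / sqrt(k) = z * sqrt(k).
k = 4
z_each = 2.0
z_comb = z_each * k / sqrt(k)   # 4.0 sigma
p_local = p_from_z(z_comb)

# Look-elsewhere correction: if the anomaly was selected from among
# N independent places one could have looked, the global p-value grows.
N = 20  # hypothetical number of comparable channels searched
p_global = 1 - (1 - p_local) ** N  # roughly N * p_local for small p

print(f"combined z = {z_comb:.1f} sigma")
print(f"local  p = {p_local:.2e}")
print(f"global p = {p_global:.2e}")
```

The global p-value here is roughly twenty times the local one, i.e. the "4 sigma" combination corresponds to only about 3.2 sigma of global significance, before even folding in the null results.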

But, part of that analysis requires some discrimination regarding what experiments should and should not be included as similar enough to be considered as part of the same global average.

For example, is it correct to lump decays of beauty mesons with decays of charmed mesons in this analysis? You can inappropriately exaggerate the significance of a result by making a distinction without a difference. But, it is also possible that there is some good theoretical reason for there to be LFU violation in some experiments but not in others.

Likewise, it is also possible that apparent LFU violation in multiple seemingly independent measurements of similar decays is actually subject to correlated systematic errors, because everyone in the field doing similar experiments is inclined, as a consequence of a common educational background and sub-disciplinary culture, to make discretionary choices in experimental design that lead to systematic error in the same way.

So, an author of such a review article needs to have considerable wisdom and understanding regarding both the larger theoretical issues, and practical methodological details of these experiments, to reach insightful and correct conclusions about the likelihood of the existence or non-existence of LFU violation from a comprehensive review of experiments that test LFU.

Wednesday, June 19, 2019

Stacy McGaugh On Astronomy v. Astrophysics

There is more good stuff in the latest post at Triton Station, but this quote clears up important semantic issues and makes an important observation about the scientific effort to understand dark matter phenomena. 
When I say dark matter, I mean the vast diversity of observational evidence for a discrepancy between measurable probes of gravity (orbital speeds, gravitational lensing, equilibrium hydrostatic temperatures, etc.) and what is predicted by the gravity of the observed baryonic material – the stars and gas we can see. When a physicist says “dark matter,” he seems usually to mean the vast array of theoretical hypotheses for what new particle the dark matter might be. . . . 

To date, the evidence for dark matter to date is 100% astronomical in nature. That’s all of it. Despite enormous effort and progress, laboratory experiments provide 0%. Zero point zero zero zero. And before some fool points to the cosmic microwave background, that is not a laboratory experiment. It is astronomy as defined above: information gleaned from observation of the sky. That it is done with photons from the mm and microwave part of the spectrum instead of the optical part of the spectrum doesn’t make it fundamentally different: it is still an observation of the sky.
One could arguably slightly amend one sentence of the post to say instead: "To date, the positive empirical evidence for dark matter is 100% astronomical in nature."

This is because, while there is no positive empirical evidence for dark matter from any source other than astronomical observation, there are two other important means by which we come to better understand dark matter phenomena.

One is computational work (both analytical and N-body) that looks at existing theories and select modifications of them to see what those theories predict and whether those theories are internally consistent and consistent with other laws of physics that are believed to be true.

The second is negative empirical evidence from laboratory-type experiments, such as particle collider experiments. Empirical evidence that rules out a possible explanation of something is still important empirical evidence, even though it can't provide us with an answer all by itself. Efforts to understand dark matter phenomena benefit greatly from negative empirical evidence that rules out a wide swath of dark matter particle theories including most of the parameter space for what was initially the most popular dark matter particle candidate: the supersymmetric WIMP.

Now, in fairness to the original language, negative empirical evidence is, strictly speaking, evidence "against dark matter" rather than "for it," even though it is still important evidence in the overall scientific inquiry. And, arguably, the computational work is something you do with evidence, rather than evidence itself. But the output of an analytic calculation or an N-body simulation is used in a manner very similar to that of observational evidence from astronomy and laboratory work, so maybe it is a distinction without a difference.

Meanwhile:

* "Brace for the oncoming deluge of dark matter detectors that won’t detect anything" with Twitter commentary.



* Dark matter interpretations of the gamma ray excess at the galactic center seen by the Fermi Gamma-Ray Space Telescope also don't look promising.

* This February 2019 conference on dark matter and modified gravity would have been great to attend.


Monday, June 17, 2019

CDM Fails Again In Describing Low Surface Brightness Galaxies

Empirically, low surface brightness galaxies (mostly, but with notable exceptions discussed in prior posts at this blog) show large "dark matter" effects in their dynamics. If they were "failed" spiral galaxies residing in large halos, their X-ray emissions should be high; otherwise, they should be small. But the observed low surface brightness galaxies are a poor fit to the CDM predictions to which they are compared in a new paper. They are, however, consistent with what a MOND-like theory would predict, where the apparent dark matter is due to dispersed matter distributions rather than to halos creating "failed" spiral galaxies.

Constraining the dark matter halo mass of isolated low-surface-brightness galaxies

Recent advancements in the imaging of low-surface-brightness objects revealed numerous ultra-diffuse galaxies in the local Universe. These peculiar objects are unusually extended and faint: their effective radii are comparable to the Milky Way, but their surface brightnesses are lower than that of dwarf galaxies. Their ambiguous properties motivate two potential formation scenarios: the "failed" Milky Way and the dwarf galaxy scenario. In this paper, for the first time, we employ X-ray observations to test these formation scenarios on a sample of isolated, low-surface-brightness galaxies. Since hot gas X-ray luminosities correlate with the dark matter halo mass, "failed" Milky Way-type galaxies, which reside in massive dark matter halos, are expected to have significantly higher X-ray luminosities than dwarf galaxies, which reside in low-mass dark matter halos. We perform X-ray photometry on a subset of low-surface-brightness galaxies identified in the Hyper Suprime-Cam Subaru survey, utilizing the XMM-Newton XXL North survey. We find that none of the individual galaxies show significant X-ray emission. By co-adding the signal of individual galaxies, the stacked galaxies remain undetected and we set an X-ray luminosity upper limit of L(0.3–1.2 keV) ≤ 6.2×10^37 (d/65 Mpc)^2 erg s^−1 for an average isolated low-surface-brightness galaxy. This upper limit is about 40 times lower than that expected in a galaxy with a massive dark matter halo, implying that the majority of isolated low-surface-brightness galaxies reside in dwarf-size dark matter halos.
Comments: 6 pages, 2 figures, accepted for publication to The Astrophysical Journal Letters
Subjects: Astrophysics of Galaxies (astro-ph.GA); High Energy Astrophysical Phenomena (astro-ph.HE)
Cite as: arXiv:1906.05867 [astro-ph.GA] (or arXiv:1906.05867v1 [astro-ph.GA] for this version)

Primordial Black Hole Dark Matter Not Quite Ruled Out

There is still a mass window in which primordial black hole dark matter has not been ruled out by astronomical observation. But even if primordial black holes exist in the asteroid-mass range, this still doesn't explain how they would produce the inferred halo distributions that the most popular variants of cold dark matter were designed to address, so this is still probably a dead end.
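To get a feel for the scale of the window quoted in the abstract below, a quick unit conversion from solar masses to kilograms shows why it is called "asteroid-mass":

```python
# Convert the quoted PBH mass window from solar masses to kilograms.
M_SUN_KG = 1.989e30  # solar mass in kg

low = 3.5e-17 * M_SUN_KG   # lower edge of the window, about 7e13 kg
high = 4.0e-12 * M_SUN_KG  # upper edge of the window, about 8e18 kg

print(f"window: {low:.1e} kg to {high:.1e} kg")
# For comparison, the asteroid 433 Eros has a mass of roughly 6.7e15 kg,
# which sits squarely inside this window.
```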

Revisiting constraints on asteroid-mass primordial black holes as dark matter candidates

As the only dark matter candidate that does not invoke a new particle that survives to the present day, primordial black holes (PBHs) have drawn increasing attention recently. Up to now, various observations have strongly constrained most of the mass range for PBHs, leaving only small windows where PBHs could make up a substantial fraction of the dark matter. Here we revisit the PBH constraints for the asteroid-mass window, i.e., the mass range 3.5×10^−17 M☉ < m_PBH < 4×10^−12 M☉. We revisit 3 categories of constraints. (1) For optical microlensing, we analyze the finite source size and diffractive effects and discuss the scaling relations between the event rate, m_PBH and the event duration. We argue that it will be difficult to push the existing optical microlensing constraints to much lower m_PBH. (2) For dynamical capture of PBHs in stars, we derive a general result on the capture rate based on phase space arguments. We argue that survival of stars does not constrain PBHs, but that disruption of stars by captured PBHs should occur and that the asteroid-mass PBH hypothesis could be constrained if we can work out the observational signature of this process. (3) For destruction of white dwarfs by PBHs that pass through the white dwarf without getting gravitationally captured, but which produce a shock that ignites carbon fusion, we perform a 1+1D hydrodynamic simulation to explore the post-shock temperature and relevant timescales, and again we find this constraint to be ineffective. In summary, we find that the asteroid-mass window remains open for PBHs to account for all the dark matter.
Comments: Comments welcome! 43 pages and 8 figures
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
Cite as: arXiv:1906.05950 [astro-ph.CO] (or arXiv:1906.05950v1 [astro-ph.CO] for this version)


What Causes Fast Radio Bursts?

Fast radio bursts have been observed many times since they were first detected in 2007, but there is still no good, widely accepted understanding of what causes them. Wikipedia, quoted below with citations omitted, explains what they are; a new review article, which follows that quotation, sums up the current situation.

The prospects for finding answers to the FRB mystery, as in many lines of research for which astronomy data is relevant, are good, because we are in a golden age of astronomy in which immense waterfalls of data from multiple sources are gushing in, ready to provide answers once they are properly analyzed.
In radio astronomy, a fast radio burst (FRB) is a transient radio pulse of length ranging from a fraction of a millisecond to a few milliseconds, caused by some high-energy astrophysical process not yet identified. While extremely energetic at their source, the strength of the signal reaching Earth has been described as 1,000 times less than from a mobile phone on the Moon. The first FRB was discovered by Duncan Lorimer and his student David Narkevic in 2007 when they were looking through archival pulsar survey data, and it is therefore commonly referred to as the Lorimer Burst. Several FRBs have since been found, including two repeating FRBs. Although the exact origin and cause is uncertain, they are almost definitely extragalactic. When the FRBs are polarized, it indicates that they are emitted from a source contained within an extremely powerful magnetic field. The origin of the FRBs has yet to be identified; proposals for their origin range from a rapidly rotating neutron star and a black hole, to extraterrestrial intelligence.
The localization and characterization of the first detected repeating source, FRB 121102, has revolutionized the understanding of the source class. FRB 121102 is identified with a galaxy at a distance of approximately 3 billion light years, well outside the Milky Way, and embedded in an extreme environment.

Fast Radio Bursts: An Extragalactic Enigma

We summarize our understanding of millisecond radio bursts from an extragalactic population of sources. FRBs occur at an extraordinary rate, thousands per day over the entire sky with radiation energy densities at the source about ten billion times larger than those from Galactic pulsars. We survey FRB phenomenology, source models and host galaxies, coherent radiation models, and the role of plasma propagation effects in burst detection. The FRB field is guaranteed to be exciting: new telescopes will expand the sample from the current 80 unique burst sources (and a few secure localizations and redshifts) to thousands, with burst localizations that enable host-galaxy redshifts emerging directly from interferometric surveys.  
* FRBs are now established as an extragalactic phenomenon.
* Only a few sources are known to repeat. Despite the failure to redetect other FRBs, they are not inconsistent with all being repeaters.  
* FRB sources may be new, exotic kinds of objects or known types in extreme circumstances. Many inventive models exist, ranging from alien spacecraft to cosmic strings but those concerning compact objects and supermassive black holes have gained the most attention. A rapidly rotating magnetar is a promising explanation for FRB 121102 along with the persistent source associated with it, but alternative source models are not ruled out for it or other FRBs.  
* FRBs are powerful tracers of circumsource environments, `missing baryons' in the IGM, and dark matter.  
* The relative contributions of host galaxies and the IGM to propagation effects have yet to be disentangled, so dispersion measure distances have large uncertainties.
Comments: To appear in Annual Review of Astronomy and Astrophysics. Authors' preprint, 51 pages, 18 figures. A version with higher quality figures is available at: this http URL
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
Cite as: arXiv:1906.05878 [astro-ph.HE] (or arXiv:1906.05878v1 [astro-ph.HE] for this version)