Thursday, March 26, 2026

SUSY Excluded Up To TeV Scale

The tight agreement between the experimental value of muon g-2 (i.e., the anomalous magnetic moment of the muon) and the Standard Model prediction for that observable, together with other observational constraints, establishes high minimum masses below which supersymmetric particles cannot exist consistently with experiment, while the other constraints also cap how heavy those supersymmetric particles can be.

Generally, these supersymmetric particles would have to have masses in the 1 TeV to 3.8 TeV range (a range for which there is no positive experimental evidence whatsoever, not even anomalies in collider experiments).

Essentially, these estimates arise from the detection limits of the instruments used in the various observations that this study uses to constrain supersymmetric particle masses.

For example, direct dark matter detection experiments lose meaningful experimental power at about 1 TeV, because for such heavy dark matter particles the number of expected interactions is smaller and the statistical power of the experiments collapses.
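The scaling behind that loss of power is simple: at fixed local dark matter density, the number density of particles, and hence the expected event rate at fixed cross-section, falls as 1/m. A back-of-envelope sketch (the 0.3 GeV/cm³ local density is the conventional benchmark; real rate calculations also fold in velocity distributions and nuclear form factors):

```python
# Back-of-the-envelope: at fixed WIMP-nucleon cross-section, the expected
# event rate in a direct-detection target tracks the local number density
# of dark matter particles, n = rho / m.  Illustrative scaling only.

RHO_LOCAL = 0.3  # GeV/cm^3, conventional local dark matter density benchmark

def relative_event_rate(m_gev: float) -> float:
    """Number density of DM particles (per cm^3) for a particle of mass m (GeV)."""
    return RHO_LOCAL / m_gev

r1 = relative_event_rate(1000.0)   # 1 TeV WIMP
r2 = relative_event_rate(2000.0)   # 2 TeV WIMP
print(r2 / r1)  # 0.5 -- doubling the mass halves the expected event rate
```

So each step up in candidate mass directly thins out the event sample an experiment of fixed exposure can hope to collect.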

In the muon g-2 calculation, virtual top quark loops are not an important component of the hadronic vacuum polarization (HVP) calculation (and by definition are not part of the hadronic light-by-light component), and particles of 1 TeV or more with limited couplings to ordinary Standard Model particles would impact the muon g-2 HVP calculation even less.
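This decoupling can be made concrete: a generic heavy particle's one-loop contribution to the muon anomalous moment is suppressed as (m_μ/M)². A rough scaling sketch (couplings and loop factors omitted; the comparison masses are illustrative):

```python
# Heavy new particles decouple from the muon anomalous magnetic moment:
# a generic one-loop contribution scales as (m_mu / M)^2, so pushing a
# new state from the electroweak scale up to 1 TeV suppresses its effect
# by orders of magnitude.  Couplings and loop factors are omitted here.

M_MU = 0.1057  # muon mass, GeV

def g2_scaling(mass_gev: float) -> float:
    """Relative size of a heavy particle's one-loop g-2 contribution."""
    return (M_MU / mass_gev) ** 2

suppression = g2_scaling(100.0) / g2_scaling(1000.0)
print(round(suppression))  # 100: a 1 TeV state is 100x more suppressed than a 100 GeV one
```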

The Large Hadron Collider (LHC) supersymmetry exclusions range from the many hundreds of GeVs to the low TeVs. 

Once those exclusions are accounted for, the authors simply fit their dark matter candidate's mass to the overall dark matter abundance estimated from the cosmic microwave background by the Planck experiment.

From the paper:

Current LHC searches already place important constraints on light sbottom scenarios. For example, ATLAS has excluded sbottom masses up to ∼ 1.5 TeV and ∼ 0.85 TeV in decay chains involving b̃₁ → b χ̃⁰₂ → b h χ̃⁰₁ for Δm(χ̃⁰₂, χ̃⁰₁) = 130 GeV [52, 53]. Similarly, sbottom masses up to ∼ 1.6 TeV may be excluded for b̃₁ → t χ̃⁰₂ with Δm(χ̃±₁, χ̃⁰₁) = 100 GeV [54]. In the simplified topology b̃₁ → b χ̃⁰₁ (b-jets + missing E), NLSP sbottom masses up to ∼ 1.27 TeV for massless neutralinos can be excluded. In the compressed regime, m(b̃₁) ≃ m(χ̃⁰₁), dedicated searches exploiting displaced vertices and soft b-jet signatures exclude NLSP sbottom masses up to ∼ 660 GeV for Δm(b̃₁, χ̃⁰₁) ∼ 10 GeV [55], while monojet-based analyses constrain sbottom masses up to ∼ 600 GeV. Notably, for m(b̃₁) ≳ 800 GeV, current analyses do not impose significant limits [55]. [Footnote 1: As discussed earlier, NLSP sbottom solutions are absent for μ > 0 in our scans.] . . .

The LUX-ZEPLIN (LZ) constraints are shown by the solid purple line for LZ (2022), the solid blue line for the most recent LZ (2024) results, and the dotted blue line for the projected LZ 1000-day sensitivity [34–36]. From the m(χ̃⁰₁)–σ_SI plane, we observe that the majority of the relic-density-saturating solutions (red points) lie below the current XENONnT and LZ (2022) exclusion limits. Only a small subset of red points, with neutralino masses in the range 0.3 TeV ≲ m(χ̃⁰₁) ≲ 3.5 TeV, has already been explored by the DD DM experiments. Importantly, a substantial fraction of the viable parameter space remains within the projected sensitivity of upcoming experiments: nearly half of the red points are accessible to the current LZ reach (solid blue curve), while approximately two-thirds of the solutions are expected to be probed by the LZ 1000-day exposure.

But all of those exclusions have nothing to do with positive observations of supersymmetric partners, or even the slightest hints of them, and everything to do with the inability of existing observational evidence to probe the mass ranges in question for possible superpartners.

As a practical matter, this analysis is the dying gasp of a discredited beyond the Standard Model theory, optimized to make it seem like it could be found "just around the corner" of what current experiments allow us to access, but actually just beating a dead horse.

WIMP dark matter made of SUSY superpartner particles, for example, is already effectively ruled out by other data, such as galaxy dynamics and the inferred shape of dark matter halos in a dark matter paradigm.

The paper's abstract states:
Driven by the growing agreement between the experimentally measured muon anomalous magnetic moment and its SM prediction, we reexamine phenomenological consequences of the MSSM, which is embedded in the supersymmetric SU(4)_C × SU(2)_L × SU(2)_R Pati-Salam model. In contrast to earlier studies that predominantly favored a specific sign for the Higgsino mass parameter, our analysis systematically explores both μ > 0 and μ < 0 scenarios in light of current collider, cosmological, and DM constraints.
Within this framework, we identify viable parameter space regions where the observed DM relic density is reproduced through multiple mechanisms: sbottom-neutralino, gluino-neutralino, stop-neutralino, stau-neutralino, and chargino-neutralino coannihilation, as well as resonant s-channel annihilation via the pseudoscalar Higgs boson. We demonstrate that all such scenarios are consistent with present bounds from LHC supersymmetry searches, the Planck 2018 DM relic density bound, and current limits from DD DM searches.
Our results reveal characteristic mass spectra associated with these mechanisms. In particular, sbottom-neutralino coannihilation typically requires sbottom masses near 2.8 TeV, while gluino-neutralino and stop-neutralino coannihilation scenarios allow gluino masses in the range 1-3 TeV and stop masses between 1 and 3.5 TeV. In coannihilation-dominated regions, the stau and chargino masses may reach values as high as 3.8 TeV, whereas viable A resonance solutions are realized for pseudoscalar Higgs masses spanning approximately 1.6-3.8 TeV. We anticipate that a portion of the parameter space will be accessible to supersymmetry searches in LHC Run-3 and future runs.
Ali Muhammad, et al., "LHC Run-3, Dark Matter and Supersymmetric Spectra in the Supersymmetric Pati-Salam Model" arXiv:2603.24152 (March 25, 2026).

The Particle Data Group's compilation of SUSY limits is, in my humble opinion, too timid and is not a very fair statement of how much of the SUSY parameter space has been excluded.

10 comments:

neo said...

what do you think about CERN building a new 100 TeV, 91 km collider after the LHC at a cost of untold billions? China is backing out.
I think a magnet upgrade and HE-LHC and reusing the same tunnel is more feasible.

andrew said...

I think that it is unwise to build a new collider at this point without a clear research agenda to motivate it. We'd be better off pausing for a while.

neo said...

do you think HE-LHC is a new collider or just an upgrade of the current one? 14 TeV to 27 TeV

andrew said...

Neither. Wait for 5-10 years and see what makes sense then.

Guy said...

There are many other pressing issues to be investigated. However to keep HEP-Phenom viable as a field I think the magnet upgrade is a reasonable and affordable path forward.

neo said...

@guy I agree with you, and perhaps in 50-70 years another magnet upgrade for HE-LHC, reusing the 27 km tunnel rather than digging a new 100 km tunnel

jd said...

To go to higher energies in the same tunnel requires higher magnetic fields. This lowers the critical temperature margin and raises the risk of quenches, with more energy being dumped in each quench. Quenches happen even in the current accelerator. The only solution found so far is just to manage the energy dump. So work it out.

The last step up in energy was an order of magnitude, from 1 to 14 TeV. To go up only a factor of two is an act of desperation. There is little chance of learning anything new and significant. Yes, one needs to have some new ideas on how to proceed. Quit doing the same thing over and over.
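The quench concern scales quickly with field strength: the energy density stored in a magnet's field goes as B²/(2μ₀). A rough sketch (ignoring magnet geometry and protection-system details; 8.33 T and 16 T are the LHC and HE-LHC dipole figures discussed in this thread):

```python
# The energy stored in a dipole's magnetic field scales as B^2
# (energy density u = B^2 / (2 * mu_0)), so a quench at higher field
# must dump disproportionately more energy.  Illustrative scaling only.

import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def energy_density(b_tesla: float) -> float:
    """Magnetic energy density in J/m^3 for a field B (tesla)."""
    return b_tesla ** 2 / (2.0 * MU_0)

ratio = energy_density(16.0) / energy_density(8.33)
print(round(ratio, 2))  # ~3.69: a 16 T magnet stores ~3.7x the field energy of an 8.33 T one
```

So roughly doubling the field means the protection system must cope with nearly four times the stored energy per unit volume.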

neo said...

@jd current LHC magnets are 8.33 T. "Towards 16 T Dipole Magnets":

A future circular collider (FCC) with a center-of-mass energy of 100 TeV and a circumference of around 100 km, or an energy upgrade of the LHC (HE-LHC) to 27 TeV require bending magnets providing 16 T in a 50-mm aperture. Several development programs for these magnets, based on Nb3Sn technology, are being pursued in Europe and in the U.S. In these programs, cos-theta, block-type, common-coil, and canted-cos-theta magnets are explored; first model magnets are under manufacture; limits on conductor stress levels are studied; and a conductor with enhanced characteristics is developed. This paper summarizes and discusses the status, plans, and preliminary results of these programs.
https://ieeexplore.ieee.org/document/8645687/
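As a rough sanity check on the figures quoted in this thread: for a circular proton collider, the beam energy scales approximately as E ∝ B·r (dipole field times bending radius). A back-of-envelope sketch using the LHC's 14 TeV / 8.33 T / 26.7 km as the baseline (synchrotron fill factors and other details are ignored):

```python
# Sanity check: for a circular proton collider, center-of-mass energy
# scales roughly as E ∝ B * r (dipole field times bending radius), so at
# fixed tunnel size energy tracks the magnet field.
# Baseline (assumed): LHC at 14 TeV c.m., 8.33 T dipoles, 26.7 km ring.

E_LHC, B_LHC, C_LHC = 14.0, 8.33, 26.7  # TeV, tesla, km

def cm_energy(b_field: float, circumference: float) -> float:
    """Scaled center-of-mass energy (TeV) relative to the LHC baseline."""
    return E_LHC * (b_field / B_LHC) * (circumference / C_LHC)

print(round(cm_energy(16.0, 26.7), 1))   # ~26.9 TeV: HE-LHC, the quoted 27 TeV
print(round(cm_energy(16.0, 100.0), 1))  # ~100.7 TeV: FCC-hh, the quoted 100 TeV
```

The quoted 27 TeV (same tunnel, 16 T) and 100 TeV (100 km, 16 T) numbers both fall straight out of this one scaling relation.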

andrew said...

"The last step up in energy was an order of magnitude from 1 to 14 Tev. To go up only a factor of two is an act of desperation. There is little chance of learning anything new and significant. Yes, one needs to have some new ideas on how to proceed. Quit doing the same thing over and over."

Precisely!

jd said...

I repeat: to go up to only 27 TeV is an act of desperation.

Nb3Sn is not a high-Tc superconductor. Running at higher magnetic fields coupled with higher currents lowers the critical temperature. This requires lower temperature helium production. The magnets are less stable, with a higher risk of quenches. In a quench with higher magnetic fields, more energy has to be dumped. The quench mitigation has to handle this, and it is likely the present method is not up to the task. Early on, when the LHC was started up, it had a quench with no mitigation. Tens of millions in damage, and mitigation was installed. The Tokamak people are also having to cope with this. They are designing for high temperature superconductors with quench mitigation. Quenches happen.