Before I get to that, however, a little rant is in order.
If some bright young theorist were proposing these ideas for the first time, a few stumbles while exploring some pretty interesting possibilities would be noteworthy. But, the reality is more alarming. A huge share of the fundamental physics community has devoted almost an entire generation's worth of their efforts to formulating and testing this theory and it is turning out to be a dud.
This is a risk that every theoretical physicist takes. Dozens of new theory papers are published every week, and only one or two of them can hope to be correct. The rest are flights of fancy that, for one reason or another, don't come close to describing our world, even in the best of times. When one theorist pursuing an ideal strikes out, that's O.K.
But, the big problem is that their incredible collective intellectual resources have been ill-used, because a huge proportion of them are simply recycling and noodling over the same old shopworn ideas that have already been found wanting. The answer to the ultimate question of life, the universe and everything is almost surely somewhere out there on the left fork in the road, and 90% of the people trying to answer it are still stuck on the right fork, where they have been for the last thirty years. When a very large share of everyone in this line of work is pursuing the same theory, and that theory turns out to be a dud, it is not O.K.
Basically, far too many people are still asking the wrong questions and getting no new answers as a result. Clearly, a frontal attack on basically aesthetic concerns, like the "hierarchy problem", "naturalness" and gauge unification, which suggested some version of SUSY as a place to look for solutions in the first place, is not getting us anywhere.
Knowing what we know today, and thinking like a Bayesian, we should be removing all but 10% or so of the theoretical physicists who are on that fork from the search for SUSY and telling them to start working on anything but SUSY instead. Ideally, these newly displaced physicists should work on something that no one else has ever developed very well before now.
But, mid-career and late-career theoretical physics researchers have incredible sunk costs that they have incurred in mastering knowledge relevant only to SUSY, so this kind of wholesale repurposing of the theoretical physics workforce is probably an impossible pipe dream. We are institutionally incompetent to devote adequate resources to BSM alternatives other than SUSY any time in the next couple of decades. Add the tenure factor and the future doesn't look bright for a long time to come.
We are basically betting the future of theoretical physics research on Earth for the next few decades on the very small cadre of lone wolves or small packs of researchers doing something different at places like the Perimeter Institute in Canada (arguably the most important institution to the future of physics in the world other than CERN, bar none).
Of course, true believers like Lubos Motl have another view, and maybe he's right on this score. Maybe experimental constraints are sufficiently stringent that any internally consistent theory that also explains the evidence to date, and that is not the Standard Model, has to look a lot like the Standard Model. Maybe once one considers the various no-go theorems and constraints that theoretically limit the BSM alternatives, almost all of theory space is highly constrained to be SUSY-like, or at the very least M-theory-like. The one thing physicists did learn over the last few decades that they didn't know before is that the seemingly distinct classes of models that they had been exploring all turn out to be ultimately equivalent to each other for all practical purposes. If one is careless, it is easy to end up just putting old wine in new skins, which would also be a waste of time. SUSY may not be the only possibility, but maybe it doesn't take many theoretical physicists to explore the mere handful of other alternatives that are still viable.
Still, anytime someone comes up with a "no go" theorem, you always have to wonder if the people who are proposing it and reviewing its proof are really right, or have simply overlooked loopholes in it that they have been insufficiently creative to conceive. What shared assumptions does everyone involved have that they shouldn't? Are we really rightly devoting immense theoretical and experimental resources to exploring possible SUSY theories because there are no other viable options? Or have theoretical physicists, collectively, just become lazy and unoriginal?
Part of the reason that loop quantum gravity, variations on Koide's formula, modified gravity theory research and similar lines of inquiry are interesting, and are discussed at this blog, is that the people doing this kind of research are at least asking questions that haven't been beaten to death for decades by teams of theorists as long as the credits reel of a superhero movie. Even more importantly, after asking these new questions, they are getting new answers and making genuine progress of some kind.
Right, wrong, or "not even wrong", at least they are thinking out of the box, which is the only way that we are going to ever make any progress in theoretical physics after a generation of chasing dead ends and stagnation.
Peter Woit sums up the latest news on SUSY parameter bounds from LHC data. For instance, he notes that:
[Leading SUSY theorist] Arkani-Hamed and co-authors have a recent paper out discussing Simply Unnatural Supersymmetry, i.e. “the simplest picture of the world arising from fine-tuned supersymmetric theories”. Here calculations are done for gluino masses ranging from 1.5 to 15 TeV, and the story is that we’ll have to be lucky to get any experimental evidence for this model. They end with:
If Nature has indeed chosen the path of un-natural simplicity, we will have to hope that she will be kind enough to let us discover this by giving us a spectrum with electroweak-inos lighter than ∼ 300 GeV or gluinos lighter than ∼ 3 TeV.

Put another way, even if SUSY does exist, the LHC has made clear that it is almost completely irrelevant to what we observe in the real world, except at extremely high energy scales that have mostly not been present in nature since not very long after the Big Bang.
SUSY has become the "god of the gaps" for theoretical physicists; a mirage that grows ever more distant as you approach it.
Woit includes a link to a new paper by Jonathan Feng on the impact of new LHC data on the available parameters of plausible supersymmetry theories. Feng sums up his conclusions with the following:
For some varieties of supersymmetry models, the LHC now requires superpartner masses well above 1 TeV, but there are also well-motivated examples in which superpartners may be significantly lighter without violating known bounds. The 125 GeV Higgs boson mass prefers heavy top squarks in the MSSM, and longstanding flavor and CP constraints strongly suggest multi-TeV first and second generation sfermions.
We have especially emphasized the robustness of the EDM constraints, which are present even in flavor-conserving theories. In the absence of a compelling mechanism for suppressing CP violation, the EDM constraints require first generation sfermions to be well above the TeV scale.
Against the backdrop of these indirect constraints, LHC bounds on supersymmetry are significant because they are direct, but they are hardly game-changing. One may like supersymmetry or not, but to have thought it promising in 2008 and to think it much less promising now is surely the least defensible viewpoint. . . .

[W]e have critically examined attempts to quantify naturalness. There are many studies embodying philosophies that differ greatly from each other. We have expressed reservations about some, but for many, one can only acknowledge the subjective nature of naturalness and make explicit the underlying assumptions. Very roughly speaking, however, current bounds are beginning to probe naturalness parameters of [N on the order of 100] to gluino masses of 1 TeV. . . .

[W]e have described a few of the leading frameworks that attempt to preserve naturalness in viable models, giving their key features and implications for experimental searches. . . . Although supersymmetry does not work "out of the box," these models provide longstanding (pre-LHC) and well-motivated frameworks that remain viable and preserve naturalness at the 1% level. . . .

In summary, weak-scale supersymmetry is neither unscathed, nor is it dead. The true status is somewhere in between, and requires a nuanced view that incorporates at least some of the many caveats and subtleties reviewed here. . . . [A]fter the two-year shutdown from 2013-14, the LHC is currently expected to begin running again at 13 TeV in 2015, with initial results available by Summer 2015, and 100 [inverse] fb of data analyzed by 2018. Such a jump in energy and luminosity will push the reach in gluino and squark masses from around 1 TeV to around 3-4 TeV, and probe models that are roughly an order of magnitude less natural.

In other words, if SUSY particles exist, they are considerably heavier than supersymmetry theorists had naively expected not so terribly long ago. Gluinos or stop squarks of less than 1 TeV in mass are disfavored. There are the barest hints in the data that there could be some beyond the Standard Model physics out there, but they aren't very impressive at this point.
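To make Feng's quoted "naturalness parameters of [N on the order of 100]" concrete, here is a rough back-of-the-envelope sketch. It assumes the commonly used Barbieri-Giudice-style sensitivity measure and the tree-level MSSM relation between the Z boson mass and the supersymmetric μ parameter; the specific measure and the sample masses are illustrative assumptions on my part, not numbers taken from Feng's paper:

```python
# Back-of-the-envelope Barbieri-Giudice-style fine-tuning estimate.
# Assumes the tree-level MSSM relation m_Z^2 / 2 ~ -m_Hu^2 - mu^2
# (large tan(beta) limit), so the sensitivity of the Z mass to the
# mu parameter is Delta = |d ln m_Z^2 / d ln mu^2| = 2 mu^2 / m_Z^2.

M_Z = 91.19  # Z boson mass in GeV

def fine_tuning(mu_gev):
    """Fine-tuning measure Delta for a given mu parameter (in GeV)."""
    return 2 * mu_gev**2 / M_Z**2

# Superpartner scales around 650 GeV already imply Delta ~ 100,
# i.e. roughly 1% tuning, matching the "N on the order of 100" quoted above.
for mu in (200, 650, 1000):
    print(f"mu = {mu:4d} GeV  ->  Delta ~ {fine_tuning(mu):.0f}")
```

The point of the exercise is just the scaling: because Δ grows with the square of the superpartner mass scale, pushing bounds from ~1 TeV toward 3-4 TeV worsens the tuning by roughly an order of magnitude, exactly as Feng's summary says.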
The LHC data and the structure of the models suggest that if there are squarks, the heaviest quarks will have the lightest squark superpartners, with the stop (the top quark's superpartner) being the lightest squark of all in an "inverted hierarchy" of superpartner masses.
If one chooses just the right versions of SUSY theories, one might see a couple of the lightest SUSY particles before the end of the LHC's next run, but then again, one might not, and many of the SUSY particles are constrained to have masses far too heavy to be seen even at a next-generation LHC-type collider, many times as powerful, many decades from now.
Also, reality bites, and SUSY theorists are going to have to settle for models that are ten to a hundred times less natural than they had previously hoped, because the evidence we already have makes it clear that their original subjective expectations about naturalness were wrong.