Tommaso Dorigo's analysis of this finding is more interesting, both because it explains how distinctive a signature this kind of event would leave (which means that not many "hits" are necessary to constitute a positive experimental result), and because he hints at why one would imagine that a Z' boson might be out there. Something so heavy would have a high likelihood of decaying into top and anti-top quark pairs, which have a very distinctive signature and can be further analyzed for total system mass-energy in a way that tells us a fair amount about the original source of the decay chain.
New heavy gauge bosons, electrically neutral and quite similar to Z bosons, are the result of adding a simple one-dimensional unitary group to the group structure of the Standard Model. Such an extension, which appears minimal and as such "perturbs" very little the low-energy behaviour of the theory, is actually the possible outcome of quite important extensions of the model. But I do not wish to delve in the details of the various flavours of U(1) extensions that have been proposed, on where these come from, and on why they are more or less appealing to theorists.
He may not want to delve, but I am certainly inclined to look at what would motivate this Standard Model extension myself at some point, because we have never found an extra generation of bosons before now. It could be a function of the fact that gluons and photons lack rest mass, while W and Z bosons have it, but at any rate it is worth examining. I'm less inclined at first blush to think that a Kaluza-Klein gluon (Kaluza-Klein theory being famous for bringing us the rolled-up extra dimension concept) has much theoretical motivation worth considering.
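The total-system mass-energy reconstruction mentioned above is, at bottom, an invariant mass computation: sum the four-vectors of the reconstructed decay products and take the Minkowski norm; a heavy parent like a Z' shows up as a peak at its mass. The sketch below is a simplified illustration, not actual analysis code, and the four-vectors are hypothetical numbers chosen only to be roughly on the top-quark mass shell.

```python
import math

def invariant_mass(particles):
    """Invariant mass of a system of particles from their (E, px, py, pz)
    four-vectors, in natural units (c = 1). For decay products of a single
    heavy parent, this reconstructs the parent's mass."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Hypothetical back-to-back top and anti-top candidates, in GeV.
# Each has E = 600 and |p| = 574.5, giving an individual mass near the
# ~173 GeV top mass.
top = (600.0, 0.0, 574.5, 0.0)
antitop = (600.0, 0.0, -574.5, 0.0)

print(invariant_mass([top, antitop]))  # 1200.0, i.e. a 1.2 TeV parent
```

In a real detector the tops are themselves reconstructed from jets and leptons with resolution effects, so the peak is smeared, but the principle is the same.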
A collection of Z' annotations called the Z' hunter's guide has a nice review of the literature and of the competing theories that call for a Z' boson. A Z' provides a way to explain the scale at which ordinary Standard Model physics segregates itself from new physics in SUSY models, can help control the proton decay that is endemic to many grand unified theories, can explain anomalous muon magnetic moments, can provide dark matter candidates, is an extra piece that falls out of group theories devised to explain other things and needs to be characterized, and can substitute for the Higgs boson in some respects.
Alternately, a Z' might raise the Higgs mass expected from Standard Model fits, which currently prefer values that have already been ruled out:
The Standard Model fit prefers values of the Higgs boson mass that are below the 114 GeV direct lower limit from LEP II. The discrepancy is acute if the 3.2 sigma disagreement for the effective weak interaction mixing angle from the two most precise measurements is attributed to underestimated systematic error. In that case the data suggests new physics to raise the predicted value of the Higgs mass. One of the simplest possibilities is a Z' boson, which would generically increase the prediction for the Higgs mass as a result of Z-Z' mixing. We explore the effect of Z-Z' mixing on the Higgs mass prediction, using both the full data set and the reduced data set that omits the hadronic asymmetry measurements of the weak mixing angle, which are more likely than the leptonic asymmetry measurements to have underestimated systematic uncertainty.
In short, efforts have been made to task the Z' with solving a wide range of possible failures of theory to match experiment, but it is something of a dark horse in terms of theoretical motivation that is nonetheless attractive to experimentalists, because it should be pretty easy to identify if it is out there.
Randall-Sundrum models eschew the full-fledged ten or eleven or twenty-six dimensions of M theory (aka String Theory) and instead have just five dimensions: the four that we know and love, in an "anti-de Sitter" background in which the strong and electroweak forces operate, and a fifth dimension with specific characteristics that make a graviton-mediated version of general relativity work at the proper strength.
The search is also notable because it shows particle physics experiments starting to penetrate the 1 TeV energy scale, which is where a lot of theory supposes that we should see new physics. For example, it is the characteristic energy scale of a lot of the SUSY/String theory new physics that is on the verge of being ruled out by experiment.
Meanwhile, Lubos notes a presentation from Grenoble that observes in the BaBar experiment a 1.8 sigma excess in the number of tau leptons produced in charm quark decays relative to the Standard Model prediction. This is probably a case of theoretical calculation subtleties or random variations in experimental outputs (i.e. a fluke), or a product of the general failure of the Standard Model to predict sufficient CP violation in decay chains that start with bottom quarks (the decaying charm quarks in this experiment have their source in B mesons). But, if it amounted to anything, it might be supportive of a version of SUSY featuring charm quark decays that include the possibility of decay to a charged Higgs boson with somewhat implausible tan beta and Higgs boson masses.
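To put "1.8 sigma" in perspective, the standard translation of a Gaussian significance into the probability of seeing at least that large an upward fluctuation by chance is a one-sided tail probability. A minimal sketch using only the standard library:

```python
import math

def one_sided_p_value(z):
    """Probability that a Gaussian test statistic fluctuates at least z
    standard deviations above its expectation (one-sided tail)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# A 1.8 sigma excess, like the BaBar tau-lepton count:
p = one_sided_p_value(1.8)
print(round(p, 3))  # 0.036, i.e. roughly a 1-in-28 chance as a pure fluke
```

A few percent is well within "fluke" territory for an experiment making many such measurements, which is why excesses at this level are reported but not taken as evidence of new physics.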
Finally, the rumor mill is currently pinning its hopes on a 144 GeV Higgs boson, with a measurement suggestive of something going bump in the night at that mass at roughly 3 sigmas of significance. Electroweak precision fits argue for something much lighter, which clearly isn't there, but one or two screw-ups in the least reliable data points (of a great many) used to make that fit could allow the other data to permit a 144 GeV Higgs boson. The observation is low in significance (in a mass range where more observations are being made than anywhere else, making the risk of finding a fluke much higher than usual), and it is too vague to characterize what, if anything, is creating the signal: the excess is spread over a wider range of energies than one would generally expect, perhaps due to the inability of the methods used to be more precise. A mass of 144 GeV is at the higher end of the range permitted by previous experimental constraints, and the observation has the virtue of being somewhat robust, since it flows from combined measurements of multiple experiments that reinforce each other. But, the downside is that the experiments individually show fairly low significance.
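The way individually weak measurements "reinforce each other" can be illustrated with the simplest textbook combination rule for independent, equally weighted Gaussian significances (Stouffer's method). The numbers below are hypothetical, chosen only to show how two ~2 sigma hints can add up to roughly 3 sigma; they are not the actual experimental values, and real combinations weight each input by its sensitivity.

```python
import math

def combined_significance(z_values):
    """Naive equal-weight (Stouffer) combination of independent Gaussian
    significances: z_combined = sum(z_i) / sqrt(n)."""
    return sum(z_values) / math.sqrt(len(z_values))

# Hypothetical: two experiments each see a ~2.1 sigma excess at the same mass.
print(round(combined_significance([2.1, 2.1]), 2))  # 2.97, i.e. about 3 sigma
```

This is also why the look-elsewhere caveat matters: combining makes a shared bump more significant, but scanning many mass bins makes a ~3 sigma fluke somewhere far more likely than the naive number suggests.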