If this is really an excess, and not a QCD background miscalculation, a statistical fluke from a small data set, or an experimental error, it could be a sign of a Higgs boson, although not necessarily of the plain vanilla Standard Model variety. It could also be a result that looks significant in a small data set but is muted in the vastly larger data set collected since. Or it could be something else entirely.
Some of the strongest hints of a Higgs boson in the 115-119 GeV range come from the diphoton channel, but so far the data haven't been definitive enough to say that the significance of the diphoton results in that mass range amounts to a Higgs boson, because the expected signal there is subtle relative to the experimental uncertainty.
The answer to the headline question is: maybe, but probably not.
The long delay from the collection of the data to the publication, without updating the paper with newer results, invites tea leaf reading about the implications of the new data and the quality of the data analysis, in which Lubos Motl, at the link above, indulges at length.
The line of meta-analysis that makes sense to me is this: the strong Beyond the Standard Model result invited delay, because extensive searches for error were needed to maintain the experiment's credibility; newer data (perhaps not as strongly) has not disproved the finding entirely; and CMS is publishing the early result now to buy time for a similarly careful analysis of the new data, and to avoid being scooped by the ATLAS experiment, which should be seeing something similar if CMS hasn't screwed up, but which is probably likewise taking extra care to make sure its analysis is correct and that experimental error has been ruled out. A leaked ATLAS memo from April 2011 suggested a similar result, but ATLAS has not yet produced a published paper. The caution being shown is also underscored by the fact that this very anomalous result is reported in a paper that is light on theoretical explanation or context and instead focuses on raw experimental results, so that any wild-eyed inference from those results can be plausibly denied by CMS.
Lubos also notes that if the mere trickle of data upon which this much-delayed paper is based produces results as significant as they seem to claim, then the full data set collected to date should show a result that is completely unmistakable (ca. 48 standard deviations from the predicted value). On one hand, that makes plausible a strategy of publishing a preliminary result in the hope of crowd-sourcing possible sources of error, before a later paper announces an unmistakable discovery while overlooking something basic that should have come out in peer review. On the other hand, it is hard to believe that anyone could keep a 48 sigma excess over a Standard Model prediction quiet for very long at the LHC, where thousands of physicists are champing at the bit for any hint of beyond the Standard Model physics, and this intuition is supported by the observation that far more trivial bumps and excesses have already been leaked.
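The arithmetic behind that extrapolation is the usual rule of thumb that statistical significance grows roughly as the square root of the amount of data. A minimal sketch of the scaling (the ~4.8 sigma early-sample figure and the 100x luminosity ratio are my illustrative assumptions, chosen to reproduce the ca. 48 sigma number, not values taken from the CMS paper):

```python
import math

def extrapolated_significance(sigma_early: float, lumi_ratio: float) -> float:
    """Naive statistical extrapolation: significance scales as
    the square root of the integrated luminosity."""
    return sigma_early * math.sqrt(lumi_ratio)

# Assumed: a ~4.8 sigma excess in an early sample that is ~1% of the
# data collected to date (so the full sample is ~100x larger).
print(extrapolated_significance(4.8, 100))  # -> 48.0
```

If the early excess were purely a statistical fluke rather than a real effect, of course, it would not grow this way; that is exactly why the full data set would be decisive.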
My instinct is that the early result hasn't been supported at nearly the same level of significance in later results, but that since nobody can find anything actually wrong with the analysis, it is being published anyway.
Professor Matt Strassler observes at his blog that "my preliminary impression is that it’s most likely . . . either a problem with the theoretical calculation of what the Standard Model predicts, or a problem with the way this theoretical calculation was used by the experiments." He notes that the result is robust, with fairly similar data found in four different experiments, suggesting that experimental error is not to blame, but that the theoretical calculation may be.
His instincts have merit because the theoretical prediction is a "next-to-leading order" QCD prediction, and it is quite reasonable to think that something like odd-angled diphotons in the background might arise mostly from beyond-next-to-leading-order QCD terms. In particular, he states:
ATLAS has accounted for this imprecision [in QCD theoretical predictions due to different calculation methods] by citing two theory predictions. They would claim no discrepancy: they would say that theory is rather imprecise here, and the data differs no more from the theoretical predictions than the theoretical predictions differ from each other.
CMS, for some reason — maybe a good one, but the reason is not stated in their paper — has instead chosen only one of the two theoretical predictions shown by ATLAS. They say there is a discrepancy. But note they don’t say anything about this discrepancy being due to a new physical effect not predicted by the Standard Model.
When data deviates from a theoretical prediction, one can only claim observation of a new phenomenon if one has strong confidence in the precision and accuracy of that theoretical prediction. . . .
For both ATLAS and CMS the number of events at angles close to 3 radians (approaching 180 degrees) is significantly less than predicted by the theoretical calculations. Since the predictions overshoot here, where the majority of the data is located, one should not be surprised if the entire shape of the distribution comes out wrong. . . .
While we’re at it, the theory predictions also fail badly at CDF and DZero at the Tevatron, for a nearly identical measurement (with vastly more data, but with not that many more events, since the Tevatron ran at lower energy than the LHC). . . . the nature of the failure is vaguely similar to that shown in the CMS and ATLAS plots, but not exactly the same either.
Strassler's observations about just how hard it is to calculate the QCD backgrounds properly could also provide an alternative explanation for why it has taken so long to publish a result based on just 1% of the data collected to date.
Thus, Strassler is inclined to think that the Standard Model QCD backgrounds are wrong in the CMS paper, while Motl is inclined to think that the equations from which the backgrounds were calculated are wrong. Strassler rather coyly implies that the off-angle results may be recoils from jets produced in the events that are unaccounted for in the theoretical prediction.
Suppose Strassler is right that the QCD background underestimates the number of diphoton events because it is miscalculated. This implies that the strength of the apparent 115 GeV-119 GeV Higgs mass signal is probably also overestimated because some of the apparent signal is really background. This, in turn, would strengthen the likelihood that there simply isn't any Standard Model or light SUSY Higgs boson at all.
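The dilution works on both ends of the usual rough counting-experiment estimate, significance ≈ (observed − background) / √background: a larger corrected background both shrinks the apparent excess and enlarges the expected fluctuation. A minimal sketch with purely illustrative numbers of my own invention, not values from the CMS paper:

```python
import math

def naive_significance(n_observed: float, background: float) -> float:
    """Rough counting-experiment significance: the excess over background
    divided by the Poisson fluctuation (sqrt) of the background."""
    return (n_observed - background) / math.sqrt(background)

n_obs = 130.0        # illustrative observed diphoton count in a mass window
b_assumed = 100.0    # background as originally calculated
b_corrected = 120.0  # background after a hypothetical upward correction

print(naive_significance(n_obs, b_assumed))    # -> 3.0 sigma apparent excess
print(naive_significance(n_obs, b_corrected))  # -> ~0.91 sigma once background rises
```

In this toy example, a 20% upward revision of the background turns a 3 sigma "hint" into statistical noise, which is the sense in which a miscalculated background would weaken the case for a light Higgs signal.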
From my glass-half-empty perspective, it seems increasingly likely that the only beyond the Standard Model physics found at the LHC will be an experimental exclusion of the Standard Model Higgs boson. However, I'm not very persuaded that walking technicolor advocates (the 2011 state of the art for these theories is discussed here) are on the right track either, despite the fact that it is arguably the leading Higgsless model and that the differences between the Standard Model Higgs and a technicolor composite Higgs should produce a measurable excess in the diphoton decay channel. Other versions of technicolor have been ruled out experimentally. As a 2011 paper by one of the leading theorists in the field notes:
[The Tevatron experiment] strongly indicates that viable Technicolor models should feature colorless techniquarks as it is the case of Minimal Walking Technicolor or that a light composite Higgs made of colored techniquarks is excluded by the Tevatron experiment. These results are in perfect agreement with the LEP electroweak precision data which require a small S parameter.
He also has an interesting little paper on supersymmetric versions of walking technicolor involving maximal supersymmetry in four dimensions.