Are the current hints of a signal in the search for the Standard Model Higgs particle (currently underway at the Large Hadron Collider experiments ATLAS and CMS) really reflective of a Higgs particle? Should one be confident that they are, or not? I’ve written an article that presents strong arguments on both sides. Where you go from there is up to you, of course. Personally I tend to be conservative about these things, so if there’s a strong argument not to be confident, that’s how I lean. Others lean the other way — that’s fine with me, as long as they use a legitimate argument to get to that point of view.
Clearly if experienced scientists aren’t showing much agreement yet, a layperson can’t be too certain about what is going on. Don’t worry: as more data arrives in 2012, the disagreements about the current hints will die away, and you’ll hear a more and more confident consensus emerge.
25 Responses
I usually avoid blogs, but I have a passing familiarity with this measurement, and I think some points need to be emphasized.
The first is that the experiments could have said “We think this is the Higgs”. They chose not to. If you look at Matt’s posting on the Chi_b(3P) today, it tells us that the experiments are not afraid to announce a discovery, provided they believe the case is solid.
Second, it is hard to overstate how fast this turnaround is. Run II of the Tevatron began on March 1st, 2001. Their first paper was submitted on July 29th, 2003. The 2011 LHC proton run ended on October 30th. That leaves ten weeks for the data to be reconstructed, calibrated and aligned, analyzed, compared with Monte Carlo, cross-checked and discussed. I get the feeling that people think that once the data is collected the experiments have their answer. Nothing could be further from the truth: it takes hundreds of people to ensure that by the time they make their final plots, the data, calibration and analysis are all understood and can be trusted.
In a related point, the experiments continue to make progress on calibration, alignment and analysis. These improvements mean that some events now in the plots will move out, some events now outside the plots will move in, and events that stay in may shift to somewhat different masses. With a robust signal, on average this will improve things. With a small number of events, tiny changes in where the events land, and in how many there are, can make substantial changes to the significance.
Let me paraphrase the argument as follows: “ATLAS has a large excess (indeed, larger than one would expect from a SM Higgs, by about a factor of 2), and while not convincing in and of itself, just look at all the statistically marginal effects in the exact same spot. What are the chances of that?” Well, the odds of the statistically marginal effects moving around may be higher than you think. We don’t know, and won’t know until the experiments apply the latest calibrations, but it would not surprise anyone on the experimental side to see substantial shifts in the significance plots from these small changes to the analysis. Anyone in this field long enough has seen this happen time and again. This is part of what “Preliminary” means.
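To see how fragile a marginal excess can be, here is a toy counting-experiment sketch (the event counts and background level are invented for illustration; the real analyses are far more sophisticated than this):

```python
from scipy.stats import norm, poisson

def local_significance(n_observed, b_expected):
    """Convert the Poisson probability of seeing at least n_observed events,
    given an expected background b_expected, into a Gaussian 'sigma'."""
    p = poisson.sf(n_observed - 1, b_expected)  # P(N >= n_observed | background only)
    return norm.isf(p)

# Invented numbers: a small bump of 10 events over an expected background of 4.
print(local_significance(10, 4.0))  # roughly 2.4 sigma
# A recalibration moves just two events out of the mass window:
print(local_significance(8, 4.0))   # roughly 1.6 sigma, a sizable drop
```

Two events wandering in or out of the window is all it takes to change the headline number.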
To stretch an analogy, just as nine women can’t have a baby in one month, you can’t make a significant and credible peak by combining a lot of marginal ones. And, like the baby, you can’t rush things – it will take time for the final calibrations to be done, it will take time for all the channels to be included, it will take time for a proper combination between both experiments to be done, and in all probability it will take more data. We know how much data on average it should take for a discovery in this mass region, and it’s quite a bit more than what’s been collected.
Third, Matt is absolutely right that one should distinguish between evidence and expectation.
And finally, one needs to remember what a “sigma” is. It is a measure of how consistent one’s data is with the null hypothesis, which in this case is “there is no Higgs, only a background that is perfectly predicted by theory and perfectly simulated in our software”. Statistically rejecting the null hypothesis is a necessary but not sufficient condition for discovery, and the question “how many sigmas” is only part of the story.
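For the curious, here is the standard dictionary between sigmas and probabilities under that null hypothesis (a minimal sketch using the usual one-sided Gaussian convention; the numbers are generic, not the experiments’ quoted values):

```python
from scipy.stats import norm

# "N sigma" is shorthand for the probability that a background-only fluctuation
# would be at least N standard deviations above its expected value.
def p_value_from_sigma(n_sigma):
    return norm.sf(n_sigma)      # survival function = 1 - CDF

def sigma_from_p_value(p):
    return norm.isf(p)           # inverse survival function

print(p_value_from_sigma(3))     # ~1.3e-3: a "3 sigma" local excess
print(p_value_from_sigma(5))     # ~2.9e-7: the usual discovery threshold
print(sigma_from_p_value(0.05))  # ~1.6 sigma
```

None of this says anything about how well the background itself is modeled, which is exactly the point above.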
Give it time. In 2012 we will have a much clearer picture of what is going on. We’ve waited more than 30 years; a few more months or a year won’t kill us.
Thanks for your reply.
Conformal invariance may be already broken by gravity, but I don’t see how this scenario relates to the Higgs mechanism, as long as we don’t clearly understand the relationship between Higgs and gravity.
I am afraid I disagree with your optimistic assessment that “…knowing the mass (and other properties) of the Higgs particle is a crucial ingredient that may make it a lot easier down the road to figure these things out.”
This clearly remains to be seen. Although I don’t have definitive proof to back up my view, the puzzles I mentioned seem to indicate much more serious problems at the foundational level. Take, for instance, the “little hierarchy problem” as it relates to the LEP paradox. Some theorists worry that simply confirming the SM Higgs and its properties will fall short of making the picture crystal clear.
Only time (and data) will tell.
There is one more challenge I’d like to add to the above list and this is related to breaking of conformal invariance due to the Higgs mass.
These are the theoretical challenges I am referring to:
1) vacuum instability,
2) fine-tuning problem,
3) discrepancy between SSB via the SM Higgs and the numerical value of the cosmological constant,
4) gauge hierarchy problem,
5) inability to account for the number of fermion generations,
6) inability to account for CP symmetry breaking,
7) inability to account for neutrino masses and mixing.
Vacuum: For the Standard Model, or for supersymmetry, 125 GeV is fine, as far as I understand. If the previous results you’ve seen in the literature make you nervous, my understanding is that more detailed calculations are fine with 125 GeV. And don’t forget there is dark matter in the world, and any coupling of the Higgs field to it may change the calculation.
Fine-tuning: If all we have is the Standard Model along with a 125 GeV Higgs, the fine-tuning problem remains very confusing (unless you claim it is a selection effect, which it might be). If there are other new elementary particles to be found in LHC data, then the answer depends on what they are.
Cosmological constant: The cosmological constant problem is not addressed here; again, it may be a selection effect. Many people try to tie together the cosmological selection effect with a possible selection effect on the Higgs, but we know far too little about nature to do that with confidence.
Gauge hierarchy: This is just the fine-tuning problem, restated.
Number of generations: A 125 GeV Higgs adds no information by itself; we need more information from the LHC or elsewhere.
CP violation in the weak nuclear interactions and CP preservation in the strong nuclear interactions: Same as the previous item.
Neutrino masses and mixings: Same as the previous item.
And your last question:
Conformal invariance is already broken by gravity, so any conformal invariance that the Higgs mass term breaks is accidental anyway; in other words, there was no problem here to start with. In fact, the right way to understand your last question is as yet another restatement of the fine-tuning problem.
All of this is to say: if we want to understand all the puzzles facing particle physics, just finding a 125 GeV Higgs isn’t going to do the job by itself. But knowing the mass (and other properties) of the Higgs particle is a crucial ingredient that may make it a lot easier down the road to figure these things out.
Prof. Strassler,
I commend your efforts in keeping an objective and cautious view on these preliminary findings. I would like to know what your opinion is on two questions:
a) how would a 125 GeV SM Higgs line up with the long list of theoretical challenges related to the minimal Higgs scenario?
b) what experimental crosschecks are required to definitively prove that the excess seen is indeed a Higgs boson and not some other signal (for instance, a scalar resonance from strong coupling at the TeV scale that is just starting to show up)?
Thank you!
a) could you clarify the question? You say “long list of theoretical challenges related to the minimal Higgs scenario” but I am afraid that can be read in different ways, so I’m not sure exactly which ones you had in mind.
b) this is described in some detail in my article http://profmattstrassler.com/articles-and-posts/the-higgs-particle/the-standard-model-higgs/seeking-and-studying-the-standard-model-higgs-particle/ Please let me know if it does not answer your questions.
Screw the Higgs, I’m waiting for Sparticles!
Was Fermilab Tevatron data from its last years’ runs able to reach into the 125 GeV regime? If yes, do they see any such bump? If they did not, why didn’t they see it?
The Tevatron experiments were not highly sensitive to this region, but they did have a small excess (not very significant) in the lightweight mass range. So there is no contradiction with a 125 GeV Standard-Model-like Higgs.
What I find most encouraging about these results is that both Atlas and CMS, operating independently of each other, produced results that were almost identical — and with the particle mass in the expected range. I have to hope that with higher luminosity and reduced noise, they’ll zero in on that bad boy.
I don’t see the results as “almost identical”, when you look at them in detail. What makes you say that?
If the total excesses are within one sigma and only two GeV from each other, surely they’re almost identical? Unless of course you think a deviation of that size is “firmly” different 😉
If you have a resolution of 1.0 GeV, 2.5 GeV (NOT 2 GeV) is a substantial distance. Let’s let the experimenters obtain their final results and revisit this then.
And I am not the one using the word “firm” around here — nothing looks firm to me yet.
Dear Matt, your conclusion doesn’t quite seem right to me. First of all, 126 GeV is only the diphoton ATLAS channel. The overall ATLAS curve has a peak closer to 125 GeV – the four-fermion ATLAS events are unusually concentrated between 125.3 and 126.3 GeV.
So I think that the difference between the ATLAS’ and CMS’ combination peaks is actually less than 2 GeV, not more than that. But even if it were 2 GeV, isn’t it exactly what you would expect from statistical fluctuations? Just imagine that the systematic error is zero and reconstruct the typical average (statistical error)^2 in the determination of the peak from the ZZ four-lepton channel, assuming that the strength of the signal is 3-sigma. Do you really think the result will be (much) less than 2 GeV?
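To make that estimate concrete, here is a quick back-of-the-envelope sketch (the per-event mass resolution and the number of events are assumed values for illustration, not the collaborations’ actual figures):

```python
import math

# Assumed per-event mass resolution in the ZZ* -> 4-lepton channel (illustrative)
per_event_resolution_gev = 2.0
# A roughly 3-sigma excess in this channel corresponds to only a handful of events
n_events = 3

# Statistical uncertainty on the peak location, treating the events as
# independent draws from a Gaussian centered on the true mass
peak_uncertainty = per_event_resolution_gev / math.sqrt(n_events)
print(round(peak_uncertainty, 2), "GeV")                 # ~1.15 GeV per experiment

# Two experiments with uncertainties like this can easily differ by their
# quadrature sum
print(round(math.sqrt(2) * peak_uncertainty, 2), "GeV")  # ~1.63 GeV
```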
I agree with JollyJoker that these differences between ATLAS and CMS are exactly of the order one would expect (from statistical errors).
All the best
LM
Lubos —
I think you are not correctly accounting for the fact that the diphoton channel has better resolution than the four lepton channel. Which is important, since you are wrong about the ATLAS numbers (check page 28 of Gianotti’s talk, they are at 123.6, 124.3 and 124.6 — though their close clustering is a fluke, even if they are all from a signal.) It is the low four-lepton events that pull the ATLAS two-photon bump down toward CMS. The reverse is true at CMS, where the four-lepton events pull their two-photon events up. But this is very sensitive to the precise locations of the measured four-lepton events — which are only accurate to a couple of GeV. Move the ATLAS and CMS four-lepton events around by a couple of GeV to see what I mean. And if there is a calibration issue — so that you have to systematically move all of the ATLAS or CMS events of one class by a GeV — think about how much the final combined results can shift.
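Here is a crude toy version of that exercise (the 4-lepton event masses are the ones quoted above; the single diphoton “measurement”, the per-event resolutions, and the use of a simple weighted average are my own simplifications, not what the collaborations actually do):

```python
import numpy as np

def weighted_peak(masses, resolutions):
    """Resolution-weighted average mass -- a crude stand-in for a real likelihood fit."""
    w = 1.0 / np.array(resolutions) ** 2
    return np.average(masses, weights=w)

# The ATLAS diphoton bump treated as one "measurement" near 126 GeV (assumed
# resolution), plus the three 4-lepton events a couple of GeV lower (assumed
# per-event resolution of 2 GeV).
diphoton = dict(masses=[126.0], resolutions=[1.5])
four_lep = dict(masses=[123.6, 124.3, 124.6], resolutions=[2.0, 2.0, 2.0])

def combine(shift_4l=0.0):
    masses = diphoton["masses"] + [m + shift_4l for m in four_lep["masses"]]
    resolutions = diphoton["resolutions"] + four_lep["resolutions"]
    return weighted_peak(masses, resolutions)

print(round(combine(0.0), 2))    # ~124.85: the 4-lepton events pull the peak down
print(round(combine(+1.5), 2))   # ~125.8: a 1.5 GeV calibration shift moves it by ~1 GeV
```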
With all due respect, what you are doing looks a lot like hunting for sub-1-sigma effects to me. This is a fun game that you can play for years, but in the end you are trying to get more information from the data than there actually is.
Yes, there is some strange energy dependence of the individual channels, but this has of course been taken into account in the combination. Not everything will line up perfectly at one energy.
I am not sure if you are agreeing with me or disagreeing with me. You and I certainly agree that there is a big risk of trying to get more information than is actually there. That’s my point, in fact… that’s why you can make two contradictory arguments from the same limited data.
My point of view, from the beginning, has been that the data is too limited and too preliminary for us to draw conclusions at this time.
I would say that those arguing that there is firm evidence that the Higgs is there are taking the stated statistical significances far too seriously and the subtleties of the analysis not seriously enough. And they are playing a game, one not worth playing, because it can be played both ways, as I have demonstrated here. I think you ought to consider that if what remained were a single 2-sigma effect combined with a few sub-1-sigma effects, their case would fall apart. In this article I gave you a lot of reasons to worry. I do not know that any of them are serious worries, but not all of them need to be realized in order to ruin the evidence here; just one or two would do it.
Also, I am not doing this because I like to play a game. I am doing it because I am a teacher, and young people entering the field need to understand why this situation is not as simple as it looks at first glance.
Thanks for your answer, Matt.
Concerning Chris, my guess is that he disagrees with you as well and he means that it is you who is trying to extract more information from the data than what they contain. In particular, you are trying to extract the information about the Higgs mass from the individual detector/channel combinations at a higher accuracy than what the actual data allow us to do at this moment.
The discrepancy in the measured Higgs mass you’re reporting isn’t statistically significant (by far); this seems to be what all of us are trying to communicate. On the other hand, the excess of the Higgs-like events is significant, well, there are several 3-sigma-like excesses that are nearly overlapping.
It would be quantitatively bizarre to treat the 3-sigma excesses (combining to something like 4-sigma in the overall LHC combination) on par with the deviations between the individually “measured” Higgs masses; the latter are only 1 sigma or so, whenever it makes sense to talk about them at all. (CMS has a very fuzzy diphoton channel that still has the excess from 119 GeV up to the right values, so it can’t be interpreted as a measurement of the Higgs mass at all.)
All the best, Luboš
Lubos — Obviously Chris disagrees with me (I’m not that dumb) but my point was that his argument cuts both ways. A big deal is being made out of very limited information, whichever side you want to take.
I will say it yet again: the issue is *not* the statistical significance, but the reliability with which it can currently be computed.
And the fact that things do not line up well is not an argument against there being a Higgs there; I am making no such argument.
It is an argument that we do not have enough information to conclude there is likely a Higgs there… because small shifts in the wrong direction due to errors or recalibrations can significantly affect the statistical significance that you keep wanting to quote.
In any case, since ATLAS has a large fluctuation upward (whether there is a Higgs there or not) it will be months before we know. So let’s sit back and relax, and go back to work on something that we can do something about — such as triggers and analysis strategies.
Dear Lubos and Matt,
thank you for your replies and sorry for not being clear enough. What I meant was exactly that the energy resolution is not good enough to really declare the bump in the 123-127 GeV region to be split up into two or more in any statistically significant way.
In fact, the 3 ATLAS 4-lepton events, the clearest individual signal I believe, are at the upper edge of the expected SM cross section. For the overall consistency of the signal with a SM Higgs it is a “good thing” that the peak location does not exactly overlap with the other channels, because if it did, the Higgs cross section might start disagreeing with the SM (too large). But again, all this is trying to get information from sub-1-sigma effects, or effects slightly over 1 sigma.
So much for the part where I agree with Lubos. Now, on a slightly broader picture, I think that Matt also has a point. If one focuses on the narrow region around 125 GeV, there really is a sizeable excess. But if you look at total event counts over a somewhat broader region, then there is no excess. In the 4-lepton channel, the 3 events are so striking because there are no background events close in energy. Also, in the ATLAS gamma-gamma channel you see a slight blip on top of a smooth curve around 125 GeV. But you do see a similar one at around 100 GeV, too. So, with a little less confidence in the absence of observer bias, it might very well be that the signal will collapse with more data. But to be honest, this is pretty much what the collaborations officially claimed.
Chris — thanks for your careful reply. One does have to be careful here. That’s all I’m trying to say.
One point I think that isn’t emphasized enough is that if one is willing to accept photon peaks at 123 and 126 as being at the same location, then what one is really saying is this: given that ATLAS sees something at 126, we would accept CMS seeing anything between 123 and 129 as being coincident. (Let’s leave out CMS ZZ* because it is simply too ambiguous right now.) Now that is a 6 GeV subwindow out of a 26 GeV window (115 to 141) that was left after the last measurement. AND CMS has two 2-sigma bumps within the region — which they say had a bit below 20% probability. So the probability that one of the CMS bumps would line up with the ATLAS bump — if you are going to be that loose with the allowed agreement — is not as low as you would think.
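To put rough numbers on that (a toy estimate of my own, treating the two bump locations as independent and uniformly distributed across the remaining window, which is of course a simplification):

```python
# If two unrelated ~2-sigma bumps were scattered uniformly across the remaining
# 26 GeV window (115-141 GeV), how often would at least one of them land within
# the ~6 GeV sub-window (123-129 GeV) we are willing to call "coincident" with
# the ATLAS excess?
window = 26.0        # GeV still allowed after the previous exclusions
subwindow = 6.0      # GeV range accepted as "the same place"
n_bumps = 2          # number of ~2-sigma CMS bumps in the region

p_single_miss = 1.0 - subwindow / window
p_at_least_one_hit = 1.0 - p_single_miss ** n_bumps
print(round(p_at_least_one_hit, 2))   # ~0.41, not a particularly rare accident
```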
Again, this is not to say that the degree of concordance is inconsistent with a Higgs boson — far from it. But one really has to be careful about over-stating the evidence in favor of a signal — and even more, about inserting one’s belief into the evidence and using it to boost the evidence itself, rather than allowing it merely to increase one’s confidence that the weak evidence supports one’s belief.
Would it not be better to write one article with significance than to write many without any?