
About Those Rumors That The Higgs Has Been Discovered

POSTED BY Matt Strassler


ON 12/07/2011

While I was on my way to Johns Hopkins University Monday evening, I wrote the following post, for publication today (Wednesday). Tuesday morning, CERN stole some of my thunder by putting out a press release consistent with the conclusion I drew below.  Not that I mind.   A quote from the press release follows this post.

The rumor mills on various blogs have been going berserk, with claims that a Higgs particle of Standard Model type (the simplest possible version of the Higgs particle) has been found by ATLAS and CMS (the two large general-purpose experiments at the Large Hadron Collider). The claims are that the Higgs signal is at a mass-energy [E=mc²] of 125 GeV; that CMS, in its search for a Higgs particle decaying to two photons, sees a small excess (that’s two or so standard deviations, or 2 σ, away from zero signal); and that ATLAS sees a larger excess (perhaps 3 σ) in their similar analysis. (You may find it useful to read my recent article about a lightweight Standard Model Higgs particle, and why searching for it through its decays to two photons is the best way to find it but takes frustratingly long — or you might like my recent guest post on Cosmic Variance about the Higgs search.)

Well, rumors are sometimes true, and this one might be, more or less.

  • More precisely, it might be true that ATLAS and CMS see excesses of the claimed type and size. We’ll find out on December 13th.
  • And also, it might even be true that these excesses are signs of the Standard Model Higgs particle. We will not find that out on December 13th.

Why not? There’s just not enough data yet. We might see some evidence in favor of some type of Higgs particle being at some particular mass, and the evidence might be intriguing, even moderately compelling, but it certainly won’t be convincing. We need to remember a few things about statistics, before we get overly excited. It’s very easy to misinterpret claims about statistics, if you don’t ask exactly the right question.

The classic examples: the probability that you will win the lottery is very different from the probability that someone will win the lottery. If you and twenty other people are in a room together, the probability that you have the same birthday as someone else in the room is small, but the probability that some pair of people in the room have the same birthday is large. And the probability that, in a plot of an experiment’s data, any particular bin will have a 3 σ excess by pure accident is very small, but the probability that one of 25 bins will have a 3 σ excess is much larger.
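
For readers who like to check such numbers directly, here is a minimal sketch in Python (my own illustration; the 25 bins and the 2 σ and 3 σ thresholds are just the values used in this discussion, not anything from the experiments):

```python
from math import erfc, prod, sqrt

def one_sided_p(z):
    """Chance of an upward Gaussian fluctuation of at least z standard deviations."""
    return 0.5 * erfc(z / sqrt(2))

# Birthday example: you plus 20 other people, 21 in the room.
p_you_match = 1 - (364 / 365) ** 20                          # you match someone: about 5%
p_any_pair  = 1 - prod((365 - k) / 365 for k in range(21))   # some pair matches: about 44%

# Bin example: a 3 sigma (or 2 sigma) accident in one given bin,
# versus somewhere among 25 bins that are all being inspected.
p3_one_bin = one_sided_p(3)                    # roughly 0.1%
p3_any_bin = 1 - (1 - p3_one_bin) ** 25        # roughly 3%
p2_any_bin = 1 - (1 - one_sided_p(2)) ** 25    # roughly 44%

print(f"birthday: you {p_you_match:.1%}, some pair {p_any_pair:.1%}")
print(f"3 sigma: given bin {p3_one_bin:.2%}, any of 25 bins {p3_any_bin:.1%}")
print(f"2 sigma somewhere in 25 bins: {p2_any_bin:.0%}")
```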

To think about this within the Higgs context, let’s do a very rough estimate by looking at one bin at a time. Let’s say ATLAS and CMS bin their data on two-photon events in 1 GeV-wide bins. Background from non-Higgs processes will contribute about 400 events per bin, for putative Higgs masses near 120 GeV. Statistical fluctuations in the background by 1 σ upward would mean 20 extra events; a 2 σ fluctuation would mean 40 extra events. Meanwhile, a Higgs signal might mean perhaps 40 events total, which, with a favorable fluctuation upward, might turn out to be 50 or 60. Putting the favorable signal on top of an upward fluctuation of the background could mean a 2 or even a 3 σ excess. That’s for one bin. But just because you have a 3 σ excess in one bin does not mean you have a 3 σ excess in your measurement… because there are many bins. The question is: if you see a 3 σ uptick in one of the bins, is that a signal of a Higgs, or perhaps something that happened just by chance? The probability that any one bin will fluctuate upward, by chance, by 3 standard deviations is much less than a percent. But the probability that one among 25 bins will do so — or that two or three bins together will fluctuate up by something less than 3 standard deviations each — is much more than a percent. And the probability that you’ll get a 2 standard deviation upward fluctuation somewhere within 25 bins is closing in on 50 percent! (This reduction in the statistical significance due to multiple bins is often called the “look-elsewhere” effect, which I discussed previously here in the context of this summer’s nearly coincident earthquake and hurricane in the northeastern USA.)
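
To make the rough estimate above concrete, here is a small simulation sketch in Python; the 400 background events per bin, the 40 signal events, and the 25 bins are only the illustrative numbers from this paragraph, not real ATLAS or CMS inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

B = 400           # illustrative background per 1-GeV bin (the number used above)
S = 40            # illustrative Standard-Model-like Higgs signal in one bin
n_bins, n_trials = 25, 100_000

# Background-only pseudo-experiments: how often does the most excessive of
# the 25 bins reach 2 or 3 sigma purely by chance?
counts = rng.poisson(B, size=(n_trials, n_bins))
max_sig = ((counts - B) / np.sqrt(B)).max(axis=1)
print("P(>= 3 sigma somewhere, no signal):", np.mean(max_sig >= 3))   # a few percent
print("P(>= 2 sigma somewhere, no signal):", np.mean(max_sig >= 2))   # closing in on 50%

# Now put the signal in one bin: its typical local excess is about 2 sigma,
# and a lucky upward fluctuation can push it to 3 sigma.
signal_bin = rng.poisson(B + S, size=n_trials)
local_sig = (signal_bin - B) / np.sqrt(B)
print("median local significance with signal:", np.median(local_sig))
print("P(local >= 3 sigma with signal):", np.mean(local_sig >= 3))
```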

So I can believe that, if there is a Standard Model Higgs at 125 GeV, CMS might have now an excess of 2 σ in one bin, or collectively in two or three neighboring bins. I can even believe that ATLAS might have a 3 σ excess of a similar sort; they might have gotten lucky, and a small signal might be sitting on an upward fluctuation of the background. But I cannot believe that the correct statistical significance of the results, once the look-elsewhere effect from the multiple bins is accounted for, will be as large as the raw rumors have suggested.

In fact, we saw this same issue this summer at the Grenoble conference, when ATLAS reported an excess in their Higgs search that reached 2.8 σ. Any expert knows that you can’t just take a number like that at face value, without understanding it better, and that’s why, immediately following Kyle Cranmer’s talk on the subject, I went to ask him about it. And indeed I learned (and reported to you) that one had to be very careful interpreting that “2.8”, because the look-elsewhere effect was not included in it. Instead the probability of the excess was a few percent, not the fraction of a percent that 2.8 σ would naively imply.
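
To see how a “fraction of a percent” can become “a few percent”, here is a tiny sketch; the trials factor of 15 is purely my guess at the number of effectively independent mass hypotheses, inserted only for illustration, and is not a number from the ATLAS analysis.

```python
from math import erfc, sqrt

local_p = 0.5 * erfc(2.8 / sqrt(2))        # about 0.26%: the naive reading of "2.8 sigma"

# Hypothetical trials factor: roughly 15 independent places the excess could
# have appeared (my guess for illustration only, not the ATLAS number).
trials = 15
global_p = 1 - (1 - local_p) ** trials     # about 4%: "a few percent"
print(f"local p-value: {local_p:.2%}; global p-value with {trials} trials: {global_p:.1%}")
```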

The really important question that December 13th will answer, aside from whether these excesses exist at all, is whether the rumors are true that the ATLAS and CMS excesses are located at the same value of the Higgs mass. They’d better be almost exactly in the same place; if one is at 126 GeV and the other is at 123 GeV, that won’t count for much, because the mass measurement from collisions that make two photons is precise to the level of 1 to 2 GeV. If both experiments have an excess at the same location, that will be evidence in favor of a new particle, but given the current amount of data it will not be enough, statistically, for any claim of a discovery, much less confirmation that the new particle is the Standard Model Higgs particle. Fluctuations of this size do happen occasionally.

[For instance, earlier this year, at that same Grenoble conference, the CDF experiment at the Tevatron reported four events with two charged leptons and two charged anti-leptons that were roughly consistent with a non-Standard Model Higgs particle at 327 GeV. And in this case, there was essentially no background to worry about; these four events were all alone, off by themselves.  It certainly looked significant!  Yet the observation appears to have been a fluke; CDF could not confirm the result when they looked for other related processes that they should have been able to observe, and neither DZero (CDF’s competition at the Tevatron) nor ATLAS nor CMS has seen comparable events, which would have happened by now].

Assume for a moment that the rumors are all correct; then, to be sure that the current Higgs evidence isn’t a statistical fluctuation that will disappear with more data, we’ll need to see what things look like when there’s at least twice as much data available and analyzed as there is now. That is unlikely to happen before June 2012, since the LHC is off for the winter and won’t restart until March.  [And really, we’d probably want four times as much data to reach the coveted level of 5 σ evidence, which might mean late summer or fall.]
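
The reason doubling or quadrupling the data matters so much: as a rough rule of thumb, when backgrounds dominate, statistical significance grows like the square root of the amount of data. A minimal sketch, with the 2.5 σ starting point chosen purely for illustration:

```python
from math import sqrt

def projected_significance(current_sigma, data_multiple):
    # Significance ~ signal / sqrt(background); both grow linearly with the
    # amount of data, so significance grows like sqrt(data).
    return current_sigma * sqrt(data_multiple)

start = 2.5   # an illustrative present-day significance, not a rumored value
for multiple in (2, 4):
    print(f"{multiple}x the data: about {projected_significance(start, multiple):.1f} sigma")
# twice the data: about 3.5 sigma; four times: about 5.0 sigma
```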

So. Everyone should just relax. Even if there is evidence for the Higgs at 125 GeV, and even if, down the line, the evidence turns out to be correct, the Higgs search is very unlikely to deliver a clear message on December 13th…

… unless the Higgs particle which is the subject of current rumor is not a Standard Model Higgs particle, and is being produced and/or is decaying to two photons at a larger rate than would be the case for a Standard Model Higgs. In that case my estimate of the size of the signal could be too small, and the evidence in favor of the Higgs particle could be much stronger than I expect. That would be fantastic!  I wouldn’t bet on it though.

After this post was written but before it appeared, CERN produced a press release, dated December 6, in which it is stated:

“A seminar will be held at CERN on 13 December at which the ATLAS and CMS experiments will present the status of their searches for the Standard Model Higgs boson. These results will be based on the analysis of considerably more data than those presented at the summer conferences, sufficient to make significant progress in the search for the Higgs boson, but not enough to make any conclusive statement on the existence or non-existence of the Higgs.”

Remember the CERN management has actually seen the data… So you may or may not trust me, but you should trust them.

17 Responses

  1. Questions from a non-professional:
    1. Can someone explain what effect the SM Higgs mass has on the stability or energy of the vacuum? I’ve seen this written with respect to the Higgs coupling; can someone explain that?

  2. As always, Matt, you have made such a complex issue an easy read, this time at your Guest Post: Matt Strassler on Hunting for the Higgs. Thanks.

    “…it might even be true that these excesses are signs of the Standard Model Higgs particle. We will not find that out on December 13th.” This again is an honest and fair statement.

    Your flow chart of search phases is absolutely correct (at that guest post), but it does not assign the probability for each branch. With the current data, I think that we can do some probability estimate for each branch.

    1. If there is an X GeV Higgs (such as 125 GeV) which can be produced with the Y collision energy, then any event with Yi > Y collision energy has a non-zero probability (Pi) of producing this Higgs. Thus, the summation of NiPi for all Yi > Y should be the total signal for this Higgs in our data (Ni is the number of events at Yi), in addition to the direct measurement at Y. Thus, these Yi > Y events will place a very strong constraint on an X GeV Higgs even without the direct measurement at Y.

    2. The argument that the lightweight Higgs can only be discovered via the photon channel is a bit odd. In most cases of boson decay, the photons are not the main channel. The photon channel should only be circumstantial evidence. Without solid evidence from the main decay channels, the photon channel excess could simply be an unexpected background, missed by the Standard Model calculation.

    3. In the case of a lightweight Higgs, it was reachable by the Tevatron for over 7 years. For any energy level beyond the Tevatron’s reach, the LHC data cannot be challenged. Yet, in the lightweight Higgs case, without a confirmation from the Tevatron, no discovery can be claimed.

    From the three points above, I will place the 95% at the red branch (SM Higgs excluded).

  3. Great post. The fact that after Cranmer’s talk you had to go ask him what he actually meant makes me wonder why experiments don’t include that information as a matter of standard practice in their announcements. And you are a particle physicist. If it wasn’t clear to you, imagine how it is for a journalist!
    Perhaps the whole system for announcing the sigmas of results could use some reform?

    1. Davide — What I think is really needed is better awareness of the fact that you can get very different statistical significance numbers by asking different types of questions. In other words, this involves educating the public (and journalists) that you always have to inquire, when you see a number quoted for statistical significance, what was the question to which that number was the answer.

      This issue comes up across science, of course, including issues that matter much more to daily life than the Higgs particle — such as medical research, Gallup-style polls, and the like.

  4. Some comments about the look-elsewhere effect (LEE) and “5σ”. The 5σ requirement taken at face value is an overly conservative requirement. Some digging into the origin of the 5σ convention shows that it was introduced specifically to protect against the LEE (though through a crude means). Of course, it also protects one against underestimated systematic effects to some degree. The best argument I’ve heard in favor of 5σ was from Wouter Verkerke, who pointed out that empirically the requirement has kept us pretty safe from false discoveries and not stopped scientific progress.

    The ATLAS and CMS joint statistics meetings discussed this in September and the general consensus was that 5σ after correcting for the LEE was overkill. Instead the feeling was roughly that once the LEE-corrected significance goes beyond about 3σ we are entering a different regime: one where the signal has clear statistical significance and that the focus turns to assessing true systematic effects and other criteria that are more difficult to handle statistically.

    There is a lot more here that one could discuss in terms of how the LEE is taken into account and the statistical paradoxes that one can get into by sticking with a fixed criterion like 5σ for discovery. There is also the intellectually shallow practice by some journals of not letting experiments put words like “observation”, “evidence”, and “discovery” into their titles based on the statistical significance alone, overriding the experiment’s assessment of these other critical factors.

    I’m looking forward to hearing from CMS on Tuesday!

    1. Kyle (Professor Kyle Cranmer of New York University) — I agree that 5 sigma after the look-elsewhere effect is probably overkill. It is almost certainly overkill *statistically*.

      I also strongly agree with Wouter Verkerke (“who pointed out that empirically the requirement has kept us pretty safe from false discoveries and not stopped scientific progress”) because the 5 sigma criterion is partly about protecting us from non-Gaussian tails on systematic errors (i.e., for non-experts, from the fact that rare events are not always as rare as we assume when we model their probability using the fancy mathematics of standard statistical methods) and partly protecting us from the effect of subtle unknown mistakes and bias in our experiments.

      I would further add that one of the best things about the search for the Higgs decaying to two photons is that it is very clean, and that unlike the search for the Higgs decaying to two W particles, it has very low systematic errors, and therefore is more trustworthy on its own. If we have 3 σ significance (after look-elsewhere effects) in two-photon events alone, I will see that as pretty convincing. I view the Higgs decaying to two charged leptons and two charged antileptons as similar.

      On the other hand, if claiming 3 σ significance after the look-elsewhere effect requires using the search for the Higgs decaying to two W particles, where the systematic errors are more important, then I will be much less convinced… because then I think we do have to worry about non-Gaussian tails on systematic errors, and requiring something more than 3 σ would be wise.

      On the other other other hand, my real point of view on this is that none of these details really matter… because we are not even close to running out of data. If the Large Hadron Collider had just come to an end, and the data we have now was all the data we’d get for many years, we’d really have to push hard on the statistics question. But we have lots more data coming! And no matter what significance any Higgs signal has now, and where we as individuals and as a community want to draw the line between convinced and not convinced, we know that six months to a year from now the issue will be much closer to settled. I’ve waited thirty years for this particle to show up; I can certainly handle a few more months during which we’re not yet sure what we’re seeing.

      After all, the knowledge we’re in the process of creating right now isn’t something to sell during the Christmas holiday or needed for next year’s presidential election. It’s knowledge for the ages. It really doesn’t matter whether we know it this December, or in June next year, or in March 2013; it matters that we get it right.

  5. Hi Matt:
    These are good points (here is my own recent take on the look-elsewhere effect:
    http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.107.101801).

    Question: How much should our expectations shape our confidence in a result? For example, if LHC experiments were to announce 3 sigma “evidence” of a Higgs boson between 115-130 GeV, should we assign that more credence than a 3 sigma signal at a much higher mass, in apparent disagreement with indirect precision data? If so, by how much? I suppose I’m asking what a sensible “prior” is in this situation. Such a prior should not be used to make a strong claim of discovery on weak evidence, but I suppose it could be used to require even more significance than usual for a really unexpected claim. OPERA comes to mind, which is why we all want data from a separate experiment (though that is more about potential systematic errors).

    Best, Robert

    1. Robert (for those who do not know him, Robert is one of the editors at the journal Physical Review Letters)

      Your question is a good one, but I’m going to un-ask it. I think your question would matter enormously if we weren’t going to get any more LHC data. But because we’re going to get so much more LHC data over the next few years, I think it is best to leave our prior expectations at the door. This is a wonderful situation to be in, where nature is talking to us, and we physicists can be a little quieter than we usually are. There is every expectation that if nature has a Standard Model-like Higgs particle at any still-allowed mass, low or high, and if the LHC works as it should, then the ATLAS and CMS experiments will deliver the evidence of the Higgs’ existence with significance far beyond any possible question. Indeed, when the LHC was designed, this was one of the criteria that it was required to easily meet.

      If instead nature does not have a Standard Model Higgs particle, and the Higgs particles that are out there are of a type that is harder to discover, we may find ourselves in a situation where we do have to answer your question. I think there will be a long debate and intelligent people will disagree. But that’s a few years off; I don’t see this becoming urgent before 2016 or so.

  6. Matt–I wouldn’t take the CERN management statement too seriously, since their definition of “conclusive” is probably five sigma, which is clearly beyond reach until June or so. Many of the blog rumors are 3.5 sigma for ATLAS and 2.5 for CMS, with a 2 GeV difference in the mass. If those rumors are true, and the photon resolution is 2 GeV, then this would still be quite improbable even with the look-elsewhere effect. Probably not as strong as the 4.3 sigma one would get by crudely adding the sigmas in quadrature, but likely above the 3 sigma that constitutes “evidence”. All of this shows the wisdom of the community in deciding that 5 sigma is needed for discovery.

    1. Marc (Professor Marc Sher, of William and Mary) — at some level, the details will really matter. See my reply to Kyle Cranmer below — does all the evidence come from low-systematics channels such as two photons, or are there higher-systematics channels being combined in? Is the difference in the mass location 2 GeV or 3? Is it 3.5 sigma or 3.2? 2.5 or 2.8? What mass range nearby has actually been excluded?

      On the other hand — see my replies both to Kyle and to Robert Garisto — really this doesn’t matter very much. If after Tuesday you and I and other physicists are still publicly debating whether we are or are not convinced yet, then that should tell the public that probably we don’t have enough data yet to be sure, and they should check back in June to see if significantly more of us are convinced at that point.
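
As a footnote to the exchange above, here is a quick numerical check of the combination Marc describes; the inputs are only the rumored values quoted in his comment (and an assumed 2 GeV per-experiment photon mass resolution), not official results.

```python
from math import hypot, sqrt

sigma_atlas, sigma_cms = 3.5, 2.5   # the rumored significances quoted in the comment
mass_gap   = 2.0                    # GeV: rumored difference between the two mass values
resolution = 2.0                    # GeV: assumed per-experiment photon mass resolution

combined     = hypot(sigma_atlas, sigma_cms)        # crude quadrature sum: about 4.3 sigma
mass_tension = mass_gap / sqrt(2 * resolution**2)   # about 0.7 sigma: the masses agree well

print(f"crude combined significance: {combined:.1f} sigma")
print(f"mass values differ by {mass_tension:.1f} sigma, given the resolution")
```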
