Of Particular Significance

The 2016 Data Kills The Two-Photon Bump


POSTED BY Matt Strassler

ON 08/05/2016

Results for the bump seen in December have been updated, and indeed, with the new 2016 data — four times as much as was obtained in 2015 — neither ATLAS nor CMS [the two general-purpose detectors at the Large Hadron Collider] sees an excess where the bump appeared in 2015. Not even a hint, as we already learned inadvertently from CMS yesterday.

All indications so far are that the bump was a garden-variety statistical fluke, probably (my personal guess! there’s no evidence!) enhanced slightly by minor imperfections in the 2015 measurements. Should we be surprised? No. If you look back at the history of the 1970s and 1980s, or at the recent past, you’ll see that it’s quite common for hints — even strong hints — of new phenomena to disappear with more data. This is especially true for hints based on small amounts of data (and there were not many two-photon events in the bump — just a couple of dozen). There’s a reason why particle physicists have very high standards for statistical significance before they believe they’ve seen something real. (Many other fields, notably medical research, have much lower standards. Think about that for a while.) History has useful lessons, if you’re willing to learn them.
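To get a feel for how easily this happens, here is a minimal simulation sketch; the bin count and background level are illustrative assumptions, not anyone’s actual analysis. It scans a few dozen mass bins of pure background and records the most significant upward fluctuation in each pseudo-experiment:

```python
# Minimal sketch: how often does pure Poisson noise fake a bump?
# The bin count, background level, and the Gaussian z = (n - b)/sqrt(b)
# approximation are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_bins = 40          # hypothetical number of mass bins scanned for a bump
background = 10.0    # hypothetical expected background events per bin
n_pseudo = 100_000   # number of background-only pseudo-experiments

# Poisson-fluctuate every bin of every pseudo-experiment.
counts = rng.poisson(background, size=(n_pseudo, n_bins))

# Local significance of the largest upward fluctuation in each pseudo-experiment.
z_max = ((counts - background) / np.sqrt(background)).max(axis=1)

for z in (2.0, 3.0):
    frac = (z_max >= z).mean()
    print(f"pseudo-experiments with a local excess of {z:.0f} sigma or more: {frac:.1%}")
```

Run it and you will find a 2-sigma bin somewhere in most pseudo-experiments, and a 3-sigma bin in roughly one in ten; that is the look-elsewhere effect discussed in the comments below.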

Back in December 2011, a lot of physicists were persuaded that the data shown by ATLAS and CMS was convincing evidence that the Higgs particle had been discovered. It turned out the data was indeed showing the first hint of the Higgs. But their confidence in what the data was telling them at the time — what was called “firm evidence” by some — was dead wrong. I took a lot of flak for viewing that evidence as a 50-50 proposition (70-30 by March 2012, after more evidence was presented). Yet the December 2015 (March 2016) evidence for the bump at 750 GeV was comparable to what we had in December 2011 for the Higgs. Where’d it go? Clearly such a level of evidence is not as firm as people claimed. I, at least, would not have been surprised if that original Higgs hint had vanished, just as I am not surprised now… though disappointed of course.

Was this all much ado about nothing? I don’t think so. There’s a reason to have fire drills, to run live-fire exercises, to test out emergency management procedures. A lot of new ideas, both in terms of new theories of nature and new approaches to making experimental measurements, were generated by thinking about this bump in the night. The hope for a quick 2016 discovery may be gone, but what we learned will stick around, and make us better at what we do.


15 Responses

  1. Hi Matt, do you know where I can find any updates on the Higgs’s properties based on the large amounts of new events from the LHC? Like properties confirmed, or decay channel ratios confirmed?

  2. Well, the ATLAS Collaboration has weighed in at arxiv (8/9/16) and reported: dark matter particles still AWOL. That is, unless you want to redefine MACHOs and PBHs as really HUGE “particles”.

  3. Matt, could you clear something up for me? I remember (though perhaps misremember) from your writings that the naturalness problem was that empty space had energy in a minuscule (but non-zero) amount; we would have expected the quite large positive and negative contributions to the energy of empty space to result in a much greater absolute value. But from reading Natalie Wolchover’s article at Quanta today (“What No New Particles Means for Physics”), it appears that according to her, the naturalness problem is that the Higgs has the low mass it does. Wolchover writes that the various large contributions, positive and negative, to the Higgs’s mass would be expected to add up to something much more energetic, and the coincidence that they almost perfectly cancel each other out requires explanation. My question is: are these two obviously similar accounts just different descriptions of the same underlying difficulty with reconciling theory and observation? Or are they two independent issues whose descriptions happen to be somewhat parallel? And, if they are independent coincidences, are both of these indeed known (confusingly) as “the naturalness problem”, or are they two specific instances of a more general category of “naturalness problem”, which includes these two as well as possibly other instances of calculations that proceed along similar lines?

    1. The latter, I’d say. These are two different problems with a somewhat similar flavour. Physicists do not usually use the phrase “the naturalness problem” (at least not without context), but rather call them the “hierarchy problem” (the one about the Higgs mass) and the “cosmological constant problem” (the one about the zero-point energy of the vacuum), respectively.

      Both are related to the fact that quantum field theory predicts that quantum fluctuations affect the values of observable quantities like particle masses or the vacuum energy (observable by tracing the rate of expansion of the universe).

      There are quantum fluctuations at every length scale (e.g. 1 cm, 0.1 cm, 0.01 cm, 0.001 cm, etc.) and they all contribute, as long as the theory is to be trusted. So you need to accumulate the effects over the range of length scales over which you expect the theory to hold, and usually the shortest length scales (highest energy scales) have the largest effects. Thus one expects the observable properties to be proportional or otherwise naturally related to the highest energy scales where the theory holds — unless there is a very good reason that a certain observable is not affected by the quantum fluctuations. (A schematic version of this estimate is sketched at the end of this reply.)

      For example, if physics is supersymmetric within a certain range of length scales, the quantum fluctuations at those length scales will in total not affect the Higgs mass. Only the length scales where supersymmetry does not hold (and we know it does not hold at very long length scales, i.e. at low energies) would contribute. So supersymmetry could solve or at least greatly mitigate the hierarchy problem, if it holds at very high energies and “reaches down far enough to lower energies”. Unfortunately, it currently seems that supersymmetry, if it is there, does not reach down far enough to be detectable at the LHC.
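      Schematically (a textbook-level sketch, with \(\Lambda\) standing for the highest energy scale up to which the theory is trusted, and \(y_t\) the top quark’s coupling to the Higgs), the dominant quantum correction to the Higgs mass-squared grows with the square of that scale:

      \[ \delta m_H^2 \;\sim\; -\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2 \;+\; \cdots \]

      If \(\Lambda\) is near the Planck scale, this is more than thirty orders of magnitude larger than the observed \((125~\mathrm{GeV})^2\), so the various contributions would have to cancel to absurd precision. In a supersymmetric theory, the top quark’s superpartner contributes a loop of the same size with the opposite sign, so the \(\Lambda^2\) pieces cancel automatically over the length scales where supersymmetry holds.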

  4. Should I infer that the news on particle dark matter is the same as it has been for 45 years: nada?

  5. Matt, your comparison to the Higgs in 2011 is misguided. The reason it was appropriate to have serious confidence that what was seen in 2011 was real, and was the Higgs, is that the Higgs almost certainly had to exist. We knew that the only reasonable things that could fix the problem of unitarity violation in WW scattering were the Higgs or technicolor. By 2011, technicolor was extremely constrained and highly unlikely to be the correct model. That left the Higgs as by far the most likely thing to exist, and 1000 times more likely to exist than nothing at all, which would have ruined quantum mechanics.

    So to compare that to this 750 GeV bump is really comparing apples and oranges. There is absolutely no prior reason why there should be some random new scalar particle at that mass scale. It fixes nothing in the Standard Model, it is not the dark matter, it is totally random. That doesn’t mean that a priori it couldn’t have been real, but it required much more evidence for it to be taken seriously.

    The evidence required for a new particle beyond the Standard Model needed to be extraordinary, because that was an extraordinary claim; the Higgs, being part of the Standard Model, needed only ordinary evidence, because it was an ordinary claim, essentially the null hypothesis.
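    To put that in rough Bayesian terms (the numbers below are illustrative, not measured), the plausibility of a claim after seeing the data is the strength of the evidence multiplied by how plausible the claim was beforehand:

    \[ \underbrace{\frac{P(\text{particle} \mid \text{data})}{P(\text{no particle} \mid \text{data})}}_{\text{posterior odds}} \;=\; \underbrace{\frac{P(\text{data} \mid \text{particle})}{P(\text{data} \mid \text{no particle})}}_{\text{Bayes factor}} \;\times\; \underbrace{\frac{P(\text{particle})}{P(\text{no particle})}}_{\text{prior odds}} \]

    If the bump data supplied a Bayes factor of, say, 100, then prior odds of 1000:1 in favor of the Higgs become posterior odds of 100,000:1, while the same evidence applied to a random scalar with prior odds of 1:1000 leaves posterior odds of only 1:10, still favoring a fluke.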

  6. Could the fact that both experiments saw the same fluke mean that there was a bias? Or maybe there was a specific period of data-taking which introduced some pathology?
    This could be checked by measuring, for example, the inelastic cross-section run by run.

  7. The energy range over which there could be a diphoton excess is much larger than the range where a Higgs could still have been hiding in December 2011, which was just 115–140 GeV. The look-elsewhere effect therefore didn’t dilute the significance of the Higgs signal as severely, making the appearance of a bump at the same location in ATLAS and CMS highly significant.

    1. In 2011, CMS barely had any evidence at all in the diphoton channel… or anywhere else. It had to cobble the evidence together from many channels — and some of that evidence proved questionable later. Of course, by June 2012, both experiments had discovery-level evidence, which is the only thing that matters.

  8. The CDF discovery of the top quark in 1995 was only 4.8 sigma, with no accounting for the look-elsewhere effect (LEE) or global significance:

    https://arxiv.org/abs/hep-ex/9503002

    It seems like 5 sigma, even without accounting for global significance, does pretty well when there’s a solid theoretical foundation for the discovery (it was pretty well assumed the top quark had better be there somewhere; similarly, the absence of a Higgs sector somewhere in the regime where it was found would have been more startling than finding it). For “bump hunting” discoveries it makes sense that you have to account for global significance, which pushed the 750 GeV bump down to around 2 sigma in each experiment, and it shouldn’t be too surprising that it went away.
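    A back-of-the-envelope version of that local-to-global conversion; the effective trials factor here is an illustrative guess, not either experiment’s published number:

```python
# Rough sketch of the look-elsewhere effect: deflate a local significance
# into a global one, given an assumed number of independent places a bump
# could have appeared. The trials factor is an illustrative assumption.
from scipy.stats import norm

z_local = 3.9    # roughly the local significance quoted for the 750 GeV bump
n_trials = 300   # assumed effective number of independent mass windows

p_local = norm.sf(z_local)                    # one-sided local p-value
p_global = 1.0 - (1.0 - p_local) ** n_trials  # chance of such a fluke anywhere
z_global = norm.isf(p_global)                 # convert back to a significance

print(f"local:  {z_local:.1f} sigma (p = {p_local:.1e})")
print(f"global: {z_global:.1f} sigma (p = {p_global:.1e})")
```

    With a few hundred effective windows, a 3.9-sigma local excess deflates to roughly 2 sigma globally, in line with the numbers quoted above.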

