Was yesterday the day when a crack appeared in the Standard Model that will lead to its demise? Maybe. It was a very interesting day, that’s for sure. [Here’s yesterday’s article on the results as they appeared.]
I find the following plot useful… it shows the results on photon pairs from ATLAS and CMS superposed for comparison. [I take only the central events from CMS because the events that have a photon in the endcap don’t show much (there are excesses and deficits in the interesting region) and because it makes the plot too cluttered; suffice it to say that the endcap photons show nothing unusual.] The challenge is that ATLAS uses a linear horizontal axis while CMS uses a logarithmic one, but in the interesting region of 600-800 GeV you can more or less line them up. Notice that CMS’s bins are narrower than ATLAS’s by a factor of 2.
Both plots definitely show a bump. The two experiments have rather similar amounts of data, so we might have hoped for something more similar in the bumps, but the number of events in each bump is small and statistical flukes can play all sorts of tricks.
Of course your eye can play tricks too. A bump of low significance with a small number of events looks much more impressive on a logarithmic plot than a bump of equal significance with a larger number of events, so beware that bias; it also makes the curves to the left of the bump appear smoother and more featureless than they actually are. [For instance, notice the bump around 350 in the lower panel of CMS's plot.]
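To see why equal significance can look so different to the eye, here is a toy calculation in Python. This is not the experiments' actual likelihood analysis, and the event counts are invented purely for illustration: a 10-event excess over a background of 25 and a 2-event excess over a background of 1 are both roughly 2-sigma fluctuations in the crude Gaussian approximation, yet the second triples the local curve and dominates a logarithmic plot.

```python
from math import sqrt

def naive_significance(observed, expected_background):
    """Crude Gaussian approximation: excess over sqrt(background).
    Real analyses use likelihood ratios, but this captures the scaling."""
    return (observed - expected_background) / sqrt(expected_background)

# Invented counts, chosen only to illustrate the point:
print(naive_significance(35, 25))  # 10 extra events over 25: 2.0 sigma
print(naive_significance(3, 1))    # 2 extra events over 1:   2.0 sigma
```

Both bumps are equally (un)impressive statistically, but on a log axis the second looks like a factor-of-3 spike while the first raises the curve by only 40 percent.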
We’re in that interesting moment when all we can say is that there might be something real and new in this data, and we have to take it very seriously. We also have to take the statistical analyses of these bumps seriously, and they’re not as promising as these bumps look by eye. If I hadn’t seen the statistical significances that ATLAS and CMS quoted, I’d have been more optimistic.
Also disappointing is that ATLAS's new search is not very different from their Run 1 search of the same type, and it uses only 3.2 inverse femtobarns of data, less than the 3.5 they can use in a few other cases… and CMS uses 2.6 inverse femtobarns. This makes ATLAS less sensitive and CMS more sensitive than I had originally estimated… and makes it even less clear why ATLAS would be more sensitive to this signal in Run 2 than they were in Run 1, given the small amount of Run 2 data. [One can check that if the events really have 750 GeV of energy and come from gluon collisions, the sensitivities of the Run 1 and Run 2 searches are comparable, so one should consider combining them, which would reduce the significance of the ATLAS excess. Not to combine them is to “cherry-pick”.]
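A rough way to see how combining Run 1 and Run 2 would dilute the excess is Stouffer's method for combining independent z-scores. This is a simplification of what the experiments would actually do, and the numbers below are illustrative, not official values: if two searches of comparable sensitivity report roughly 3.6 sigma and roughly 0.5 sigma, the combination drops below 3 sigma.

```python
from math import sqrt

def stouffer_combine(z_values, weights=None):
    """Combine independent z-scores (Stouffer's method).
    Equal weights correspond to datasets of comparable sensitivity."""
    if weights is None:
        weights = [1.0] * len(z_values)
    numerator = sum(w * z for w, z in zip(weights, z_values))
    return numerator / sqrt(sum(w * w for w in weights))

# Illustrative numbers only:
combined = stouffer_combine([3.6, 0.5])
print(round(combined, 2))  # ~2.9: a weak Run 1 result pulls the combination down
```

The point is structural, not numerical: averaging a striking result with an unremarkable one of equal weight always lowers the combined significance.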
By the way, we heard that the excess events do not look very different from the events seen on either side of the bump; they don’t, for instance, have much higher total energy. That means that a higher-energy process, one that produces a new particle at 750 GeV indirectly, can’t be the cause of a big jump in the 13 TeV production rate relative to 8 TeV. So one can’t hide behind this possible explanation for why a putative signal is seen brightly in Run 2 yet was barely seen, if at all, in Run 1.
Of course the number of events is small and so these oddities could just be due to statistical flukes doing funny things with a real signal. The question is whether it could just be statistical flukes doing funny things with the known background, which also has a small number of events.
And we should also, in tempering our enthusiasm, remember this plot: the diboson excess that so many were excited about this summer. Bumps often appear, and they usually go away. R.I.P.
Nevertheless, there’s nothing about this diphoton excess which makes it obvious that one should be pessimistic about it. It’s inconclusive: depending on the statistical questions you ask (whether you combine ATLAS and CMS Run 2, whether you try to combine ATLAS Run 1 and Run 2, whether you worry about whether the resonance is wide or narrow), you can draw positive or agnostic conclusions. It’s hard to draw entirely negative conclusions… and that’s a reason for optimism.
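One of those statistical questions is the “look-elsewhere effect”: a fluctuation this large at some particular mass is much less surprising once you account for the many mass windows that were searched. A minimal sketch follows; the number of effectively independent windows is my invention here, since the real correction comes from the experiments' own analyses.

```python
from math import erf, sqrt

def z_to_p(z):
    """One-sided tail probability of a z-sigma Gaussian fluctuation."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def global_p(p_local, n_windows):
    """Crude look-elsewhere correction: chance that at least one of
    n_windows independent mass windows fluctuates this far."""
    return 1.0 - (1.0 - p_local) ** n_windows

p_local = z_to_p(3.6)          # local p-value of a 3.6 sigma bump, ~1.6e-4
print(global_p(p_local, 50))   # with ~50 windows, closer to a 2.4 sigma effect
```

This is why a quoted “local” significance always overstates how surprised one should be.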
Six months or so from now — or less, if we can use this excess as a clue to find something more convincing within the existing data — we’ll likely say “R.I.P.” again. Will we bury this little excess, or the Standard Model itself?
24 Responses
IMHO the basic Higgs field particle is massless; however, by transformation and merging it is able to form all the particles needed in the SM. As an extra possibility I would suggest that, beyond the SM, Weyl fermions and Majorana fermions can form after transformation and merging, also found in the LHC at 125 and 750 GeV.
see: https://www.flickr.com/photos/93308747@N05/23627741770/in/photostream
What I understand is that our work here is all about data and graphs. How can I learn to read these graphs well?
Some subtle and interesting physics may have been discovered. But enough of trivial matters, it is time to discuss the best words to use describing it. Do we have any philosophers to weigh in?
Kent, If you don’t mean RIP then Matt should never have written “The Standard Model isn’t dead yet.” That clearly implies that the SM could become dead. Words matter.
As can be calculated from the table entries below, a Top-Super Diquark Resonance is predicted as a (ds)bar(ss) = (ds)barS or a (ds)(ss)bar = (ds)Sbar diquark complex averaged at (182.758+596.907) GeV = 779.67 GeV.
In the diquark triplet {dd; ds; ss}={Dainty; Top; Super} a Super-Superbar resonance at 1.194 TeV can also be inferred with the Super-Dainty resonance at 652.9 GeV and the Top-Dainty resonance at 238.7 GeV ‘suppressed’ by the Higgs Boson summation as indicated below. Supersymmetric partners become unnecessary in the Standard Model, extended into the diquark hierarchies.
https://www.researchgate.net/publication/287347236_The_Top-Super_Diquark_Resonance_of_CERN_-_December_15th_2015
Reblogged this on In the Dark and commented:
Thoughts from a proper particle physicist on the recent announcement from the LHC…
Plato,
I think you are missing the point – what we are saying is that even if this signal turns out to be true, it is not going to mean “RIP” for the standard model. Yes, we need to watch this closely, but it is to see whether or not we are going to get a clue of how to go beyond the Standard Model, not a clue that we need to bury it.
“Conclusion? The Standard Model isn’t dead yet… but we need to watch this closely… or think of another question.” CMS and ATLAS present their results: http://profmattstrassler.com/2015/12/15/cms-and-atlas-present-their-results/
I think the following blog post conclusion makes it pretty clear?
I agree with commenters that this language of “crack”, “demise”, and “R.I.P.” of the standard model is inaccurate and very misleading.
The standard model is a consistent theory of all particles that have appreciable couplings to familiar matter (atoms, nuclei, light); it applies as a consistent renormalizable theory at low energies, where “low” appears to mean up to several hundred GeV or so.
Of course it is incomplete at high energies. The inventors of the standard model understood this very well. Did we say “R.I.P. QED” when we learnt about nuclear forces? No, because QED never claimed to be the complete theory of nuclei.
Matt,
I remember that you wrote a post not that long ago about a significant study you and several others published. (I have looked for the post but could not find it.) I believe the study, in preparation for the second run, concerned various ways new/interesting results at the LHC might manifest themselves in less looked-for ways.
Granted the existence of this signal is far from certain.
My question — was there anything like this 750 GeV “bump” in that work? Are there any favored interpretations, if this signal is indeed real?
Thanks for the interesting blog post.
I agree as well. Most physicists know that when Dr. Strassler says “a crack” in the standard model, he means that we may have discovered a regime in which it must be modified. But as this is a blog targeted mainly to non-physicists, we should be very careful how this is phrased. We don’t want to give any science journalists an excuse to print a headline like “Standard Model Proved Wrong! Scientists bark up wrong tree for 50 years!” Claiming that we may have to say “RIP” to the Standard Model is dangerously close to this for someone who is generally extremely careful how he explains things to lay readers. That said, I am very glad he has started blogging again, even if it is only occasionally for momentous news!
I agree, Stam. We’ve always known the Standard Model is incomplete. This would just be an extension.
No. Because, assuming this signal does survive, it will not indicate any inconsistency of the Standard Model, but rather its well-known incompleteness. So it is not correct to state that this is any indication of a “crack” in the Standard Model.
“the diphoton excess is inconclusive” but the fact that there may be a “crack” in the Standard Model has to be considered exciting news.
Thanks for your blog survey ‘manopc’.
Reblogged this on Current Affairs and commented:
Standard Model being overtaken – Lubos Motl is very optimistic, Matt Strassler is excited, Peter Woit is pessimistic
One well-known blogging astrophysicist who does not have an iron in the fire put the odds against a real detection at 6000 to 1 when sensible statistics are employed in the assessment.
I would like to know more about that argument, not about this instance but rather to better understand “sensible statistics”.
Can you please outline some of his argument? I’m interested in learning more about “sensible statistics” as an outsider biologist. Thanks.