Quick post: the CMS experiment at the Large Hadron Collider [LHC] has updated its measurement of the rate for Higgs particles to be produced and then decay to two photons. We’ve been waiting for this result with considerable interest. Recall the history: in July, both CMS and its cousin, the ATLAS experiment, found this process to be in excess of the prediction of the Standard Model [the equations we use to describe the known elementary particles and forces]. Indeed, these excesses were part of why the Higgs particle was discovered a few months earlier than was widely expected. Although it was exciting that both experiments saw something amiss, the statistical significance of these excesses wasn’t that high, so more data to confirm the excesses was needed before we could take them very seriously.
Then ATLAS updated their results, first in November and then last week — the latter measurement based on the full 2011-2012 data set. Each time the excess has remained, though it has become a bit smaller each time, and therefore the statistical significance of the result has not really increased. That’s an unfortunate sign, if one is hoping the excess isn’t just a statistical fluke. [Note that, by contrast, the statistical significance of the evidence for the Higgs particle’s very existence has gone up each time we’ve seen new data.]
Meanwhile CMS did not update their results for this process in November. Rumors as to why swirled around, and there were some comments about this by CMS spokesman Joe Incandela in January; but we can forget about the rumors now, because the result based on the full 2011-2012 data set has now appeared at the Moriond conference. And the result basically agrees with the Standard Model. (Details to follow in a future post.)
To repeat: with more data, CMS does not confirm the excess that they saw in July, and does not confirm the excess seen currently by ATLAS. There’s nothing unusual about this, unfortunate as it may be; the results are all consistent with the overall measurement uncertainties that have been quoted at each stage by the two experiments.
So if, in fact, it is really the case that Higgs particles are produced and then decay to two photons more often than the Standard Model predicts, we will not see convincing evidence of this until well past 2015, when the LHC starts running again and much more data is collected by ATLAS and CMS. Sorry, but if there’s anything about this Higgs particle that is dramatically different from a Standard Model Higgs (the simplest possible type of Higgs particle), we’ll have to look elsewhere in the 2011-2012 data to find it. Or we’ll have to wait until 2015.
40 thoughts on “CMS sees no excess in Higgs decays to photons”
So, dear Matt… is it crunch time for physics, as the New Scientist magazine scientists predict?
Don’t believe everything you read.
Are we to look for BSM physics forever? When do you think we must stop looking?
2015, 2018, 2020, or never? Can it be a mirage we are looking for?
We know the Standard Model is incomplete. There is dark matter; there is some source of neutrino masses; we have no explanation for the precise masses of the fermions; and gravity cannot be combined with the Standard Model without additional physics. So I don’t know how long forever is, but we are certainly not going to stop looking for Beyond the Standard Model Physics, since we already know it exists.
This is all very cool. When I was an undergrad we barely knew there was structure in the proton. Now we are probing one of the most basic components of the universe. To me, this is kind of like a goldfish discovering water.
No, Dave, it is not a fish discovering water; this is man discovering his metaphysical mind. Pure Higgs associates do not care to discover the Higgs.
Allow me to state a prediction here:
There will never be a BSM; our missing explanation will not be found in it. The only explanation will be in: pattern-creating active-information reality.
Remember this for the next 50 years till it is confirmed.
This is not a scientific prediction, and I assure you will not be remembered, or given credit even if it is true. If you want to make a prediction worth the name, make one with a scientific argument.
Prof. Strassler— Can you comment, sometime, on Dr. Lykken’s reasoning re ours being an “unstable universe”, and particularly his calc’s (??) for the probability of the “wipe-out” event? Thx!
It is not due to Professor Lykken. These ideas are due to other people. It’s a very old idea, in fact. But it’s also very sensitive to things we don’t know. It needs a post but I’m not very interested in it as stated. Overly hyped.
The next thing they need to find is tetraneutrons, as these are nuclei of element zero.
Whether true or not, this would be a terrible experiment in which to look for such things. There’s far too much energy, and entirely the wrong type of detection equipment.
What a terrible fate to read and answer comments like these.
Educating general public was never meant to be easy.
It is also a terrible fate to fight constantly with university administrators and write grant proposals with a 5% chance of success. Choose your poison.
I’m sorry to hear that, Matt, but please be a bit more careful to answer people who make a really good remark, like AB32.
Overlooked, but now answered. And it was *not* a really good remark, despite appearances. A 2 sigma change in a result? Big deal.
Good remark or bad remark on this blog, everything is relative, as Einstein didn’t say :-). “Maybe they found their detector problem and ATLAS still hasn’t found theirs… or maybe” seems like an appropriate discussion on your blog to me.
Fair enough. It’s partly a matter of style; it’s one thing to raise questions, and another to accuse people of incompetence or misconduct. To say that we shouldn’t trust CMS because their result changed by 2 standard deviations, while we should trust ATLAS because their result changed by less than 1 standard deviation, is to misunderstand statistics and uncertainty estimates.
The fact that an outcome changes with more data by a few standard deviations is indeed a normal statistical effect BUT only if the method hasn’t changed! I have to admit that I’m not well informed on the data analysis, but as an outsider I would have appreciated better communication by CMS/CERN to the outside world about statistics versus method, especially because it took them so long to publish the diphoton result. You already explained some issues in your answer to AB32, and that already clears up some things. Thanks Matt.
The methods have not changed much if at all (they’ve been using multiple methods from the beginning, as cross-checks.)
What changed most (but not very much — we’re talking less than 1%) was the calibration data that goes into all of the photon energies in all of the events they are using. When they measure a photon’s energy, they need to know exactly how each part of their calorimeter (i.e., energy detector) is working in order to get the right number. This is the trickiest (and most central) part of the measurement; you derive the mass of the particle that could have created the two photons (and most of the time they are created some other way) from those photon energy measurements.

Now, when you have a very large amount of background and a very small amount of signal, a relatively small shift in calibration at each point in the detector, which causes each event you’ve measured to move slightly to the right or slightly to the left in your plot, can make a small bump in the data (i.e., the Higgs signal) grow or shrink, depending on what happens to the background that lies underneath it. When this happened to CMS this fall, the effect was larger than they naively expected, and some people worried that an error had been made. It took them some time to convince themselves that no, they had not made an error, and yes, the effect, though larger than it might have been, was not larger than was reasonable, given the uncertainties they had quoted originally back in July. Having convinced themselves of this, they set out to do analyses on the full data set. The result of this analysis is what we have now seen.
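The point about a small energy-scale shift moving a bump can be seen in a deliberately oversimplified toy calculation. This is *not* how CMS actually extracts the signal (they fit the full diphoton mass spectrum); the window, mass resolution, and 1% shift below are all invented numbers chosen only to illustrate the mechanism:

```python
from math import erf, sqrt

def frac_in_window(center, sigma, lo, hi):
    """Fraction of a Gaussian signal peak that lands inside [lo, hi]."""
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    return Phi((hi - center) / sigma) - Phi((lo - center) / sigma)

# Toy numbers, purely illustrative (not CMS values):
true_mass, resolution = 125.0, 1.5   # GeV: peak position and detector resolution
lo, hi = 122.0, 128.0                # hypothetical mass window used to count signal

nominal = frac_in_window(true_mass, resolution, lo, hi)
# A 1% photon-energy miscalibration shifts every reconstructed mass coherently:
shifted = frac_in_window(true_mass * 1.01, resolution, lo, hi)

print(f"signal fraction captured, nominal calibration: {nominal:.3f}")
print(f"signal fraction captured, 1% scale shift:      {shifted:.3f}")
```

Even this sub-percent shift visibly changes how much of the peak is attributed to the signal region; on top of a large, steeply falling background, the apparent size of the bump changes accordingly.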
According to the experts in CMS, they decided on their method before they did the analysis, and they stuck to it. The results that we now see are not cherry-picked, they claim; they made the decision first, then studied the data, and presented the results based on their pre-analysis decision.
When you say “I would have appreciated better communication by CMS/CERN to the outside world about statistics versus method”: (a) it’s not CERN’s job, it’s CMS’s job, and (b) the communication to scientists hasn’t been that bad. They did present some of the relevant numbers, though maybe not all of them.
The problem here is all political and not scientific; they are now being accused of not having had enough signal to claim discovery back in July, by some who would want ATLAS to get credit and CMS not. This is just silliness. The “5 sigma” criterion is arbitrary, and CMS wouldn’t have been far below 5 in any case. Furthermore, I am told the improved calibration that reduced the photon signal also increased the tau signal, which was low (as you may recall) back in July, and is now much higher, so the two effects would have partly canceled. Meanwhile, one could raise questions about whether the misalignment of ATLAS’s two mass measurements means that perhaps they too have some problems. (I think it is likely to be a statistical effect, personally.) And lastly, both experiments have been performing with comparable excellence, skill and quality. To deny one of them credit based on the vagaries of statistical luck is absurd. Obviously both of them now have strong evidence and can claim credit.
Many thanks Matt for this clear description of the CMS diphoton analysis methodology. I’m sure it will also be appreciated by other readers of your blog!
I hope it is entirely right… I may still learn things and have to make small adjustments…
“It is very exciting to be here, and this year just has been quite exhilarating as a particle physicist!” Meenakshi Narain, a professor of physics at Brown University, wrote to LiveScience from the conference.
Wow, that is the best news in human history! … we finally know how we will all die. 🙂
🙂 we don’t know anything more than we did before about that…
Seriously, what excites me is the fact that we humans, organisms on a speck of dust in this vast universe, could work out these mysteries of the cosmos that made us. There is no chance of aligning ourselves with the complexities of the universe itself without some guiding mechanism, as in “guiding light”, to show us the way.
Some will call it God others will call it universal consciousness that we linked to when our consciousness “turned on”. There must be some kind of quantum entanglement between us and the “Almighty One”.
Maybe there is something to the idea that complex systems develop consciousness and logical thinking. Maybe the fact that the largest structures, the filaments that make up the 3D universe, and the nerve structures in our brain are very similar is not a coincidence, but that in fact one drives the other: the Almighty One guiding us.
We have almost worked out the forces of nature, and now the Higgs has revealed itself; maybe now we can look for the force of God.
I am excited because this wonderful find, the Higgs, confirmed my faith in God.
Bravo, Mr. Peter Higgs and all the others who gave us this wonderful moment, while we are still alive to marvel at it. God bless.
I find it a little irresponsible for CMS not to say anything on what they would see if the new analysis was applied to their previously published dataset / ICHEP dataset. This wouldn’t be a necessity if they didn’t have such a large change in the observed significance.
Either they saw nothing in the newly arrived data, thus indicating that their new analysis is potentially problematic, or their ICHEP analysis was just wrong. At this point, their result is just not trustworthy.
a) On the contrary, they *did* say this. At least it was in the talk I saw on Thursday. They said that the new analysis (which includes a reanalysis of the old 8 TeV data that was included at ICHEP) is consistent with the old result at less than the 2 sigma level. Remember the original measurement had an uncertainty (at one sigma) of about 25%, so a 50% change in the answer is about 2 sigma.
b) Your conclusion that their results are not trustworthy is inconsistent with the well-known fact that results often become more trustworthy with time as issues with the detector are understood better. Maybe you should distrust the ICHEP results, because they were done with older calibration data. With newer calibration data the current results may well be more trustworthy, not less. You need to take statistics seriously; they gave you an uncertainty estimate at ICHEP and now you are complaining that things changed by 2 sigma? It happens! And why should you trust them more or less than ATLAS? Maybe they found their detector problem and ATLAS still hasn’t found theirs… or maybe ATLAS has a larger background fluctuation and still doesn’t have enough statistics for their result to overwhelm this fluctuation.
c) they said very clearly that with the new result the ICHEP signal did get smaller; also the data since ICHEP has less of a signal. So what? That’s the way data, and statistics, works all the time.
d) the uncertainty on the result (at one sigma) remains over 25% (because the result has gotten smaller but the absolute uncertainty still hasn’t shrunk that much.) Since the result is smaller, if it bounces back up by 50% it won’t differ that much from the Standard Model.
In any case, if you want to distrust them and trust ATLAS, go ahead; that’s between you and your conscience. What you’re really doing is not trusting statistics and uncertainty estimates. ***They told you how big the uncertainties were at ICHEP, but clearly you didn’t believe them!!!***
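The back-of-the-envelope arithmetic in point (a) above is worth making explicit. The numbers below are illustrative stand-ins consistent with the roughly 25% (one-sigma) uncertainty mentioned in the text, not the experiment’s published values:

```python
# Illustrative stand-in numbers, not CMS's published values:
mu_ichep = 1.6                 # signal strength (ratio to Standard Model prediction) at ICHEP
sigma_ichep = 0.4              # ~25% one-sigma uncertainty on that measurement
mu_full = 0.8                  # roughly Standard-Model-like result (mu = 1 is exact SM)

# How many of the quoted standard deviations separate the two results?
n_sigma = abs(mu_full - mu_ichep) / sigma_ichep
print(f"change between results: {n_sigma:.1f} sigma")  # 2.0 sigma
```

A shift of about 2 sigma between two measurements is uncomfortable but well within normal statistical behavior, which is exactly the argument being made here.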
Dear Matt, I should inform you that dark matter had better not have a WIMP origin. It would be a disaster for our Galaxy, bringing us unobserved satellites instead of explaining various properties of the Earth and Moon. So I object to WIMP dark matter; the Galaxy is too dear to me.
You aren’t seriously “informing” me of conjectures as though they were facts, are you? Send me a paper [by email] and I’ll consider your arguments.
/We know the Standard Model is incomplete. There is dark matter;
gravity cannot be combined with the Standard Model without additional physics…/ – Matt Strassler | March 14, 2013 at 12:01 PM.
Professor Strassler is correct!
From my small knowledge, there is dark matter wrapped in extra spatial dimensions. The 3D spacetime does not include dark matter. I came to this simple guess when Aether was replaced by dark matter and dark energy. Dark matter is within the reach of experiment, without disturbing the human perception of observable spacetime; is it only a matter of time?
Please note that my statement does not support yours.
Professor you are correct. You have responsibility. I must swallow my own bitter pill.
Same helicity creates mass (time invariance).
Invariance in time leads to conservation of energy.
This is unique to matter. Allow photons not to exceed their own speed?
A change in chirality (parity or angular momentum?) creates mass.
Invariance under translation leads to conservation of momentum. Parity is not a symmetry of the universe.
In dark matter, is it only a matter of time? Time invariance is not a truth here; thus do photons exceed their own speed, or cease to exist?
Thanks, Prof. Matt…
What comes after the Higgs boson? Are there new particles in the universe?