[Note Added: this afternoon the author of the Scientific American article made a few corrections. I leave it to you to judge whether he addressed the issues raised here.]
There’s been a little silliness floating around (sadly, in Scientific American, whose article contains at least two factual errors) unscientifically speculating that ATLAS’s new results on the Higgs-like particle, from data collected at the Large Hadron Collider [LHC], suggest there are two such particles rather than one. The mass measurement of this particle using the data when it decays to two photons, 126.6 ± 0.3 (stat) ± 0.7 (syst) GeV/c², is different, by 2.7 standard deviations, from the mass measurement obtained from its decays to two lepton/anti-lepton pairs, 123.5 ± 0.9 (stat) +0.4/−0.2 (syst) GeV/c². So… huh… gee… maybe there are two Higgs-like particles, a lighter one which rarely decays to two photons and a heavier one which rarely decays to two lepton/anti-lepton pairs?
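For those who want to check the arithmetic, here is a rough sketch (in Python, and entirely mine, not ATLAS’s) of how naively adding the quoted statistical and systematic uncertainties in quadrature, and taking the larger +0.4 side of the asymmetric systematic, lands you in the neighborhood of those 2.7 standard deviations; the official figure comes from a full likelihood fit, so this is only a back-of-envelope check.

```python
# Rough cross-check of the quoted 2.7-sigma difference (not ATLAS's calculation):
# naively add statistical and systematic uncertainties in quadrature.
from math import hypot, sqrt

m_gg, err_gg = 126.6, hypot(0.3, 0.7)   # two-photon channel
m_4l, err_4l = 123.5, hypot(0.9, 0.4)   # two lepton/anti-lepton pairs (larger side of the systematic)

delta = m_gg - m_4l
sigma = sqrt(err_gg**2 + err_4l**2)
print(f"difference: {delta:.1f} GeV/c^2, about {delta/sigma:.1f} standard deviations")
# prints roughly 3.1 GeV/c^2 and ~2.5 sigma; the full likelihood fit gives 2.7
```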
[Note Added: I should emphasize, lest anyone blame ATLAS for this implausible line of speculation, that in the ATLAS presentation last week, which was one of several presentations that morning, these two mass measurements were presented simply and responsibly, as results from data. Not a single speculative word was said about there being a hint of two Higgs particles. I don’t know who got the ball rolling on that idea, but it wasn’t ATLAS. And it’s not a plausible idea: see below.]
Take a deep breath. Not only would the two types of particles somehow have to be magically and implausibly arranged to mimic, at first glance and to a rough extent, a single Standard Model Higgs particle (the simplest possible type of Higgs particle); there’s also another experiment, which unfortunately the writer of the Scientific American article neglected to consult.
ATLAS’s mass measurement from the events with two lepton/anti-lepton pairs also disagrees with CMS’s mass measurement obtained from the same type of events: 126.2 ± 0.6 (stat) ± 0.2 (syst) GeV/c². Two similar experimental detectors, same measurement, moderate disagreement. Nature is nature; there’s no way that ATLAS can be making one type of particle all the time, while CMS is making a different one all the time. So there is no evidence here, taking ATLAS and CMS together, favoring the existence of a separate particle with a mass of about 123.5 GeV/c² that decays to two lepton/anti-lepton pairs.
What is behind these discrepancies, then? ATLAS and CMS each have scarcely a dozen of these events with two lepton/anti-lepton pairs, and their extraction of the Higgs particle’s mass from each event is somewhat uncertain, which is why many events are required for a good mass measurement. When you still have small amounts of data, funny statistical fluctuations will often occur. We’ve seen this before; back in 1989, when the Stanford Linear Collider (SLC) produced its first few Z particles at the Stanford Linear Accelerator Center, the plot of the Z particle’s mass gave a double resonance peak, instead of the single peak that was expected. A brief moment of speculation occurred, but with more data the anticipated single-peak structure emerged. I’ve heard at least one other similar story from an earlier decade. In fact ATLAS and CMS had a 2 GeV mass discrepancy when the first Higgs hints came in; that was just an effect of statistics. Combine a fluctuation of this form with a minor detector calibration problem, and you’ll get discrepancies like this.
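To see how easily this can happen, here is a little toy simulation (mine, not anything ATLAS did) of where a mass peak built from just a dozen events lands, assuming, purely for illustration, a true mass of 126 GeV and a per-event mass resolution of about 2 GeV, roughly the right ballpark for the two lepton/anti-lepton pair channel:

```python
# Toy illustration of how much a mass peak made of only ~12 events can wander.
# The true mass and per-event resolution below are assumptions for illustration.
import random
random.seed(1)

TRUE_MASS, RESOLUTION, N_EVENTS, N_TOYS = 126.0, 2.0, 12, 100_000

shifts = []
for _ in range(N_TOYS):
    events = [random.gauss(TRUE_MASS, RESOLUTION) for _ in range(N_EVENTS)]
    shifts.append(sum(events) / N_EVENTS - TRUE_MASS)   # how far the measured peak lands from the truth

typical = (sum(s * s for s in shifts) / N_TOYS) ** 0.5
frac_1gev = sum(abs(s) > 1.0 for s in shifts) / N_TOYS
print(f"typical wander of the measured peak: {typical:.2f} GeV")
print(f"toy experiments off by more than 1 GeV: {frac_1gev:.0%}")
```

In this toy the peak typically wanders by about half a GeV, and it ends up a GeV or more from the true value in roughly 8% of the pseudo-experiments; stack a modest calibration offset on top of that, and discrepancies of a couple of GeV between channels or between experiments are nothing shocking.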
Multiple types of Higgs particles are certainly possible; people have considered this scenario for decades, and I’ve written about it here, for instance. Efforts to search for a second type of Higgs particle have been going on since the discovery of the first one. But let’s not manufacture one out of thin air by looking selectively at the data; that’s not how reliable science gets done.
19 Responses
I do not see any problem in Scientific American’s article, no silliness about it. I do see a big undercurrent in this Higgs story.
a. In the July 2012 announcement, CERN did not claim that the new particle is a Higgs.
b. On 16 October 2012, the Scientific American article “The Inner Life of Quarks” by Don Lincoln discussed the long-dead idea of preons.
c. In the 13 November 2012 reports (almost 5 months after the initial report), CERN did not confirm that the new particle is a Higgs. At the same time, SUSY received a deadly blow by the LHCb data.
d. In the 13 December 2012 report, ATLAS reported the problems in the Higgs data.
e. On December 14, 2012, the Scientific American article “Have Scientists Found Two Different Higgs Bosons?” by Michael Moyer hinted that the Higgs is in trouble.
The above is not a good trend for the Higgs. Rather than this being silliness, Scientific American seemingly knows something which we do not. Yet we do (or at least should) know the simple physics facts:
i. SUSY is the extension of Higgs. If SUSY is out, the Higgs alone cannot survive.
ii. Whatever SUSY is, it must have a tail sitting right around the weak-scale. If not, it is disjointed from “this” universe and cannot be a part of “this” physics. Let it be the physics of the “other” universe.
Nice clarification of more nonsense in Scientific American 🙂
Maybe they should stop writing about particle physics if they are not able to do it correctly (and, sometimes, without bias)…
What is most upsetting is that when science journalists overdo it, are not careful enough, or even make up or distort things to sell a better story, it is always the scientists who get the blame from the public, even though they have done nothing wrong but honestly report or talk about their results (in some cases maybe even at a scientific conference that was NOT intended for a wider public).
Why is an electron so stable?
It sounds like there are several measurements that are all very close to each other around 126 GeV and one that is pretty far from the mark at 123.5 GeV that could be off due to an experimental error, a statistical fluke (in a quite small sample) or a bit of both. As the OPERA debacle revealed, it can take several months of intense investigation to get to the bottom of a subtle apparatus problem (although it never looks subtle once you find it).
The point that it can’t be two Higgs particles, because all of the measurements would have seen two, is well taken.
If I were a betting man, I’d bet that the 123.5 GeV data point was in error, and make predictions on the assumption that the 126 GeV value was correct rather than including that data point in the overall estimate.
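To put rough numbers on that, here is a quick back-of-envelope combination (mine alone, adding each measurement’s quoted statistical and systematic uncertainties in quadrature, which the experiments themselves would not do because of correlated systematics and asymmetric errors) of the three mass values quoted above, with and without the low 123.5 GeV point:

```python
# Inverse-variance weighted average of the three quoted mass values.
# Purely illustrative; not an official ATLAS/CMS combination.
from math import hypot, sqrt

measurements = {                          # value (GeV), total uncertainty (GeV)
    "ATLAS two-photon":   (126.6, hypot(0.3, 0.7)),
    "ATLAS lepton pairs": (123.5, hypot(0.9, 0.4)),
    "CMS lepton pairs":   (126.2, hypot(0.6, 0.2)),
}

def weighted_average(items):
    weights = {name: 1.0 / err ** 2 for name, (_, err) in items.items()}
    total = sum(weights.values())
    mean = sum(w * items[name][0] for name, w in weights.items()) / total
    return mean, 1.0 / sqrt(total)

print("all three measurements: %.1f +/- %.1f GeV" % weighted_average(measurements))
without_low = {k: v for k, v in measurements.items() if k != "ATLAS lepton pairs"}
print("dropping the 123.5:     %.1f +/- %.1f GeV" % weighted_average(without_low))
# roughly 125.8 +/- 0.4 GeV versus 126.4 +/- 0.5 GeV
```

Either way the answer sits near 126 GeV; dropping the low point shifts the combined value by only about half a GeV.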
Looking at the situation from an engineer’s point of view, it seems to me rather normal that a new experiment with two new measurement devices gives slightly different readings on the two devices; in fact, it would be very surprising if both detectors yielded the same value right from the start. Calibrating the detectors is a difficult task, and there is always the possibility that such complex devices show some sort of systematic error that was not accounted for in the design.
I think it was correct to finally publish the measurements, without any hype. The teams running the operations at the LHC need to know about the difference, as it may indicate that further calibration is necessary, which is a normal procedure with such experiments.
This particular difference is only at one device, ATLAS. Still, everything you said applies to differences between ATLAS and CMS.
Well, that’s not entirely true, Carl. The measurements of the energies of photons, of electrons and of muons use different combinations of the sub-devices that make up ATLAS. And for that reason, separate calibrations are required for these particles. So in an important sense Markus was right.
As a layman, I found all your remarks pretty obvious. However, I have seen many media outlets picking up the Scientific American story as an actual report from the LHC. Kind of sad how clumsy these people are…
ATLAS held the result back for a month looking for systematic errors. Eventually they decided to go ahead and publish with an extensive analysis of how significant the discrepancy between the mass peaks is. The answer is that the peaks are reasonably different, but not inconsistent with a statistical fluctuation. This was handled with the highest grade of responsibility and professionalism.
Thank you Anonymous. Then this really follows on from Prof. Strassler’s previous post about how to help the media avoid running sensationalised stories based on a fraction of a press release. I’m sure if some science journalist read your reply, the words “the peaks are reasonably different” would be lit up in red letters like some synesthetic hallucination.
Maybe it is wise to hold off on releasing such results until we know they are different peaks, or until the peaks have converged with additional data. In its current form, is it really a result, or just half a job done? We all know peaks come and go. Why rush to report them?
PS. I’m certainly not singling out ATLAS. This happens in all fields of science. Just read the BMJ for copious examples!
As Anonymous said, this result was not rushed. These results were intended for the HCP conference, but they sat on them for a month, looking them over very carefully, until they were fully confident they could stand behind them. As I emphasized, flukes happen in data when you have small numbers of events, and they usually go away over time. The ATLAS people know this, of course; they are among the best in the world. But imagine the debate within ATLAS between two sides:
1) “This result is weird-looking and will mislead people into thinking there’s something unusual with the Higgs, so we should not make it public. We should wait until we have more data and this funny statistical fluke gradually disappears.”
2) “If a result doesn’t look the way we expect, but we can’t find anything wrong with it, why should we keep it private just because we don’t like the answer? If we do that, we introduce bias into our results, releasing things that look as expected and holding on to things that don’t. We should release all measurements when we’re confident we’ve done our best.”
In this case, (2) won the day, which scientifically is best, in my view. Politically it does cause some problems, and it is unfortunate those problems came out of Scientific American.
I should also emphasize that ATLAS has not made a press release about this. They are downplaying this discrepancy, as they should. Their presentation was made as part of a whole morning of talks by all the LHC experiments; it didn’t have any special billing. As far as I can tell, these speculations and resulting hype were generated entirely by others.
I’d be interested to know how the ATLAS result was presented to the science media community. Was it given in a matter-of-fact way that invited, or even enticed, the writer to conclude that a second Higgs boson was a possibility?
Or was it given with a caveat that, ‘we don’t suspect for one moment that this is a new boson’ ?
Sometimes I get the feeling that ATLAS is a little bit too quick to release its results to the media and to journals, as if to beat others to the groundbreaking story (if it turns out to be groundbreaking and not just a statistical blip!). This can be detrimental to the image of CERN (“Oh no. Another exciting result just evaporated!”) and even more detrimental to PhD students working on the equivalent analysis with other experiments, who suddenly find themselves second to publication just because they were more stringent about their final results, and who then have to justify why their result differs without treading on a political minefield (as I’ve probably just done! 🙂 )
I agree with the anonymous comment below: the result was presented in a fully matter-of-fact fashion, and with the utmost professionalism.
I’m not sure where your “feeling that ATLAS is a little bit too quick to release its results to the media and to journals” comes from. I don’t see any evidence of that.