Of Particular Significance

Holiday Higgs Hints: Confidence-Inspiring or Not?

Matt Strassler 12/19/11

Particle physics sure had an exciting week, with the latest update on Phase 1 of the search for the Higgs particle. (During Phase 1, the ATLAS and CMS experiments at the Large Hadron Collider (LHC) focus on finding or excluding the simplest possible Higgs particle, called the Standard Model Higgs particle.) The unambiguous news is that the two experiments collectively have excluded the Standard Model Higgs particle unless its mass-energy [E = mc²] lies in a very small window between about 115 and about 128 GeV, or in a second window between about 600 and about 800 GeV (which is disfavored by indirect evidence). That’s not to say that a more complicated type of Higgs particle (or particles) might not have a mass in the excluded range, but if the simplest type of Higgs particle is the one found in nature, then the experiments are closing in on it.

The ambiguous but even more exciting news was that both ATLAS and CMS happen to have seen some hints of the Higgs particle, and some of their hints are at about the same mass, namely about 125 GeV. The question that the current article addresses is this: how confident should you be that these hints actually reflect a real signal of the real Higgs particle?

This summer, we had some hints too, and I wrote an article titled “Why the current Higgs hints rest on uncertain ground.”   And indeed they did; they’re long gone now. The current situation is resting on firmer footing, but as you’ll see, I think you can make good arguments both in favor of confidence and in favor of caution. (You’ll also see that I think you can make bad arguments in favor of confidence, and I’ll try to explain why you should avoid them.) So I’m going to show you persuasive arguments that point in opposite directions, and I am not going to try to convince you which one is right. In fact, I’m going to try to convince you that while each of us will ascribe different levels of plausibility to the two arguments, it is really difficult to dismiss either of them out of hand. That’s why we need a successful 2012 run with lots of data; only then can the situation change.

The Data from the Higgs Search Update

ATLAS data from the search for Higgs decaying to two photons. Notice the size and shape that a signal of a Higgs particle at 120 GeV would be expected to have (red dotted lines, shown on and below the red curve, which is an exponential fit to the background data). Bins are 1 GeV wide.

Let’s begin with the data itself. Eight separate measurements played a role in last week’s update of the search for the Standard Model Higgs particle. Four of them each separately have a small impact, and their plots themselves don’t contain much information. Let me show you the plots from the other four. These are what I called (in this article from the summer and in this more recent article about how one searches for the Standard Model Higgs particle) the “easy” searches: the ones whose plots should tell you by eye (eventually) what is going on, because with sufficient data they should give a clearly observable peak in the data, poking up above a smooth background like an isolated volcanic cone on a gently sloping desert plain. (If you’re a layperson feeling lost already, you might instead want to listen to the excerpts from my public lecture at the Secret Science Club; these are the two searches I described.) These are the searches for a Higgs decaying to two photons, and for a Higgs decaying (through real and virtual Z particles) to “four leptons”, by which we really mean two charged lepton-antilepton pairs (specifically electron-positron and muon-antimuon pairs, which can be measured precisely.) In both of these cases, the search for the Higgs particle is relatively easy, because when one takes the two photons (or four leptons) and adds up their energies (or more precisely, combines their energies and momenta to form their invariant mass), one will find it equals the Higgs mass-energy if the photons or leptons came from a Higgs particle, while other non-Higgs sources of two photons or four leptons will be smoothly distributed. This can be seen in all four figures, which show the results of these two searches at both ATLAS and CMS. On each plot are shown data (black dots), the expected average background (a smooth distribution) and one or more examples of what a signal would look like on average (little peaks — but read the captions and labelling carefully to avoid being misled.)
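
If you want to see that arithmetic spelled out, here is a minimal sketch in Python; it is purely illustrative (it is not the experiments’ reconstruction software), and the two photon four-momenta below are invented, rounded numbers chosen so that they happen to combine to roughly 125 GeV.

    import math

    def invariant_mass(p1, p2):
        """Invariant mass of two (approximately) massless particles such as photons,
        each given as (E, px, py, pz) in GeV."""
        E  = p1[0] + p2[0]
        px = p1[1] + p2[1]
        py = p1[2] + p2[2]
        pz = p1[3] + p2[3]
        m_squared = E**2 - (px**2 + py**2 + pz**2)
        return math.sqrt(max(m_squared, 0.0))   # guard against tiny negative values from rounding

    # Two made-up photons whose combination happens to sit near 125 GeV:
    photon1 = (70.0,  70.0,  0.0, 0.0)
    photon2 = (60.0, -51.6, 30.6, 0.0)
    print(invariant_mass(photon1, photon2))     # roughly 125

If the two photons (or four leptons) really come from the decay of a single particle, this combination clusters at that particle’s mass; if they come from unrelated sources, it does not.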

CMS data from the search for Higgs decaying to two photons. Notice the size and shape (blue curve) that a signal 5 times as large as that of a Higgs particle at 120 GeV would be expected to have. The red curve is a polynomial fit to the background data. Bins are 1 GeV wide.

Now we proceed with the arguments. Before we do that, let’s note that various statistical arguments about probabilities are going to come up. These include the look-elsewhere effect (described here) and also the question of how likely it is that two features in two different plots should be close together. The problem (as always in statistics) is that the answers to statistical questions always depend on exactly what you ask. We already know from the summer’s data that the lightweight Standard Model Higgs particle is restricted to the range 115 to 141 GeV; should we only compute the look-elsewhere effect within this restricted range of any plot, or should we compute it across the full range of the plot? The conservative thing to do is to use the full range when computing a look-elsewhere effect, but unfortunately when asking the probability that two features line up, the conservative thing to do is to use the restricted range, so it just isn’t clear what you should do, and your answers depend on what you do. Keep an eye on this, as in making the case for confidence I’ll be tacking back and forth, using the conservative argument whenever possible yet still showing there’s a strong case. You might yourself want to make a less conservative case; that’s a judgment call.

ATLAS data from the search for a Higgs particle decaying to "four leptons". The average expected background is in red; the data are the black points (at integers, of course) and three different peaks showing what a Higgs particle signal would look like at three different masses are shown. The peak in blue is at 125 GeV, along with the observed three isolated events. Notice the bins are 5 GeV wide.

A Good Argument that the LHC Experiments are Seeing Signs of the Higgs Particle

Let’s start with the ATLAS two-photon results. These are easy to interpret, because the data is an almost featureless curve except for two significantly high bins, between 125 and 127 GeV. How significant is the excess? It is (locally — that is, within those bins) a 2.8 sigma excess, almost reaching the point of official “evidence” for something deviating from a smooth curve. With the look-elsewhere effect (that is, accounting for the fact that there are 80 bins on the plot) this drops to a 1.5 sigma excess — meaning the probability of having a 2.8 sigma excess somewhere on the plot is about 7 percent. That’s not so exciting, but still, it could be argued that this is somewhat pessimistic, since we’re really only looking for the Standard Model Higgs particle now in the range 115-141 GeV (other regions were removed after the HCP conference), so the number of bins where such an excess would be taken seriously is smaller than that.
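
If you are curious where numbers like these come from, here is a back-of-the-envelope version; it is not the experiments’ actual statistical procedure, and the effective number of independent mass windows (smaller than the 80 bins, because a Higgs peak spans a few neighboring bins) is an assumption I have put in by hand.

    import math

    def one_sided_p(z):
        # Probability of an upward Gaussian fluctuation of z sigma or more at one fixed place
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    p_local  = one_sided_p(2.8)                    # about 0.26% at any single mass
    n_eff    = 28                                  # assumed effective number of independent mass windows
    p_global = 1.0 - (1.0 - p_local) ** n_eff      # chance of a 2.8 sigma bump appearing somewhere
    print(p_local, p_global)                       # roughly 0.0026 and 0.07, i.e. about 7 percent

The point is only that a trials factor of a few dozen is enough to turn a rather striking local excess into a rather ordinary global one.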

Now, if that were all there was, we would not get that excited; we’ve seen equally exciting bumps in LHC plots before — even in the plots of the Higgs search for two photons. But of course, there is more.

Next we go to the ATLAS search for Higgs particles decaying to four leptons. If there were no signal, what we would have expected to see in the restricted range of 115 to 141 GeV is about 3 events, scattered around in different bins. Instead, three events were observed within 1 GeV of each other. That’s surprising; it’s quite different from what one would expect from background, and much more like what one would expect from a Higgs particle signal. It’s a 2.1 sigma excess, though admittedly after the look-elsewhere effect (for this measurement alone) the probability of such an excess is somewhere in the range of about 50%. (Why so big? I think because the resolution on the measurement is 2 GeV, so the extreme closeness of the three events is somewhat of an accident. Again one could argue this is a pessimistic number.) A bit striking, but since we expected 3 events, it’s not as though we’re seeing more than anticipated in the absence of a Higgs signal. The only surprise is how close together they are.
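
Just to illustrate how much the answer depends on exactly what you ask, here is a toy simulation of my own (it is not ATLAS’s statistical treatment): scatter a Poisson-distributed background averaging three events uniformly over the 26 GeV range, and ask how often at least three of them land within a window whose width you get to choose. That chosen width is the assumption that drives the whole answer.

    import numpy as np

    rng = np.random.default_rng(0)

    def fraction_clustered(window, mean=3.0, lo=115.0, hi=141.0, trials=100_000):
        """Fraction of pseudo-experiments in which at least three uniformly
        scattered background events fall within 'window' GeV of one another."""
        hits = 0
        for _ in range(trials):
            n = rng.poisson(mean)
            if n < 3:
                continue
            masses = np.sort(rng.uniform(lo, hi, n))
            if np.any(masses[2:] - masses[:-2] <= window):
                hits += 1
        return hits / trials

    for window in (1.0, 2.0, 4.0):                 # demanded closeness in GeV
        print(window, fraction_clustered(window))  # grows quickly with the window size

None of this reproduces the likelihood-based 2.1 sigma or roughly 50% figures quoted above; it only illustrates why demanding closeness within 1 GeV, when the resolution is 2 GeV, makes the cluster look more special than it probably is.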

But the really striking thing is that the two excesses just mentioned are within 2 GeV of one another. If these localized excesses were located at random bins, the probability that they would be within 2 GeV of one another is conservatively about one in 6. (Set the two-photon excess at 126 GeV; then the range 124-128 GeV, about 15% of the restricted range before this measurement, would get you within 2 GeV.) So that makes the likelihood that this is a pure fluctuation at least 6 times smaller yet. Altogether the probability for all of this to happen in these two searches is about 1 percent, conservatively.
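
The arithmetic behind that “about 1 percent” is simple enough to write out; multiplying the two probabilities as though they were independent is itself part of this rough, conservative estimate.

    p_two_photon_somewhere = 0.07            # a 2.8 sigma excess anywhere on the ATLAS two-photon plot (from above)
    p_coincidence = 4.0 / (141.0 - 115.0)    # a 4 GeV window out of the 26 GeV allowed range: about 0.15, or 1 in 6
    print(p_two_photon_somewhere * p_coincidence)   # about 0.01, i.e. roughly 1 percent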

In short, ATLAS has got something you might want to call “strong hints approaching the point of preliminary evidence” for a new particle around 125 GeV. Both the excess in two photons and the excess in four leptons are significantly bigger than expected for a Standard Model Higgs particle, so you might argue that the evidence is for a non-Standard Model Higgs particle, with an increased production rate. ATLAS’s case is further bolstered (slightly) by the small excess seen in the sensitive but subtle, and less distinctive, search for a Higgs particle decaying to a charged lepton, anti-lepton, neutrino and antineutrino (“leptons + neutrinos” for short). (It would seem that ATLAS decided that only half of its data in this search was fully ready for prime time, so the influence of this search is rather weak, but it does show a 1.4 sigma excess.) Taken together, ATLAS data shows a hint whose probability to appear as a pure fluctuation is down somewhat below 1 percent, after look-elsewhere effects. That’s a pretty decent hint!

CMS data from the search for a Higgs particle decaying to "four leptons". The average expected background is in pink; the data are the black points (at integers, of course) and two different peaks showing what a Higgs particle signal would look like at two different masses (120 and 140 GeV) are shown. Notice the bins are 2 GeV wide.

Now we can ask whether CMS’s results are consistent with ATLAS’s case. CMS has five measurements, none of which is compelling on its own, but there is a case to be made when they are combined. The two-photon result at CMS shows a couple of 2 sigma excesses. That’s not surprising — as the CMS people say, the probability of that is about 20%. But what is interesting is that one of those excesses is within 2.5 GeV of both ATLAS’s two-photon and four-lepton excesses, which is something that again has a probability of only about 1 in 5 (see the reasoning above for ATLAS’s four-lepton results). CMS’s four-lepton results also involve a number of events strewn about, but two of them (one more than expected) lie again within 2.5 GeV of ATLAS’s excesses. And finally, CMS has small excesses in its search for Higgs particles decaying to leptons + neutrinos, to bottom quark/anti-quark pairs, and to tau lepton/anti-lepton pairs. All the excesses in the five different searches are of roughly the right size to be consistent with a Standard Model Higgs particle with a mass about 124 GeV.

So ATLAS has some hints, and CMS has some hints. How much better does the situation get if we combine them? Somewhat better, but if we’re honest we can’t go overboard here. First, the ATLAS excesses are somewhat more consistent with a Standard-Model-like Higgs particle with an enhanced production rate, while CMS’s excesses are not. That’s not an inconsistency, but it also means there isn’t exceptional consistency yet, and it means that either ATLAS got very lucky to get so large a hint in both photons and leptons, or CMS got really unlucky in not seeing signs of an enhanced non-Standard Model Higgs particle signal. Second, the ATLAS two-photon excess is at 125.9 GeV and the nearest CMS excess is at 123.5 GeV, while the stated resolutions of the two-photon measurements by the two experiments are better than that. Honestly, they ought to be closer together, if they’re seeing the same real thing. It’s not impossible for the backgrounds to fluctuate in such a way that they look further apart right now than they will after more data is obtained, but at least right now we can’t say they are remarkably consistent. But we certainly can’t say they disagree with one another. Let us say they are “roughly consistent”, certainly enough to add some weight to the case.

Now that’s just the evidence from the data. It’s somewhere between weak and moderate, perhaps crossing the threshold where you would officially call it “evidence” using the statistical convention that particle physicists use. But it’s not the only information we have.

We also know that the Standard Model is a remarkably successful theory, agreeing in detail with thousands of different measurements at many different experiments of widely varying types. At any given time there are disagreements here and there, but outside of dark matter and neutrino physics, none of them have stuck. And the Standard Model has the simplest possible Higgs particle in it — the so-called Standard Model Higgs particle. So the Standard Model, along with its Higgs particle, remains the best assumption we’ve got until we learn something’s wrong with it. That’s a theoretical bias, but a reasonable one. Bolstering that bias is that high precision measurements of many types allow a prediction, if we assume the Standard Model is correct, of the Higgs particle mass — a rather imprecise one, to be sure, but the preferred value of the Higgs mass would be lightweight. The most preferred value from the indirect evidence is actually below 115 GeV, but that is ruled out by the LEP experiments; 115 GeV would be the most likely value not already ruled out by experiments, but 125 GeV would still clearly be well within the natural part of the range still remaining. So a Standard Model Higgs particle at around 125 GeV is very much consistent with all the world’s experiments. And this point arguably can be combined with the evidence from the ATLAS and CMS data — preliminary as it may be — to create confidence, a strong hunch, a willingness to bet, that this is what the experiments are seeing. It makes a coherent, compelling story.

In my view this is a strong and reasonable argument, and it persuades some very reasonable people. Before we look at why there’s a strong argument that points in a different direction, I am going to point out what I  consider to be a bad version of the argument, one that concludes that there is firm evidence in favor of the Higgs particle.  You can skip that part if you want — it’s really more of an aside than anything — and jump to the good argument in favor of skepticism.

A Line of Argument to Avoid

A bad version of the above argument would use the success of the Standard Model as an additional source of evidence that the Higgs particle has been observed, instead of as a reason for belief, as it is used above. The reason this is a bad idea is that the Standard Model is precisely what we are trying to test through the search for the Standard Model Higgs particle, so assuming it biases the evidence. (It’s similar to weighing evidence against the most likely suspect in a murder without first having ruled out suicide as the cause of death. Assuming a murder has taken place artificially inflates the likelihood of guilt, and so the consistency of the assumption with the evidence should not itself be included in the weighing of the evidence.) Obviously if we assume the Standard Model is right, there must be a Standard Model Higgs particle in nature; and the success of the experiments in ruling out such a Higgs particle everywhere except 115 to 127 GeV then implies that it must be somewhere in the remaining 12 GeV window. Since we can only determine the Higgs mass right now to two or three GeV, that makes the probability of the Higgs being in the 125 GeV range already 15-25% before we even start weighing the data itself, artificially inflating the weight of the evidence.
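
The size of that head start is easy to quantify with the numbers just quoted; this is nothing more than the same arithmetic written out, taking 2 to 3 GeV for the current mass resolution and 12 GeV for the remaining window.

    remaining_window = 127.0 - 115.0          # GeV still allowed for a Standard Model Higgs
    for resolution in (2.0, 3.0):             # roughly how well the mass can currently be pinned down, in GeV
        print(resolution / remaining_window)  # about 0.17 and 0.25: the quoted 15-25% head start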

Not only is this a biased argument, it also rests on a logical flaw. The past success of the Standard Model is not strongly correlated with whether there is a Standard Model Higgs particle in the LHC data. Imagine we extend the Standard Model of particle physics by adding just one type of undetectable particle — perhaps this is even the type of particle that makes up dark matter, so this is even reasonably well-motivated.  Doing so will not affect any of the thousands of measurements that agree with the Standard Model, or any of the precision measurements that predict a lightweight Higgs particle. Yet this one new particle can put the Higgs discovery far out of reach. If the Higgs particle often decays to a pair of these undetectable particles, none of the search strategies currently seeking the Standard Model Higgs particle have a chance of finding it anytime soon, and discovery of the Higgs will be significantly delayed.  Discovery may possibly require a search specifically aimed at an invisibly-decaying Higgs, which requires a lot of data and has not yet been undertaken.

So to use the success of the Standard Model as an ingredient in weighing the evidence in the data is faulty logic as well as artificially inflationary. It is completely consistent with all the world’s data for the Higgs to decay invisibly 90% of the time, and then Phase 1 of the Higgs particle search will exclude the Standard Model Higgs particle rather than find it. For this reason I don’t think you should assume the Standard Model is correct when evaluating the data. It seems obvious to me that you should evaluate the data first, on its own merits, and only then determine your level of confidence in the recent hints based on your prejudices regarding the Standard Model. Otherwise you will confuse “firm evidence” with “weak evidence, supported by a strong prejudice, leading to firm belief”.

A Good Argument that It’s Too Early to Be Confident that the LHC Experiments are Seeing Signs of the Higgs Particle

Before I begin, let me apologize to the experimenters involved, lest I offend. I have to take devil’s advocate positions here in order to make my point, and I certainly do not mean to cast any specific aspersions on any one of your measurements. I am just trying to illustrate the kinds of questions that a reasonable outsider might ask about your results — recognizing that although you are all professionals, you’re all still human, and we all know that even the best experimenters can be swindled by nature, or make subtle errors, when doing the hardest measurements. Nor, as I said earlier, am I recommending to anyone to take this particular position. I am aiming just to show that there is a good argument to be made.

Let’s start with the CMS two-photon data, looking at it in detail. The excess near 123.5 GeV is not very big (2 sigma) and there is another one in the same data at around 136 GeV, along with two 2-sigma dips. The probability of getting two 2-sigma excesses somewhere in CMS’s data is 20%, not unlikely at all. Indeed, if you just showed me that data without first showing me ATLAS’s data, I’d probably conclude that I was looking at perfectly natural fluctuations. So there’s not much to go on there.

Here’s another more subtle point. The excess is best fit with a new-particle signal at 123.5 GeV, and is declining back to expectation by 126 GeV. Yet the data point that most exceeds the background curve is at 125-126 GeV. How can it be that the best fit point and the most discrepant point differ by 2 GeV, which is larger than the resolution on the measurement? Because there are four different classes of photons into which CMS divides its search, and this largest excess comes from the class with the least-reliable photons, which has the largest relative error and thus the largest probability of a large fluctuation. (Do not simply add ATLAS and CMS histograms!) And since they do differ in this way, doesn’t this teach us we really can’t interpret anything about that plot by eye? How much should you trust a measurement that you cannot yet interpret by eye as well as by statistical arguments? Altogether you might conclude that CMS’s two-photon data doesn’t really point in any clear direction. Maybe it could support a stronger case within CMS, but we need to find that case.

Perhaps the four-lepton case is stronger? Not if we look at it by eye without knowing that ATLAS has a hint at 125 GeV. There are again multiple hints in this data; it looks reasonably consistent with background, with small upward fluctuations that are completely typical with the Poisson statistics that characterize samples with very few events. The overall rate is a little high, but we’ve seen excesses like this in many other plots.  Yes, there are two events in one bin at 125-127 GeV, but as you can see from the peaks drawn on the plot, a Higgs signal at 125 would be distributed from 122 to 128 GeV, so the extent to which this looks striking is misleading.
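
To see why a doubled-up bin is so unremarkable, here is a minimal Poisson estimate; the half-an-event-per-bin background expectation is a number I have made up purely for illustration, not CMS’s estimate.

    import math

    def prob_at_least(k, mu):
        # Probability of observing k or more events when mu are expected (Poisson)
        return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

    mu_per_bin = 0.5                                 # assumed background per 2 GeV bin (illustrative only)
    n_bins     = 13                                  # 2 GeV bins covering 115 to 141 GeV
    p_one_bin   = prob_at_least(2, mu_per_bin)       # about 9% for any particular bin
    p_somewhere = 1.0 - (1.0 - p_one_bin) ** n_bins  # about 70%: likely in at least one bin
    print(p_one_bin, p_somewhere)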

So it is only by putting two weak cases together that we really find ourselves even talking about something happening at 124 GeV. The evidence for a new particle there is very slim.

To make the case stronger we go to the measurements of the rates for Higgs decaying to the three other channels for which there is rate information but very poor mass information. Each of these rates is a bit larger than expected — by 1 sigma — none of them very significant. Worse, to interpret these measurements we have to trust the data-driven extraction of a background rate. (The data-driven extraction of a rate for a Higgs particle decaying to two leptons + two neutrinos gave us the hints of a Higgs particle at around 143 GeV this summer. I said then it rested on uncertain ground; for all I know, it still does. Similar issues affected the Tevatron’s first exclusion limits for the Higgs particle around 160 GeV; after the limits first appeared, they got worse before they got better.) It is true that the mistakes in estimating the backgrounds that one could make in the three cases are independent, which might lead one to argue that the three excesses taken together are significant. But one also has to remember that any mistake that comes from having omitted a source of background inevitably underestimates the total background; in other words, the systematic errors on many LHC measurements, including these, are not only non-Gaussian but also skewed to positive values — which makes the possibility of fake signals much larger than naive statistics would suggest. So we have to choose whether to trust that CMS estimated all of these errors correctly. Recalling how often during recent decades there have been underestimates in the determination of overall background rates in various measurements at hadron colliders, we may also reasonably choose not to, at least not until their methods have been fully vetted by independent experts. And we may not wish to rest a case for a discovery of a Higgs particle upon these searches at all.

Also, when one tries to make a strong case by combining many weak arguments, one can no longer be so confident in uncertainty estimates.  It is easy to imagine underestimating them, because of the likelihood that a mistake lurks in one of the measurements, making it inappropriate to combine it with the others using standard statistical techniques.  Another issue is that CMS uses the fact that the various excesses are all consistent with a Standard Model Higgs particle at 124.5 GeV. That’s fine as far as it goes, but it involves putting in a lot of assumptions. There might be no Standard Model-like Higgs particle in nature; there could be one with a non-Standard Model production rate, or non-Standard Model decay rates to b’s, tau’s and photons, and in this case the five excesses would not, in fact, be correlated as expected. Given that we are in the process of testing the Standard Model, we should be careful about assuming it in building an evidentiary case.

But without that assumption, the CMS case is a lot less convincing. Meanwhile, if you do make that assumption, you are then somewhat surprised that ATLAS has such a big excess in its searches for two photons and four leptons.

Let’s now see what we can learn from ATLAS. The same complaint about the measurement in leptons + neutrinos applies to ATLAS as to CMS — the uncertainties are hard to interpret — so for evidence let’s focus on the others. Let’s start with four leptons. It looks pretty solid: three events in one bin. But the expectation in the range from 115 to 141 GeV was for three events, and the expectation for a Standard Model Higgs signal would be two more events. So to see three events in one bin and none in any others requires the signal to be there and fluctuate up, and also for the background to fluctuate significantly down (or for one background event to land in the same bin as two signal events, etc.)  Moreover, the experimental resolution in 4 leptons is about 2 GeV, which is pretty darn good, but that still means (see the figure at left) that we would expect three events from a pure signal to be spread out much more than they are. The point is that what ATLAS observes is actually overly striking, misleadingly so; it is not a particularly typical distribution of events if in fact we’re looking at the predicted background plus a Standard Model Higgs signal. Of course, with the number of events so low, there are wide fluctuations around “typical”. But it’s not the kind of distribution that immediately looks like a Higgs signal sitting over a Standard Model background. No matter what, this is a big fluctuation in either pure background or in background plus signal.
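
To put a rough number on “spread out much more than they are”, here is a quick toy simulation; the 2 GeV resolution is the figure quoted above, but the rest is an illustration of my own, not ATLAS’s detector simulation.

    import numpy as np

    rng = np.random.default_rng(1)
    true_mass, resolution, trials = 125.0, 2.0, 200_000

    # Three reconstructed signal events per pseudo-experiment, smeared by the detector resolution
    events = rng.normal(true_mass, resolution, size=(trials, 3))
    spread = events.max(axis=1) - events.min(axis=1)

    print(spread.mean())           # typically 3-4 GeV between the lowest and highest of the three
    print((spread <= 1.0).mean())  # well under 10% of the time do all three land within 1 GeV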

The effect of finite detector resolution at ATLAS; the very narrow Higgs peak in four leptons is expected to be spread out by detector imperfections across several GeV.

It also must be said that we tend to badly underestimate how often funny things like this happen. Something similar happened this summer, almost drowned out in the Higgs-signal hullabaloo (I was planning to write about it and it just got lost in the shuffle.) The CDF experiment reported finding 4 events that were (on the face of it) consistent with a new particle with a mass of about 325 GeV decaying to two Z particles, which in turn each decayed to a lepton/anti-lepton pair. And the background in this case is really tiny! Look at the plot in the figure: Four events, isolated from any others by tens of GeV, with none to the right of them. The total number of events expected across that upper range is two or three. Is that a new particle? Why isn’t everyone jumping up and down about these four clustered events? Especially since CMS and ATLAS also have events in that region?! (CMS even has a 2 sigma excess!)

CDF's results on four leptons, showing four 4-lepton events within 10 GeV of one another at around 325 GeV, with very low expected background. Bins are 5 GeV wide.

(1) Because CDF immediately looked for the other signals of the production of two Z particles: a lepton/anti-lepton pair plus a neutrino/anti-neutrino pair, and a lepton/anti-lepton pair plus a quark/anti-quark pair. I am not sure I believe their methods really excluded all possibilities, but they claim to have ruled out the possibility of a particle at 325 GeV decaying to two Z particles.  Also, (2) almost any production mechanism you can think of for any such particle would hand ATLAS and CMS at least twice as many events as CDF by now. So it’s a fluke either way: either there’s nothing there and CDF was subject to a big fluke in its background, or there is a new particle there, and either CDF got a big fluctuation upward or ATLAS and CMS have large fluctuations downward.  And finally, (3) in contrast to the hint of a Higgs at 125 GeV, this hint is located where no one is expecting a new particle.  So we downplay these hints, and meanwhile we play up the current Higgs hints because we are expecting something there. We must not confuse evidence with prejudice, and with belief and disbelief.

And let us not forget about one of the last decade’s great (un-)discoveries in the physics of the strong nuclear interaction: pentaquarks, a new class of hadrons. I haven’t described them on this website because after several years of data and hundreds of papers, the pentaquarks all apparently turned out to be mirages. Here are some of the plots showing the evidence for the most convincing pentaquark, which had a mass of 1.54 GeV. With nine experiments seeing something similar, the evidence looks pretty good. But it wasn’t.  (Thanks to a commenter for helping me find this particular plot.)

Nine experiments that all saw signs of a new particle at 1.54 GeV during the period 2003 to 2005. Unfortunately that particle does not exist.

The point is that weird things happen in real data. And ATLAS was expecting three events in the search for Higgs decaying to four leptons. Maybe they got them, and they all ended up in the same bin. Could happen.

Now is it really so striking that they are so close to the two-photon excess that ATLAS sees? Well, as I also emphasized in the good argument in favor, they’re borderline close; the gap between the photon events and the lepton events is almost 2 GeV, and the resolution on the leptons is about 2 GeV, so basically any cluster of events in the range between 123 and 129 GeV would have gotten our attention. That’s a pretty good chunk of the range between 115 and 141, so this coincidence of peaks is not quite as unlikely as it looks. Yes, it is moderately striking that the three events in one bin at ATLAS are near to the two-photon excess. But let’s not overstate it.

Finally, what about the two-photon excess at ATLAS? It’s too big. It’s too big for a typical background fluctuation, which is why we tend to think it is signal; but it’s also too large, and very misshapen, for a typical signal fluctuation, as you can see on the plot, where the dotted red line shows what a signal ought to look like on average. What this tells you is that the specific shape of the excess is most likely driven by a background fluctuation; though it is unlikely for a large background to fluctuate that much, it is also unlikely for a small diffuse signal to do it. So the fit that tells us that any signal is most likely at 126 GeV might shift over time as the background fluctuation dies away, and perhaps its apparent concordance with the other evidence will die away with it.

It is also interesting that the point (126-127 GeV) in the ATLAS data that most strongly deviates from the background curve is an exceptionally low point in the CMS data. The two plots are not very much in accord.

Also, let’s not forget the wiggles in the two-photon data that we saw in the summer and last winter. They’re no less significant than the one we see now. In fact, if we look back at the two-photon data in CMS and ATLAS from the summer, the combined peaks at 120 GeV were more significant than the one we see in ATLAS now.

So again, we really have to be careful about over-ascribing significance, and underestimating the possibility of flukes, in data. And from this line of argument, one might conclude that this is really a circumstantial case. If we did not already believe that there was a strong possibility that there is a Standard Model Higgs particle at 125 GeV, we would not be persuaded of it by this data; and therefore the evidence is too weak to inspire confidence, because confidence should not be based on prejudice. The data might be pointing us toward a Higgs particle, and it might not.

There is one more issue that we should remember, and we should not be confident until it is resolved. These results are preliminary, which many commentators seem to forget, or at least not to understand. What might preliminary mean in this case? It means that there are various cross-checks and calibrations that the experiments have not yet completed. And one has to remember that the energies of all the particles observed — the leptons and the photons — have to be measured to something like 0.5% or better in order that the extracted invariant mass in each event be measured to better than 1%-2%, that is, to 1.25 – 2.50 GeV. That is not easy. [We sometimes forget how difficult a measurement this is because of the suppressed zeroes on the horizontal axes of all the plots; if we plotted the mass range from zero to 150 GeV, you’d be more impressed at what the experimenters are doing.] But the entire case for a Higgs particle rests upon this having been done correctly. The fact that this data is preliminary means that between the time that the data was presented and the time that it appears in its final version, individual events, or classes of events, might migrate, in mass, perhaps by 0.5 to 1 percent. (It is unlikely in this case that an event or two might even be removed due to its dropping below a quality requirement, but that does occasionally happen too.) These shifts could be enough, potentially, to either significantly improve or significantly worsen the concordance internally within each experiment (for instance, what if two of the ATLAS four-lepton events move down by 1 GeV, drawing the average of the four-lepton mass measurement down to 123.3 and away from the two-photon measurement at 126?). And they could worsen or improve the concordance of the two experiments’ results with each other; what if the four-lepton results at CMS move from 125 to 124 and the four-lepton peak at ATLAS moves from 124 to 125? I am not sure which of the potential uncertainties from the uncompleted calibrations and other details are included in the current error bars. But commentators who try to combine the results of the two experiments without accounting for the possibility of shifts in the results  might, in my opinion, be leaving out potentially the largest uncertainty in the estimate of the significance of the combination. Since the ATLAS and CMS results are close but not perfectly aligned, especially in the case of the two-photon searches at ATLAS and CMS, one may wonder whether the final results might show significant changes, not in any one of the eight experimental results from ATLAS and CMS viewed separately, but in how well they are correlated with one another, and how consistent they are collectively with a signal of a new particle.
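
For the two-photon case, the way per-photon errors feed into the mass can be written down explicitly; this is just the standard error-propagation formula for two massless particles, nothing specific to the ATLAS or CMS calibrations, and the 1%-2% quoted above also folds in angular measurement, calibration and other detector effects beyond the energy terms shown here.

    m_{\gamma\gamma}^{2} = 2\,E_{1}E_{2}\,(1-\cos\theta_{12}),
    \qquad
    \frac{\delta m}{m} \approx \frac{1}{2}\sqrt{\left(\frac{\delta E_{1}}{E_{1}}\right)^{2}
        + \left(\frac{\delta E_{2}}{E_{2}}\right)^{2}
        + \left(\frac{\sin\theta_{12}\,\delta\theta_{12}}{1-\cos\theta_{12}}\right)^{2}}

In particular, a coherent shift of 0.5% in the photon energy scale moves every reconstructed mass by about 0.5%, roughly 0.6 GeV at 125 GeV, which is exactly the size of migration just described.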

All this is to say that the evidence of a new particle at ATLAS and CMS is pulled together from a number of pieces of weak and in some cases questionable evidence from eight different measurements, all of which are preliminary and might shift slightly. The uncertainties and instabilities in the combination may well be underestimated. One may worry that it would only take one or two of these pieces of evidence to shift significantly, or crumble, to cause the entire case to unravel, or at least weaken sharply. The reverse might happen, of course; it might be that the final results are in greater accord, making all of us more comfortable and the combined significance larger. But until the final results are out, the significance of the combination is unstable, and one might wish to reserve judgment on whether there’s anything there in the data to be trusted.

And never forget the pentaquark debacle, its many cousins that are recorded in scientific history books, and its many cousins that are not.

What next?

I don’t know how to tell you how to choose between these two lines of argument. I know how I choose; when I see an argument in favor of caution that seems as strong or nearly as strong to me as an argument in favor of confidence, I remember how commonly false signals have fooled us throughout history, and err on the side of caution. But you won’t hear any complaints from me if you choose to be more confident — as long as you use sound reasoning, and only apply your prejudice in favor of the Standard Model when determining your level of belief, rather than in your claims of evidence.

Personally I think the chance that a Standard Model-like Higgs particle is  at 125 GeV is pretty decent, so it won’t surprise me at all if it turns out to be there.  That’s not merely because of the evidence in the data, which I view as pretty thin, but because it aligns with some very reasonable prejudices about nature — in particular, the very wide variety of theories which predict or at least allow a Standard Model-like Higgs particle in that mass range.

But we won’t know without more data, and with more data we will know — on that I think we all agree. And if all goes well, the LHC will take enough data in 2012 to change the highly ambiguous situation we’re in into one that creates a consensus in the community regarding the hints that ATLAS and CMS are seeing. Not that this is likely to happen very quickly; if there is a real Higgs particle there, ATLAS got lucky and got more events than expected, so it is unlikely that with double their data they will get double the signal. And if there is no signal there, it will take a lot of data (as ATLAS speaker Fabiola Gianotti emphasized) to wash the large fluctuation in the two-photon data away. (Of course CMS’s data, currently less exceptional, will be crucial in settling the case one way or the other.) So we need to be patient. No matter what each of us personally believes, the community consensus that marks a scientific achievement is probably a year away. It will be an exciting year of nail-biting anticipation, and we should not be surprised if the case first becomes more ambiguous before it becomes clear.

33 Responses

  1. Hi Heather,

    3.5 sigma is actually a pretty huge fluctuation – you’re talking about 2000 measurements or so to get one that far off. Now, true, we do have many many bins in each histogram, but when you’re talking about individual searches (e.g. for dark matter or proton decay) I think you need to count experiments, not histogram bins (assuming you account for look elsewhere in reporting the experimental result). And from that perspective we really don’t have that many measurements, particularly of the “searching for an expected standard model particle” type.

    The usual way you get 3.5 sigma fluctuations is that the systematics are done wrong. So if you’re talking about a specific search, and not searches in general, i.e. “What are the odds that the search for the standard model higgs as of December 2011 will have a 3.5 sigma fluctuation”, I think that the odds of a fluctuation are very small.

    I guess this is part of why some people might get annoyed at Matt’s 50-50 assertion. Since the odds that this particular search (which is the most noteworthy of the last several years, and therefore not subject to the “we make so many measurements” argument) would have a big fluctuation are small, his number implies that he thinks it’s roughly 50-50 that ATLAS screwed up. Even the assumption that it’s a bit of a screw up and a bit of bad luck with a statistical fluctuation still presumes an ATLAS screwup. I know the results are “preliminary”, but you can be sure that these results have already been far more scrutinized than almost any of the experimental results from the early decades of particle physics.

    Anyway, to conclude, the higgs search isn’t some random decay channel, so it’s not fair to claim a fluctuation there is likely because we make “so many measurements”.

    1. A few comments.

      I don’t agree that the only way this could fall apart is if ATLAS screwed up. I do not find the ATLAS data compelling entirely on its own. How you evaluate the statistics depends on what question you ask, so I do not take 3.5 sigma overly literally. The 3.5 sigma figure is constructed in a complicated way, and is only a rough measure of where we are, and it seems to me one should be cautious comparing it to a simple pure probability, such as that of getting n heads in p coin flips.

      I have no idea where any screwups might actually lie — I just know that when you analyze so much data in just six weeks, you have ample reason to be concerned about small effects. Some of these concerns have been aired publicly, in my presence and that of over 100 witnesses, by both ATLAS and CMS members. My concern is not that there is a big screwup by anyone, but that a combination of small screwups and shifts due to low statistics could combine together to bring the case down.

      Finally, we’re talking about the most important measurement that the LHC is making right now. And we have a bias that there should be something there, making us more likely to see a mirage than in many of the other measurements that we are making. We ought to be cautious on those grounds alone.

      In any case, I will relax somewhat when the final results come out at the end of January. I’ll relax again when ATLAS has more results on more channels. And I’ll relax further when CMS’s results in two-photons and four-leptons become more distinctive, or once ATLAS’s significance starts to climb.

  2. I’d like to add a comment about statistics, inspired by arXiv:1112.3620. These two statements,

    a) The odds of a fluctuation producing a 3.5 sigma discrepancy are small.
    b) The odds of a 3.5 sigma discrepancy being a fluctuation are small.

    are certainly not equivalent, and only (a) is correct, unless you are explicitly testing the consistency of the data with two different pre-defined hypotheses. Since we make so many measurements, we’re going to wind up with a number of 3.5 sigma discrepancies just due to background fluctuations, and all of them will go away with more data. Only the rare discrepancy that is actually the first sign of new physics will grow into a discovery.

  3. Sorry, I missed one of your questions:

    “You say “I think that part of the reason experimentalists were so annoyed at theorists chasing 3 sigma bumps in the past is that they wished that the theorists were instead phenomenologists.” Do you have any evidence for this remark?”

    In terms of a youtube video, of course not. But you could try asking an experimental colleague from a different university this: “We are going to get a grant from a wealthy donor to expand our department. We had hoped to expand our phenomenology group, but he’s really excited about string theory, and told us he’ll either fund 2 new string theory chairs and 4 new string theory postdocs, or 1 new chair and a single matching postdoc in phenomenology. Which do you think we should do?”

    1. Well, if that’s the kind of thing you’re referring to, I know about it more intimately than you realize; the phenomenon has dominated my entire career, and sometimes affected it in ways that might surprise you.

      Knowing now your background, I would interpret your original remark differently than I did before. But I don’t think your remark is really correct. The amount of ambulance chasing that collider theorists and model-building theorists do is enormous, and in fact most string theorists and other formal theorists do it much less, because frankly many of them have neither the interest nor the tools. As far as I understand, the 5 sigma criterion really has emerged internally out of the experimental community, partly out of wanting to account for the look-elsewhere effect (Gary Feldman’s name is mentioned but I have not done proper research on this) and partly from a generalized concern with the high fraction of 3 sigma results that evaporate. (Anyone who knows more is encouraged to correct me if they believe I am wrong.)

  4. It’s true that when you discover something truly new the old theory must be changed or discarded, but I don’t think that violates Occam’s razor. The idea isn’t that the truth must be simple, it’s that when faced with two theories that describe all of the known data so far, the simple one is probably more likely. Wouldn’t you say that we believe in dark matter (over e.g. some failure in gravitational attraction at long distances) because of Occam’s razor?

    When I say that hundreds of experimentalists claim the odds that this is simply a fluctuation are very small, I’m referring to their reported result. When they say they observe a 3.5 sigma difference from the null hypothesis that means that (assuming their systematics are correct) the odds that it is a fluctuation are very small. They can choose to believe their own result or not, but when you report even a 2.5 sigma effect (taking into account look elsewhere and all systematics) what you are saying is that if you didn’t screw up, then there is only a very slight chance that this is a fluctuation.

    Systematics are wrong all the time, and that’s why the burden of “discovery” is 5 sigma. But let me ask you this – do you really believe in your heart of hearts that until you have 5 sigma the case is “inconclusive” and/or 50-50? Surely you don’t undergo a binary switch from 50-50 to 100-0 when it goes from 4.9 to 5 sigma? Presumably you think the case is more and more likely until you reach 5 sigma, at which point I presume you’re still not exactly 100% sure 🙂

    I haven’t actually talked with a CERN experimentalist since the report, but the reason that I am confident in understanding how experimentalists think is that I was one. (Even if I had called one of them up I would know better than to name names, something you’re wisely not doing either.) I presume that when a new theory is proposed you have a good feeling for how much excitement or skepticism other theorists will react with, even before you actually ask them.

    The reason that I think saying 50-50 is extreme is that it’s totally impossible to quantify the odds so exactly. For the exact same reason that experimentalists use 5 sigma, they are loath to pull such numbers out of the air. Since you’re at CERN, why don’t you try asking them this question: “Do you think the odds that this signal will go away are close to exactly 50-50?” It’s the 50-50 I object to, not that people have doubts about there being some possibility that the signal will go away.

    This is parenthetical, but I always found the lack of a right handed neutrino really bizarre, and that’s why my gut feeling was that all three generations probably had non-zero mass. I guess maybe you’re saying that the smallness of the neutrino masses is what’s failing Occam’s razor? Occam’s razor is for discarding competing theories, and I would assume that there are far more theories on the bonepile than famous examples of new physics.

    I’m kind of curious, I assume you mean that the muon wasn’t the pion? The pion WAS there, though, so I’m not sure how that fails Occam’s razor, it’s just that there was more going on than we knew.

    1. 🙂 Thanks, Gillian, for your very detailed reply. Knowing you’re a former experimentalist is helpful (I find it amazingly difficult to figure out who is who from context when reading/replying to comments.)

      Let’s set the Occam’s razor discussion aside for the sake of brevity! It’s a good discussion and I will write an article about it next year.

      You ask, “do you really believe in your heart of hearts that until you have 5 sigma the case is “inconclusive” and/or 50-50?” Of course not, and that’s not my point. (And let’s not get hung up on “50-50”, which is just shorthand for “wouldn’t be surprised either way.”) My point is that not all 4 sigma results are created equal. That’s the whole message of the last portion of this article. The question I always ask is: how sensitive is the claimed statistical significance to errors, one-offs, assumptions, accidents. A 4 sigma result that is cobbled together from eight pieces of information, four of which are 1 – 1.5 sigma and require tricky and error-prone background estimates, and all of the rest of which are 1%-2% measurements that still do not have their final calibrations, is not the same as a 4 sigma result built entirely from 1%-2% measurements with their final calibrations. When the 2-photons and 4-leptons results from ATLAS and CMS are final, I will want to see how concordant they are and what statistical significance one gets from combining all four. Suppose in late January the experiments publish their results, and all of the excesses in WW, bb and tau tau become less significant, but the 2-photon and 4-lepton measurements move a bit closer together, and the result is 4 sigma again. Since I will then know the results are fully calibrated, and that the significance is driven by the less error-prone measurements, my confidence will go up — even though the statistical significance is exactly the same.

      I am not myself an experimentalist, of course, but I am an unusual theorist. (I do not use the word “phenomenologist”, which is a catch-all basin for many different subfields; I prefer “collider theorist” for myself.) In addition to doing string theory and quantum field theory, I have worked very closely with experimentalists at various points during my career, and perhaps my greatest achievement as a scientist is that ATLAS uses a couple of novel trigger strategies that I partially proposed, and helped develop. So all I can say is that I do have my ear to the ground and I’m not just making up my statements about the level of caution that is widely (certainly not universally) felt. I suspect (and this is interpretation on my part, not justified by anything specific) that many people feel there is a rush to judgment going on — that we are in danger of seeing what we want to see — and that sure, what we’re seeing might be the Higgs, but there’s far too much at stake not to demand a higher burden of proof than was demanded for, say, pentaquarks. The pentaquark debacle was bad but didn’t hurt the field. A similar debacle for the Higgs, on the world’s stage, could be damaging. And we only have two experiments to verify one another; if they both get it wrong, we won’t realize it for a long time. Arguably we should reserve judgment until both experiments have a nice, clean 3 sigma result from their leptons and photons alone. That’s opinion, of course, but surely not crazy.

      1. Thanks for your explanation. I guess I really am surprised that you “wouldn’t be surprised” if it goes either way, since if the signal disappears then it will mean the standard model higgs is basically completely excluded. I would have thought that if someone could go back in time and tell you in 2005 that the result of the LHC would be “exclusion of everything under 600 GeV” you would have found that result surprising!

        I understand (in an abstract way) that if this “bump” becomes a 5 sigma signal there is still work to be done before it’s understood what kind of a higgs (or whatever) this really is. But the signal slowly disappearing over the next two years and nothing at all being there? That just really doesn’t seem very likely to me.

        I’m also not concerned about people “guessing” for the moment that this is probably going to develop into a standard model higgs as time goes on. Since we’re currently statistics limited, and are expecting huge amounts of future data, the problems of systematic bias that plague single event detection experiments are just not there (e.g. proton decay, monopoles, dark matter). And you can bet that there are going to be huge numbers of experimentalists and theorists poking and prodding the thing to make sure that it looks, sounds, and smells the way it should.

        If I was still in the field, I would be much more worried about what it means for the field if all that the LHC sees is a standard model higgs and nothing else, than that the public temporarily thinks the higgs is discovered but the end result turns out to be something excitingly more complex (or no higgs at all).

        1. “I would have thought that if someone could go back in time and tell you in 2005 that the result of the LHC would be “exclusion of everything under 600 GeV” you would have found that result surprising!”

          Think again. 🙂

          I have done a good bit of model-building during my career. A model-builder’s job is to think outside the box.

          More generally, I have worried incessantly over how theoretical bias within the theoretical and experimental community might limit our vision and our strategies at the LHC. Even in public: http://www.symmetrymagazine.org/breaking/2010/02/14/do-particle-theorists-have-a-blind-spot/

          Even in my recent article on the Higgs search you will find me even-handed: http://blogs.discovermagazine.com/cosmicvariance/2011/12/06/guest-post-matt-strassler-on-hunting-for-the-higgs/

          And you will find similar points of view expressed all over this website, if you dig into some of the articles about the Higgs that aren’t specifically about the Standard Model Higgs.

      2. Maybe I should have waited until your announced article about Occam’s razor appears next year before making the following comments, but anyway. The question with Occam’s razor is always which of two proposed explanations should be regarded as simpler. The “horrible record” list for Occam’s razor that you wrote down above already contains several disputable items: Are these now widely accepted explanations really more complex (and therefore Occam-disfavoured) than the assumptions they replaced? More complex in which sense precisely?

        I argue in particular against the inclusion of neutrino masses in the list. The modified Standard Model with massive neutrinos is indeed more complex than the massless version in the sense that the massive theory contains more adjustable parameters (masses and Maki-Nakagawa-Sakata matrix elements). But in another sense at least the version with Dirac masses is *simpler* than the massless version: writing it down requires — at least in principle — fewer characters (bits), because in contrast to the massless version, the Dirac-massive version involves leptons and quarks in a highly symmetric way.

        Like the lack of parameters, the size of the symmetry group is apparently not the correct measure of simplicity either: the Dirac-massive version is contained in the SU(5) Grand Unified Theory, which to me does not look simpler than its S(U(2) x U(3)) subtheory, in the sense that I couldn’t write it down with fewer characters.

        Our current best theories in high-energy physics are regarded as effectively resulting from an unknown theory at unobservably short length scales. In this situation, Occam’s razor can only be applied to the *particle and symmetry content* of a proposed theory at observable scales. Once this content is fixed, the set of allowed Lagrangians and thus the set of adjustable parameters are determined. Occam’s razor does *not* apply to the number and/or values of these parameters (although it might apply to the values of the free parameters, if any, of the unknown fundamental theory at short length scale). *Any* particular value we might observe for the parameters in experiments would require an explanation (which is hard to find before we know the fundamental theory), even the value 0. In this sense the model with Dirac-massive neutrinos is simpler than a model in which these masses have been set to zero artificially, because its particle/symmetry content is easier to describe.

        In a similar sense I argue against the idea that Dark Energy — in the form of a cosmological constant — is a failure of Occam’s razor. The additional adjustable parameter occurs naturally in every theory with the given symmetry content, so (as Weinberg had pointed out long before Dark Energy was observed) it would require an additional explanation to set it to zero a priori.

        Only the _muon_ and _third generation_ entries in your “horrible record” list seem uncontroversial to me: from the (then and now) current state of knowledge, it was/is indeed mysterious why the number of particle generations is 3. But even in this case, the Occam-disfavoured theory is not so weird. It does not contain a continuous parameter with completely new properties, just a discrete one: the particle/symmetry content of the simpler theory is just “multiplied by three”.

        I regard Occam’s razor in the described form as a useful principle, which in my (obviously controversial) opinion disfavours for instance 1. Majorana-massive neutrinos, 2. GUTs, 3. supersymmetry. Let the experiments decide whether these predictions are correct.

        The upshot is that we have to be very careful about which kind of complexity or simplicity we are talking about in Occam’s principle. In order to apply the razor, we need in advance a good measure of the complexity of a proposed explanation. Often one can see only in hindsight, after additional phenomena that the original problem did not care about have been taken into account, how simple a proposed explanation really was. Of course this severely limits the applicability of the razor, but my predictions stand…

  5. Can I make an observation as a non-physicist, but as a reasonably well-informed outsider? I think the problem boils down to beliefs that take hold among highly intelligent people and get magnified and enhanced into a scientific consensus before there are facts available that are definitive, or even repeatable or testable. I think plots like the ones shown can either be the first hints of a new particle or statistical flukes.

    It’s like a Rorschach test for people whose entire career is predicated largely on their ability to spot patterns. And although each of these people likes to think of themselves as an original thinker, in groups of two or more they can convince themselves of any number of foolish notions – the history of science is full of stories of scientists who went spectacularly off the rails chasing chimeras of one sort or another because their powers

    The discovery of the Higgs would be a phenomenal moment for physics and the Standard Model. The non-discovery of the Higgs would also be a phenomenal moment for physics and the Standard Model (arguably even greater). But the (Nobel) prize is not symmetric – no-one gets anything and no-one gets to go to Stockholm for having failed to find the Higgs boson. No-one appears on the front page of Scientific American or People Magazine or whatever for having nothing to show for all of the money spent. Just imagine what would have happened had SU(5) been confirmed by experimental evidence of proton decay.

    To my shame, earlier in my life I spent quite a few years with evangelical churches, leaders and congregations. The power of belief in enclosed groups in the most preposterous nonsense is a clear Force of Nature Yet To Be Described by Physics. As I’ve got older, I’ve grown to distrust more and more the powerful compulsion to believe evidence that confirms my prior prejudices, especially when those prejudices are shared by the people around me, and, more importantly, to discount evidence that disconfirms them – but it’s a never-ending struggle.

    That’s the problem with bandwagons: before you know it, the financial momentum is too great for any but the foolhardy to stand in its way. And where there are bandwagons, there are hangers-on.

    In the absence of definitive evidence, there’s too much belief too early and not enough skepticism.

    1. “But the (Nobel) prize is not symmetric – no-one gets anything and no-one gets to go to Stockholm for having failed to find the Higgs boson.”

      Are we really sure of this? 🙂
      Some people have already said that giving a Nobel prize to a single person, or few persons, doesn’t really make sense anymore in HEP… if ATLAS and CMS managed to convincingly exclude the standard model Higgs, I’d say that a collective prize for both teams would be very much in order… As you say, it would be the greatest discovery in thirty years *anyway*.

  6. Matt,

    Very nice article; you made your point. I also agree that the confidence (for experimentalists and theorists) that the excess is a signal is strongly driven by our expectation that the SM is correct and that the Higgs is there; we would unquestionably be less confident if it were not for that. Some months ago I was ready to give up on the Higgs, but now I expect (a belief, as you said) this to grow into a real signal. I expect, however, that it will grow into a signal of some scalar resonance (because of the photon decays), and I am not sure I will be so easily convinced about what that resonance is. The data, together with my own prejudices, suggest to me that this is real. My skepticism (or maybe my hopes) keeps me unconvinced, however, that this is a SM Higgs (at least for now). We will know next year.

    1. Thanks Bernhard for your remarks. There is definitely room for intelligent people to disagree.

      One point about your concern about whether this is a Higgs or not — the key issue to watch is the ratio of two-photon decays to four-lepton (from two-Z) decays. If the number of decays to four leptons remains anywhere vaguely near 1/30 of the number of decays to two photons, as the data currently hints, then it follows that the strength with which this scalar interacts with Z particles is much larger than the strength of its interaction with photons (since only about 1 in 300 Z pairs decays to four leptons, and one of the Z particles must be virtual, which causes additional suppression); a short back-of-the-envelope version of this argument appears just after this reply. It is almost impossible to arrange for a garden-variety scalar or pseudoscalar to have large interactions with the Z and small interactions with the photon — the very structure of the weak interactions, in which the Z and photon emerge as mixtures of other particles, makes this situation very fine-tuned. Only with a Higgs particle that gives the Z its mass and leaves the photon massless is this situation automatic.

      So if in fact this particle is there *and* it shows up both in the two-photon search and the four-lepton search, I will be essentially convinced that it is a Higgs particle. Not necessarily THE Higgs particle (there still might be more than one) and certainly not necessarily the STANDARD MODEL Higgs particle (we’ll need other measurements over several years to convince ourselves of that). But almost certainly some type of Higgs.
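
      For readers who want to see the arithmetic spelled out, here is a minimal back-of-the-envelope sketch of the estimate in the reply above; the 1/30 and 1/300 figures are just the rough numbers quoted there, not measured values.

        n_4l_over_n_2gamma = 1.0 / 30.0   # hinted ratio of four-lepton to two-photon events (rough)
        br_zpair_to_4l     = 1.0 / 300.0  # rough fraction of Z pairs that yield four charged leptons

        # If both samples come from the same particle, the ratio of the underlying decay rates
        # Gamma(H -> ZZ*) / Gamma(H -> two photons) is roughly the event ratio divided by the
        # four-lepton branching fraction:
        rate_zz_over_gammagamma = n_4l_over_n_2gamma / br_zpair_to_4l
        print(rate_zz_over_gammagamma)  # ~10: the ZZ* rate would be about ten times the two-photon
                                        # rate, even before the extra suppression from the virtual Z.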

    2. Hi again Matt,

      indeed 1a is what I would naively expect, according to Occam’s razor. Several years ago I kind of expected SUSY, since it “solved” the dark matter problem. And if you combine Occam with Murphy these days then you get the standard model being everything at low energy scales plus some kind of dark matter that couples only to gravity at low energy scales…

      There is a big difference between the current situation and speculations by a small number of authors from many years ago. A good chunk of the world’s hep-ex people have worked to exclude the Standard Model Higgs from everywhere except this last hiding place, and now they finally see a fairly large bump in the last place it could be. According to them, this bump is highly likely to be not just a statistical fluke (although it is not a discovery by the standards of the field).

      Now, with hundreds of experimentalists claiming that the odds that this is simply a fluctuation are very small, you are claiming that the odds are instead 50-50.

      I think that part of the reason experimentalists were so annoyed at theorists chasing 3 sigma bumps in the past is that they wished that the theorists were instead phenomenologists. Given all of the screwed up systematics in the history, 5 sigma was chosen as an arbitrary “safe” discovery level, and had the side benefit that it could be used as a tool to beat string theorists over the head with.

      I completely understand that there is a (relatively small) chance that this is “just another fluctuation”, and this is exactly why CERN isn’t claiming a discovery. I would call this “not certain” rather than “inconclusive”. But to claim that the odds are 50-50 is, to my mind at least, a fairly radical position, and one that I am quite sure is not shared by many (any?) experimentalists. (See the short numerical note after this comment for what the usual sigma thresholds mean.)

      Thanks for responding to my comments, and best wishes for something other than vanilla!
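
      As a purely numerical aside on the 2-, 3-, and 5-sigma language used in the exchange above, here is a minimal sketch of how local significances translate into one-sided p-values; it says nothing about the look-elsewhere effect, which makes a given local excess less significant globally.

        from scipy.stats import norm

        # One-sided p-values corresponding to "n sigma" local significances.
        for n_sigma in (2, 3, 5):
            p = norm.sf(n_sigma)  # survival function = 1 - CDF of a standard normal
            print(f"{n_sigma} sigma -> p = {p:.2e} (about 1 in {1/p:,.0f})")

      Roughly: 2 sigma is about 1 chance in 44 of a fluctuation at least that large at a given mass, 3 sigma about 1 in 740, and 5 sigma about 1 in 3.5 million, which is part of why the field treats 3-sigma bumps with caution and reserves the word “discovery” for 5 sigma.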

        Well, I hear you, but I disagree with almost every point you’ve made. Have you ever tallied up the track record of Occam’s razor in particle physics? The muon. Neutral currents. Neutrino masses. Parity violation. CP violation. The third generation. Dark matter. Dark energy. Occam’s razor has a horrible record. I was just talking with some experimentalists about that (who agreed with me).

        You say “with hundreds of experimentalists claiming that the odds that this is simply a fluctuation are very small, you are claiming that the odds are instead 50-50.” Which experimentalists are you talking to? I talked to over 30 on ATLAS and CMS in the last week, many of whom were participants in the searches.

        You say “I think that part of the reason experimentalists were so annoyed at theorists chasing 3 sigma bumps in the past is that they wished that the theorists were instead phenomenologists.” Do you have any evidence for this remark?

        You say “I would call this “not certain” rather than “inconclusive”.” I’m ok with that. I would say that inconclusive means you can’t draw a conclusion yet. It’s related to “not certain”, and we don’t need to debate exactly how closely.

        But then you say “to claim that the odds are 50-50 is, to my mind at least, a fairly radical position, and one that I am quite sure is not shared by many (any?) experimentalists.” I am glad you are quite sure. I wonder how you reached that conclusion. Did you (as I did) actually talk to many of them?

        I am honestly just trying to understand why you are so confident you understand how experimentalists think. You don’t seem to recognize that several of the comments supporting my point of view on this and other recent posts were from experimentalists on LHC experiments.

  7. Matt, you emphasized that “The past success of the Standard Model is not strongly correlated with whether there is a Standard Model Higgs particle in the LHC data.” I think this is kind of a weird statement.

    The point of a good theory is that it does a good job of making predictions. The past success of a theory (at making predictions in advance of the data) is what makes it a good theory. Say I observe that flicking a switch on the wall in my bedroom makes lights that are on turn off, and lights that are off turn on. Then I notice the same thing in my kitchen. And in my bathroom. So I formulate this as a theory that “switches on the wall make lights go on and off”, and it turns out to be a really successful theory. Every time I test it it seems to work: at my office, in the hallway, in the closet, when the switches are red, when they’re blue, when they’re white, etc.. So one day you invite me to give a talk, and the room is too bright for the projector. I see a switch on the wall… Would you really argue that the past success of my light switch theory is not strongly correlated with whether that switch will turn the light off in your room?

    I don’t know how you could possibly evaluate the correlation coefficients for new experimental tests of old theories. But presumably what makes theories like evolution, relativity, QM, and the standard model successful is that so far they have provided excellent guidance when facing new tests.

    1. Gillian — I am not sure you get my point yet.

      First, my point is neither new, nor radical, nor unique to me. The first paper to observe something like this (at least the earliest I am aware of) is from 1982,

      Invisible Decays Of Higgs Bosons. Robert E. Shrock (SUNY, Stony Brook), Mahiko Suzuki (UC, Berkeley).
      Published in Phys. Lett. B110 (1982) 250

      It’s out of date, but the basic point is right, and there are many related papers with similar consequences over the years. And this general concern motivated entire research programs; see, for example,

      Observing an invisible Higgs boson. Oscar J.P. Eboli (Sao Paulo, IFT), D. Zeppenfeld (Wisconsin U., Madison). Published in Phys. Lett. B495 (2000) 147-154; e-Print: hep-ph/0009158

      Your analogy is not relevant to my point, and maybe the problem is my use of the word “correlated”; I couldn’t come up with a better one. What’s at stake is the following: we can imagine four classes of theories

      1a) All predictions of the Standard Model work up to now, and there is a Standard Model Higgs particle
      1b) All predictions of the Standard Model work up to now, and there is not a Standard Model Higgs particle
      2a) Not all predictions of the Standard Model work up to now, and there is a Standard Model Higgs particle
      2b) Not all predictions of the Standard Model work up to now, and there is not a Standard Model Higgs particle

      All such theories are easily written down by theorists and it is easy to move from one to the other. Since 1a is what you would naively expect (if you are not a theorist) and inventing things like 1b is what theorists do for a living, it is perhaps not surprising that you weren’t aware how easy this is to arrange. But ask my many model-building colleagues.

      Or are you telling me that you *know* the Higgs doesn’t decay to dark matter? 🙂

    1. Thanks, I forgot about that 1984 example. There are so many…

      About the dividing and recombining — it is probably only a big deal if you don’t know it’s going on and misinterpret the plots, becoming psychologically misled into feeling that the excess is more dramatic than it is. That said, I’m not educated enough to know if dividing/recombining introduces any important subtle pitfalls.

      1. Many bumps have come and gone (another favorite is in the first edition of Perkins… Introduction to High Energy Physics… Bruno Maglich’s split A_1, etc.).

        Few of those bumps, however, were thought to be the Higgs… the 1984 one was seriously sold as the Higgs. I’m not sure there is another Higgs bump in history.

        Good question on subtle pitfalls… I’m not sure either. However, there probably is a more systematic way to make the sub-categories than ATLAS and/or CMS have done. I’d just look at bins of gamma-gamma mass resolution (in one dimension) versus background level (in the other dimension). Make 9 bins of that, 3×3, or maybe even just 4 = 2×2; a rough sketch of this kind of categorization appears below. But maybe endcap/central/converted/unconverted is just as good.
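
        To make that suggestion concrete, here is a rough sketch of such a categorization; the arrays, ranges, and bin edges are hypothetical placeholders, not quantities ATLAS or CMS actually uses.

          import numpy as np

          # Hypothetical per-event quantities for diphoton candidates (placeholders, not real data).
          rng = np.random.default_rng(0)
          mass_resolution = rng.uniform(1.0, 3.0, size=1000)    # estimated gamma-gamma mass resolution (GeV)
          background_level = rng.uniform(0.5, 5.0, size=1000)   # estimated local background density

          # Split each variable into three roughly equal-population bins, giving a 3x3 grid of categories.
          res_edges = np.quantile(mass_resolution, [1/3, 2/3])
          bkg_edges = np.quantile(background_level, [1/3, 2/3])
          category = 3 * np.digitize(mass_resolution, res_edges) + np.digitize(background_level, bkg_edges)

          counts = np.bincount(category, minlength=9).reshape(3, 3)
          print(counts)  # number of candidates in each (resolution, background) category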

  8. Matt,
    We have seen a lot of discussion on what constitutes proof for the existence of the Standard Model Higgs particle. What is the standard of disproof?

    1. An excellent question. This is under debate (I think Tommaso Dorigo, in one of his many quite reasonable posts, discussed this recently). Clearly, since it’s the Higgs we’re talking about, standards should be very high. So 95%-probability exclusion, which is usually what people aim for, is not going to cut it this time; a toy illustration of what 95% exclusion means in a simple counting experiment appears after this reply. The question is going to arise at the end of 2012, if the Higgs particle is not found by then. And if a Higgs particle is found, then it is going to arise over and over as the experimentalists search for a rarely-produced second Higgs particle that might be in the data, and as they measure the decays and production rates of the Higgs as well as possible.

      Really the problem is that any standard you set is arbitrary, and any reasonable expert can question it. There simply is no sharp line between knowledge and ignorance.
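
      For readers unfamiliar with exclusion language, here is a deliberately over-simplified Poisson counting toy showing what a 95% exclusion means; the background and signal numbers are invented, and the real analyses use profile likelihoods and the CLs method rather than this bare p-value.

        from scipy.stats import poisson

        def exclusion_p_value(n_obs, background, signal):
            # Probability, under the signal+background hypothesis, of observing
            # a count as small as (or smaller than) the one actually seen.
            return poisson.cdf(n_obs, background + signal)

        b, s = 100.0, 30.0            # invented background and hypothetical signal yields
        for n_obs in (95, 110, 120):  # invented observed counts
            p = exclusion_p_value(n_obs, b, s)
            verdict = "excluded at 95% CL" if p < 0.05 else "not excluded at 95% CL"
            print(f"observed {n_obs}: p(s+b) = {p:.3f} -> {verdict}")

      Raising the bar from 95% to, say, 99% just means demanding p < 0.01 instead of p < 0.05, which is why the question of where to draw the line for “disproof” is ultimately a matter of convention.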

  9. Thanks, Matt, for such a well-reasoned post. I think your point about physicists mixing “belief” with “evidence” is spot on. I myself take a cautionary line of argument, mostly because the excitement is so obviously tied to the fact that other masses are fairly reliably excluded: if one starts from the belief that the SM Higgs exists, the current data suggest that 125 GeV is the best spot for it.

