Ok, hurricane over — and I was in a spot where it happened to be particularly mild. Power was never lost. South of here, north of here, east of here, it is a different story. Nature deals in chance.

As I prepare some more particle physics posts, it seems a good moment to ask a more general scientific question: *What are the chances that there would be a rare significant east-coast earthquake and a relatively rare New England hurricane affecting the New York area within five days of each other?* Or more generally: *What are the chances of two rare natural disasters affecting the same place within such a short time?*

You will notice, if you are reading carefully, that the second question is not at all a rephrasing of the first. The two queries are utterly different, and they have entirely different answers. Not that I know the answers — with rare events, we usually do not have enough data to know what the probabilities are, which is a real problem in scientific research, and also in public policy, where a term such as “100-year flood” is just this side of pure guesswork. But I can tell you which of the two answers is larger: *the chance of two particular rare events is far smaller than the chance of two rare events of a general type.* It’s the same as the standard example that if there are 23 people in a room, the chance that two of them have the same birthday is 50 percent, while the chance that two of them were born on a particular day, say, January 1st, is quite low, a small fraction of a percent. The more you specify the coincidence, the rarer it is; the broader the range of coincidences at which you are ready to express surprise, the more likely it is that one will turn up.
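The birthday arithmetic can be checked directly. Here is a minimal Python sketch, assuming only a 365-day year (leap days and birth-rate seasonality ignored):

```python
def p_shared_birthday(n, days=365):
    """Chance that at least two of n people share *some* birthday."""
    # Complement: the chance that all n birthdays are distinct.
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (days - k) / days
    return 1.0 - p_distinct

def p_specific_day(n, days=365):
    """Chance that at least two of n people were born on one
    *particular* day, say January 1st."""
    # 1 minus the chance that zero or exactly one person has that birthday.
    q = (days - 1) / days
    return 1.0 - q ** n - n * (1.0 / days) * q ** (n - 1)

print(p_shared_birthday(23))   # ~0.507: the famous 50 percent
print(p_specific_day(23))      # ~0.0018: a small fraction of a percent
```

The specific-day coincidence comes out nearly 300 times less likely than the some-shared-day coincidence, even though both can be loosely described as "two people with the same birthday."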

Humans are notoriously incompetent at estimating these types of probabilities… which is why scientists (including particle physicists), when they see something unusual in their data, always try to quantify the probability that it is a statistical fluke — a pure chance event. You would not want to be wrong, and celebrate your future Nobel prize only to receive instead a booby prize. (And nature gives out lots and lots of booby prizes.) So scientists, grabbing their statistics textbooks and appealing to the latest advances in statistical techniques, compute these probabilities as best they can. Armed with these numbers, they then try to infer whether it is likely that they have actually discovered something new or not.

And on the whole, it doesn’t work. Unless the answer is so obvious that no statistical argument is needed, the numbers typically do not settle the question.

Despite this remark, you mustn’t think I am arguing against doing statistics. One has to do something better than guessing. But there is a reason for the old saw: “There are three types of falsehoods: lies, damned lies, and statistics.” It’s not that statistics themselves lie, but that to some extent, unless the case is virtually airtight, you can almost always choose to ask a question in such a way as to get any answer you want. If you want to argue that it is an amazing coincidence that a New York City resident would experience an earthquake and a hurricane in the same week, you can do that. Alternatively, you can argue that it is not a big coincidence at all that residents of *some* city, *somewhere* on the planet, should experience two natural disasters in quick succession. [For instance, in 1991 the volcano Pinatubo in the Philippines had its titanic eruption while a hurricane (or “typhoon” as it is called in that region) happened to be underway. Oh, and the collapse of Lehman Brothers on Sept 15, 2008 was followed *within three days* by the breakdown of the Large Hadron Collider (LHC) during its first week of running… Coincidence? I-think-so.] One can draw completely different conclusions, both of them statistically sensible, by looking at the same data from two different points of view, and asking for the statistical answer to two different questions.
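The two points of view can even be put side by side in a toy simulation. Everything numerical here is an invented assumption, purely for illustration: the disaster rates, the ten-year window, and the count of cities are made up, not measured.

```python
import random

random.seed(1)

DAYS = 3650              # a ten-year window (invented)
WINDOW = 5               # "within five days of each other"
P_EVENT = 1 / (365 * 5)  # each disaster type hits a city ~once per 5 years (invented)
N_CITIES = 1000          # cities we are willing to count as "somewhere" (invented)
TRIALS = 2000

def city_sees_coincidence():
    """One simulated city: do an earthquake and a hurricane land
    within WINDOW days of each other during the ten years?"""
    quakes = [d for d in range(DAYS) if random.random() < P_EVENT]
    storms = [d for d in range(DAYS) if random.random() < P_EVENT]
    return any(abs(q - s) <= WINDOW for q in quakes for s in storms)

# Question 1: this particular city.
one_city = sum(city_sees_coincidence() for _ in range(TRIALS)) / TRIALS

# Question 2: at least one of many independent cities.
any_city = 1.0 - (1.0 - one_city) ** N_CITIES

print(one_city)   # small: a genuine surprise for your particular city
print(any_city)   # close to 1: hardly surprising for some city, somewhere
```

Same simulated world, two questions, two wildly different answers — which is exactly the freedom a motivated analyst can exploit.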

To a certain extent, this is just why Republicans and Democrats almost never agree, even if they are discussing the same basic data. The point of a spin-doctor is to figure out which question to ask in order to get the political answer that you wanted in advance. Obviously this kind of manipulation is unacceptable in science. Unfortunately it is also unavoidable.

Why? It isn’t just politics. One might expect problems in subjects with a direct political consequence, for example in demographic, medical or psychological studies. But even in these subjects, the problem isn’t merely political — it’s inherent in what is being studied, and how. The experimental object is people; and people are complicated, with many biases they may or may not reveal. Moreover, accumulating a large data set is difficult; medical studies that ought to study thousands of individuals often study a few dozen. On top of that, statistics can be very subtle, and not all researchers have enough math background to handle the complex statistics that ought to be used. Now you might think that, in contrast, particle physicists, who study very simple and largely apolitical systems, work with very large data sets, and know mathematics very well, should not have huge debates about which statistical techniques to use. But they do. And the debate often boils down to this: *is the question that you have asked in applying your statistical method the most even-handed, the most open-minded, the most unbiased question that you could possibly ask?*

This is not a question of whether someone made a mathematical mistake. It is a question of whether they *cheated* — whether they adjusted the rules unfairly — and biased the answer through the question they chose, in just the way that every Republican and Democratic pollster does.

Inevitably, the scientists proposing intelligent but different possible answers to this question end up not seeing eye-to-eye. They may continue to battle, even in public, because much is at stake. Biasing a scientific result is considered a terrible breach of scientific protocol, and it makes scientists very upset when they believe others are doing it. But it is best if the disputing parties come up with a convention that all subscribe to, even if they don’t like it. Because if each experimenter were to choose his or her own preferred statistical technique, in defiance of others’ views, then it would become virtually impossible to compare the results of two experiments, or combine them into a more powerful result.

Yes, the statistics experts at the two main LHC experiments, ATLAS and CMS, have been having such a debate, which has been quite public at times. Both sides are intelligent and make good points. There’s no right answer. Fortunately, they have reached a suitable truce, so in many cases the results from the two experiments can be compared.

But does the precise choice of question actually matter that much? I personally take the point of view that it really doesn’t. That’s because no one should take a hint of the presence (or absence) of a new phenomenon too seriously until it becomes so obvious that we can’t possibly argue about it anymore. If intelligent, sensible people can have a serious argument about whether a strange sequence of events could be a coincidence, then there’s no way to settle the argument except to learn more.

While my point of view is not shared explicitly by most of my colleagues, I would argue that it is embedded in our culture. Particle physicists have agreed, by convention, not to view an observed phenomenon as a discovery until the probability that it is merely a statistical fluke falls below about 1 in a million, a requirement that seems insanely draconian at first glance. There are several good reasons for this choice, and they deserve a longer article of their own, but one argument in favor is the story of the earthquake and the hurricane.
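That convention is usually phrased as a "five sigma" requirement. Assuming the fluctuations follow a Gaussian (normal) distribution, the corresponding fluke probability needs nothing more than the error function from the standard library:

```python
import math

def one_sided_p_value(n_sigma):
    """One-sided tail probability of a standard normal
    fluctuation at least n_sigma above the mean."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

p3 = one_sided_p_value(3.0)   # ~1.3e-3: "evidence", but no discovery claim
p5 = one_sided_p_value(5.0)   # ~2.9e-7: the discovery threshold
print(p3, p5)
```

Five sigma works out to roughly one chance in 3.5 million, comfortably beyond the one-in-a-million figure.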

Even when the probability of a particular statistical fluke, of a particular type, in a particular experiment seems to be very small indeed, we must remain cautious. There are hundreds of different types of experiments going on, collecting millions of data points and looking at the data in thousands of different ways. Is it really unlikely that someone, somewhere, will hit the jackpot, and see in their data an amazing statistical fluke that seems so impossible that it convincingly appears to be a new phenomenon? The probability of it happening depends on how many different experiments we include in calculating the probability, just as the probability of a 2011 New York hurriquake depends on whether we include other years, other cities, and other types of disasters.

This is sometimes called the “look-elsewhere effect”; how many other places did you look before you found something that seems exceptional? It explains how sometimes a seemingly “impossible” coincidence is, despite appearances, perfectly possible. And for the scientist whose earth-shattering “discovery” blows away in the statistical winds, it is the cause of a deeply personal form of natural disaster.
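The look-elsewhere arithmetic is simple enough to write down explicitly. The "1000 looks" below is an invented number, chosen only to make the point:

```python
def p_somewhere(p_local, n_looks):
    """Chance that at least one of n_looks independent searches
    turns up a fluke at least as unlikely as p_local."""
    return 1.0 - (1.0 - p_local) ** n_looks

p3 = 1.35e-3   # roughly the tail probability of a "3 sigma" fluctuation

print(p_somewhere(p3, 1))      # one search: ~0.00135, impressive in isolation
print(p_somewhere(p3, 1000))   # 1000 independent looks: ~0.74, almost expected
```

A fluctuation that looks wildly improbable in one analysis is close to a sure thing once you remember how many analyses are running.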

I agree. There will be sufficient integrated luminosity in several channels at the LHC that when the Higgs is there, and ATLAS/CMS are about to make a claim, you will also see it ‘with the naked eye’ in each of the main channels. Maximum likelihood estimates, combined analyses, etc., are only useful with marginal statistics, but then we are not good at quantifying marginal probabilities, as the LEP II Higgs saga showed. Their main purpose is to provide agreed ground rules for an ATLAS or CMS claim of a discovery or exclusion (their sensitivities appear so close, however, that I think they are unlikely to beat each other). On another subject, I have now understood why the numbers of expected Higgs events in the WW and gamma-gamma channels are similar, given that the branching ratio for the former is almost 1/alpha greater than the latter. At first order, each W has three leptonic decays and six hadronic ones (counting the three colors of the u-dbar and c-sbar modes). The tau/tau mode is not detected, so only about 4% of WW pairs decay with no jets, and this is the main reason why the two channels just about even out. Nonetheless, I have now realised the Higgs of the textbooks was premature. I don’t know why people were so careless.
