[UPDATE 10/23/11: Blog traffic indicates that far more of you are reading this post than the one that follows. That’s too bad, because in my opinion the more interesting information was in the second talk from CMS on the subject, not the first. But also it’s too bad because you’re not reading my very clear statement of what this small excess does and doesn’t mean. So I’ve decided to copy that statement into this post. Here is my view, as stated 10/20/11.
But before we begin, maybe I should make my own opinion perfectly clear to the reader. How high does this story rate on the scale?
- Particle physicists perhaps should be interested, maybe even intrigued, but definitely not excited. Like most small excesses, this one will probably disappear as more data become available.
- Other scientists should basically ignore this. The excess will probably disappear soon enough.
- I can’t see why the general public should pay any heed to this, as the excess will probably disappear — with the exception of those who are specifically curious as to why particle physicists are paying close attention. Those of you who are in this category have a nice opportunity to learn why multi-lepton searches are a powerful, though tricky, way to look for new physics.
I go on in that post to talk about why I think this; you can read about it there. So — buyer beware… The original post now follows (and please note words like “minor” and “somewhat”; they are not there by accident.)]
Finally, something at the Large Hadron Collider (LHC) that does not seem to agree all that well with the predictions of the equations of the Standard Model of particle physics. Of course, we should not be surprised that it has taken a while for even a minor discrepancy of this type to see the light of day; as I emphasized in this article, there is always a tendency, during the early and middle years of an experiment, for results that agree with expectations to appear first, while results that don’t agree get extra scrutiny and take longer. For this very reason, many outside the Large Hadron Collider experiments have been waiting with great curiosity for the results of the search for “multileptons”: very rare proton-proton collisions that directly produce three or more of the following: electrons, positrons (i.e., anti-electrons), muons, anti-muons, taus, and/or anti-taus. These searches have been noticeably late.
Today, at the supersymmetry workshop at Berkeley, the CMS experiment finally released some results on this search. And indeed, there is something of an excess of events with three leptons, especially in events with no taus or anti-taus, but also perhaps in events with one tau or anti-tau. Not enough for CMS to say anything other than “Observed data are essentially consistent with background expectations; no smoking gun for new physics yet.” But I think this is somewhat more interesting than their conservative statement implies. Here’s the table from their presentation, which I’ve marked with red dots where the number of events somewhat exceeds expectations. Importantly, there are also very few entries below expectations to balance out these upward excesses; this really seems to be a true surplus. Of course, we absolutely cannot conclude this has anything to do with new physics: first, the excess still has rather low statistical significance, even combining entries in the most optimistic way; and second, if CMS had a subtle problem understanding its detector backgrounds, or theorists had somehow missed or miscalculated a background, that could also lead to an excess. But this is clearly something to watch closely over the coming months.
UPDATE: It has been pointed out to me that I should have mentioned that there is one very unusual four-lepton event, with MET > 50, HT < 200, and no Z. The background to such events is thought to be 0.014 ± 0.005. So this event should not be there. But there’s not much more to say; with this many collisions, occasional weird things do happen. Until we see three or four events like this, we have to assume this one might be a fluke. Remember the magnetic monopole of 1982.
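For a rough sense of how (un)likely a single such event is as a background fluctuation, one can treat the quoted background of 0.014 as a Poisson mean. This is a minimal sketch, ignoring the ±0.005 uncertainty on the background estimate:

```python
import math

# Expected background for this four-lepton category (number quoted in the post).
mu = 0.014

# Poisson probability of seeing at least one background event: P(N >= 1) = 1 - e^{-mu}.
p_at_least_one = 1 - math.exp(-mu)

print(f"P(at least one background event) ~ {p_at_least_one:.4f}")  # roughly 1.4%
```

A ~1.4% chance is small but far from impossible given the many categories being examined, which is why one unusual event by itself proves nothing.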
I know something about how this search is done because the two experimental groups that are most involved are from Karlsruhe (whose member Fedor Ratnikov gave today’s talk) and my institution, Rutgers. [Colleagues at UC Davis inform me that one of their faculty and a student were also involved in part of the analysis.] The Rutgers group is headed by professor Sunil Somalwar, who, back in 2008 with his student Sourabh Dube, did a highly regarded comprehensive three-lepton search at the CDF experiment at the Tevatron (the now-closed U.S. predecessor to the LHC). His extensive experience is being put to use again here at the LHC. Somalwar is assisted on the experimental side by postdoc Richard Gray (who will also be giving a talk about a related search tomorrow, where we may hope for more information) and faculty member Amit Lath, and on the theory side by professor Scott Thomas, along with a number of students. One thing that’s important to understand is that this is an extremely complex search. You can get a sense for this simply from the number of entries in the table; obtaining the background expectation for each of the entries is a major chore, and there are dozens of them. And there are many subtleties in understanding backgrounds when one pushes the envelope to study leptons with rather low energies, which is part of the Rutgers group’s special expertise.
I’ll try to produce an explanation of why multi-lepton events are interesting places to look for new physics, and why understanding backgrounds in a search like this is tricky, over the next day or so. Stay tuned. UPDATE — a page on multi-lepton events, supported by preliminary pages (some lacking figures at the moment) on jets, taus, and supersymmetry multi-leptons, is now available if you are curious.
26 Responses
One other thought. The 20 observed events in the one-tau, four-lepton row are the most exceptional on the chart in terms of sigma value, but it makes little sense that the confidence interval for a bin with an expected number of observations of about eight is +/- 19% of the mean, while the four-lepton bins with larger expected numbers of events have larger confidence intervals as a percentage of the mean.
It looks to me like the formula being used to determine confidence intervals is not properly accounting for the effect of small expected sample sizes, which almost always have larger confidence intervals as a percentage of the mean than larger samples, as a result of the law of large numbers. This effect is negligible in large samples (e.g. 1000+), but is huge in small samples (e.g. 20 and fewer).
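The commenter’s point can be illustrated with the standard Poisson rule of thumb: the relative statistical uncertainty on a count with mean mu scales as sqrt(mu)/mu, so it shrinks as the expected count grows (systematic uncertainties on the background estimate come on top of this). A quick sketch:

```python
import math

# Relative statistical (Poisson) uncertainty, sqrt(mu)/mu, for various expected counts.
# Illustrative values only; not taken from the CMS table.
for mu in [8, 20, 100, 1000]:
    rel = math.sqrt(mu) / mu
    print(f"mu = {mu:5d}: relative uncertainty ~ {rel:.1%}")
```

By this rule, a bin with mean 8 carries roughly a 35% relative statistical uncertainty, while one with mean 1000 carries only about 3%, which is the behavior the commenter expects small bins to show.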
See my answer to Chris Austin’s question.
Lubos has already expanded on the SUSY interpretation of the trilepton excess. And I like it. It is going to be very funny if the SUSY sector is found before the Higgs sector itself.
There’s absolutely no evidence at this point that this has anything to do with supersymmetry. ZERO. So keep that in mind.
The other thing about your point is that most supersymmetry experts *expected* supersymmetry to be discovered [if it existed at all] before the Higgs particle. Standard supersymmetry is rather easy to discover (large rate for jets plus undetectable particles, low background) while a lightweight Higgs is harder (low rate for two photons, large background).
Well, I was thinking of a different scenario: not the MSSM or its extensions, but the SSM. By SSM I mean simply asking every known particle to be in an N=1 supermultiplet, while remaining agnostic about the Higgs mechanism. Then you still have a wino and a zino, and three scalars from the massive N=1 supermultiplets, but no more. This SUSY scenario is probably more difficult to discover than the standard one with five scalars and more neutralinos. (And my own model is even stingier, because I think I can dispose of all the scalars.)
If you lump the four no-tau, no-Z-boson bins together (and given the MET and HT combinations in the three most notable bins, I don’t see how you could justify segregating them in a theoretically significant way from the fourth no-tau, no-Z-boson bin), you have an expected value of 97.5, an observed value of 111, and two-thirds of the sum of the four 95% confidence intervals (a crude approximation of the proper way to combine them) of 80–115 events. The other variations from the norm (and there are a couple of slight undervalues as well as the couple of slight overvalues that you note) are about what you would expect for the data set as a whole. (A chi-squared test would really be the right way to measure the significance of the observed vs. predicted values for the study as a whole.)
In short, “nothing to see here, move along.” If you break up your bins too finely without a good reason for doing so, and your absolute event numbers are small, you are going to see quirks like this now and then.
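[The combined-bin arithmetic in this comment can be sketched numerically. The following uses a simple Gaussian approximation to the Poisson fluctuation, statistical uncertainty only, as a crude stand-in for the chi-squared test the commenter mentions:]

```python
import math

# Combined no-tau, no-Z bins, using the numbers quoted in the comment above.
expected, observed = 97.5, 111

# Gaussian approximation: significance z = (obs - exp) / sqrt(exp).
z = (observed - expected) / math.sqrt(expected)

# One-sided p-value for an upward fluctuation at least this large.
p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))

print(f"z ~ {z:.2f} sigma, one-sided p ~ {p_one_sided:.3f}")
```

The result is well under two sigma, consistent with the commenter’s “nothing to see here” reading of the combined bins.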
Very possibly you are right about the “nothing to see here”. It’s just “curious”, and that’s all for the moment. But you are wrong about there not being a good reason to segregate the bins as is done here. The bin you chose to include is known to have large background. That’s something like a “control” sample. Obviously in any experiment if we combine signal and control samples we’ll reduce the significance of the signal, but there’s a very good reason not to do that.
I’m wondering if there could be a typo somewhere in the Totals 4L row, N(tau) = 1 column, where it says: obs 20.0, expected SM 7.8 ± 1.5 — i.e., it looks like an 8 sigma excess. Obviously I can’t check it because you’ve deleted the rest of the 4L data, but perhaps it would be worth checking for a typo if you have the rest of the 4L data.
It’s also noticeable that obs increases monotonically with N(tau) in the Totals 4L row, while expected SM has a dip at N(tau) = 1. On the other hand, Totals 3L shows a massive peak at N(tau) = 1, and obs and SM agree well in the whole of that row. Is there any simple reason for the huge peak at N(tau) = 1 in the Totals 3L row?
Taus are tough; there are lots of fake taus. The excess events are those with low MET and low HT (see the figure for the rough definitions) and have a Z particle in them. This is the place where you are least likely to have new physics show up, and most likely to have a problem with the modeling of the backgrounds. So my current opinion is that this is probably not significant.
It may be worth clarifying the statement on taus… we don’t see taus directly in the detector. So does N(tau) mean candidates reconstructed from hadronic decay products?
Yes, true. In fact I am almost certain it means one-prong taus (i.e. taus that decay to only one charged hadron).
It doesn’t seem to be compatible with a Higgs channel, does it?
Thank goodness i found out in time that “No-OSSF”, does not stand for On-Site Sewage Facility, but that it stands for “Opposite-Sign, Same-Flavor”
Thanks for the article.
Michel
What are “Totals 4L” & “Totals 3L”?
Never mind. Those are the total events with 4 Leptons or 3 Leptons. I just added the numbers in the rows together. Sorry for the stupid question.