Of Particular Significance

A Second Look at the Curious CMS Events


POSTED BY Matt Strassler

ON 10/20/2011

So now the second shoe drops, and the earth shakes a bit (a magnitude-3.9 earthquake struck just outside Berkeley shortly before this talk).  Yesterday we had the first report from the CMS experiment’s search for multi-lepton events at the LHC (Large Hadron Collider).  (See yesterday’s post.)  The emphasis yesterday was on events that show evidence of substantial momentum and energy carried off by undetectable particles such as neutrinos (such evidence involves an imbalance of momentum among the detectable particles, or “missing energy” as it is often [misleadingly] called).  Today the emphasis is on events with large overall energy (not exactly true, but close enough for the moment), including both the energy of detectable particles (the leptons and any jets from quarks or gluons) and the evident energy of anything undetectable.  One could already tell from yesterday’s table of event rates that today’s talk at the Berkeley supersymmetry workshop would be worth paying attention to.

But before we begin, maybe I should make my own opinion perfectly clear to the reader.  How high does this story rate on the scale?

  • Particle physicists perhaps should be interested, maybe even intrigued, but definitely not excited.  Like most small excesses, this one will probably disappear as more data becomes available.
  • Other scientists should basically ignore this.  The excess will probably disappear soon enough.
  • I can’t see why the general public should pay any heed to this, as the excess will probably disappear — with the exception of those who are specifically curious as to why particle physicists are paying close attention.  Those of you who are in this category have a nice opportunity to learn why multi-lepton searches are a powerful, though tricky, way to look for new physics.

Why do excesses so often vanish?  Because sometimes they are just statistical fluctuations: when you do 100 different measurements, as is the case at the LHC, there’s a good chance that one of them will do something that the average measurement won’t do, namely, initially give you an excess of events. [Imagine a hundred people flipping coins; it wouldn’t be that unusual if one of those hundred coins landed on the same side the first seven or eight times it was flipped.]  The other reason is that experimentalists and theorists are human and make errors, often missing some subtle aspect either of their experiment or of their experimental subject, or introducing a bias of some sort.  I wrote about this just a few days ago (in regard to superluminal neutrinos — and the situation here is even more likely to be due to a statistical effect than is OPERA’s neutrino claim).
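To put a number on the coin-flip analogy, here is a small sketch of my own (purely illustrative, nothing to do with the CMS analysis itself): it estimates how often at least one of a hundred fair coins lands on the same side seven times in a row.

```python
import random

def same_side_streak(n_flips=7):
    """Flip one fair coin n_flips times; True if every flip matches the first."""
    first = random.random() < 0.5
    return all((random.random() < 0.5) == first for _ in range(n_flips - 1))

def at_least_one_streak(n_coins=100, n_flips=7, trials=20_000):
    """Fraction of trials in which at least one of n_coins coins shows a full streak."""
    hits = sum(any(same_side_streak(n_flips) for _ in range(n_coins))
               for _ in range(trials))
    return hits / trials

# Exact answer for comparison: one coin streaks with probability 2*(1/2)**7 = 1/64,
# so the chance that at least one of 100 coins does is 1 - (1 - 1/64)**100, about 79%.
print(at_least_one_streak())
```

So a “seven in a row” somewhere among a hundred independent measurements is closer to the rule than the exception.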

So, in that context, here we go.

Preliminaries

Today’s talk, by Rutgers University postdoctoral researcher Richard Gray [who has his office down the hall from me and has been very hard at work for many months], was entitled “CMS searches for R-parity violating Supersymmetry.”  Why is the multi-lepton search sitting under this title?  What is R-parity-violating supersymmetry, and why does it often give multiple leptons?

You can read about R-parity (one of the three key assumptions of standard variants of supersymmetry) here; the point is that it ensures that the lightest superpartner (LSP) is stable.  By a circuitous chain of logic, which depends on the other two assumptions, this ensures the LSP is undetectable, and that all collisions that produce superpartners have two undetectable LSPs at the end of the day.  This in turn implies that most of these events have large amounts of “missing energy”, or more accurately, an imbalance of momentum among the detectable particles that can only be made up by the undetected momentum of the undetectable LSPs.

If R-parity is violated, however, then the LSP decays.  And if the decays are fast enough, occurring after the LSP travels only a microscopic distance, its energy is not undetectable after all.  In this case, rather than a lot of missing energy, one expects very little missing energy and a lot of detectable energy.   [In the language of yesterday’s post’s table, one expects low to medium MET and high HT, instead of high MET; the variable which is most stable is actually ST, see below.]

Relevant for today’s purposes is that supersymmetry with R-parity violation can produce multiple leptons. One way this can happen (and by no means the only possibility!!) is shown in Figure 3 of my article on supersymmetry and multileptons. 

However, we really should not get caught up with R-parity violation or supersymmetry here.  This particular speculative idea is only the stated motivation for the search; it may well not be the source of any observed excess in multi-lepton events.  Many other new-physics processes could generate multiple leptons too.  What is so good about a multi-lepton search is that it is broadly applicable, as are many of the search strategies (even the standard supersymmetry search) used at the LHC.  It casts a wide net, and could catch fish that are not supersymmetric, and perhaps of types never considered by theoretical physicists.

Now, what did CMS observe?

CMS Results

Today’s talk was packed with information, and there’s no way to process it all in a few minutes.  So consider this a first pass only.

The key difference between yesterday’s talk and today’s is in how exactly the same events were divided up.  Yesterday they were classified by two variables: HT and MET.  Today they are classified differently, by ST.  Vaguely, roughly, and incorrectly, one might say that HT measures the energy in jets, MET the energy in undetected particles, and ST the total energy in everything produced in the collision.  Correctly stated [you can skip this if you don’t want to know…]

  • HT is the [ready???] sum of the absolute values of the transverse momenta of all the jets (read: quarks and gluons), where “transverse” means the components of the momentum perpendicular to the beam.
  • MET is the [careful…] absolute value of the sum of the transverse momenta of all the jets and the leptons and anti-leptons.  It should be about equal to the absolute value of the sum of the transverse momenta of all the undetectable objects.
  • ST is the sum of the absolute values of the transverse momenta of all the jets and the leptons and anti-leptons, plus the MET.  This variable (also sometimes called the effective mass) is the most robust variable for detecting new particles, as it does not much depend on how those particles decay.  (A short code sketch just after this list makes the three definitions concrete.)
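Here is a minimal sketch of those three definitions, assuming nothing more than lists of transverse-momentum components in GeV.  This is my own illustration, not CMS code, and it ignores thresholds, detector effects, and everything else that makes the real analysis hard:

```python
import math

def ht(jets):
    """HT: scalar sum of the jet transverse momenta. Each jet is an (px, py) pair in GeV."""
    return sum(math.hypot(px, py) for px, py in jets)

def met(jets, leptons):
    """MET: magnitude of the vector sum of the transverse momenta of everything detected
    (which should match the magnitude of whatever went off undetected)."""
    sum_px = sum(px for px, _ in jets + leptons)
    sum_py = sum(py for _, py in jets + leptons)
    return math.hypot(sum_px, sum_py)

def st(jets, leptons):
    """ST: scalar sum of all detected transverse momenta, plus the MET."""
    scalar_sum = sum(math.hypot(px, py) for px, py in jets + leptons)
    return scalar_sum + met(jets, leptons)

# Toy event: two jets and three leptons (transverse momentum components in GeV).
jets = [(60.0, 10.0), (-45.0, 25.0)]
leptons = [(30.0, -40.0), (-20.0, 15.0), (5.0, -35.0)]
print(ht(jets), met(jets, leptons), st(jets, leptons))
```

Note the deliberate difference between the scalar sums (HT, and the first piece of ST) and the vector sum inside MET; that distinction is exactly the point raised in the comments below.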

So take the same events as yesterday, divide them up by their ST (below 300 GeV, between 300 and 600 GeV, or above 600 GeV), and ask (as yesterday) whether there are any electron-positron or muon-antimuon pairs, and if so, ask whether there is a pair that might have come from a Z particle decay.  And you get the following tables.  For four leptons, you get two unusual events.  The speaker makes the point better than I could (Figure 1).
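Before looking at the table, here is a rough sketch of the classification just described, in the spirit of the DY0/DY1/DY2 and Z labels of Figures 1 and 2.  It is my own illustration, not CMS code; the data format and the 15 GeV window around the Z mass are assumptions made purely to keep the example short.

```python
import itertools, math

Z_MASS = 91.2  # GeV

def inv_mass(p1, p2):
    """Invariant mass of the sum of two four-vectors (E, px, py, pz), in GeV."""
    e, px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def classify(leptons):
    """Each lepton: {"flavor": "e" or "mu", "charge": +1 or -1, "p4": (E, px, py, pz)}.
    Returns a (DY label, Z label) pair: DY counts opposite-sign same-flavor pairs
    (capped at 2), and the Z label flags a pair whose mass lies near the Z mass."""
    n_pairs, has_z = 0, False
    for flavor in ("e", "mu"):
        plus = [l for l in leptons if l["flavor"] == flavor and l["charge"] > 0]
        minus = [l for l in leptons if l["flavor"] == flavor and l["charge"] < 0]
        n_pairs += min(len(plus), len(minus))
        for lp, lm in itertools.product(plus, minus):
            if abs(inv_mass(lp["p4"], lm["p4"]) - Z_MASS) < 15.0:  # rough window
                has_z = True
    return f"DY{min(n_pairs, 2)}", ("with Z" if has_z else "no Z")

def st_bin(st_value):
    """ST category used in the tables: below 300 GeV, 300-600 GeV, or above 600 GeV."""
    return "ST<300" if st_value < 300 else ("300<ST<600" if st_value < 600 else "ST>600")
```

With a classifier like this, plus the ST of each event, filling in a table like the one in Figure 1 is just a matter of counting.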

Fig. 1: Taken from Richard Gray's talk representing the CMS experiment; the violet ellipse was added by me. Four-lepton events are shown. ST is defined in the text; DY0, DY1, DY2 refer to whether there are 0, 1 or 2 electron-positron or muon-antimuon pairs; Z refers to whether one lepton-antilepton pair could be from a Z particle decay. The two circled events have very low background and are quite unexpected, but at this point are just tantalizing.

Something to watch, but nothing more for now.

So now what about the three-lepton events?  There were several bins that were a bit high in yesterday’s talk.  Here they are reorganized, and they mostly end up in two bins, as seen in Figure 2.

Fig. 2: Taken from Richard Gray's talk representing the CMS experiment; the violet ellipses were added by me. Three-lepton events are shown. See Figure 1 for definitions; T stands for tau leptons (specifically those that decay to one charged hadron.) The two circled entries are somewhat in excess. They apparently correspond mainly to the same events that were in excess in yesterday's talk; here we get a different view of them.

Now there were a number of very interesting plots (but not enough!) that give additional insights into these events.  I just picked a couple that seem most revealing to me.  First, take the events that have an electron-positron or muon-antimuon pair [“DY1”] and separate the events into those that have a pair from a Z and those that don’t; and now make plots of the ST distribution for both sets.  Here’s what CMS finds:

Fig. 3: Taken from R. Gray's talk representing CMS; text inside figures added by me. The distributions of ST (a measure of overall energy; see text for definition) for events with three leptons, two of which are either an electron-positron pair or a muon-antimuon pair. Left: the pair is likely to have come from a Z particle. Right: the pair is unlikely to have come from a Z particle. Notice the good agreement at left, while the right figure shows how the excess of events in Fig. 2 is distributed at moderate values of ST, all below 1 TeV.

What does this mean? Well, of course it might mean that CMS is having trouble modeling its backgrounds. But if it is new physics, we learn a lot right away. The amount of energy being released in these events is low. For instance, if new 500 GeV particles were being produced, and most of their mass-energy were released in their decays, we would expect a higher distribution of ST. So any new physics associated with this excess lies with somewhat lighter particles [or with heavy particles that do not release much of their energy, because they decay to stable undetectable particles that have just slightly smaller masses.]

That’s not all; in one of the backup slides, one finds, for the channel in yesterday’s talk that had the largest number of excess events, a distribution of the number of jets (those surrogates for quarks, antiquarks and gluons) with moderate to high energy.  The number is typically zero or one.

Fig. 4: Taken from R. Gray's talk representing CMS. The distributions of the number of jets (with momentum transverse to the beam greater than 40 GeV) for events with three leptons, two of which are either an electron-positron pair or a muon-antimuon pair. See earlier in the post for the definitions of MET and HT. Left: the pair is likely to have come from a Z particle. Right: the pair is unlikely to have come from a Z particle. Notice the good agreement at left, while the right figure shows that the excess of events in this bin is concentrated in the bins with 0 or 1 jet.

And this tells us that it is unlikely that new particles that feel the strong nuclear force could be responsible for this signal; such particles, if produced in pairs, would easily produce two jets or more (see Figure 3 of this post), but not easily one or fewer. [Another possibility (see above) is that the jets produced are very low-energy, because the produced heavy particles decay to other heavy particles, of very similar mass, that do not themselves feel the strong nuclear force.]

What options remain? One would be the production of particles that feel only electromagnetic and weak nuclear forces, and have masses perhaps in the 250–400 GeV range [a very rough guess, I’d need to do simulations to see if that’s really right.] (Crude example: something similar to Chargino-Neutralino production [see Figure 2 of this post], but with a higher production rate.) Another would be the production of heavy particles that do feel the strong nuclear force but that don’t release that much energy because their masses lie just above those to which they decay. (Crude example: A form of quasi-universal Extra Dimensions, modified to allow for the trileptons to carry a lot of energy.) Maybe there are singly-produced particles in the 1 TeV mass range that can give this effect too, but that tentatively seems difficult, based on a few of the other plots in the talk.  I’m sure theorists will come up with many other possible explanations.

There are more plots in the talk to discuss, and more things to say about plots I’d like to see, but I’ll stop at this point: this seems like more than enough for one post.  We can’t do too much more with only a handful of events.  Now let’s wait for ATLAS to weigh in, and for CMS and ATLAS both to analyze the 150% more data that they’ve already recorded.  Then we’ll see if this excess disappears like a shimmering mirage, or takes on a sharper outline like a much-needed oasis in the desert.


29 Responses

  1. It is worth noting that astronomy data relating to large-scale structure (under the general rubric of warm dark matter studies) and direct detection efforts for dark matter are not a good fit to a dark matter particle with a mass of hundreds of GeV or more. The large-scale structure data favor something on the order of the keV mass range; the direct detection efforts that claim any signal (and they are mutually inconsistent in the simplest of dark matter models) point to something in the single-digit GeV mass range.

    So, if SUSY is out there, there is some evidence entirely separate from particle-collider evidence to favor an R-parity-violating model over one that conserves R-parity.

  2. Dear Prof. Strassler,

    I don’t understand this:
    “Crude example: A form of quasi-universal Extra Dimensions, modified to allow for the trileptons to carry a lot of energy”

    I mean, how and why do extra dimensions possibly come in here …

    Could you say a little bit more about this?

    1. I’ll have to do an article. It’s too long an answer… But maybe you can look up “Universal Extra Dimensions” (the “universal” meaning that all particles in the standard model live in additional spatial dimensions of size 10^(-18) meters.)

      1. Whoops, I have follow-up questions:

        Are such “large” extra dimensions not ruled out by now, or did these earlier results this year only concern certain specific (RS?) models?

        1. I refer to “smaller” extra dimensions; and even the models you refer to are not excluded, no. It’s details, details… There are many different types of extra-dimensions models. It needs an article…

      2. Dear Prof. Strassler,

        Ok thanks for this clarification so far 🙂

        I think a nice article like the one about the state of SUSY is badly needed for the extra dimensions too, clarifying the details you mention and saying what sorts of models are excluded and what (with a small probability, I think?) could still be “in force”

        I just remember, for example, a certain well-known blog scornfully cheering each incoming exclusion result concerning extra dimensions too (in addition to SUSY) earlier this year :-/ …

        So a clear voice of reason concerning the state of different types of extra dimensions is needed again 😉

  3. Yes, I see that the WW branching ratio is about 2.5 times the t tbar BR, e.g. from page 6 of Kribs et al. And perhaps not coincidentally, that problem is likely to persist right down to the lower limit of 80 GeV for the 4th gen neutrino. So it’s now a real problem to have a 4th generation and just one Higgs, unless that Higgs is so heavy that the Higgs sector is starting to become strongly coupled.

    Unless, perhaps, the Higgs mixes with SM singlets, as suggested by van der Bij and Patt and Wilczek? (Maybe I’m getting desperate here.)

    1. The mixing you refer to was suggested yet earlier by Schabinger and Wells; and earlier than that by any number of authors in various contexts. And I pointed it out (following on Schabinger and Wells) in my much more general paper with Zurek, hep-ph/0604261, previous to Patt and Wilczek. I don’t understand why people are so insistent on crediting these papers that you mention.

      1. Thanks a lot for pointing out the earlier articles. It’s just pure chance that I learned about this from van der Bij, and Patt and Wilczek. If I can use this I’ll try to check out the history and make sure the earlier papers are cited, including yours.

  4. Thanks a lot for this fascinating report.

    ATLAS seems to have found 0 4-lepton events with no identified Z’s in 1 per fb, so I guess that slightly reduces the significance of the first of those 2 events you circled. From the graphs you showed, it looks as though CMS used about 2 per fb.

    I’m a bit puzzled by why the ATLAS expected background of 0.7 4-L events with no Z’s is a lot higher than the sum of all the CMS backgrounds shown for 4-L events with no Z’s.

    With regard to your Higgs-related objection to a 4th SM generation whose leptons produce the CMS excesses: if I recall correctly, the heavy 4th generation quarks would increase the Higgs production rate by a factor of 9, so the 4th gen is excluded for a single Higgs of mass up to around 600 GeV, unless the Higgs has a large invisible width. But suppose the 4th gen neutrino had Dirac mass 100 GeV and the Higgs mass was say 210 GeV, so the 4th gen neutrino couples strongly to the Higgs and the Higgs can decay preferentially to 2 4th gen neutrinos, but all other heavy fermion pairs are too heavy. Would this be a way to rescue the 4th gen without being “very ugly”?

    @SA: which table did you mean?

    1. I’ll have to look, but keep in mind that the key to the new CMS study is the effective removal of backgrounds to low-energy leptons. That’s where my colleagues prove their worth. Earlier studies by both ATLAS and CMS have larger backgrounds because their techniques for removing background were not yet as powerful as those that CMS used in this study. In particular, the absence of 4-lepton events in the ATLAS study does not much affect the significance of the CMS study unless they used the same strategy for digging deep into the data, which I doubt.

      On your higgs point — why, given your logic, doesn’t a 500 GeV Higgs particle dominantly decay to top quarks? 🙂

  5. The uncertainties given in the table you posted are only systematic. Add the Poisson error in quadrature, and you see no excess.

    1. That’s not really a correct statement: the correct statement is there is an excess, but it is not very statistically significant. That is why CMS is making no claims of having seen anything. There is no reason for theorists to ignore it, however, as it might lead us to suggest other strategies for searching in the data — ones that might be more efficient and effective for finding a more conclusive excess, or excluding it.

  6. Stuart,
    Transverse momentum is being used as a proxy for energy, so when we want to find the “total energy” we sum the absolute values. The only time we use a vector calculation is when we try to deduce the existence of undetectable particles (MET). Assuming that there is either only one undetectable particle, or that all undetectables leave in the same direction, we can add the absolute value of this to the sum of absolute values of the visible components to get a proxy for total energy (ST). A vector calculation of ST would (by definition) always produce zero.

    Matt,
    One would be the production of particles that feel only electromagnetic and weak nuclear forces, and have masses perhaps in the 250–400 GeV range

    This would appear at first sight to be consistent with a fourth generation of leptons. I’d be inclined to believe that sooner than some of the more exotic alternatives.

    1. A fourth generation of leptons would require a fourth generation of quarks to cancel anomalies, and a fourth generation of quarks would have vastly increased the Higgs production rate. Or you have to make the model very ugly. So on the contrary, I’d rather not have a fourth generation of leptons; I’ll take something that does NOT get its mass from the Higgs field and makes no contributions to anomalies — charginos and neutralinos being just one of the possible examples.

      1. As a Higgs-skeptic, I don’t share your reservations about 4thG quarks. 😉 IIRC the lower mass limit for 4thG neutrinos is ~80 GeV and for charged leptons ~100 GeV, so I think it’s premature to rule them out completely.

  7. How should one mathematically handle signals with high significance but few events? (Not sure that “high significance” is the correct way to state this)

    I mean, getting four events when 1.34 ± 0.40 are expected is naively a near-seven-sigma discovery. In the case of a single event when 0.002 ± 0.001 are expected, this is even more pronounced.

    Also,

    “Another would be the production of heavy particles that do feel the strong nuclear force but that don’t release that much energy because their masses lie just above those to which they decay”

    Would SUSY with a next-to-lightest particle decaying to the LSP be the most natural explanation for these signals?

    1. You have to use Poisson statistics, not Gaussian statistics; and the error bar on the expectation is just that, an error bar on the expectation, not the Poisson-statistics error bar on the data. So you’re overestimating the significance in both cases. It is true that the one event with low background is statistically very significant. But the other point about having just one event is that it takes just one rare mistake or freak accident to get one rare event. Always remember Blas Cabrera’s magnetic monopole of 1982.

      Define “natural” and I’ll answer your second question. 🙂 No one needs to say “SUSY”, just new fermions in the doublet or in the triplet+singlet representations of the electroweak theory. It’s not SUSY until we have sleptons and squarks, and (depending on the rates, which I would have to calculate) we may not need any of those for a signal like this. If that’s too technical an answer let me know.

        1. Thanks. So if I understand correctly, the low-background event (0.002 ± 0.001) has roughly a 1/1000 to 1/333 chance of occurring, and is then slightly over a three-sigma deviation.

        1. Formally that might be right, but how would you characterize the “magnetic monopole” event? Freak accidents don’t follow Poisson statistics; that’s why one is never enough..
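[A note for readers who want to check the Poisson arithmetic in this exchange: here is a minimal sketch of my own.  It uses only the two sets of numbers quoted above, ignores the uncertainty on the expected background, and converts the one-sided p-value into an equivalent Gaussian “sigma”.]

```python
import math

def poisson_p_at_least(n_obs, mean):
    """One-sided Poisson p-value: chance of seeing n_obs or more events
    when the expected background is `mean` (background uncertainty ignored)."""
    return 1.0 - sum(math.exp(-mean) * mean**k / math.factorial(k)
                     for k in range(n_obs))

def p_to_sigma(p):
    """Convert a one-sided p-value to the equivalent Gaussian significance,
    by bisecting the one-sided normal tail 0.5*erfc(z/sqrt(2))."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if 0.5 * math.erfc(mid / math.sqrt(2)) > p else (lo, mid)
    return 0.5 * (lo + hi)

# The two cases quoted above: 4 observed with 1.34 expected, 1 observed with 0.002 expected.
for n_obs, mean in [(4, 1.34), (1, 0.002)]:
    p = poisson_p_at_least(n_obs, mean)
    print(f"{n_obs} observed, {mean} expected: p = {p:.3g}, about {p_to_sigma(p):.1f} sigma")
```

The four-events-on-1.34 case comes out well under two sigma rather than anything like seven, and the single event on a 0.002 background comes out near three sigma, consistent with the discussion above (and, as noted, a single event is always vulnerable to the “magnetic monopole” caveat).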

  8. Just trying to understand the definitions of HT, MET, and ST.

    I can see why we’re interested in transverse momentum. The incoming beams (by definition) have zero transverse momentum. Thus, by conservation, the resulting outputs (the leptons and anti-leptons and jets that can be detected, and whatever else that can’t be detected) should have zero transverse momentum in total as well.

    What’s confusing me at the moment is the switching of “absolute value of the sum” and “sum of the absolute values” in the definitions of MET and ST. Perhaps I’m just misunderstanding what’s being added up: I’m thinking of the transverse momenta as vectors radial to the beam, which we can add up to give another radial vector (so if there are no undetectable particles, this sum vector would be zero). But then I’m interpreting “absolute value” as simply magnitude, throwing away the directional component — and now I’m confused as to why we’d want to do that.


