Of Particular Significance

The Trigger: Discarding All But the Gold

Matt Strassler 11/04/11

Did you know that most of the information produced in the proton-proton collisions at the Large Hadron Collider (LHC) is dumped irretrievably in the metaphorical trash bin — sent ingloriously into oblivion — yes, discarded permanently — as quickly as it comes in? By “most,” I don’t mean 75%. I don’t mean 95%. I mean 99.999% to 99.9999% of all the data at the Large Hadron Collider is erased within a second of its being collected.

It sounds crazy; how can a scientific experiment simply ignore the vast majority of its data?! Well, it’s not as insane as it first appears. Nor is it unprecedented; previous generations of hadron colliders have done something similar. And finally, it is absolutely necessary. In this article I’ll tell you why.

There are three main observations that lead to the decision to throw away most of the data.   [I’ll describe this in the context of the two large general-purpose experiments at the LHC, called ATLAS and CMS; the details for the other experiments are quite different, though the issues are analogous.]

The first is that it doesn’t hurt (much). Most proton-proton collisions are boring. Certainly 99% of them are extremely dull (to a particle physicist, anyway), and the next 0.99% won’t raise any eyebrows either. Let ’em go.  A few hadrons produced, maybe a couple of jets of rather low energy.  No big deal.  Nothing to see here, folks.

The second is that what particle physicists are looking for at the LHC is certainly something very rare indeed. Profoundly interesting proton-proton collisions — say, ones in which a Higgs particle, or some other hypothetical new particle, might be produced — are exceedingly uncommon, at most one in 10,000,000,000 collisions and perhaps as rare as one in 10,000,000,000,000 collisions. If it were possible to distinguish and separate, in real time, the many dull collisions from the rather few that show characteristics redolent of one of these very rare processes, then the stored data set would be greatly enriched in interesting collisions. It turns out this can be done, with a reasonable degree of reliability.

Third, there’s really no practical choice. Given that we’re on the lookout for something as rare as one in 10,000,000,000,000 collisions, and discoveries are rarely possible unless a new physical phenomenon has been produced a few dozen times at least, we’ve no choice but to make 1,000,000,000,000,000 collisions or so a year. Accounting for the fact that the LHC isn’t on all of the time, that translates to about 100,000,000 collisions each second!   If the experimentalists tried to keep and process all the data from all of those collisions, it would require something in excess of the entire world’s stock of computers — not to mention blowing through the LHC’s budget!

In short, ATLAS and CMS can only afford to process and store about 500 collisions per second. But if the LHC matched this limitation, and only produced 500 collisions per second, only one or two Higgs particles would be made at the LHC each year! Not nearly enough for a discovery to be made!
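
To see how these numbers fit together, here is a quick back-of-the-envelope check in Python (the values are rounded, illustrative figures taken from the paragraphs above, not official ATLAS or CMS numbers):

```python
# Rough arithmetic behind the rates quoted above (rounded, illustrative values).
collisions_per_year = 1e15     # about 1,000,000,000,000,000 collisions per year
seconds_of_beam     = 1e7      # the LHC isn't colliding protons all year round
collisions_per_sec  = collisions_per_year / seconds_of_beam   # ~100,000,000 per second

kept_per_sec = 500             # roughly what ATLAS or CMS can afford to store
discard_fraction = 1.0 - kept_per_sec / collisions_per_sec
print(f"{collisions_per_sec:.0e} collisions/sec, {discard_fraction:.4%} discarded")
# -> 1e+08 collisions/sec, 99.9995% discarded
```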

So there’s no choice.  The LHC must vastly overproduce collisions, and the ATLAS and CMS experiments must determine, in real time, if a particular collision looks interesting enough to be worth keeping.   Obviously this must be done automatically; no human committee could select, each second, a few hundred out of 100,000,000 collisions for permanent storage! The whole operation has to be done by computers, actually by a combination of hardware and software. The critically important system that carries out this key decision is called “the trigger”. For each collision, either the trigger fires, and the collision is stored, or it doesn’t fire, and the collision is lost for good.  Clearly, the trigger had better work as intended.

Now is this trigger system, this strategy of dumping information overboard, really so strange and unfamiliar?  Not really.  It’s doing something similar to what your brain does every day, say, with faces. Think about it: if you commute by public transport to work each day, or walk to work within a city, your brain probably registers hundreds of faces daily. How many of them can you remember from last week, or last year? Probably just a few, belonging to those people with whom you had a conversation, a confrontation, a collision. It would appear that your brain only bothers to store what it considers relevant. Something — a rapid relevance-determining mechanism, over which you have little conscious control — triggers your brain to put a memory of a face somewhere where you can access it. Most of the memories get shunted to someplace inaccessible, and perhaps are even “overwritten.” And it’s a bad day when you fail to remember the face of someone highly relevant, such as a previous boss or a potential future spouse. Your trigger for remembering a face had better work as intended.

At an LHC (or other hadron collider) experiment, the trigger employs a number of strategies to decide whether a collision looks interesting. And that batch of strategies isn’t fixed in stone; it’s programmable, to a large extent. But it is still automated, and only as intelligent as its programmers. Yes, it is absolutely true: an unwisely programmed trigger can accidentally discard collisions manifesting new physical phenomena — the proverbial baby thrown out with the bathwater.

So particle physicists — the experimentalists who run the detectors and take the data, and the theorists like me who advise them — obsess and debate and argue about the trigger. It’s essential that its strategies and settings be chosen with care, and adjusted properly as the collision rate changes or new information becomes available. As the ultimate and irreversible filter, it’s also a potential cause of disappointment or even disaster, so opinions about it are strong and emotions run high.

What are the principles behind a trigger?  What are the typical clues that make a collision seem interesting? The main clues are rarities, especially ones which theorists have good reason to believe might potentially yield clues to new physical phenomena. Here are the classic clues… a collision is more likely to fire the trigger if it produces

  • An electron or positron (anti-electron), even of low energy
  • A muon or anti-muon, even of low energy
  • A photon, even of low energy
  • A tau or anti-tau of moderate energy
  • Signs of invisible particles of moderate energy
  • Jets [manifestations of quarks, antiquarks and gluons] of very high energy
  • Many jets of moderate energy
  • Jets from bottom quarks of moderate energy
  • Multiples or combinations of the above

In the absence of one or more of these rare things, a typical proton-proton collision will just make two or three jets of low energy, or even more commonly a featureless “splat” in which a few dozen hadrons go off haphazardly. These collisions generally just reflect physical processes that we studied long ago, and carry essentially no information about the new phenomena particle physicists are after, so they are justifiably discarded. (A tiny fraction are kept, just as a cross-check to make sure the trigger is behaving as expected.)
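
To make the idea of a trigger “menu” concrete, here is a deliberately over-simplified sketch in Python. It is nothing like the real ATLAS or CMS trigger code; the object names, the thresholds, and the prescale for keeping a random sample of dull collisions are all invented for illustration.

```python
import random

def toy_trigger_fires(collision):
    """collision: a dict of crudely reconstructed objects, with energies in GeV, e.g.
    {"electrons": [], "muons": [22.0], "photons": [], "taus": [],
     "jets": [450.0, 37.0], "b_jets": [], "missing_energy": 12.0}"""
    menu = [
        len(collision["electrons"]) > 0,                     # an electron or positron, even of low energy
        len(collision["muons"]) > 0,                         # a muon or anti-muon, even of low energy
        len(collision["photons"]) > 0,                       # a photon, even of low energy
        any(e > 40.0 for e in collision["taus"]),            # a tau of moderate energy
        collision["missing_energy"] > 100.0,                 # signs of invisible particles
        any(e > 400.0 for e in collision["jets"]),           # a jet of very high energy
        sum(1 for e in collision["jets"] if e > 60.0) >= 4,  # many jets of moderate energy
        any(e > 60.0 for e in collision["b_jets"]),          # a bottom-quark jet of moderate energy
    ]
    if any(menu):
        return True
    # Keep a tiny, random ("prescaled") fraction of dull collisions as a cross-check.
    return random.random() < 1e-5
```

A real trigger also works in stages (a fast hardware level followed by software levels, the combination of hardware and software mentioned above), but the basic keep-or-discard logic has this general flavor.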

On a personal note, I’ve spent quite a bit of time worrying about the triggers at the LHC experiments. In 2006 I studied some theories with then-student Kathryn Zurek (now a professor at Michigan) that in some cases predict physical phenomena on which the standard trigger strategies would rarely fire, potentially making certain new phenomena unnecessarily difficult to discover. This work played a minor role in encouraging the ATLAS and CMS experiments to add some additional trigger strategies to their menus.

Before I finish, let me clean up a small lie that I told along the way to keep things simple. Let me clarify what’s being discarded. To do that, I need to remind you how the LHC beams actually work. The LHC has two beams of protons, orbiting the LHC ring in opposite directions, and colliding at a few predetermined points around the ring. But the beams aren’t continuous; they are made from bunches, hundreds of them, each containing nearly as many protons as there are stars in our home galaxy, the Milky Way. Collisions occur when one of the bunches orbiting clockwise passes through one of the bunches going counterclockwise. Most of the protons miss each other, but a handful of collisions occur. How large a “handful” is depends on what settings the LHC’s operators choose. Late this year (2011) there were as many as 20 to 40 collisions in each bunch crossing. [You heard that right: each time two bunches cross, 20 to 40 pairs of protons collide.] That sounds insane too, but it’s not, because most of the time all of those collisions are dull, and very rarely one of them is interesting. The probability that two are interesting is very, very low indeed. Having all those extra collisions at the same time as the one you want is called “pile-up”.

Data is collected for each bunch crossing; each time two bunches cross, the detectors measure everything (well, more precisely, as much as possible) about all the particles in all 20 to 40 collisions that occur. The trigger’s decision isn’t whether to keep or discard a particular collision, but a particular bunch crossing. A typical bunch crossing has 20 or so dull collisions, and no interesting ones; if any one of them looks interesting, the whole set of data from that bunch crossing is read out. There’s no time to try to separate out the simultaneous collisions from one another; that has to be done later, long after the trigger has fired.
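
Continuing the toy sketch above (and reusing the invented toy_trigger_fires function), the unit that is actually kept or discarded is the bunch crossing, not the individual collision:

```python
def keep_bunch_crossing(collisions):
    """collisions: a list of the 20-40 simultaneous proton-proton collisions
    recorded in one bunch crossing. If any single collision looks interesting,
    the data for the whole crossing is read out; separating the pile-up
    collisions from one another happens later, offline."""
    return any(toy_trigger_fires(c) for c in collisions)
```

Because interesting collisions are so rare, the chance that a single crossing contains two of them is negligible, which is why reading out the whole crossing whenever any one collision fires works well.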

37 Responses

  1. Hi Matt.

    Very clear and interesting post as always.

    Maybe all the computing problems presented here will change dramatically when quantum computing is mature enough to be used at the LHC (or at whatever we use after its “death”).
    I looked for links to see whether something like this is already in development for the LHC, but I found nothing. Do you know whether such projects exist at the LHC?
    I could only find several interesting articles like this one (it’s worth a look):
    http://www.enterrasolutions.com/2013/11/quantum-computing-and-big-data.html

    Thank you for all you are building here.

  2. Kudos Matt. You have a real gift for creating such beautiful and helpful analogies in your writing. They help immensely!

  3. Thanks for the great post professor!
    I have a question:
    When measuring the trigger turn-on, one can do it in two ways (leaving tag and probe out of the discussion):
    1) Measure the absolute efficiency using an orthogonal trigger – e.g. if you want to measure how many jets you have above 80 GeV pT, you can use a muon trigger and measure the ratio jets(>80)/jets(all).
    2) Measure the relative efficiency with respect to a lower-threshold trigger – e.g. to measure the efficiency for triggering on 80 GeV jets, you can use a 30 GeV jet trigger and measure the ratio jets(>80)/jets(>30).

    Now the question is: what is the advantage of using the second method?
    The only one that I have found is that method 1 may be biased. For example, if you use a muon trigger, some muons will come from semileptonic b-decays, so the jets you measure will not be inclusive but will have some contamination from heavy-flavor decays.
    Is that correct? And is it the only one?

    Thank you very much
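
As an illustration of the two efficiency definitions described in the question above, here is a minimal Python sketch; the field names, bins, and reference samples are invented, and tag-and-probe and all real-life subtleties are ignored:

```python
def turn_on_curve(reference_events, pt_bins, fired_key="jet80_fired"):
    """For each offline-jet-pT bin, the fraction of reference events in which the
    trigger under study (flagged here by the invented key `fired_key`) also fired.
    Method 1: reference_events come from an orthogonal trigger (e.g. a muon trigger).
    Method 2: reference_events come from a lower-threshold trigger (e.g. a 30 GeV jet trigger)."""
    curve = []
    for lo, hi in pt_bins:
        in_bin = [e for e in reference_events if lo <= e["leading_jet_pt"] < hi]
        fired = sum(1 for e in in_bin if e[fired_key])
        curve.append(fired / len(in_bin) if in_bin else float("nan"))
    return curve

# bins = [(30, 50), (50, 80), (80, 120), (120, 200)]
# eff_abs = turn_on_curve(muon_triggered_sample, bins)   # method 1 (absolute)
# eff_rel = turn_on_curve(jet30_triggered_sample, bins)  # method 2 (relative)
```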

  5. Fascinating stuff! Thanks Matt Strassler for the post.
    Those who want to know more about the triggers, or the collider, or about physics in general for that matter, may want to view a few of the summer school lectures at http://indico.cern.ch/scripts/SSLPdisplay.py?stdate=2012-07-02&nbweeks=7 .
    My question is: seeing that computer technology is still advancing, and storage and computing power are still getting faster and cheaper, do you foresee the triggers being opened up a little more, to store more events that would have been discarded under the current plan? Also, when the LHC is finally up to full design spec, with higher energy, the maximum number of bunches colliding at the 25 ns timing, and the pile-up problem even worse, is the physics of how each part of the detector works going to become the limiting factor on how much useful data can be extracted, or are the IT and software still the main bottleneck?
    Thanks again for your interesting and informative articles.

    1. I’m sure there will be advances implemented during the 2013-2014 shutdown, but the details are not known to me; you’d need to ask an expert within the experiments, of which there aren’t that many. I’m not even sure decisions have been made; with computers, it always pays to make the decision at almost the last minute, to benefit from the most recent technology.

      Your second question doesn’t really have an answer; even now, the physics of each part of the detector is a limiting factor that determines what the trigger can be asked to do, on top of the question of how many events the trigger can select. So there are hardware and software and IT issues now, and there will be later; the balance may change a bit, but I don’t think it’s going to be a qualitative shift.

      It’s worth keeping in mind that things are not going to get that much worse: the machine is already operating within a factor of 2 of its maximum collision rate, maximum energy, and pile-up. [By the way, operating at 25 nanoseconds makes the pile-up situation *better*, not worse, because you spread the collisions out over twice as many bunch crossings.] The big challenge will be keeping enough Higgs events, and other low-energy processes that are difficult for the trigger system, when the collision energy goes from 8 TeV to 13 or 14.
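
A quick numerical illustration of the bracketed point about 25-nanosecond bunch spacing; all the numbers below are rough, illustrative values rather than actual machine parameters:

```python
# Average pile-up = (total collision rate) / (bunch-crossing rate).
collision_rate = 3e8            # proton-proton collisions per second (illustrative)
crossings_per_sec_50ns = 1.5e7  # ~1400 bunches per beam at 50 ns spacing (rough)
crossings_per_sec_25ns = 3.0e7  # ~2800 bunches per beam at 25 ns spacing (rough)

print(collision_rate / crossings_per_sec_50ns)  # ~20 collisions per crossing
print(collision_rate / crossings_per_sec_25ns)  # ~10: same collision rate, half the pile-up
```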

  6. That’s really interesting!
    I have a couple of questions about the software/hardware used to analyze the data. Do they use normal PCs to run the software? If so, maybe something like SETI@home would be interesting, to analyze more data using other people’s computers over the internet. Is there any plan for that?
    Do they use GPUs in some way? Since they are much faster than normal processors at numerical computing, that could be interesting; I think the SETI software can use GPUs to analyze data.

    Many thanks Professor

    1. Regarding the trigger: I believe [Edited by host: what I said here earlier was wrong, they don’t use anything like ordinary PCs. See my further reply below.]

      The other bottleneck in the data (not described here but mentioned in http://profmattstrassler.com/articles-and-posts/lhcposts/triggering-advances-in-2012/data-parking-at-cms/ ) comes later, when reconstructing in full detail, with no time constraint, what happened in a collision. That too involves many computers and very complex software. I do not yet know why they cannot farm out that task to off-site computers belonging to the public, but I can try to find out.

      1. Yes, I was thinking about off-site computing for the stage with no time constraints, but as I can see on a first quick read of the paper you sent, the system is amazingly complex indeed. That’s probably the reason why there are no GPUs present: they’re good for brute-force numeric computation, but not so good when there are so many complex data transfers.
        I will carefully read it later to try to understand the details.
        Many thanks Professor.

  7. Thanks for another wonderful, clear article, Matt! I found myself wondering whether there was any way to increase the “yield” from each bunch-crossing. Of course, I assume that if the proton density in the bunch (do you call it “luminosity”?) were higher, the yield would be higher. But what I was really wondering was whether there has been any brainstorming (maybe over a beer or two) on ways to steer or tune or otherwise guide the protons in the colliding bunches into interesting collisions. (Another assumption, I guess, is that head-on collisions produce more interesting fireworks than off-center collisions. Correct?)

    1. The problem is that we simply cannot control subatomic particles at the level of accuracy that would do what you suggest. Particles can be controlled at the micrometer scale (millionth of a meter, 1/30,000 of an inch) and below, even down to a few nanometers, but to try to get two protons to line up perfectly you’d have to do more than a million times better.

      For scale: imagine you are throwing two sacks’ worth of sand at each other. And now you want to get more of the grains of sand to hit each other head-on. Not easy.

  8. Wow, what a clear explanation of the pile-up thing they kept talking about in the December 13th conference. Thanks a lot, professor.

  9. There is an insane amount of data being thrown away, for the reasons you stated. I was wondering whether there is any other possible use for this un-triggered data? For example, using the data to increase the number of collisions in the current machine, or to give some insight into how to improve future particle colliders. Just thinking of uses for this “useless” data. Thanks for the articles.

    1. It’s not an issue of whether that data is or isn’t useful. It would be great if one could keep all the data on all the collisions, but there’s no practical way to do it. Fortunately, most of that data is useless for the LHC’s main goals, so there’s no harm done. But it would still be a lot better if one could keep it, because when you throw so much away, you still have to keep your fingers crossed that none of it is critical.

  10. How are the different collisions extracted from the data of a single bunch crossing? How can one determine whether two jets belong to the same collision or not?

    1. The experimentalists knew they would face these conditions, so they designed the experiments to be able to separate different collisions from one another using precise measurements. Take a look at http://www.lhc-closer.es/img/subidas/3_9_1_2.png : It illustrates the ability of the ATLAS experiment to separate all of the charged particles coming from four separate proton-proton collision vertices; the particles from each collision have been drawn in the same color to make it easy for you to see this. The experiments can deal with 30-40 vertices at a time. A jet contains many charged particles, so you just have to look to see which collision vertex those particles point back to, and then you know which proton-proton collision made the jet. And if you see two jets, you can check whether they point back to the same vertex or not.

      Obviously nasty things happen sometimes in this environment. Sometimes two collision vertices are too close together to distinguish. And it is very hard to tell which vertex an electrically-neutral particle came from, so often photons or neutrons are assigned to the wrong collision. But the experimenters have a lot of clever techniques to cope with this… imperfect, but good enough for most measurements. I do worry, however, about certain specific measurements which are a lot harder in the presence of all these simultaneous collisions!
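
As a minimal sketch of the kind of logic described in the reply above, here is a crude majority vote over a jet’s charged tracks; the real experiments use far more sophisticated, weighted methods:

```python
from collections import Counter

def assign_jet_to_vertex(track_vertex_ids):
    """track_vertex_ids: for each charged track in the jet, the index of the collision
    vertex it points back to (from extrapolating the track toward the beamline).
    The jet is assigned to whichever vertex most of its tracks came from."""
    if not track_vertex_ids:
        return None  # e.g. a jet of only neutral particles: ambiguous
    vertex, _count = Counter(track_vertex_ids).most_common(1)[0]
    return vertex

def jets_from_same_collision(jet1_tracks, jet2_tracks):
    """Check whether two jets point back to the same proton-proton collision vertex."""
    v1, v2 = assign_jet_to_vertex(jet1_tracks), assign_jet_to_vertex(jet2_tracks)
    return v1 is not None and v1 == v2
```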

  11. So on average, how many collisions are stored, let’s say per second? I tried to estimate it based on what you have said and I got something close to 30/sec. Is that more or less right? Still, that’s a lot of data. And how much data is produced and stored for each collision? I guess we are talking about at least tens of gigabytes?

    BTW: fantastic blog and website. I got a science degree (in theoretical chemistry) 15 years ago, but since then I never actually worked in science; my life went other ways. I was always interested in particle physics, and thanks to you I can at least feel up to date with the latest proceedings and discoveries. You also have amazing teaching skills: I am maybe not a total layperson, but very close, and I can understand absolutely everything. My teachers back at the university weren’t even close to that level, I must say 🙂

    1. Hmm — did I miss a zero in my text somewhere? The data storage rates are about 400 per second (but I don’t have the precise numbers, and ATLAS and CMS are slightly different). It’s a few billion collisions stored per year [actually, as described at the end of the article, a few billion bunch crossings, each of which typically contains one interesting collision and a couple of dozen dull ones]. Note that the overall collision rate has been increasing by more than a factor of 10 during 2011 and will probably increase again in 2012 by a small additional factor.

      Each collision is much less data than what you suggested — in each collision only a small fraction of the detector is particularly active, with most of the detector elements just registering electronic noise, so in a sense there are a lot of close-to-zeroes, and so data compression can reduce the size by a lot. In the end I am told it is about 10 megabytes per collision [actually, again, per bunch crossing]. I don’t know as much about this as I should; maybe one of my experimental colleagues can comment, if they happen to see me floundering a bit here.

      And thanks for the kind words!
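
As a rough cross-check of the round numbers quoted in the reply above (all values approximate):

```python
crossings_stored_per_sec = 400      # stored bunch crossings per second (approximate)
megabytes_per_crossing   = 10       # after compression (approximate)
seconds_of_beam_per_year = 1e7      # order-of-magnitude running time per year

print(crossings_stored_per_sec * seconds_of_beam_per_year)  # ~4e9: a few billion crossings per year
print(crossings_stored_per_sec * megabytes_per_crossing)    # ~4000 MB written to storage per second
```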

  12. Thank you, professor. My Physics I professor worked (well, still does) at the LHC, and we have seen several presentations on the topic, so it’s so good to be -finally- able to understand a post 🙂
    So is the information from the collisions that fired the trigger analyzed by scientists or by a computer program? How soon afterwards?
    Pam

    1. There are two levels (at least) of analysis.

      The first (“reconstruction”) involves just figuring out what all the electronic signals mean in terms of what particles were produced, how much energy they carried, and where they were heading. A very good effort (though preliminary, and potentially subject to reconsideration) is made automatically by computer shortly after the data is taken. But this is just at the level of saying what happened in that particular collision.

      Searches for new phenomena (“analysis”) of course involve study of large classes of collisions, thousands to millions of them, typically. The decision of what to search for and how, the selection of the relevant subset of the data, and the actual study of the data is done by humans, aided of course by computers. Humans can always, at such a stage, revisit and override what the computers did at the level of “reconstruction”.

      Each such search is very labor intensive, requiring numerous cross-checks to avoid errors. It is not unusual for 10 to 20 people to be involved in a single search.

      By the way, questions from non-experts are encouraged. If you can’t follow a post because a couple of points aren’t clear, ask for clarification. I won’t always be able to provide it, but often I can help, either with a quick comment, a revision of the post, or a later article. And your question can help me make the site better down the line.

  13. One way of evaluating trigger procedures would be to choose at random to keep a small fraction of collision results that would otherwise have been discarded, and see whether anything interesting shows up over time. This would not be sufficient to prove that the trigger procedures do not miss some rare and previously unknown phenomenon, but it could work as a low-cost sanity check against some systematic error.
