Geneva, Switzerland, is not known for its sunny weather, and seeing the comet here was almost impossible, though I caught some glimpses. I hope many of you have seen it clearly by now. It’s dim enough now that dark skies and binoculars are increasingly essential.
I came here (rather than the clear skies of, say, Morocco, where a comet would be an easier target) to give a talk at the CERN laboratory — the lab that hosts the Large Hadron Collider [LHC], where the particle known as the Higgs boson was discovered twelve years ago. This past week, members of the CMS experiment, one of the two general-purpose experiments at the LHC, ran a small, intensive workshop with a lofty goal: to record vastly more information from the LHC’s collisions than anyone would have thought possible when the LHC first turned on fifteen years ago.
The flood of LHC data is hard to wrap one’s head around. At CMS, as at the ATLAS and LHCb experiments, two bunches of protons pass through each other roughly 40 million times every second (once every 25 billionths of a second). In each of these “bunch crossings”, dozens of proton-proton collisions happen simultaneously. As the debris from the collisions moves into and through the CMS experiment, many detailed measurements are made, generating roughly a megabyte of data even with significant data compression. If that were all recorded, it would translate to tens of terabytes produced per second, and hundreds of millions of terabytes per year. That’s well beyond what CMS can store, manage and process. ATLAS faces the same issues, and LHCb faces its own version.
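To get a feel for these numbers, here is a back-of-the-envelope calculation using the round figures quoted above; this is my own illustrative arithmetic, not official CMS bookkeeping.

```python
# Back-of-the-envelope estimate of the raw CMS data flow, using the
# round numbers quoted above (illustrative only, not official figures).

crossings_per_second  = 40e6   # roughly 40 million bunch crossings per second
bytes_per_crossing    = 1e6    # roughly a megabyte per crossing, after compression
live_seconds_per_year = 1e7    # order-of-magnitude LHC running time in a year

rate_tb_per_second = crossings_per_second * bytes_per_crossing / 1e12
volume_tb_per_year = rate_tb_per_second * live_seconds_per_year

print(f"~{rate_tb_per_second:.0f} TB per second")   # ~40 TB per second
print(f"~{volume_tb_per_year:.0e} TB per year")     # ~4e+08 TB: hundreds of millions of TB
```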
So what’s to be done? There’s only one option: throw most of that data away in the smartest way possible, and ensure that the data retained is processed and stored efficiently.
Data Overload and the Trigger
The automated system that has the job of selecting which data to throw away and which to keep is called the “trigger”; I wrote an extended article about it back in 2011. The trigger has to make a split-second judgment, based on limited information, narrowing a huge amount of data down to something manageable. It has to be thoughtfully designed and carefully monitored. But it isn’t going to be perfect.
Originally, at ATLAS and CMS, the trigger was a “yes/no” data processor. If “yes”, the data collected by the experiment during a bunch crossing was stored; otherwise it was fully discarded.
A natural if naive idea would be to do something more nuanced than this yes/no decision making. Instead of a strict “no” leading to total loss of all information about a bunch crossing, one could store a sketch of the information — perhaps a highly compressed version of the data from the detector, something that occupies a few kilobytes instead of a megabyte.
After all, the trigger, in order to make its decision, has to look at each bunch crossing in a quick and rough way, and figure out, as best it can, what particles may have been produced, where they went and how much energy they carried. Why not store the crude information that it produces as it makes its decision? At worst, one would learn more about what the trigger is throwing away. At best, one might even be able to make a measurement or a discovery in data that was previously being lost.
It’s a good idea, but any such plan has costs in hardware, data storage and person-hours, and so it needs a strong justification. For example, if one just wants to check that the trigger is working properly, one could do what I just described using only a randomly selected handful of bunch crossings per second. That sort of monitoring system would be cheap. (The experiments actually do something smarter than that, using what are called “prescaled triggers.”)
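To give a flavor of the prescaling idea, here is a toy sketch of my own (not CMS’s actual trigger logic): a prescaled trigger keeps only one out of every N bunch crossings that satisfy a given condition, so even a very common kind of event consumes only a small, fixed slice of the output bandwidth.

```python
# Toy sketch of a "prescaled trigger": keep only 1 out of every N bunch
# crossings that pass a given selection, so that a very common signature
# uses only a small, fixed share of the output bandwidth.
# (Illustration only; not CMS's actual trigger code.)

class PrescaledTrigger:
    def __init__(self, prescale: int):
        self.prescale = prescale   # keep 1 out of every `prescale` accepted crossings
        self.counter = 0

    def accept(self, passes_selection: bool) -> bool:
        if not passes_selection:
            return False
        self.counter += 1
        return self.counter % self.prescale == 0

# A condition satisfied by every single crossing, prescaled by 1000, would be
# recorded only ~40,000 times per second instead of ~40 million.
trigger = PrescaledTrigger(prescale=1000)
kept = sum(trigger.accept(True) for _ in range(1_000_000))
print(kept)   # 1000
```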
Only if one were really bold would one suggest that the trigger’s crude information be stored for every single bunch crossing, in hopes that it could actually be used for scientific research. This would be tantamount to treating the trigger system as an automated physicist, a competent assistant whose preliminary analysis could later be put to use by human physicists.
Data “Scouting” a.k.a. Trigger-Level Analysis
More than ten years ago, some of the physicists at CMS became quite bold indeed, and proposed to do this for a certain fraction of the data produced by the trigger. They faced strong counter-arguments.
The problem, many claimed, is that the trigger is not a good enough physicist, and the information that it produces is too corrupted to be useful in scientific data analysis. From such a perspective, using this information in one’s scientific research would be akin to choosing a life-partner based on a dating profile. The trigger’s crude measurements would lead to all sorts of problems. They could hide a new phenomenon, or worse, create an artifact that would be mistaken for a new physical phenomenon. Any research done using this data, therefore, would never be taken seriously by the scientific community.
Nevertheless, the bold CMS physicists were eventually given the opportunity to give this a try, starting in 2011. This was the birth of “data scouting” — or, as the ATLAS experiment prefers to call it, “trigger-object-level analysis”, where “trigger-object” means “a particle or jet identified by the trigger system.”
The Two-Stage Trigger
In my description of the trigger, I’ve been oversimplifying. In each experiment, the trigger works in stages.
At CMS, the “Level-1 trigger” (L1T) is the swipe-left-or-right step of a 21st-century dating app; using a small fraction of the data from a bunch crossing, and taking an extremely fast glance at it using programmable hardware, it makes the decision as to whether to discard it or take a closer look.
The “High-Level Trigger” (HLT) is the read-the-dating-profile step. All the data from the bunch crossing is downloaded from the experiment, the particles in the debris of the proton-proton collision are identified to the extent possible, software examines the collection of particles from a variety of perspectives, and a rapid but more informed decision is made as to whether to discard or store the data from this bunch crossing.
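Schematically, the two stages fit together as in the toy sketch below. The thresholds, rates and stand-in “physics” are invented for illustration; only the structure (a fast, coarse Level-1 decision, followed by a slower, better-informed HLT decision made on the full event) reflects the real design.

```python
# Schematic two-stage trigger flow.  The "physics" is a random stand-in and
# the thresholds are invented; only the structure mirrors the description
# above.  Rates in the comments are rough orders of magnitude, not official.

import random

def level1_trigger(coarse_data) -> bool:
    # Hardware stage: an extremely fast, rough look at a small fraction of the
    # detector's data, cutting tens of millions of crossings per second down
    # to a much smaller number worth a closer look.
    return coarse_data["rough_energy"] > 0.9975       # keeps ~0.25% in this toy

def high_level_trigger(full_event) -> bool:
    # Software stage: uses the full event and better-reconstructed particles
    # to make a slower but more informed keep-or-discard decision.
    return full_event["reconstructed_energy"] > 0.99  # keeps ~1% of what L1 passed

def process_bunch_crossing() -> str:
    coarse = {"rough_energy": random.random()}
    if not level1_trigger(coarse):
        return "discard"                              # the vast majority of crossings
    full_event = {"reconstructed_energy": random.random()}  # full readout happens only now
    if high_level_trigger(full_event):
        return "store full event"                     # ~1 MB kept for offline analysis
    return "discard"

outcomes = [process_bunch_crossing() for _ in range(1_000_000)]
print(outcomes.count("store full event"), "of", len(outcomes), "crossings kept")
```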
The new strategy implemented by CMS in 2011 (as I described in more detail here) was to store more data using two pipelines; see Figure 1.
- More HLT “yes” votes were allowed; for these, the experiment’s full and detailed information about a bunch crossing would be stored (“parked”) for months before being processed just like data collected in the usual way.
- For certain HLT “no” votes, even though the full, detailed information about the bunch crossing was discarded, some of the high-level information from the HLT about the particles it identified was stored (“scouting”).
Effectively, the scouting pipeline uses the HLT’s own data analysis to compress the full data from the bunch crossing down to a much smaller size, which makes storing it affordable.
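To make the compression concrete, here is a toy version of what a scouting record might contain: just the HLT’s list of the particles and jets it identified, rather than the full megabyte of raw detector data. The field names and sizes are my own illustration, not CMS’s actual data format.

```python
# Toy illustration of why a scouting record is so much smaller than the full
# event: it keeps only the HLT's summary of the particles and jets it found.
# Field names and sizes are illustrative, not CMS's actual data format.

from dataclasses import dataclass
from typing import List

@dataclass
class TriggerObject:
    kind: str       # e.g. "muon", "electron", "photon", "jet"
    pt: float       # transverse momentum
    eta: float      # direction: pseudorapidity
    phi: float      # direction: azimuthal angle
    quality: int    # a few bits of identification information

@dataclass
class ScoutingRecord:
    bunch_crossing_id: int
    objects: List[TriggerObject]   # typically no more than a few dozen entries

# A few dozen objects at a few tens of bytes each comes to a few kilobytes,
# versus roughly a megabyte for the full detector readout; the saving comes
# from keeping only the HLT's interpretation and discarding the raw data
# behind it.
record = ScoutingRecord(
    bunch_crossing_id=123456789,
    objects=[TriggerObject(kind="jet", pt=85.2, eta=1.1, phi=-2.4, quality=3)],
)
print(record)
```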
Being bold paid off. It turned out that the HLT output could indeed be used for scientific research. Based on this early success, the HLT scouting program was expanded for the 2015-2018 run of the LHC (Figure 2), and has been expanded yet again for the current run, which began in 2023. At present, sketchy information is being kept for a significant fraction of the bunch crossings for which the Level-1 trigger says “yes” but the High-Level Trigger says “no”.
After CMS demonstrated this approach could work, ATLAS developed a parallel program. Separately, the LHCb experiment, which works somewhat differently, has introduced its own methods; but that’s a story for another day.
Dropping Down a Level
Seeing this, it’s natural to ask: if scouting works for the bunch crossings where the high-level trigger swipes left, might it work even when the level-1 trigger swipes left? A reasonable person might well think this is going too far. The information produced by the level-1 trigger as it makes its decision is far more limited and crude than that produced by the HLT, and so one could hardly imagine that anything useful could be done with it.
But that’s what people said the last time, and so the bold are again taking the risk of being called foolhardy. Their ambition is breathtaking: trying to do this “level-1 scouting” is frighteningly hard, for numerous reasons, among them the following:
- The level-1 trigger delivers “no” votes for tens of millions of bunch crossings per second, and thus for many trillions per year; even with the data highly compressed, that’s an enormous amount of data to try to work with.
- The level-1 trigger has to make its decision so quickly that it has to take all sorts of shortcuts as it makes its calculations.
- The level-1 trigger only has access to certain parts of the full CMS detector, and only to a small fraction of the data produced by those parts. For instance, it currently has no information from the “tracker”, the part of the detector that is crucial for reconstructing particles’ tracks — though this will change in the future.
So what comes out of the level-1 trigger “no” votes is a gigantic amount of very sketchy information. Having more data is good when the data is high quality. Here, however, we are talking about an immense but relatively low-quality data set. There’s a risk of “garbage in, garbage out.”
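To see the scale of the problem, here is the same kind of back-of-the-envelope arithmetic as before, again with illustrative numbers of my own rather than official figures: even if each rejected crossing were boiled down to a tiny record of a couple of hundred bytes, the totals would remain daunting.

```python
# Rough scale of level-1 scouting, with illustrative numbers only
# (not official CMS figures).

crossings_per_second  = 40e6   # essentially every crossing receives a level-1 "no"
bytes_per_record      = 200    # suppose each crossing is boiled down to ~200 bytes
live_seconds_per_year = 1e7    # order-of-magnitude running time in a year

rate_gb_per_second = crossings_per_second * bytes_per_record / 1e9
volume_pb_per_year = rate_gb_per_second * live_seconds_per_year / 1e6

print(f"~{rate_gb_per_second:.0f} GB per second of compressed records")  # ~8 GB/s
print(f"~{volume_pb_per_year:.0f} PB per year")                          # ~80 PB per year
```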
Nevertheless, this “level-1 scouting” is already underway at CMS, as of last year, and attempts are being made to use it and improve it. These are early days, and only a few new measurements are likely to come from the data of the current run, which lasts through 2026. But starting in 2029, when the upgraded LHC begins to produce data at an even higher rate — with the same number of bunch crossings, but four to five times as many proton-proton collisions per crossing — the upgraded level-1 trigger will have access to a portion of the tracker’s data, allowing it to reconstruct particle tracks. Along with other improvements to the trigger and the detector, this will greatly enhance the depth and quality of the information produced by the level-1 trigger system, with the potential to make level-1 scouting much more valuable.
And so there are obvious questions, as we look ahead to 2029:
- What information might practically and usefully be stored from the level-1 trigger when it says “no” to a bunch crossing?
- What important measurements or searches for new phenomena might be carried out using that information?
My task, in the run-up to this workshop, was to prepare a talk addressing the second question, which required me to understand, as best I could, the answer to the first. Unfortunately, the two questions are circular. Only with the answer to the second question is it clear how best to approach the first one, because the decision about how much to spend in personnel-time, technical resources and money depends on how much physics one can potentially learn from that expenditure. And so the only thing I could do in my talk was make tentative suggestions, hoping thereby to start a conversation between experimenters and theorists that will continue for some time to come.
Will an effort to store all this information actually lead to measurements and searches that can’t be done any other way? It seems likely that the answer is “yes”, though it’s not yet clear if the answer is “yes — many”. But I’m sure the effort will be useful. At worst, the experimenters will find new ways to exploit the level-1 trigger system, leading to improvements in standard triggering and high-level scouting, and allowing the retention of new classes of potentially interesting data. The result will be new opportunities for LHC data to teach us about unexpected phenomena both within and potentially beyond the Standard Model.