Of Particular Significance

Trigger Failure, and Repair, at the LHC

© Matt Strassler [Dec. 4, 2014]

In this article, we’ll see how things can go wrong with the trigger system at experiments like ATLAS and CMS at the Large Hadron Collider [LHC], and also how one can work around the associated challenges.  Specifically, we’ll look at Higgs particles decaying to unknown long-lived particles, and how that can make a mess of things.

Suppose, as might well be true, that one in a hundred Higgs particles decays to a pair of as-yet unknown particles — let’s call them “X” particles.  And further suppose that X particles have a rather “long” lifetime, on particle timescales, that allows them to travel a meter or so on average before they decay, let’s say to a quark and an anti-quark.  Any high-energy quark or antiquark then makes a “jet” of “hadrons” (a spray of particles, each of which is made from quarks, anti-quarks and gluons).  But since the X has traveled some distance before it decays, the jets that it produces — unlike most jets, which start at the proton-proton collision point — appear in the middle of nowhere.

To be more precise, imagine a proton-proton collision such as shown in Figure 1, in which two gluons, one from each proton, collide head on, and make a Higgs particle, plus an extra gluon, which is kicked off and makes a jet of its own.  So far, this is perfectly ordinary Higgs particle production.  Now, however, this is followed by something unusual: the Higgs decays to two X particles immediately (or rather, after a billionth of a trillionth of a second), and much later, after a few billionths of a second, each X decays to quark + anti-quark.

Fig. 1: Left, from top to bottom — a collision of two gluons leads to a disturbance in the top quark field (often called “a virtual top quark and anti-quark”) and from there a Higgs particle is created, along with a stray gluon.  The gluon immediately forms a jet of hadrons, while the Higgs particle, after a very short pause, decays to two (hypothetical) X particles. Right — after a relatively long time, each of the X particles, having traveled for about a meter or so, decays into a quark and an anti-quark, and each of these makes a jet of hadrons as well.

What will this look like to a detector like ATLAS or CMS?  Well, it depends exactly where the two X’s decay — and remember, even though X particles have a common average lifetime, each individual X decays at a random time.  But perhaps one such collision might look to a detector roughly like Figure 2.  In this figure you are looking at ATLAS or CMS from the perspective of the beampipe, with the colliding protons coming straight into and out of the screen.
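
Just to make that randomness concrete, here is a toy sketch in Python. It samples the flight distances of the two X's from an exponential distribution with a 1-meter average, and uses very rough, made-up radii for where the tracker and calorimeters sit; it is not real ATLAS or CMS geometry.

    import random

    AVG_FLIGHT = 1.0     # average distance (meters) an X travels before decaying -- illustrative
    TRACKER_R  = 1.0     # rough outer radius of the tracker (meters) -- made up
    CALO_R     = 4.0     # rough outer radius of the calorimeters (meters) -- made up

    for i in (1, 2):
        r = random.expovariate(1.0 / AVG_FLIGHT)    # exponential decay law
        if r < TRACKER_R:
            region = "inside the tracker"
        elif r < CALO_R:
            region = "in the calorimeters"
        else:
            region = "in the muon system or beyond"
        print(f"X number {i} decays after {r:.2f} m, i.e. {region}")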

Fig. 2: The collision described in Figure 1, as seen in a detector like ATLAS or CMS, looking along the beampipe that carries the protons.  Both ATLAS and CMS employ nested detectors: a tracker for measuring trajectories of charged particles, electromagnetic and hadronic calorimeters for measuring particle energies, and a muon system for detecting muons. Upper left: what the particles produced in the event really are doing.  Upper right: what the Level 1 trigger system knows about the event.  Lower left: what the full trigger system knows.  Lower right: an experimentalist looking at the event carefully, with no time constraint, may also be able to detect the short tracks at the far right of the tracker, marked in blue.

At upper left is what the event would look like if you had a perfect detector that could see all particle tracks.  You notice three sprays of particles: one jet from the collision point due to the final gluon, and one additional spray from each of the decaying X particles… in fact, if you were to look closely, you’d see each of those sprays from an X particle has two sub-sprays, one from the quark and one from the anti-quark to which the X decays.  But this won’t be so obvious to the detector.  At lower right is what ATLAS or CMS experimentalists would see in ideal circumstances, with lots of time to analyze the event carefully.  But the problem is that the trigger system doesn’t have much time.  At upper right is what it would know about the event at “Level 1”, the first stage of triggering.  It has to decide, in less than a millisecond, whether or not to keep the event based only on a rough sense of what is seen in the calorimeters and muon system; the tracker is not used, because it takes too long to read out its data.  At lower left is what the full trigger would see if Level 1 gives its ok; now the tracker information is available, but there isn’t time to go looking for tracks that don’t start near the collision point, such as the tracks (which I colored blue) that are present at the right side of the tracker.

The good news is that a collision like this cannot be directly mimicked by any real physical process that occurs involving known particles. If you saw a collision event like this (more precisely, if you saw a computer’s picture of the event, as inferred from the data that the detector recorded) in enough detail, you’d be pretty excited [though still cautious, because weird sprays of particles can also happen when a hadron collides with a wire in the detector … so you do have to look closely.]

Fig. 3: A very common type of event in which three quarks, anti-quarks and/or gluons are produced in a collision, leading to three jets. All of this occurs in an incredibly tiny fraction of a second.

But here’s the big problem. The standard trigger methods used to select which events to keep and which ones to discard would discard this event, and it would be lost forever.  That’s because from the Level 1 trigger’s point of view this is just an event with three ordinary uninteresting low-energy jets.  And there are a gigantic number of events with three low-energy jets produced every second at the LHC.  An example is the one shown in Figure 3, where a quark and anti-quark collide and scatter, spitting off a gluon, and giving a three-jet event which appears to the detector as shown in Figure 4.  Notice that the Level 1 trigger sees something rather similar in Figure 2 and Figure 4, even though the reality of what is happening is very different.  If one looked at these two events in detail, one could easily tell the two collisions are qualitatively different.  But if the Level 1 trigger cannot distinguish them, then the baby, looking just like the bath water, will be tossed out!

And so, good-bye, discovery…

…unless we bring in a new trigger method.  Fortunately, collisions like the one in Figure 2 can in fact be saved, using techniques (some of which Kathryn Zurek and I suggested in our first work on this subject) that the ATLAS and CMS experiments pioneered.  The first key observation is that even the Level 1 view of the event (Figure 2, upper right) shows that the jet at the top is particularly narrow… and there is something else interesting that makes narrow jets — a tau lepton that decays to hadrons.  Ordinary jets can be narrow too, but that’s rare.  So the Level 1 trigger has a strategy that says: if we see three jets that have energy below, say, 100 GeV, we’ll discard the event, but if one of the jets is narrow, we’ll accept the event even if the jet only has energy of 50 GeV.  Again, this is a strategy that was developed to find tau leptons, not long-lived particles — but so what?  It can be repurposed in this context!
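
As a toy illustration of that last rule, here is a minimal sketch in Python. The 100 GeV and 50 GeV numbers are the illustrative thresholds from the paragraph above, and the "jet" dictionaries are invented stand-ins, not real ATLAS or CMS trigger objects.

    def level1_keep(jets):
        """Toy Level-1 decision for an event with a few jets.

        Each jet is a dict like {"energy_gev": 55.0, "is_narrow": True}.
        Thresholds are the illustrative numbers from the text, not real settings.
        """
        ORDINARY_THRESHOLD = 100.0   # GeV: bar for an ordinary jet
        NARROW_THRESHOLD = 50.0      # GeV: lower bar for a narrow (tau-like) jet

        for jet in jets:
            if jet["energy_gev"] >= ORDINARY_THRESHOLD:
                return True          # energetic enough on its own
            if jet["is_narrow"] and jet["energy_gev"] >= NARROW_THRESHOLD:
                return True          # repurposed tau strategy: narrow jets get a discount
        return False                 # just low-energy ordinary jets: discard

    # The event of Figure 2: three modest jets, one of them narrow -> kept.
    print(level1_keep([{"energy_gev": 55, "is_narrow": True},
                       {"energy_gev": 60, "is_narrow": False},
                       {"energy_gev": 45, "is_narrow": False}]))   # True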

Fig. 4: What the event in Figure 3 looks like to the detector; compare with Figure 2. At Level 1 the events look very similar, even though the physics is quite different, as is especially clear once a full analysis of the events is complete. But such an analysis isn’t possible if both events are discarded by the trigger!

So now we’re in business; the event has survived Level 1!  At the next stage (Figure 2, lower left), the trigger has time to look at the tracking information, and will notice that two of the three jets have no tracks.  That’s a bit unusual.  Even more unusual is that one of the jets has energy in the outer “hadronic” calorimeter but little or none in the inner “electromagnetic” calorimeter.  So one or both of these facts can now be used by the trigger as an excuse to keep the event, so that humans can look at it later.  Saved!
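
In the same toy spirit as the sketch above, the two handles just described might be combined like this; the numerical cuts are invented for illustration, and real higher-level triggers are far more sophisticated.

    def higher_level_keep(jets):
        """Toy higher-level test for displaced-decay signatures.

        Each jet is a dict like {"tracks": 0, "ecal_gev": 1.0, "hcal_gev": 45.0}.
        The numerical cuts are invented for illustration only.
        """
        for jet in jets:
            trackless = (jet["tracks"] == 0)                 # no tracks pointing back into the jet
            hcal_only = (jet["hcal_gev"] > 25.0 and          # energy in the hadronic calorimeter...
                         jet["ecal_gev"] < 2.0)              # ...but almost none in the electromagnetic one
            if trackless or hcal_only:
                return True      # unusual enough to keep for later human inspection
        return False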

This event, however, will not be alone.  All sorts of weird events, having nothing to do with Higgs decays, will be in the same pile as the one in Figure 2.  There will be some with detector problems (electronic noise that made a fake jet, tracker failures which removed all the tracks from jets in the upper right quadrant of the detector, etc.) and others with weird effects (e.g. a fake jet produced when a muon, produced when a stray proton from the beam hit some piece of the accelerator, entered the detector and hit something in the calorimeter.)   There will also be a few ordinary three-jet events, in which two of three jets had nothing but neutral hadrons (so no tracks) and none of these were hadrons that decay to photons (so no energy in the electromagnetic calorimeter) and one of the jets happened to be particularly narrow, all by chance. So the experimentalists will have to work hard to sift through those events and find the interesting ones, if any, perhaps looking for the hard-to-see vertex in Figure 2, lower right, where the blue tracks emerge just at the edge of the tracker.  But at least they have a chance!  If the trigger had thrown away the event, all would be lost!!

Starting in 2006 (and with my participation until the LHC started taking data), the ATLAS experiment developed this trigger strategy, along with a few others, and recently used it to look for events in which two X’s decayed in the hadronic calorimeter. While the probability that both X’s will decay in this part of the detector is relatively small, it’s big enough that ATLAS had a decent shot at a discovery. Unfortunately, they didn’t make one, and instead they could only put limits on this process.  As shown in Figure 5, they were able to put limits for X particles with masses of 10-40 GeV: for certain lifetimes, at most 1 in a few tens of Higgs particles can have decayed to two X particles.  But 1 in a hundred is still allowed, or even 1 in ten if the lifetimes are a little longer or shorter.

Fig. 5: Limits from ATLAS on the fraction of Higgs particles that could have decayed to two X particles as described above, as a function of the average distance traveled before the X particles decay, and for three different X masses (10, 25 and 40 GeV/c²), shown as three different curves. For instance, if X particles have a mass of 25 GeV and a lifetime that allows them to travel about 1 meter before decaying, then at most 2.5% of Higgs particles are decaying to them. Since 500,000 Higgs particles were produced at ATLAS in 2011-2012, there could have been over 10,000 X particles produced, and (even in the worst case trigger scenario) at least 1% of them are hiding in the data! The limits are weaker for much longer or shorter lifetimes (because 1 meter is the radius of the calorimeters where the measurements were made), so there could be even more X’s, for all we know.
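
The arithmetic behind those numbers is simple enough to spell out; here it is restated in a few lines of Python, using nothing beyond the figures quoted in the caption and earlier in the article.

    higgs_produced = 500_000              # Higgs particles produced at ATLAS in 2011-2012

    # The 25 GeV, ~1 meter case quoted above: at most 2.5% of Higgses decaying to X pairs.
    max_pairs = 0.025 * higgs_produced    # up to 12,500 Higgs -> X X decays allowed
    max_xs = 2 * max_pairs                # i.e. "over 10,000" X particles could have been produced
    print(max_xs)                         # 25000.0

    # And at least 1% of those end up stored (worst-case trigger scenario):
    print(0.01 * max_xs)                  # 250.0 -- a few hundred X's could be hiding in the data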

Of course it would be very interesting if ATLAS folks could look for one decay in the hadronic calorimeter and a second decay somewhere else, either in the tracker or in the muon system (which at ATLAS is a sort of limited tracker for any particle which gets that far; this is not true of CMS). I hope we’ll see this search sometime.  The backgrounds will be larger but the amount of signal can go way up too… and you might actually see an X decay by eye…

Both ATLAS and CMS have made a few other interesting measurements of long-lived particles (I gave links to some of them early in this post), and in most of them so far, special triggers were used.  There are other methods that don’t require such special triggers, but can still benefit from them.  One other trick up CMS’s sleeve is its “parked data”, obtained in 2012 using a method that allowed the trigger to keep more data than would otherwise have been possible.  One trigger strategy used for the parked data was to identify two high-energy jets that might have accompanied a Higgs particle when it is produced in the scattering of two quarks.  For this strategy, one ignores the Higgs decay, more or less, and focuses on the jets created by the two quarks that scattered.   However, the events collected using parked data are lying mostly unexplored, still waiting for someone to analyze them and look for a sign of an X particle decay (or of anything else unusual that a Higgs particle might have done, other than decay to undetectable particles). I hope this parked data won’t sit gathering dust until after 2018.
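
As a very rough illustration of what a "two quark jets" requirement can look like, here is a generic vector-boson-fusion-style tag with invented thresholds; it is not the actual CMS parked-data trigger, just a sketch of the idea.

    def vbf_style_tag(jet1, jet2):
        """Toy tag for the two jets from the quarks that scattered while producing the Higgs.

        Jets are dicts like {"pt_gev": 60.0, "eta": 3.1}; the cuts are invented for illustration.
        """
        energetic = jet1["pt_gev"] > 40 and jet2["pt_gev"] > 40    # both jets reasonably energetic
        far_apart = abs(jet1["eta"] - jet2["eta"]) > 3.5           # well separated along the beam direction
        return energetic and far_apart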

Run 2, starting in May 2015 with higher collision energy and higher collision rates, will raise a whole new set of trigger challenges for Higgs studies and for long-lived particles.  Time is quickly running out to improve any triggers in place for 2015. So while I was at CERN in November 2014, I gave a talk to members of the ATLAS experiment, and a similar talk to members of the CMS experiment, on why long-lived particle searches are so important, and on what triggers and analysis strategies might be useful.  This was followed in both cases by extensive and fruitful discussions about the challenges and opportunities that lie ahead in Run 2.  Many details and subtleties about how the experiments and their triggers actually work — the hardware, the software, and the decisions humans make about how to use them — have to be considered with care. I left CERN with renewed optimism that we will see a significantly broader range of searches for long-lived particles in Run 2 than we did in Run 1.

60 Responses

  1. Does it make sense that dark matter does not exist, given that nothing concrete points to something that alters gravity across the visible universe? Maybe strong and direct CP violation could imply asymmetries in the cosmos, or perhaps divergences could be found in the distortions of gravitational fields. It seems impossible that such strong gravitational fields have not shown any differences in the measurement of those waves, and that no extra energy appeared along with the ripples in spacetime and the dark matter. Does dark matter follow the same quantization as ordinary matter, rather than gravity? (This is not spam or buzz.)

  2. Dear Matt, glad to see you back to your blog.

    1. Have you received any feedback from ATLAS or CMS about implementing the suggested modifications to the trigger system for Run 2?
    2. Any progress on the parked-data analysis concerning possible long-lived particles?

    I’m looking forward to reading your comments on the upcoming results from Run 2.

    Thank you for your efforts.

    1. Yes, on (1) there’s been quite a lot of work on this, by a small number of individuals, in recent months. Discussion continues. (2) — well, not really. People are mostly preparing for the next run.

  3. […] “and others with weird effects (e.g. a fake jet produced when a muon, produced when a stray proton from the beam hit some piece of the accelerator, entered the detector and hit something in the calorimeter.)”

    How do you account for such effects? Is it mostly just a simple statistical model based on complex calculations that you overlay on the results when you go to make sense of them afterwards?

    Thanks!

    1. In case you happen to see this, I check this periodically and am patiently waiting for a reply. 🙂

      I’ve been wondering this since long ago when you started mentioning you would be writing a triggering article, and I should have asked then so it got included here. Oh well!

  4. Hi, Matt. Are the triggers written in easily changeable code? How easy is it to change out?

    What is involved in programming the triggers?

    And where do you fit in for bridging the gap between the physics and the actual programming that needs to be input?

  5. “So if you know anyone who wants to donate cash for storage, or just storage, please let me know…”

    Can you give us some numbers? $250k isn’t a lot, even as a % of LHC’s budget. How much additional data could you collect for that? (And if it’s significant, why not just fire someone who’s dragging, and use their salary?)

    1. There are three problems, separate but linked. The first is data storage cost; for something as big as the LHC, actual data storage should be possible for between 10-20 cents per GB, depending on how you factor in the cost of building the data centers. The LHC currently produces data at a rate of 25 PB, or $2,500K per year if we take the cheapest option (a rough version of that arithmetic is sketched below, after this comment). The problem there is *currently*; if we kept ALL the data instead of discarding it, that cost rises a hundred- or thousandfold. This is a problem, but maybe one we could overcome if we really tried. (See also: NASA, shoestring budget.)

      The sheer volume of data center needed is the second problem, as is moving data in, out and around. There's already a worldwide computing grid to deal with the current data load: http://en.wikipedia.org/wiki/Worldwide_LHC_Computing_Grid . Data storage requires massive amounts of power, especially for cooling, as well as space.

      The third problem is processing. Data storage is fine, but someone or something would have to run through all the data to check everything. This requires an even larger investment in power, both electrical and processing. A rough estimate is that about 10x the data's volume in bits needs to be processed just to 'scan' it. The quick and messy discarding used currently manages OK, but giving everything a thorough treatment would quickly push processing costs alone above those of storage, and raise questions as to where the data would be processed.

      And most of this effort, 99% at least, probably 99.5% or higher, would return the result 'quark/gluon scattering. No biggie.' It would be a massive waste of resources.
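
      (Roughly, the storage figure above works out as follows; this is just the arithmetic restated in a few lines of Python, using the numbers quoted in this comment, not official CERN costs.)

          data_per_year_pb = 25        # petabytes recorded per year (figure quoted above)
          gb_per_pb = 1_000_000        # gigabytes in a petabyte
          cost_per_gb = 0.10           # dollars: the cheap end of the 10-20 cent estimate

          print(data_per_year_pb * gb_per_pb * cost_per_gb)   # 2,500,000 dollars per year, i.e. the "$2,500K" above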

      1. Kudzu, based on the same facts you cite I come to some different (preliminary) conclusions/ideas.

        Concerning your argument that more than 99% of the total data may not be interesting: Yes, maybe, but if the remaining 1% or 0.1% contain the information that is needed to make one more large discovery, it would be worth storing the whole data set.

        Concerning the processing resources: The processing could be delayed a few years in the future; the cost of processing power is still decreasing significantly every year (Moore’s “law”). Another option is also to use the processing resources of currently existing computers while they are idle (as not all the computers are running at 100% load all year round). This has some costs (more power consumption if CPU load is higher), but at least the hardware already exists. Maybe something like the BOINC projects, such as SETI@home, but on larger scale with institutions and not mainly private persons.

        Concerning your first point of data storage costs: If the NSA is capable of storing the data of almost all the worldwide telecommunications, the technology exists. And the price tag for 10x the current storage capacity of LHC will not be 10x its current price, it should be significantly less due to new technology and/or discounts on large orders. And finally, the price tag should not be compared with current costs of LHC data storage, but as a percentage of the total LHC budget. I do not have the numbers, but I guess that data storage capacity is currently only a minor item in the budget, and even a doubling of its cost would not much affect the overall LHC costs.

        1. On your first point. The problem lies in balance; we could save lives if we posted medical experts at all restaurants to handle people choking, but it is not considered worthwhile by most people. Costs must always be balanced against potential benefits, and given the state of the sciences as a whole and their precarious position with regard to funding, this must be done conscientiously.

          Delaying processing is problematic, it delays any potential benefits from the data and requires years of storage costs, especially as the data can be expected to pile up. A better solution might be to delay analyzing new data until processing speeds catch up.

          I am highly doubtful the NSA has nearly as much information stored as people think. Certainly I have seen no proof they have ‘almost all the worldwide telecommunications’. In my experience you don’t need much to hang someone. However, the size of the entire internet, including all unindexed pages and data stored on servers, was estimated by Google CEO Eric Schmidt to be no more than about 10 terabytes. (The last time he was asked, at any rate; see WISEgeek for more.) This puts it significantly below the petabyte scale, so much so that even if the NSA had a backup of everything they would not be near capable of storing this data.

          The problem that is easy to miss is the sheer magnitude of the data being generated. It seems like it cannot be that large since most is quickly discarded, but it is truly an immense volume of information. A simple doubling of the budget would by no means suffice, nor, I should think, would even a tenfold increase. I would be interested if you had a proposal that could cheaply store this much data, but so would the NSA, so perhaps we should be discreet.

  6. Hi Matt,

    The top (T) quark and the Higgs, if I’m right, were predicted long before their discoveries, and even named long before, if I’m right (due to predictions from the Standard Model).
    So my question (sorry if it’s naive): What is an X particle likely to be? A SUSY particle? A new fermion or boson? Are supersymmetric partners all that’s left among predicted particles, excluding dark matter?

  7. Hello Matt, thanks for the article.
    I am not sure if I completely understand what you are looking for:

    The reason to improve the trigger is to find some particle that hasn’t been seen yet, right? You say that you “specifically look at Higgs particles decaying to unknown long-lived particles” indicating that only decays involving a Higgs are interesting for the search of these new particles.

    Why is that? And why is it not possible to produce X by the decay of another particle?

    Or is it just about understanding the Higgs better?

    1. Good question. I gave the Higgs as an example because it exists — it is no longer hypothetical — and because it is very sensitive to new phenomena like this. In principle, top quarks could decay to new things too, and so could Z bosons and W bosons. (Lighter particles like bottom quarks aren’t exempt either, but the LHC is probably the wrong place to look for weird things they can do; we have “b factories” which have already covered that ground.) However, ***in many contexts, new light particles are most often produced and most easily discovered in Higgs decays***. This is relevant both for new long-lived particles and new short-lived ones: http://profmattstrassler.com/articles-and-posts/the-higgs-particle/the-standard-model-higgs/lightweight-higgs-a-sensitive-creature/

      The same triggers that work for Higgs -> X X generally work for Z decays to long-lived particles, with some caveats that I’ll skip for now. You don’t need these triggers for top or W decays because you make pairs of tops and W’s in top-antitop production, and the other top or W can give you an electron or muon which you can often trigger on.

      And X’s could be produced in the decays of even heavier hypothetical particles, say Y -> X + gluon. Again, the same triggers that work for Higgs -> X X will generally work for the heavier particles too, although the heavier the Y particles are, the more energy is produced in their decays, and the easier it becomes for standard triggers to pick them up.

      So the reasons to focus on the Higgs are: (a) among the known particles, it is the most likely to produce new particles in its decays, and (b) because it is lightweight, it is particularly difficult to trigger on its decay products. And my experience is that if you can trigger on a long-lived object produced in a Higgs decay, you can trigger on that object in almost every other context you can think of.

      1. Silly question Professor, I will understand if this one is thrown out as well 🙂

        It sounds like a new mathematics (and I don’t mean Feynman’s diagrams, but more along the lines of Fourier series) could be derived from just a matrix of all the scattering paths we have discovered thus far, which could allow more detailed analysis of existing “laws” (verification and validation of the new math) and further progress down the scale.

        Wishful thinking?

        1. Another silly question, so I’ll place it under my first to keep track of them 🙂

          Can every interaction of particles we have observed be derived solely by the conservation of energy?

          If so, will this apply at all scales, even below Planck’s scale of the definition of the smallest quanta?

          1. Assuming that what you mean by ‘conservation of energy’ is tracking the particles coming out of a collision, then many interactions can, but not all. You will also need a variety of detectors, from low-energy cloud chambers for things like electrons scattering off one another to LHC-style setups.

            But this will miss a lot of phenomena, such as how electrons and protons interact at low energy (that is, forming atoms). And quarks also pose a problem, since they tend to hadronize. But a combination of theory and smashing stuff together has proved remarkably fruitful, so much so that its progress has generally been measured in terms of how hard we can smack stuff together.

  8. Matt, I am quite puzzled that such a complex, huge, and costly machine as the LHC leaves it up to a very simple algorithm (level 1 trigger) to discard some 99.999% of the data. Why aren’t there efforts to avoid such dumb triggers, or reduce their effect to a minimum (say, discard only 99.0%)?

    Using only a very small fraction of the data puts a very strong filter on what one can see. Only the events expected to be seen can possibly be detected. I am concerned that the assumptions from physical theories, or hypotheses, may cause a strong bias. Generally, I imagine the ideal evaluation of an experiment looks at ALL its data, and only after a careful analysis, a major part of the data may be discarded as non-interesting.

    Ideally, one would store all the data for later processing, which could be done several years in the future when computing power has increased. This would still be much earlier than waiting for the next super collider, which may be decades away (if ever constructed).

    I am aware that current data processing technology is not capable of storing all the data. But the LHC has been built over many years, other parts such as the detectors are at the leading edge of technology, and it would have been possible to develop new technologies for data storage and processing.

    1. Markus,

      You took the words right out of my mouth.

      Matt, I must be missing something… What about the search for the lowest-mass SUSY candidate, for example (wasn’t that the prime candidate for dark matter)? Isn’t that supposed to be a long-lived particle (if not stable altogether)? It just can’t be that all these brilliant physicists would create a machine so focused on finding the Higgs that that’s really all it could find?! Can’t be.

      1. 🙂 One thing at a time. (1) Dark matter, by its very nature, is extremely long-lived — comparable to the age of the universe. Just like neutrinos, it is undetectable if produced at the LHC. One of the most important trigger strategies at the LHC is one that looks for detectable things recoiling against undetectable things. This “missing ‘energy’” (really momentum) trigger is perfect for finding high-energy neutrinos and dark matter. (2) The triggers are certainly not “dumb”; they are extremely smart. The problem is the numbers, as explained here: http://profmattstrassler.com/articles-and-posts/largehadroncolliderfaq/the-trigger-discarding-all-but-the-gold/ To discover the Higgs boson [goal #1 of the LHC], most of whose decays are elusive, you must make many tens of thousands of them; to make this many you need 100 million collisions per second, or one million billion collisions per year; and with 1 Megabyte of data per collision you’re talking millions of petabytes to process and store. This cannot be afforded. There is no other solution. Note, also, that it works; the Higgs was discovered, and a vast array of other phenomena were searched for and ruled out [i.e., would have been discovered if they existed.] In a similar machine with a similar trigger — the experiments at the Tevatron — the top quark was discovered. So everyone knows that you have to be very cautious and careful with biases introduced by the trigger… but you MUST have one.
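
        The scale of those numbers is easy to check; here is the back-of-envelope arithmetic in a few lines of Python, restating the figures just quoted (the 10^7 seconds of running per year is a rough assumption, consistent with the "one million billion collisions per year" above).

            collisions_per_second = 100_000_000   # 100 million collisions per second
            seconds_of_running = 10_000_000       # about 10^7 seconds of data-taking per year (rough assumption)
            bytes_per_collision = 1_000_000       # roughly 1 megabyte of data per collision

            total_bytes = collisions_per_second * seconds_of_running * bytes_per_collision
            print(total_bytes / 1e15)             # about 1,000,000 petabytes: "millions of petabytes" is the right scale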

        1. Also, I think you misunderstood what I wrote. The Level 1 trigger only discards about 99.6 percent of the data, keeping one event in 250 approximately. The higher levels of the trigger, with much more information, discard the rest.

          Remember also that the vast majority of what is produced at this machine is ordinary quark/gluon/anti-quark scattering at rather low energy. Such physics has already been studied in great detail at the Tevatron and earlier machines. This is the justification for the strategy used — almost everything thrown away is known physics — but you are right to worry about loopholes, and clearly both I and many of my experimental and theoretical colleagues do worry about it and spend a lot of time on this issue. The good news is that there are not many loopholes. It is quite difficult to think of something that (a) you could hope to trigger on but you don’t, at least accidentally, and (b) you could hope to find inside your data even if you kept 100% of your data.

            1. Okay, thanks for the info, Matt; indeed I misunderstood that the level 1 trigger already threw away 99.999%. The correct number of 99.6% sounds much better.

            Concerning the technology, I do not question that it is currently impossible to store ALL the data the LHC produces. But there is a large gap between the desirable 100% and the 0.001% that are currently stored after passing all triggers/filters.

            At least (and this could be realized with the current LHC), I think it would be possible to dedicate, for example, 1% of the current data storage capacity to storing unfiltered (raw) data from the experiments, and to evaluate this sample to check whether the assumption that there is nothing interesting in it was correct.

            It is not denied that the colliders did find new particles in the data that passed the triggers. But the question is whether there is more in the data that were discarded very early. If the LHC has so far not found new physics apart from the Standard Model (which looks quite unexpected when compared to the CERN announcements before the LHC went operational), one may wish to investigate more of the so-far-unexpected or excluded possibilities. Maybe there is no detectable new physics at the energy scale of the LHC, but maybe it is there and it is in the data that were (in that case, accidentally) discarded early.

  9. Dr. Strassler, you might consider this trigger. Take any event and record all the information about the paths recorded. There must be at least 3000 variables for each event, such as: is this a straight line, its energy, its starting point, its length, its perpendicular distance from the event, etc., etc. And for each event, also record things like the number of lines. Then build a database of events to get things like the frequency of events versus the number of lines. So far no physics is involved. Also record whether some other trigger was noted for this event. There will soon be a relationship between these variables and the events that were triggered. And maybe more important, there should be some variations in the frequency of various variables that will tell you something may be going on here, like the frequency of 10 lines being higher than 9 or 11, and this will cause a trigger if other variables are also doing this, and bring the physics in here. I hope I am clear enough.

    1. Or maybe, more simply in the end, at least trigger this event to be looked at by the other programs that do the triggering. Then no physics is involved at all, and you are scanning a few events that you overlooked the first time.

  10. Matt,

    A pleasure to be reading your blog again.

    I recently came across the site Higgshunters.org that seems to be crowd-sourcing the detection of decays rather like those you have described. From the site: “Your task is to search for tracks appearing ‘out of thin air’ away from the centre. We call these off-centre vertices. ”

    I would be interested in any thoughts you have about this project. The site’s explanations are pretty limited. One thing that surprised me was the site’s claim that human eyes were sometimes more effective at locating these off-center vertices than were computers.

    Going through the collision images and highlighting the off-center vertices was rather compelling. Like chips or peanuts, I had gone through 50 of them before I realized.

      1. For the dual Higgs transition into dual photons or other dualities see perhaps:
        LHC Signals Between 121-130 Gev Interpreted with Quantum-FFF Theory.
        [Link Removed By Editor]

          1. Dear Matt,
            I respect your reaction of course, but it is not fair, because the double geometrical convertible Higgs string possibility creates a new world beyond the standard model, such as a negatively charged dark-matter black hole, dual Higgs-graviton push gravity, and a pulsating multiverse.

  11. Matt, I really appreciated your Aspen talk on the Higgs on YouTube.
    I am left, however, with the next simple question.
    Could it be possible that the Higgs mass should split in half? Because then a process of two Higgs particles decaying into two photons could be possible!
    If so, we could easily imagine that the form of each Higgs only has to change, by the decay process, into a different (stringy) form called a photon!

  12. Hello Matt,
    In the description of one of the figures you mentioned that there were 500,000 Higgs bosons produced during the 2 years of phase 1 experiments. Did you mean there were 500,000 observations?

    1. “Observations” is the wrong category; you have to think about it more carefully. Still, I’m glad you asked the question, because the caption, as written, is a bit misleading.

      Let’s start from the top. 500,000 Higgses were produced at ATLAS and a similar number at CMS. How many were stored, having passed the trigger? Probably a bit less than 10%, via a variety of trigger strategies. The full story of how this works would be a long one.

      Next, those stored come in various classes, depending on how the Higgs decays and how it was produced. Not all of these classes are actually that useful. Only certain classes are distinctive enough that they can be distinguished from other processes that have nothing to do with the Higgs. People have looked for the “expected” distinctive classes, and the numbers at this point drop from 10% to something close to 1%. In the most striking channels, the numbers are even smaller. So the number that are used in scientific analyses (and not really “observed”, because often the arguments are statistical, not definitive) is probably 1% at most.

      What about unexpected classes like H -> X X events? That’s an extremely complicated question because it depends on how the X decays most often, and on the X lifetime. However, one simple thing can be said: at least 1% of these events are stored when the Higgs is produced along with a lepton from a W or Z, and at CMS, another 1% may have been stored in parked data. ***This is independent of how and where the X decays.***

      On this basis, I should revise the caption to say: “as many as 10,000 X’s might have been produced”, and “a significant fraction — definitely no less than 1% of those produced — are currently in the data”.
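
      (Putting the rough bookkeeping above in one place, just restating the percentages in this reply; none of them are precise figures.)

          higgs_produced = 500_000            # per experiment, 2011-2012
          stored = 0.10 * higgs_produced      # a bit under 10% pass some trigger -> roughly 50,000
          usable = 0.01 * higgs_produced      # ~1% land in distinctive, analyzable classes -> roughly 5,000
          print(stored, usable)               # 50000.0  5000.0

          # For hypothetical H -> X X events the same logic applies to however many exist:
          # at least 1% are stored (Higgs produced with a W or Z lepton), independent of how the X decays,
          # plus perhaps another 1% in the CMS parked data.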

  13. Excellent article as usual. Am I correct in saying I detect some degree of frustration? Relax, good Professor, a lifetime is a very short period of time 🙂

    The “problem” you described reminded me of a problem we had in an airborne mechanical system (a sensor no less) which had some nagging vibration resonances. This was back in the 70’s, so we resorted to testing on vibration tables and a lot of accelerometers 🙂 Anyway, from the PSD plots we identified the frequency ranges and locations in the assembled unit. The design of the sensor allowed me to put on a “Plexiglas” cover with no adverse effect on our problem; we tested, and the plots matched with the transparent cover in place. I then used a stroboscope, synchronized it to the resonant frequencies, and actually was able to see the cause(s) of the problem.

    Does the trigger system at the LHC use a similar idea, screening in real time which frequencies or ranges of frequencies to store? That way all possible events could be stored, because that is what the trigger “sees”.

    Hope this makes sense, got to get one right sometime. 🙂

    1. Well, no, I’m afraid. The trigger system is more digital than analogue here; the data from the collision is analyzed far enough in the earliest stages that estimates of the actual measured energy deposition and (later) particle tracks are available. This is what is sent to the trigger software, which then literally looks at the information almost the way a human would, looking for patterns of certain sorts (e.g. jets of energy, or patterns of tracks and energy deposition that look like electrons or muons, etc.), but with extremely fast and therefore relatively crude algorithms, which become more refined the further you go along the trigger process. So no, it’s not doing something fancy in Fourier space — it is looking at the same thing we do, except with much less time to do so.

      1. Not sure ‘crude’ is the correct word! I would think just presenting the data to the trigger would be very complex. Each sensor (how many calorimeters are there?) needs to filter its data and send it up. Somehow it must be synchronized over the sensors so the trigger is seeing a consistent view of the event. The event may even be over before the trigger gets a whiff of it.

        However, you are correct that when the data is presented to the trigger, it’s not likely to be more than a couple of dozen instructions.

        1. You’re right that the trigger, and the process of getting the data to the trigger, is very sophisticated. “Triggering at the first level is the most challenging part of the online data selection since it requires very fast custom designed electronics with a significant portion placed on the CMS detector.” [The trigger has much more time than the event itself though; the whole event is over, roughly, in 10s of nanoseconds, but there are parallel trigger processors working simultaneously, and each one has quite a bit longer to look at an event at Level 1.] What I meant by “crude” in this context is that with small amounts of time the *algorithms* have to be very crude, and they act only on gross aggregate data, not the details that would be available to higher levels of the trigger. “The algorithms process data in pipeline fashion, using pattern recognition and fast summing techniques, without introducing dead-time. The algorithms use input data of reduced granularity and resolution.” Quotations from http://arxiv.org/ftp/arxiv/papers/0810/0810.4133.pdf

          1. This is what I do for a living so perhaps my trigger is too sensitive… 🙂 But it’s awesome that they can keep track of individual events using sensors several feet apart when there are only a few nanoseconds between events. I assume that is what the pipelines are for.

            Anyway, thanks for the article link, I look forward to reading it.

          2. “The trigger has much more time than the event itself though; the whole event is over, roughly, in 10s of nanoseconds, … ”

            Are you saying the time constant (process time) of the sensor element (not including the electronic / software) is slower than the event time to be measured?

            ” … but there are parallel trigger processors working simultaneously, and each one has quite a bit longer to look at an event at Level 1. ”

            Do these “parallel” processors use the same input information from the same sensor elements, or is each one responsible for a fragment of the input, with the whole thing then getting assembled by processors downstream?

            I am beginning to understand what you mean by crude, if this is the scheme used. The resolution of the sensor element is not sufficient to differentiate the “energy” content.

          3. I read the paper last night, thanks for posting it. It was interesting, but only had a few sentences on the part I’m most interested in. It occurs to me that level 2 triggers are likely complex enough that they require custom programming (at least in part). But level 1 could be a simple logical expression that can be directly compiled.

            So, is there a language you use to describe triggers (both types)? Is a specification of that language publicly available? How different are they between CMS and ATLAS?

            Many thanks and I appreciate the time you put into this!

          4. Sorry, but I found it difficult to get past the section on trigger data validation using the Lvl-1 trigger emulator:

            “A software package was developed to upload entire Monte Carlo events corresponding to Higgs, SUSY and other exotic signals directly at the inputs of the trigger pipelines. These events were then processed by the trigger hardware and the result was captured at the output of the trigger pipeline.”

            The first thing that pops into my mind is that if the “new physics” event is over before the Lvl-1 trigger even becomes aware of it, i.e. if the time constant of the input sensors is slower than the event’s total elapsed time, then all that is being observed is what the emulator has been configured to see. That is not the same as observing what is really happening. All you are seeing is a puff of smoke in which more than one event could be happening. I am sorry, but the way I read it, the resolution of the input sensors is not high enough to actually say there is “new physics”, physics outside the constructs of the emulation scheme(s).

            I am not even sure that running at higher energies would help, since the problem remains the lagging state of our sensor technologies. We need to develop better eyes before going searching for new things. I always used as a rule of thumb that measurements must be made at no less than 10x the resolution of what you’re measuring, and that was for mechanical systems 🙂

  14. Matt;

    You wrote to Don Murphy that “the Higgs particle is a ripple (not a general disturbance — an honest-to-goodness ripple) in the Higgs field”.

    Can laymen such as myself generalize that concept to understand that all particles are ripples in their associated field? Unless they are constituted from other particles, of course.

    Is such a generalization reasonably accurate?

    sean s.

    1. Absolutely. That’s absolutely the way the math works in quantum field theory (which is the set of mathematical equations that we currently use to describe the particles and fields of nature). You solve the equations for the field, looking for ripple solutions, and these solutions are the field’s particles, when you account for quantum mechanics. My public lecture that discusses this a bit more: http://www.youtube.com/watch?v=ZtaVs-4x6Qc

      1. Wikipedia is neither entirely right nor wrong in this case, and you ought to trust an expert in quantum field theory who gives courses on the subject at top universities to future experts in the field [namely, me] before you trust Wikipedia. The distinction between real and virtual inside of a hadron is not defined — there isn’t time for a quark or gluon inside of a hadron to be real before it scatters off the walls of the proton — and that is part of why you will see people write different things. I would say, however, that the Wikipedia article is more wrong than right, and certainly highly misleading if you think it’s making a definitive statement.

        1. It’s not just Wikipedia, it’s wide reading of things like classical electromagnetism, optics, TQFT, and relativity. IMHO the Standard-Model given explanation doesn’t square with other subfields of physics, and to make scientific progress HEP theorists need to take on board aspects of those other subfields. Detecting or inferring particles with short or zero lifetimes instead just isn’t enough. That’s not scientific progress, that’s stamp collecting and damn statistics. Far better to be able to show that the electron field is “Dirac’s belt” configuration of the photon field. Or demonstrate that the bag model is related to the balloon analogy for the expanding universe. Or describe how gravity is a trace force that’s left when two electromagnetic fields don’t quite cancel. I think there’s low hanging fruit out there, and I wish I could interest you and people like you in it.

          1. John,

            I get the impression that you have a gut feeling of what you think the truth is regarding certain phenomena. But I also get the impression that you feel you lack the ability to get at these things yourself. Perhaps mathematical challenges? If so you are certainly not alone – even Einstein wished he knew more math…

            I guess what I’m trying to figure out is if there is, as you say, “low hanging fruit out there” why not reach for some yourself?

            Regardless, you will never convince anyone to pursue “Dirac belts”, “bag model universes” or “gravity as a trace em force” just because you may like these ideas. There are a lot of ideas out there, and you must realize that various ideas have different appeal to different people. So while you may be taken with certain ideas you cannot expect anyone else to be taken by the same ones that you are. If you think otherwise then IMHO you are banging your head against the wall.

          2. Dino: all points noted. The problem is that if scientific progress doesn’t come from HEP, it will look like particle physicists have been standing in the way of scientific progress for decades telling fairy tales. There’s critical books out there, and funding pressures. Without scientific progress from HEP there will be big problems for HEP that IMHO will cause problems for physics and science at large.

      2. Is there anything about virtual particles that would make them unable to create the Higgs via that mechanism? The layman’s treatment of them seems to be as perfectly normal particles that can do anything real ones can with the caveat that they must vanish quite quickly.

        1. Yes. They’re virtual. As in not real. They aren’t short-lived real particles. There are no actual photons zipping back and forth between the electron and the proton in the hydrogen atom. Hydrogen atoms don’t twinkle. In a similar vein there aren’t “zillions of gluons antiquarks and quarks in a proton zipping around near the speed of light”. When you smash protons you don’t get zillions of things spilling out like beans from a bag. That’s not to say the bag model is wrong. It’s just better to say gluons are “parts of the bag”, and then say quarks are too. And then you can maybe see that energy can confine itself or take some different configuration/topology that we call some other particle, and then you’re making progress. Whereas damn statistics and stamp collecting just lead to funding cuts.

          1. Readers should simply ignore Mr. Duffield. He talks nonsense with the assurance of an expert. He’s clearly never actually calculated anything about protons, and he thinks he can understand these things thoroughly just from reading words… words that he only partly understands.

          2. Michael: I’ve read it. See where it says “A virtual particle is not a particle at all.” The gluons in a proton are virtual. As in not real. So when Matt talks about two gluons, one from each proton, colliding head on, it’s a fairy-tale.

  15. Hi Matt,
    You stated in the early part of this post that:

    To be more precise, imagine a proton-proton collision such as shown in Figure 1, in which two gluons, one from each proton, collide head on, and make a Higgs particle, plus an extra gluon, which is kicked off and makes a jet of its own.

    I thought a Higgs particle was a disturbance in the Higgs field and that what was happening at the LHC was the creation of enough energy through the proton-proton collisions to knock a Higgs particle from the Higgs field, which exists everywhere in space. The statement I quote above seems to imply that the Higgs particle is a constituent of gluons. This is very confusing. Can you explain?

    1. Your understanding is correct but incomplete. You’re right that the Higgs particle is a ripple (not a general disturbance — an honest-to-goodness ripple) in the Higgs field, and that the proton-proton collisions provide the energy for it. But the detail is missing: how does that energy in the protons get converted into disturbing the Higgs field? The answer is that a sufficiently energetic disturbance in the top quark field can make the Higgs field ripple, and a pair of gluons can generate a strong disturbance in the top quark field. This is because top quarks interact quite strongly both with the gluon fields and with the Higgs field. Notice this has *nothing* to do with who is a constituent of who. In fact gluons, top quarks and Higgs particles are not constituents of one another; they are quite independent. The notion here is similar to how a singer vibrates her vocal cords which makes the air vibrate which in turn can make a glass vibrate (even to the point that it breaks); some of the energy in the proton-proton collision is being transferred, by interactions, from the gluon field to the top quark field and finally to the Higgs field.

  16. Thanks for the interesting article, I think the software in this would be a lot of fun to work on. Some questions:

    1. Do they ever do “fishing expeditions” and just collect everything they can for a million or billion events?
    2. What kind of tracking do they have for the first trigger? If there is none, then how do you know a jet is ‘narrow’?

    1. 1) Fishing expeditions generally don’t work. A million certainly isn’t enough. Anything that would show up a few times in a random sample of a billion collisions would be produced 100 million times per year and would be hard to miss. And the experiments can only store 10 billion events or so per year, so that’s quite a bit of your effort and storage spent on fishing. However, your question, which I haven’t thought about in a long time, prompts me to revisit the question carefully; I will think it through to make sure there’s no circumstance in which it could make sense.

      2. Good question. The calorimeter has a cellular structure, so you know where the energy was deposited — though not with as high resolution as you have in a tracker. The cells in the hadronic calorimeter are, roughly, about 6 degrees by 6 degrees wide (but this is a big oversimplification). Those in the electromagnetic calorimeter are smaller. But notice that you only know where the energy was deposited — you don’t really know where it came from. For all you know it came from outside (e.g. a cosmic ray) so you really do want the information from the tracker.

      1. p.s. I should have said that the experiments *do* take some events at random, mostly for monitoring purposes. [The trigger for this is called “minimum bias”, and there are others that are less open but still somewhat random.] And in fact, with Salam and Cacciari, I did think of rather bizarre and implausible examples of physics that could show up in the monitoring samples. But it’s almost impossible to come up with anything… the production rates have to be so huge that the new particles must be light and interact with gluons or quarks, and then it is really hard to understand how they would have been missed in previous colliders.

        1. I was thinking about this last night. In reading the postings (and the article you posted), I see that triggering focuses on what you are looking for: “save the event if it looks like this”.

          Do they ever have negative triggers: “toss the event if it looks like this”? If not, might that make fishing expeditions productive enough to consider?

          1. This strategy is not generally used but it is certainly considered an option. There are particular contexts where I would recommend it, but that’s a longer story. Yes, you could make your fishing expeditions more productive, but not as much as you might think because even the highest level of the trigger gets fooled by normal boring stuff that looks funny.

            Right now people are trying to develop other approaches — data “parking” and data “scouting.” Frankly the limiting factor there is money; if about $250,000 per year can be raised to buy storage at CMS, then there will be a lot more of this going on. So if you know anyone who wants to donate cash for storage, or just storage, please let me know…
