Of Particular Significance

SEARCH Day 2

POSTED BY Matt Strassler

ON 08/22/2013

Day 2 of the SEARCH workshop will get a shorter description than it deserves, because I’ve had to spend time finishing my own talk for this morning. But there were a lot of nice talks, so let me at least tell you what they were about.

Both ATLAS and CMS presented their latest results on searches for supersymmetry. (I should remind you that “searches for supersymmetry” are by no means actually limited to supersymmetry — they can be used to discover or exclude many other new particles and forces that have nothing to do with supersymmetry at all.) Speakers Pascal Pralavorio and Sanjay Padhi gave very useful overviews of the dozens of searches that have been done so far as part of this effort, including a few rather new results that are very powerful. (We should see even more appear at next week’s Supersymmetry conference.) My short summary: almost everything easy has been done thoroughly; many challenging searches have also been carried out; if superpartner particles are present, they’re either

  • so heavy that they aren’t produced very often (e.g. gluinos)
  • rather lightweight, but still not so often produced (e.g. top squarks, charginos, neutralinos, sleptons)
  • produced often, but decaying in some way that is very hard to detect (e.g. gluinos decaying only to quarks, anti-quarks and gluons)

Then we had a few talks by theorists. Patrick Meade talked about how unknown particles that are affected by weak nuclear and electromagnetic forces, but not by strong nuclear forces, could give signs that are hiding underneath processes that occur in the Standard Model. (Examples of such particles are the neutralinos and charginos or sleptons of supersymmetry.) To find them requires increased precision in our calculations and in our measurements of processes where pairs of W and/or Z and/or Higgs particles are produced. As a definite example, Meade noted that the rate for producing pairs of W particles disagrees somewhat with current predictions based on the Standard Model, and emphasized that this small disagreement could be due to new particles (such as top squarks, or sleptons, or charginos and neutralinos), although at this point there’s no way to know.

Matt Reece gave an analogous talk about spin-zero quark-like particles that do feel the strong nuclear force, the classic examples of which are top squarks. Again, the presence of these particles can be hidden underneath the large signals from production of top quark/anti-quark pairs, or other common processes. ATLAS and CMS have been working hard to look for signals of these types of particles, and have made a lot of progress, but there are still quite a few possible signals that haven’t been searched for yet. Among other things, Reece discussed some methods invented by theorists that might be useful in contributing to this effort. As with the previous talk, the key to a complete search will be improvements in calculations and measurements of top quark production, and of other processes that involve known particles.

After lunch there was a more general discussion about looking for supersymmetry, including conversation about what variants of supersymmetry haven’t yet been excluded by existing ATLAS and CMS searches.  (I had a few things to say about that in my talk, but more on that tomorrow.)

Jesse Thaler gave a talk reviewing the enormous progress that has been made in understanding how to distinguish ordinary jets arising from quarks and gluons from jet-like objects made from a single high-energy W, Z, Higgs or top quark that decays to quarks and anti-quarks. (The jargon is that the trick is to use “jet substructure” — the fact that inside a jet-like W there are two sub-jets, each from a quark or an anti-quark.) At SEARCH 2012, the experimenters showed very promising though preliminary results using a number of new jet substructure methods that had been invented by (mostly) theorists. By now, the experimenters have shown definitively that these methods work — and will continue to work as the rate of collisions at the LHC grows — and have made a number of novel measurements using them. Learning how to use jet substructure is one of the great success stories of the LHC era, and it will continue to be a major story in coming years.
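
To make the idea concrete, here is a toy sketch (in Python, with invented four-momenta; this is nothing like the real ATLAS and CMS substructure tools): the telltale sign of a boosted W decaying to a quark and an anti-quark is that the fat jet’s two hardest sub-jets together reconstruct an invariant mass near 80 GeV/c^2, whereas an ordinary quark or gluon jet usually does not.

```python
import numpy as np

# Toy illustration only: a high-energy W decaying to quarks shows up as one "fat" jet
# whose two hardest sub-jets together have an invariant mass near the W mass (about 80 GeV/c^2).

def invariant_mass(four_momenta):
    """Invariant mass of the summed four-momenta, given as (E, px, py, pz) rows in GeV."""
    E, px, py, pz = np.sum(np.asarray(four_momenta), axis=0)
    return float(np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0)))

def looks_like_boosted_W(subjet1, subjet2, window=(65.0, 95.0)):
    """Crude test: do the two hardest sub-jets reconstruct a mass in a window around m_W?"""
    return window[0] < invariant_mass([subjet1, subjet2]) < window[1]

# Invented sub-jet four-momenta (E, px, py, pz) in GeV; together they give m of about 80 GeV/c^2.
subjet_a = (100.0, 100.0,  0.00, 0.0)
subjet_b = ( 50.0,  18.0, 46.65, 0.0)
print(looks_like_boosted_W(subjet_a, subjet_b))   # True
```

In practice the experiments use far more sophisticated variables (mass-drop filtering, N-subjettiness, and so on), but the underlying logic is this kind of mass-based discrimination using the sub-jets inside a fat jet.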

Two talks by ATLAS (Leandro Nisati) and CMS (Matt Herndon) followed, each with a long list of careful measurements of what the Standard Model is doing, mostly based so far only on the 2011 data set (and not yet including last year’s data). These measurements are crucially important for multiple reasons:

  • They provide important information which can serve as input to other measurements and searches.
  • They may reveal subtle problems with the Standard Model, due to indirect or small effects from unknown particles or forces.
  • Confirming that measurements of certain processes agree with theoretical predictions gives us confidence that those predictions can be used in other contexts, in particular in searches for unknown particles and forces.

Most, but not all, theoretical predictions for these careful measurements have worked well. Those that aren’t working so well are of course being watched and investigated carefully — but there aren’t any discrepancies large enough to get excited about yet (other than the top quark forward-backward asymmetry puzzle, which wasn’t discussed much today). In general, the Standard Model works beautifully — so far.

The day concluded with a panel discussion focused on these Standard Model measurements. Key questions discussed included: how do we use LHC data to understand the structure of the proton more precisely, and how in turn does that affect our searches for unknown phenomena? In particular, a major concern is the risk of circularity: that a phenomenon from an unknown type of particle could produce a subtle effect that we would fail to recognize for what it is, instead misinterpreting it as a small misunderstanding of proton structure, or as a small problem with a theoretical calculation. Such are the challenges of making increasingly precise measurements, and searching for increasingly rare phenomena, in the complicated environment of the LHC.

31 Responses

  1. 1) I am not expert enough to know when the LHC dataset will reach that point… I haven’t checked it recently, and it was only briefly discussed this week. What is really missing right now from the discussion is not a measurement but a theory calculation, computing the Standard Model’s prediction of the asymmetry with higher precision. This calculation is underway, but is extremely difficult. I thought we’d see the result this spring, but I learned during this week that it is still some time off. After this calculation is done, the significance of the discrepancy may well change.

    2) It has nothing to do with the state of the search for supersymmetry or anything else. It is our job to chase down all plausible discrepancies between Standard Model and data. The top quark, so much heavier than the other quarks, is one of the most plausible places to expect discrepancies.

    1. I’m going to fill up your comments section with one more post for a question: How many of these “changes” and “new priorities” can be undertaken by massaging already-collected data, and how many actually require new triggers and new data collection? Thanks again for all your excellent, excellent work, Matt.

      1. There’s no simple way to answer “how many?”. Lots of things (most of them difficult) can still be done with already-collected data. Lots of things (including relatively easy ones as well as hard ones) can be done with the next round of data. Triggering is a very complex subject that cannot be easily summarized.

  2. I’m actually stunned by the fact that the hierarchy problem has troubled physicists for so long. I do know what creates the big difference between forces. It’s the spin frequency difference between objects like Earth and a proton. Here is a quote from my blog.

    ratio between gravitational interaction and EM interaction is (8.98755*10^16)^2/(1.16*10^-5)^2 ~ 6*10^43.

    You are welcome.

      1. Well, I try to stick with the topic and give a different point of view. To you, it’s polluting. To me, it’s fruitful debating. But if your opinion is the dominant one here then I’ll take my “toys” elsewhere.

        I respect the opinions of others, especially when those opinions are well reasoned. ST and LQG are theories, and so is mine. There is, however, one big difference: mine is testable and broader!

    1. That’s completely silly. The reason that the ratio of a large planet’s size to the size of a proton is vaguely comparable to the size of the hierarchy problem is, in fact, related to the hierarchy problem itself. So you’ve just shown: “Because of X, X is true.”
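
      To put a rough number on the hierarchy in question (the standard back-of-the-envelope estimate, nothing specific to your proposal): for two protons,

      $$\frac{F_{\rm gravity}}{F_{\rm electric}} = \frac{G m_p^2}{e^2/4\pi\varepsilon_0} \approx 8\times 10^{-37},$$

      and this tiny number reflects the fact that the proton mass lies far below the Planck mass, $m_p/M_{\rm Planck} \sim 10^{-19}$. The sizes of planets are set by the competition between gravity and electromagnetic (atomic) forces, so they inherit that same hierarchy; comparing a planet to a proton merely restates the problem.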

      You’re welcome.

      1. Thank you. But in the case where forces are related to an object’s spinning frequency, that ratio is the outcome. And they most certainly are (tested, not by me).

        1. First, you need to understand the difference between “postdiction” and “prediction”. A postdiction is not a fundamental test.

          Second, just quoting an answer to a calculation, when you knew the answer in advance, convinces no one. Numerology is not acceptable as proof of anything without a detailed scientific argument.

          1. I have a total of roughly 30 pages proving my case and making “postdictions”. There is one genuine prediction, which concerns particle annihilation, and it’s easily tested (and has been tested). It states that particles can be annihilated by manipulating their nuclear spin orientations in a certain manner (described in detail in one of the papers).

            I don’t do numerology. That ratio is a pure calculation based on my hypothesis and derivations from it.

    1. Don’t forget the LHC is going from 8 TeV to 12-14 TeV soon. That may be enough. And one option is to roughly double its energy down the line, with new magnets. So it’s not clear we need a new accelerator; we may merely want to work with improved versions of the one we have. There is an argument for a higher-precision machine — an electron-positron collider. That would be smaller and have less energy than the LHC, but would be good for studying the Higgs particle and top quark in more detail.

  3. I hadn’t noticed until I looked more carefully a one-week-old (August 15, 2013) pre-print from CMS on top forward-backward asymmetry, after two years of 7 TeV and 8 TeV energies. The final sentence of the conclusion: “All top quark properties measurements at the LHC are in good agreement with the SM predictions.” arxiv.org/1308.3338 The only consolation it offers to supporters of the competing 2-3 sigma Tevatron finding is that “uncertainty is still large” in the CMS result.

    1. Yes, the LHC can’t yet be said to exclude, once and for all, what’s happening at the Tevatron. But this is a complex subject, requiring new input from theory and a collection of additional measurements from the Tevatron and LHC to resolve.

  4. The forward-backward asymmetry of the top quarks is very interesting. Time could be a pseudoscalar. Supersymmetry might be observed through topological changes to different geometries; these asymmetries lead us to other spacetime continuities, where the speed of light is seen as variable.

    1. Time is a pseudoscalar, but the constancy of the speed of light is a scalar. There is no continuity of localized spacetime or extra dimensions; if there is any, it will be like quanta, and we will never know.

  5. Matt,
    Thanks so much for providing us with a ringside seat to these new and exciting developments!! It’s a real privilege to be able to take part in it.
    – Doc

  6. One problem for experimentalists may be that there is too much reliance on theory for finding new particles. I understand that they have to set special triggers for interesting events, because with thousands and thousands of particles coming out, you cannot store all the results of every collision. But if you store only one out of, say, a million events (is that right?), you might miss some unexpected, unknown stuff. What do you think?

    1. We worry about that a lot, and we are careful to set the triggers so that only well-studied events are excluded, keeping some just to be sure.

    2. The word “trigger” was uttered perhaps 100 times yesterday. Of course we know this is a risk — it is an issue we spend a huge amount of time on… designing the triggers so that the collection of trigger strategies is as robust as possible, casting as wide a net as possible. There is always, inevitably, some amount of theory bias, and part of theorists’ job is to help reduce this by thinking broadly about all the different things that could go wrong. The big risk for triggers almost always comes from rare, relatively low-energy processes. A 125 GeV/c^2 Higgs particle is not a simple thing for a trigger designer, and you can bet it gets a lot of attention.
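
       To make the bookkeeping behind “one out of a million” concrete, here is a toy sketch of a trigger “menu” with prescales (in Python, with invented thresholds and path names; this is not the real ATLAS or CMS trigger software): unprescaled paths keep every event that passes their selection, while heavily prescaled paths keep only a small random fraction.

       ```python
       import random

       # Toy trigger menu: (selection, prescale) pairs. Thresholds and names are invented.
       # prescale = 1 keeps every passing event; prescale = N keeps roughly 1 in N of them.
       TRIGGER_MENU = [
           (lambda ev: ev["muon_pt"] > 24.0,     1),      # single-muon path, unprescaled
           (lambda ev: ev["missing_et"] > 120.0, 1),      # missing-energy path, unprescaled
           (lambda ev: ev["jet_pt"] > 40.0,      10000),  # low-threshold jet path, heavily prescaled
       ]

       def keep_event(event):
           """Return True if any trigger path fires and survives its prescale."""
           return any(selection(event) and random.randrange(prescale) == 0
                      for selection, prescale in TRIGGER_MENU)

       # A soft event that only fires the prescaled jet path is almost never kept.
       soft_event = {"muon_pt": 5.0, "missing_et": 30.0, "jet_pt": 55.0}
       print(keep_event(soft_event))   # almost always False
       ```

       A real menu has hundreds of such paths, and the art is in choosing them so that the interesting physics is never among the events that get thrown away.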
