Of Particular Significance

Some Comments on Theory and Experiment at the LHC

POSTED BY Matt Strassler

ON 08/29/2011

In the next couple of days I hope to update a post I put up a short time ago, one that, as of today, 8/29/11, still holds true.  The issue that I addressed in that post was: What does the Large Hadron Collider (LHC) currently have to say about Supersymmetry?  I took a slightly polemical point of view, but you can look at the links in the post to longer, more pedagogical articles to see where the point of view comes from.

The problem with trying to answer a question like this one — What do LHC results imply for Theory X — is that it is ill-posed. An experiment searches for a phenomenon — not a theory. If a theory always predicts a certain phenomenon, then an experiment can search for that phenomenon, and, in finding it or not, give a definitive thumbs-up or -down to the theory. But often a theory has many versions, and although it will have a general set of predictions — supersymmetry, for instance, predicts superpartner particles — its details can look quite different to an experiment, depending on the version. [For instance, in supersymmetry, whether or not one sees the classic supersymmetry signature depends on the masses of the superpartner particles, on whether there are extra types of particles that are not required by the theory but might just be there anyway, etc.] So any given experimental result is just one important but incomplete piece of information, one that constrains some versions of the theory but not others. Typically, to entirely rule out a general theory like supersymmetry requires a large number of experimental searches for many different phenomena.

Conversely, though, many different theories may predict the same phenomenon. So an experiment that looks for but fails to find a certain phenomenon doesn’t rule out just one version of one theory. It typically rules out several versions of many different types of theories.

And if it does find that phenomenon, it does not tell you which of those different types of theories it comes from. Remember that, when the LHC makes its first discovery! You’ll likely see claims from physicists in the press about what the discovery means that aren’t really merited by the data.

Crudely, this problem is much like the genotype-phenotype problem in biology. [More technically, the problem is that the mapping from theories to phenomena is highly non-linear and neither one-to-one nor onto, for those who know those terms.] Suppose you want to look for signs of bird genes on a deserted island. You might design an experiment to look for things that fly. But that’s problematic, of course.   Many different genetic codes can produce the phenomenon of flight — bat genes, hawk genes, dragonfly genes. So if you see something unfamiliar flying, you don’t know, without more information, whether its genetic code has anything to do with that of birds. Conversely, if you find nothing airborne, you’ve constrained not just bird genes but also those of many insects and various mammals. And yet you haven’t ruled out birds, or mammals, or insects, because some of them don’t fly. You’d be quite embarrassed to tell your scientific colleagues you’d proved there were no bird genes on the island, only to have a penguin or kiwi wander out from behind a bush.

This is the situation at the LHC.  This summer’s results were recently billed as “dashing hopes” of supersymmetry, and other such things.  [I am glad that the BBC has adjusted its news webpage mini-headline for this article to read “LHC puts supersymmetry in doubt”, though I should point out it was in some doubt before the LHC too.]  But in fact (a) many versions of supersymmetry are not excluded by current LHC results, and, equally important, (b) many versions of other theories are excluded by these results.

Unfortunately this complexity is tough on reporters, and frankly on me too, because it is hard to summarize this kind of information in a short article. The relation between the empirical and the theoretical is very complicated, and hard to explain well.  I’ve given you a little taste of this in my articles on supersymmetry and what it predicts.

In the better press articles, such as this one, the LHC results have been reported as a failure to find the simplest version of supersymmetry, implying that more complicated versions must be necessary if supersymmetry is a part of nature.  Such a statement might be a bit premature, but is roughly true. However, I’m not sure the reporters (or all the physicists involved) understand what “more complicated” means. All that is needed for supersymmetry to evade the strongest current results from the LHC is one additional particle (and its superpartner). [In fact in some cases the number of additional particles required is zero, in the sense that the gravitino, the superpartner of the graviton, can be enough to muck things up.] Yes, that’s a bit more complicated than the simplest model. But not much. The theory already has (depending on how you count) several dozen particles and their superpartners; does one more make the theory so much more inelegant as to deserve disdain?

I will try, over the coming couple of weeks, to take stock of what the LHC has and hasn’t said about supersymmetry and some other theories. But before I do that in any detail, I can already tell you the basic answer. Recent LHC results put powerful and interesting constraints on the possibility of super-birds, extra-bats and little-insects. But conversely they do not exclude super-birds, extra-bats, or little-insects.

Thanks to the fantastic performance of the LHC and the great work done by the physicists on the ATLAS, CMS and LHCb experiments, we know a great deal more about what nature might and might not be like than we did last year. Nevertheless, because many types of experimental analyses of the LHC data have not yet been done, and because, for some questions, a lot more data is needed, we do not yet have nearly enough information to make any new, sweeping, existential claims about the nature of nature.  Patience; a thorough search that leads to definitive knowledge takes time.


17 Responses

  1. @Matt
    After this one -> Matt Strassler | September 1, 2011 at 3:39 PM <- you are about to jump to first place on my (not so short) list of favourite blogs. Congrats!
    Let me ask you a question which is (I think) not so loosely coupled to SUSY. Is quark compositeness taken seriously nowadays (and I mean as a possible LHC discovery, not just as a theoretical possibility)? To put this question in context, let’s say I refer to one of the latest LHC papers (if not the latest). They claim to rule out preons (is this term still in use?) below 2.49 TeV, but the search was expected to reach something like 2.7 TeV. Is the tiny bumpy thing near 2.55 TeV totally irrelevant?

    1. Paolo — the possibility that quarks are composite particles is certainly something to be taken seriously. However, the particle perhaps most likely to display compositeness would be the top quark, as (a) it is the least studied, and constraints on its properties are rather weak, and (b) it interacts most strongly with the Higgs particle, which might be a clue that it is involved with some new forces that might also affect the Higgs. It is not easy to make the up and down quarks composite in theoretical models without inducing new phenomena that would be rather common but have never been observed. Still, it is always worth looking when the effort involved is not extraordinary.

  2. Dear Prof Strassler,

    now that we have learned from the BBC that SUSY is dead and should no longer be looked for, you may also eagerly await word of what new physics is not dead. If it is not dead, it must be undead! And indeed, (y)our journalist friend Pallab Ghosh, who managed to calculate the actual fate of SUSY, wrote about science that is starting to ponder zombie attacks:

    http://news.bbc.co.uk/2/hi/8206280.stm

    The evidence is overwhelming and multi-dimensional. For example, researchers from the University of Ottawa and Carleton University have thought about who would win if there were a battle between the zombies and the people. Moreover, there has been a revival of the zombie film in recent years. So all types of evidence seem to converge. 🙂

    So the question posed by the BBC is why theorists like you haven’t managed to get rid of the unnecessary maths and study the science that describes the actual reality. 😉 I must personally admit that the theory of the zombie attacks is probably right, because I have seen one such zombie attack on Twitter Watch, a Chrome extension where I monitor the term “supersymmetry”, among other words. About 5,000 zombies have attacked me by copying Ghosh’s other BBC article “on the spot”, so the zombie attacks are almost certainly the first new physics to be discovered, and they will win over the humans. 🙂

    There are still heretics such as Filip Moortgat who said – on the 3rd slide from the end, at a recent SUSY conference –

    http://indico.cern.ch/getFile.py/access?contribId=40&sessionId=10&resId=0&materialId=slides&confId=141983

    “don’t believe the BBC”. Wow, what blasphemy.

    I understand your points and agree that Jon Butterworth is officially – and by expertise – closer to the Higgs search than Tommaso Dorigo. But Tommaso Dorigo has a more uncontrollable urge to leak all secrets than Jon Butterworth; the latter may be more likely to emit fog about sensitive new data. 😉 It’s as simple as that. Concerning the leaks, he’s probably just testing the nerves of his colleagues and doesn’t necessarily move into forbidden territory. The combined ATLAS+CMS graph in Eilam Gross’s talk (it disappeared later)

    http://blog.vixra.org/2011/08/29/higgs-excluded-from-130-gev-to-480-gev/

    is already excluding things from 130 GeV up. The 140 GeV region seems to have a cross section of “one half of the Standard Model”, separated by 2.5 standard deviations both from “no Higgs” and from “1 SM Higgs”.
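    To unpack those numbers with a toy illustration (the signal-strength value and its uncertainty below are made up, chosen only to reproduce the quoted separations):

```python
# Hypothetical numbers, chosen only to illustrate the arithmetic: a best-fit
# signal strength mu = 0.5 (in units of the Standard Model prediction) with an
# uncertainty of 0.2 sits 2.5 standard deviations from both the "no Higgs"
# hypothesis (mu = 0) and the "1 SM Higgs" hypothesis (mu = 1).
mu, err = 0.5, 0.2
print((mu - 0.0) / err)  # 2.5 sigma away from "no Higgs"
print((1.0 - mu) / err)  # 2.5 sigma away from "1 SM Higgs"
```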

    All the best
    Lubos

    1. Thanks for your message. This really doesn’t matter much; the Higgs search will proceed in the same way whether or not 140 GeV is yet ruled out. And it will be ruled out clearly (if the Higgs is not there) within a few months.

      Calling SUSY dead prematurely is more serious, because it might affect search strategies and the way the LHC is operated, and might in the long run affect public perception of the LHC. But there are enough people in the experiments (Moortgat is an example) who know this that I am currently not too worried about the next round of searches; the holes that we (Lisanti, Schuster, Toro and I) emphasized recently will be filled, along with a number of others.

      The next challenge is to dig deeper into the data. I doubt most people realize how easy [relatively!! nothing is easy at LHC] most searches have been so far, compared to what may be necessary for finding new phenomena.

      1. Thanks for your answer. Well, I surely take your words seriously – because you’re the de facto winner of the LHC olympics etc. So if you think it will be hard, it may be hard.

        Nature could have still done it in such a way that it would have been easy – but She didn’t. 😉

        By the end of 2011, a 140 GeV Higgs should be found or excluded and I think it will be excluded. But it’s not enough to find the 116 or 119 GeV Higgs that actually exists. The graphs just look too noisy and the noise will only drop by sqrt(2) or so by the end of October.

        I think that the misguided premature comments about the death of SUSY – when at most mSUGRA is in trouble, and even this much more modest claim is questionable – will only affect the mood of many people.

        But I am sure that there are still a sufficient number of people who will continue to search in the way they should. The only thing they have to do is to shut their mouth on the sidewalk because I have received those 5,000 tweets or so that SUSY is on the ropes. So there are surely millions of people who think so – exceeding the number of physicists who know it’s not really true by several orders of magnitude. But the millions don’t matter as much as the physicists.

        It seems to me that the inflow of shocking anomalies at the LHC is and will be slower than some people expected so there’s lots of time to readapt to new strategies etc.

      2. “But it’s not enough to find the 116 or 119 GeV Higgs that actually exists. The graphs just look too noisy and the noise will only drop by sqrt(2) or so by the end of October.”

        IIRC the relevant channels for light masses still only had 1/fb in the published combinations. If they get up to 5/fb and combine CMS and ATLAS, you’ll have a fairly significant sqrt(10) drop in the relative noise, from 10x the data. You should get to nearly 4 sigma for a 116 or 119 GeV Higgs.
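        A minimal sketch of that scaling argument, assuming the expected significance grows roughly as the square root of the integrated luminosity and ignoring systematic errors (the 1.2-sigma starting hint below is purely illustrative):

```python
import math

def scaled_significance(sig_now, lumi_now_fb, lumi_later_fb):
    """Naive sqrt(L) scaling of an expected significance; ignores systematics."""
    return sig_now * math.sqrt(lumi_later_fb / lumi_now_fb)

# Illustrative numbers only: a ~1.2 sigma hint based on 1/fb would grow to
# roughly 1.2 * sqrt(10) = 3.8 sigma with a 10/fb-equivalent combination
# (5/fb per experiment, ATLAS and CMS combined).
print(scaled_significance(1.2, 1.0, 10.0))
```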

  3. In reply to Peter Woit: Thanks for the comment.

    Peter writes:
    —-
    Matt,

    Before raising questions about Tommaso Dorigo’s ethics and expertise (based on information from Lubos Motl?),
    —-

    I am sorry if I seemed to question Dorigo’s ethics. I do not question his ethics, only his wisdom. There is a fine but crucial line between (a) interpreting factually what plots say and mean, at face value, for the public, and (b) stating in public one’s personal opinion. The statement, by an experimentalist, of personal opinion about one’s own experiment is not made purely on the basis of what is visible publicly. It is also based on what other things he or she knows — since I doubt Dorigo, or any other experimentalist, being human, would make a clear statement that they knew to be false. If I were in one of the experiments I would not make a statement that could be so interpreted. But Dorigo is a grown man and can take his chances as he likes.

    I also do not question that Dorigo has the expertise necessary for the blog posts that he writes on interpreting the figures shown by ATLAS and CMS at face value. He’s a perfectly good experimentalist and he knows statistics very well. But Peter, you underestimate the problem here. The reason the Higgs combination group has not produced a combined result is partly because it is very difficult to do. I do question whether Dorigo has the statistical, theoretical and experimental knowledge and information that is available to the Higgs combination group, and that is necessary for properly combining the ATLAS and CMS results. I know that I don’t have it — not even the theoretical knowledge. And I think there could be some nasty surprises before the combination comes out. What if one experiment questions the other’s methods? etc. This is not an easy business.

    Peter writes:
    —-

    you might want to read more carefully what he has to say. I don’t see anywhere comments from him about an unreleased ATLAS + CMS combination. His comment about expecting to see the Higgs in the same mass range as the one Kane mentions is explained by the two careful blog posts he wrote about the separate public ATLAS and CMS combinations, see

    http://www.science20.com/quantum_diaries_survivor/new_atlas_limits_higgs_mass-81880
    and
    http://www.science20.com/quantum_diaries_survivor/new_cms_limits_higgs_mass-81897

    which are about the best source for understanding the situation I know of, and reflect significant expertise in the subject.
    —-

    I agree that for now his posts are as good as you will find *for interpreting the graphs from ATLAS and CMS at face value*. But there is a lot behind those graphs, and the error bars that appear there. I have tried to give some sense for this in my article http://profmattstrassler.com/articles-and-posts/the-higgs-particle/why-the-hints-of-higgs-currently-rest-on-uncertain-ground/ , though I certainly did not try to give the whole story, which is very technical, even to the extent I understand it. None of these subtle issues appears or gets proper discussion in Dorigo’s posts. You’d think this whole thing was just about straightforwardly interpreting a graph or two. It’s not.

    You might ask where my own expertise comes from. I wouldn’t even call it expertise, just knowledge. For one thing, I attended a several-day workshop at the University of Washington in April, which was attended by a number of the key members of the Higgs Combination Group from ATLAS and CMS, and several key theorists with much more expertise than I. A substantial part of the conversation was about the lack of clear theoretical understanding of the uncertainties involved in the analysis of the lepton/anti-lepton/neutrino/anti-neutrino search strategy which currently dominates the Higgs mass range of interest, 120-180 GeV or so. I am also aware that there are experimental challenges for that search. So what is not in Dorigo’s post is the answer to this type of question: what if there is a shift in the theoretical prediction, or the uncertainty in that prediction, for some of the standard model backgrounds; how dramatically could that change the CMS or ATLAS plot? This is a very subtle point, not clear from any statistical argument.
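    A toy counting-experiment sketch of the kind of sensitivity I mean (the event counts and the size of the background shift are invented, chosen only to show how strongly a background shift of plausible size can move an apparent excess):

```python
import math

# Hypothetical counting experiment: one observed count compared with two
# assumed background predictions that differ by a 10% theory shift.
n_obs = 130.0
for b_pred in (100.0, 110.0):
    z = (n_obs - b_pred) / math.sqrt(b_pred)  # crude significance estimate
    print(f"background {b_pred:.0f}: {z:.1f} sigma")
# ~3.0 sigma with the nominal background, only ~1.9 sigma after the shift.
```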

    You may argue that I am overly skeptical, and perhaps I am. For something as important as excluding the Standard Model version of the Higgs boson, perhaps I can be forgiven for being more conservative than Dorigo.

    Peter writes:
    —-
    [Dorigo] concludes that (statistically barely significant) excesses seen around 140 GeV by both experiments are smaller than expected if there is a SM Higgs, while (statistically insignificant) excesses at lower masses are consistent with a SM Higgs. Obviously any such discussion at this point is based on quite weak evidence. He’s not offering to put money down on anything here.
    —-

    I don’t *disagree* with this, again if we take things at face value. This is similar to what John Butterworth says in his article, quoted in my post. Still, there are subtleties that the hurricane has prevented me from catching up on, so I’m unwilling yet to say that I *agree* with it once those subtleties are accounted for. I may yet come to agree, or I may not. I am not convinced by what Dorigo says, or by anyone else whose face-value interpretations I have read. But neither am I sure they are wrong; I am simply expressing no opinion at this point.

    Peter writes:
    —-
    Tommaso, like Jon Butterworth and anyone else working for ATLAS or CMS, has every right to comment publicly on what their understanding of the current situation is, as long as they are not revealing confidential information from their experiment. He deserves encouragement for his efforts to provide accurate analysis and information to the public, not the sort of comments you are making here.
    —-

    I have no objection to this remark. The question is what is revealed by a statement of opinion, given that no one likes to be wrong in public, especially in writing. Even an opinion can reveal information, if it is not carefully stated.

    I do think this is all noise, anyway. When you can have these kinds of debates, it just means you don’t have enough data yet. I’m a lot more patient than some of my colleagues. Thanks for your comment.

    Matt

  4. I don’t think you have to be involved in the combination analysis to be able to put 2 and 2 together from the data released from the various channels. Besides, the experimental effort is so segmented these days that there really are no experts on the whole picture. I talked to an experimentalist involved in one of the channels on ATLAS, and he gets to know the data from the other channels together with the rest of us.

  5. Matt,

    Before raising questions about Tommaso Dorigo’s ethics and expertise (based on information from Lubos Motl?), you might want to read more carefully what he has to say. I don’t see anywhere comments from him about an unreleased ATLAS + CMS combination. His comment about expecting to see the Higgs in the same mass range as the one Kane mentions is explained by the two careful blog posts he wrote about the separate public ATLAS and CMS combinations, see

    http://www.science20.com/quantum_diaries_survivor/new_atlas_limits_higgs_mass-81880
    and
    http://www.science20.com/quantum_diaries_survivor/new_cms_limits_higgs_mass-81897

    which are about the best source for understanding the situation I know of, and reflect significant expertise in the subject. He concludes that (statistically barely significant) excesses seen around 140 GeV by both experiments are smaller than expected if there is a SM Higgs, while (statistically insignificant) excesses at lower masses are consistent with a SM Higgs. Obviously any such discussion at this point is based on quite weak evidence. He’s not offering to put money down on anything here.

    Tommaso, like Jon Butterworth and anyone else working for ATLAS or CMS, has every right to comment publicly on what their understanding of the current situation is, as long as they are not revealing confidential information from their experiment. He deserves encouragement for his efforts to provide accurate analysis and information to the public, not the sort of comments you are making here.

  6. Dear Prof. Strassler,

    thanks very much for your thoughts about the issues I mentioned in my comment.
    Your arguments for why it is better to be cautious concerning the exclusion of the 140 GeV Higgs are plausible to me, and I will read the Higgs-hints article tomorrow 🙂

    1. I have known Gordy Kane for many, many years, and I like him very much. He’s a very good scientist, and a good friend. I have also seen his predictions for supersymmetry switch with the winds… “just this moment, as data rules out the last thing we said, we’ve finally figured out what’s *really* going on, and it’s just out of reach…”. So. He could get lucky this time.

      As for what Phil Gibbs has to say — there is such a thing as digging for gold in a sandbox. Personally I think there’s not enough data yet for the conclusions he draws. For a person not directly involved with the data to informally combine such complex results as the Higgs search is fine, as long as he doesn’t truly believe what he is doing. For him to say that a 140 GeV Higgs is ruled out at this point is … well … look, of course he might be right, but if he turns out to be right it will be for reasons that he can’t possibly know right now.

      More importantly, as I have described in my Higgs-hints article, http://profmattstrassler.com/articles-and-posts/the-higgs-particle/why-the-hints-of-higgs-currently-rest-on-uncertain-ground/ , there are simple two-lepton/two-antilepton and two-photon analyses that don’t yet have enough data, and there are subtle lepton/anti-lepton/invisible-particle searches, both playing an important role in the current limits. It is very possible that the latter search is not being done properly, either because of theory issues or because of experimental issues. Should this turn out to be true, or should arguments ensue about those searches that cannot be resolved, you might have to reduce their weight. If you did that, the limits around 140 GeV would weaken considerably.
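      As a rough toy sketch of what reducing the weight of one channel does (the significances and the weight below are invented numbers, not the real analyses): two roughly independent channels combine approximately in quadrature, so down-weighting the less trusted one can noticeably weaken the combined exclusion.

```python
import math

def combined_significance(z_clean, z_subtle, w_subtle=1.0):
    """Crude quadrature combination of two independent channels,
    with an optional down-weighting of the less trusted one."""
    return math.sqrt(z_clean**2 + (w_subtle * z_subtle)**2)

z_clean, z_subtle = 1.0, 1.8  # invented expected exclusion significances
print(combined_significance(z_clean, z_subtle))        # ~2.1 at full weight
print(combined_significance(z_clean, z_subtle, 0.5))   # ~1.3 if down-weighted
```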

      You can play whatever statistical games you want, but nothing you do will turn insufficient data into knowledge.

      1. Dear Prof Strassler ;-),

        let me just mention that the 2-3-sigma elimination of the 140+ GeV Higgs (by combined ATLAS+CMS) is not just something that Phil Gibbs, a long-time propagator of rumors about a 140 GeV Higgs, chose to conclude based on the talk by E. Gross.

        If you look at Tommaso Dorigo’s interview with Gordon Kane

        http://www.science20.com/quantum_diaries_survivor/gordon_kane_susy_lhc-82028

        you will see that even Dorigo – someone who has tried hard his whole life not to say anything that could be viewed as positive for SUSY 🙂 – says:

        “and I concur with Gordy that the mass will be in the range he quotes [114-128 GeV]”

        So this is an emerging conclusion from the current LHC (combined) data that the Higgs isn’t above 130 GeV, at least not if it is just a SM-like Higgs. Am I misunderstanding something?

        Yours
        LM

        1. Hi Lubos,

          Well, notice we are not getting such information from real experts. Neither Dorigo, nor Gibbs, nor Kane is really an expert here — none of them is involved with the Higgs combination analysis. That isn’t to say that they aren’t right — they might be — but just that we should be cautious in believing them. They are doing the most naive thing in combining the results of the two experiments; but in the region of 120-150 GeV the experiments have correlated systematic errors, so this naive combination is probably wrong.
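          To see why the correlations matter, here is a minimal sketch with made-up numbers: a simple average of two measurements whose uncertainties share a correlated component. The naive sqrt(2) improvement from combining two experiments evaporates as the correlation grows.

```python
import math

def average_uncertainty(sigma1, sigma2, rho):
    """Uncertainty of the simple average (x1 + x2)/2 when the two errors
    have correlation coefficient rho."""
    var = (sigma1**2 + sigma2**2 + 2.0 * rho * sigma1 * sigma2) / 4.0
    return math.sqrt(var)

sigma = 1.0
print(average_uncertainty(sigma, sigma, rho=0.0))  # ~0.71: full sqrt(2) gain
print(average_uncertainty(sigma, sigma, rho=0.5))  # ~0.87: gain partly lost
print(average_uncertainty(sigma, sigma, rho=1.0))  # 1.00: no gain at all
```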

          A better expert would be John Butterworth, from ATLAS, and in this article http://www.guardian.co.uk/science/life-and-physics/2011/aug/22/1
          he says something much more careful:

          There is some tone of disappointment in those reports above, which is understandable. As I wrote here, a month ago ATLAS and CMS both had hints in the data which, while not statistically significant at the time of EPS, could have risen in significance at this meeting. However, the significance did not rise – in fact it dropped a bit. But again, not significantly. This could still all just be statistical noise, in either direction, up or down.

          I wrote about this kind of thing back when I knew the ATLAS results, but before I saw the CMS results, for EPS. At EPS, it became a little more likely that the Higgs was around in the mass region 130 to 150 GeV. At lepton photon it became a little less likely. However, our expectation says that we should have excluded it in this region, and we still haven’t (the update takes the 95% exclusion down to 145 GeV). But none of these likelihoods have been anything we would call significant yet anyway. And still we have said nothing about the region between 115 GeV and 125 GeV. We just have to wait (and work, of course).

          So I think it is a matter of taste right now what one says about the 130-140 region, even just taking the data at face value. Wait for the official results — and even then, one needs to think about them.

          My personal reason for additional caution is that I am not sure that I trust the analysis that looks for lepton/anti-lepton/neutrino/anti-neutrino, which dominates the exclusion limits in that region. I might be too cautious here, and maybe Dorigo et al. are right. But are the standard model backgrounds REALLY understood? especially when one divides the data into subsets with 0 jets and with 1 jet?

          [Meanwhile, why is Dorigo, a member of an LHC experiment, commenting on unreleased ATLAS and CMS combined results on his blog? I would have thought he was banned from making any such remarks.]

    2. For the record, I am not saying at all that the Higgs is ruled out at 140 GeV. I was merely posting about an “illustrative” combination that was shown at a workshop, which apparently ruled it out. In fact it turns out that this combination used a formula which is not as good as the one I have been using. My combination plots still show a healthy signal at 140 GeV but nothing conclusive.

      I am confident that my own combination formulas are good, and the weakest link is the poor quality of some of the original plots, which makes them hard to digitise accurately. These and other errors, such as correlation effects, are much smaller than the statistical errors. In fact there are some features of the official channel combinations that I do not like, and I actually trust mine more. However the current limits are not yet good enough to show exactly where the Higgs lives, and I am not claiming any conclusions stronger than what is shown.

      1. Philip — thanks for the clarification regarding your point of view.

        I am not sure the correlation effects are so small, when it comes to the lepton/antilepton/neutrino/antineutrino search. ATLAS and CMS could easily be suffering from the same problem, either because there is an issue with theory computations (which would affect both of them) or because there is a subtle experimental mistake they might both have made. They could either be overestimating or underestimating their background, and either creating the hints or suppressing a real signal.

        Conversely, I think it would be very interesting to do a pure two-lepton/two-antilepton combination of ATLAS and CMS, which could be carried out without these concerns. I think there is a nice excess around 142 or so…

      2. I also have reservations about the channels that produce neutrinos because they have lower energy resolution. There seems to be a slight conflict between these “low resolution” channels and the “high resolution” channels that would also include the diphoton. The low resolution combinations (mostly WW) narrowly exclude from 140 GeV up (but within combination errors we may only want to count from 150 GeV up). The High resolution combinations (diphoton and ZZ-> 4l) show a good hint of a signal at 140 GeV. However the significance is really not yet good enough for either case.
