
At the SEARCH Workshop on the LHC


POSTED BY Matt Strassler

ON 03/18/2012

This weekend I am fortunate to be participating in a very interesting workshop on Large Hadron Collider [LHC] physics, held at the University of Maryland. Called the “SEARCH Workshop”, it was organized by three theoretical particle physicists, Raman Sundrum (University of Maryland), Patrick Meade (SUNY Stony Brook) and Michele Papucci (Lawrence Berkeley Laboratory), and they’ve brought together many theoreticians and experimentalists of all stripes from within the Big Tent of LHC physics. With the exception of a panel discussion at the very end, all of the talks are experimental, from ATLAS and CMS. We’re hearing about all of the major searches that ATLAS and CMS have done at the LHC — starting yesterday with searches for Higgs particles, heavier partners of the top quark and bottom quark, and several variants of supersymmetry — and there’s lots of time for detailed discussion.

I can’t possibly review everything being shown in the talks — ATLAS and CMS have done a huge number of analyses. But I’ll point out a couple that caught my eye that I haven’t specifically talked about in past posts (which include ones here, here, here, here, and here.)

The first part of the workshop involved the search for a lightweight Standard Model Higgs (the simplest possible form of the Higgs particle). In particular, we heard the latest on the hints of a signal of a Standard-Model-like Higgs particle with a mass of 125 GeV/c².

A very interesting result from ATLAS was an application of their two-photon Higgs studies to what is called a “fermiophobic Higgs”. [I am reminded to point out that CMS also has analyzed their data this way; but my point is not directly about the search anyway.] A “fermiophobic Higgs” is a (non-Standard-Model) Higgs particle that does not couple to matter particles as expected (in which case the masses of the matter particles would come from a separate source… a second Higgs field). More generally, this is just a Higgs for which the most common of the ways to produce Standard Model Higgs particles — p p –> H via a collision of two gluons — is suppressed, and the process p p –> q q H (“vector boson fusion”) is the main way to make Higgs particles. [Here “p” stands for proton and “q” for quark; see this article for more details on what these notations mean.] While the rate for p p –> q q H is smaller than the rate for p p –> H, the probability for the Higgs to decay to two photons is increased, with the effect that the signal of such a Higgs at the LHC is not, at first glance, very different from that of a Standard Model Higgs. One main difference is that most of these Higgs particles come with two jets from the two quarks (the q’s in p p –> q q H), which is not true for the Standard Model Higgs. There weren’t enough details given in the talk, so I have to read the paper, but basically what ATLAS showed gives us some indirect insight into what fraction of their Higgs signal is coming from the p p –> q q H process — the same thing that CMS told us back in January and recently updated. I still have to think this through to understand the implications fully, and will let you know when I do.
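(For readers who like numbers: here is a back-of-the-envelope sketch of why the two signals can look so similar at first glance. The cross sections and branching fractions are round, illustrative values of the right magnitude for a 125 GeV Higgs at the 7 TeV LHC; they are my rough inputs, not the numbers ATLAS or CMS actually use.)

```python
# Rough comparison (illustrative numbers only) of the two-photon signal
# rate for a Standard Model Higgs versus a fermiophobic Higgs at 125 GeV.

# Approximate 7 TeV production cross sections, in picobarns (pb):
sigma_ggF = 15.0   # gluon fusion, p p -> H (absent for a fermiophobic Higgs)
sigma_VBF = 1.2    # vector boson fusion, p p -> q q H

# Approximate probabilities for the Higgs to decay to two photons:
br_sm = 0.0023     # Standard Model value
br_fp = 0.018      # enhanced when decays to quarks and leptons are absent
                   # (an assumed, illustrative value)

signal_sm = (sigma_ggF + sigma_VBF) * br_sm
signal_fp = sigma_VBF * br_fp

print(f"SM two-photon signal:           {signal_sm:.3f} pb")
print(f"Fermiophobic two-photon signal: {signal_fp:.3f} pb")
# The two rates come out within a factor of ~2 of each other; the telltale
# difference is that nearly every fermiophobic event carries the two jets
# of p p -> q q H.
```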

In theories that are more complicated than the Standard Model (which has the simplest form of the Higgs particle) there may be several Higgs particles, rather than just one. One or more of them may be electrically charged. And if a charged Higgs particle is sufficiently light (lighter, say, than 150 GeV) then the top quark may sometimes decay to a charged Higgs plus a bottom quark. The Tevatron experiments CDF and DZero looked for signs of this process, in the case that the charged Higgs decays to a tau lepton and an anti-neutrino, and they were able to show that at most 10-15% or so of top quarks decay to a charged Higgs of this type. CMS and ATLAS have now managed to reduce this possibility to 1-2%.

What is especially interesting, for experts, is that CMS and ATLAS are so good at studying hadronically-decaying tau leptons that their best results come from the case when

  • one top decays to a bottom quark and a W particle, which in turn decays to a quark/anti-quark pair, and
  • the other top decays to a bottom quark and a charged Higgs, which in turn decays to a hadronically-decaying tau lepton and an anti-neutrino

which is a case with only jets and hadronically-decaying taus — not at all the case that I, or I suspect almost anyone else in the community, would have guessed would be the most powerful. Traditionally, taus that decay to hadrons are considered somewhat hard to work with, and collisions that produce no leptons or photons or large amounts of “missing energy” (really, missing momentum perpendicular to the proton beams) have been considered the most challenging. That events of this type give the best results is a testament to how wonderfully the ATLAS and CMS detectors are working, and to how capable the experimentalists who built and operate them are. One should also mention the theorists who developed new techniques for defining jets a few years back, which have helped make such measurements easier to carry out.
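(The arithmetic behind that statistical power is simple enough to sketch. The branching fractions below are standard rounded values, and I am using the common search assumption that such a light charged Higgs decays to a tau and a neutrino essentially all of the time.)

```python
# Fraction of t tbar events, with one top decaying to b + W and the other
# to b + charged Higgs, that land in the all-hadronic channel described above.
# Rounded standard branching fractions; BR(H+ -> tau nu) assumed to be ~100%.

br_w_hadronic = 0.67    # W -> quark/anti-quark pair
br_tau_hadronic = 0.65  # tau -> hadrons + neutrino

fraction = br_w_hadronic * br_tau_hadronic
print(f"fraction in the jets-plus-hadronic-tau channel: {fraction:.2f}")
# ~0.44: nearly half of all such events end up in this final state, so if
# hadronically-decaying taus can be identified reliably, the channel wins
# on statistics alone.
```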

[Figure: Extraordinary ATLAS data showing how easily a signal (grey) of top quarks decaying 5% of the time to charged Higgs particles would show up in their analysis using hadronically-decaying taus and W particles decaying to jets. The prediction in the absence of any charged Higgs is the solid black histogram; the data are the black dots. CMS has a similar result (but not as cool a plot).]

At the end of the day there was a very interesting and wide-ranging discussion session, part of which I have to relate because it is relevant to what I was writing about in January on this website. Michelangelo Mangano, one of the leading theorists at the LHC, opened the discussion by arguing that we really needed to think through the Higgs program for the coming years. He suggested three main directions:

  • Considering the advantages of having data from both 7 TeV and 8 TeV proton-proton collisions, and eventually 14 TeV collisions, which can be compared and contrasted to gain new insights — and considering even whether it might be justified to spend a year (not necessarily right away) in which the LHC would quintuple the planned 2012 data at 8 TeV, in order to have more precise information about production of Higgs particles (and other things) at this energy.
  • Studying how to combine different measurements to determine properties of the Higgs to the highest possible precision. (This was also emphasized by other theorists earlier during the day.)
  • Laying out a complete program of measurements of all the ways that the Higgs might decay, including not only those that are predicted by the Standard Model but also those “exotic decays” that can arise in other theories.

We didn’t discuss his first point (which I think most of us have not thought about yet.)  His second point was discussed quite widely, as it needs to be; I think it is uncontroversial, but we have to talk more about the backgrounds to the various measurements.   And on his third point…

Now I’ve written here about the possibility of exotic decays of the Higgs, because I personally think this is a very high priority for 2012 — or more precisely, that it is a really important priority for the data that we collect in 2012 (the data analysis could be done later). And I’ve worried a lot over the past three months about the triggering strategies [i.e. how exactly to select the tiny fraction of the LHC’s collisions that can be permanently stored, one of the great challenges at a machine like the LHC] for such decays. So I mentioned my concerns and my recent studies about triggering, and suggested that in order to help ATLAS and CMS make decisions about how to run their triggers in 2012, some of the theorists in the room ought to do some quick studies that would clarify how one would actually make the measurements of various exotic decays.

This sparked a very lively discussion, in which my point of view was questioned and criticized by a few of the experimentalists in the room, with a couple expressing support and others expressing general doubt. The contrary point of view is that

  • it will be impossible to measure any exotic decays of the Higgs in 2012,
  • even if it is possible, the very inefficient but always reliable triggering strategy of using the processes p p –> W H and p p –> Z H (of which about 30% of the events can be reliably triggered, meaning about 1% of all Higgs particles produced are always stored, no matter how the Higgs decays; the rough arithmetic is sketched below) allows any measurement that is possible in the first place to be made without a change in trigger strategy.
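(To unpack those numbers, here is the rough arithmetic, using round 8 TeV production cross sections of the sort theorists were quoting in 2012 for a 125 GeV Standard Model Higgs. The values are illustrative, not official.)

```python
# Rough estimate of the fraction of all produced Higgs particles that the
# p p -> W H and p p -> Z H lepton triggers would store, using approximate
# 8 TeV cross sections (pb) for a 125 GeV Higgs.

sigma = {
    "ggF (p p -> H)":      19.5,
    "VBF (p p -> q q H)":   1.6,
    "WH  (p p -> W H)":     0.70,
    "ZH  (p p -> Z H)":     0.42,
}

sigma_vh = sigma["WH  (p p -> W H)"] + sigma["ZH  (p p -> Z H)"]
trigger_eff = 0.30  # fraction of W H / Z H events firing the lepton triggers,
                    # roughly set by the leptonic decay rates of the W and Z

stored = sigma_vh * trigger_eff / sum(sigma.values())
print(f"fraction of all Higgs events stored this way: {stored:.1%}")
# ~1.5%: only about one Higgs in a hundred is kept, but it is kept
# no matter how the Higgs itself decays.
```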

Currently the cases for and against this point of view are badly incomplete — there are many required theoretical studies that have not been done. What all agreed was that the theorists in the room who think exotic decays are important need to make a clear case that a change in trigger strategy is actually needed.

Fortunately, the theorists in the room include about twenty who have thought about exotic Higgs decays over the last five years or so… most of them from a generation younger than mine, many of them professors already and very well-known in the field. And I think seeing the questions that the senior experimentalists were raising got their attention. So… on the initiative of a few of them, we’re all meeting today to discuss how to proceed.

This is what happens at good workshops — they serve as an opportunity for an exchange of ideas and a catalyst for important problems to be addressed.


32 Responses

  1. With regard to a fermiophobic Higgs: if there is a Higgs at 125 GeV that is very similar to the SM Higgs in its couplings to W and Z, as suggested by the Tevatron’s evidence for WH and ZH production with H -> b bbar, and perhaps supported by CMS’s evidence for pp -> qqH, but the 125 GeV Higgs does not couple to the top, although it perhaps might couple to down-type quarks: won’t this increase the required size of the Yukawa coupling of the top to the other Higgs, that gives the top its mass, because the other Higgs will have to have a relatively small VEV? How much of an increase in that Yukawa coupling could be tolerated before the model becomes non-perturbative?
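    [A rough illustration of the question being asked: in a two-Higgs-doublet setup where the top mass comes entirely from a second Higgs field with vacuum value v2, and v1² + v2² = (246 GeV)², the required top Yukawa coupling grows as v2 shrinks. The sketch below is schematic; the perturbativity cutoff y ~ sqrt(4π) is one common but convention-dependent choice.]

    ```python
    import math

    v = 246.0      # GeV, total electroweak vacuum value
    m_top = 173.0  # GeV, top quark mass

    # If the top mass comes from a second Higgs field with vacuum value v2,
    # the required Yukawa coupling is y_top = sqrt(2) * m_top / v2.
    for frac in (0.9, 0.7, 0.5, 0.3, 0.1):
        v2 = frac * v
        y_top = math.sqrt(2) * m_top / v2
        note = "  <- strongly coupled by most measures" if y_top > math.sqrt(4 * math.pi) else ""
        print(f"v2 = {v2:6.1f} GeV  ->  y_top = {y_top:5.2f}{note}")
    # In the Standard Model (v2 = v) the top Yukawa is already close to 1,
    # so there is limited headroom before the coupling becomes strong.
    ```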

    1. I don’t think any theorists believe there can be a fermiophobic Higgs that has Standard-Model-like couplings to the W and Z, because of the large top quark mass. It’s an archaic idea. But the experimenters did the analysis, so we may as well get some insights from what they learned. My interest wasn’t in the fermiophobic Higgs itself but in the fact that the way ATLAS did the analysis gives us more insight into how their excess two-photon events are distributed. (Still haven’t had time to look at it carefully.)

  2. Matt, “I still have to think this through to understand the implications fully, and will let you know when I do.”

    Are you able to tell us more on this now? Is the old ATLAS excess for the SM Higgs significantly reduced by this new analysis?

  3. All of the published predictions for possible particles that prove the validity of supersymmetry, which predictions give the most hope for near-term tests?

    1. hmmm… I’m confused by your wording. I can read it either as asking for “predictions… that prove the validity of supersymmetry” — but of course predictions can’t prove something is true — or as asking for predictions for “particles that prove the validity of supersymmetry” — but again, particles don’t prove something is true either. Could you rephrase your question with some care?

    2. David,

      “Check Lisa Randalls talk at the Moriond 2012 Higgs summary conference. Many people believe that the top is composite these days – even susy advocates.”

      http://indico.in2p3.fr/getFile.py/access?contribId=103&sessionId=8&resId=0&materialId=slides&confId=6001

      You may need a PhD in theoretical physics though, 🙂

      PS: Prof Strassler, is it plausible that the repeating collapsing of electromagnetic solitons, instabilities of vacuum and possibly the instabilities of supersymmetry, create an attractive field, quantum gravity?

      1. What are electromagnetic solitons? I know what solitons are. I know how electromagnetism works, in detail. The theory of quantum electrodynamics that fits data so beautifully is not known for solitons, much less collapsing ones. I’ve heard this language from non-experts only. Can you provide a reference from a legitimate scientific source?

        It would be hard for me even to begin answering your question until these terms are carefully defined.

      2. I apologize for the vague language but I am not a physicist but I am very eager for your opinion on this conjecture of defining the “first’ field of nature.

        At the instant of the sudden release of free energy, Big Bang, the only way to expand is to create space and since a process did arise you will have time and therefore a spacetime domain. The three dimensional space arises because that is only that is required to define space, simplicity of nature. I don’t know if it is valid to call it a field yet but the free energy will interact and sphere is of denser energy packets arise, again, because it is the simplest solution (outcome). Hence a growing matrix of these spherical energy packets occurs and at some point will stabilize in size, radius, and this is what I am calling a soliton. It is the coupling between adjacent solitons that define the “particle”. Give space the characteristics we define as particles, Dirac’s mass equations?

        This scenario must be very unstable and the spherical packets should collapse, from a well defined sphere with a higher density “wall” it collapse on itself due to the interaction from the adjacent packets and hence creating an attractive force. Then the same interaction the adjacent packets the space will either disintegrate, fuse in the adjacent packets, or build up again into a new spherical packet.

        This scenario, creates a field, an attractive force, a geometrical symmetry (since space will expand there must be a point of stability to support the expansion). This scenario also give and energy equation of the simplest form, E= f(x,y,z,t).

        Is this the fundamental mechanism which every other field is derived from? Could this be the gravity field, so fundamental that it will require physics well below Planck’s scale to describe it?

        Again I apologize for my vague language but idea is been haunting me since I started reading about the Higgs and SUSY and could both be explained with one very simple mechanism.

        1. Ok, let’s see if we can walk before we run.

          “I don’t know if it is valid to call it a field yet but the free energy will interact and sphere is of denser energy packets arise, again, because it is the simplest solution (outcome). Hence a growing matrix of these spherical energy packets occurs and at some point will stabilize in size, radius, and this is what I am calling a soliton.”

          So let’s look at when you say “the free energy will interact and sphere is of denser energy packets arise, again, because it is the simplest solution.”

          First, the English is problematic; I can’t entirely understand what you are saying because the grammar is off.

          Second, my question, when you say “it is the simplest solution”, is “why?”

          Since everything that follows depends upon this assumption, let’s see if we can get somewhere by getting this part straight.

  4. Matt, I just want to re-affirm Mike’s praise for your blog. I too am a retired NASA scientist who has been enlivened by this exciting adventure at CERN. Thank you.

  5. Professor, thanks for giving us a taste of what it’s like to be “in the room.” This excites me as much as going to the moon did, and I’m gratified that this adventure permits such intimate and open sharing of the behind-the-scenes decision-making.

  6. It would be nice if we could come up with a perfect triggering strategy. If we cannot at this moment, could we make another catching net to catch a very small percentage (say, only 500 per day) of those abandoned events through random selection?

    1. Starting? If you’ve been reading my blog, you know that the most serious and cautious scientists remain unsure. No one will be surprised if current hints are real; few people will be surprised if current hints go away. If you were here at this workshop you’d hear how cautious people are. Unfortunately none of the other bloggers on particle physics are here to listen in.

      1. Please don’t get me wrong, I wish them nothing but the best of luck. I am getting up there in years and I too am eager for some answers.

        I also recommended last November that they should not increase the energy levels beyond 7 TeV until the machine settles down to very repeatable parameters, across the board. Maybe even 7 TeV is too high?

        When we did qualification testing for the space shuttle components, we would first sort things out at about 1/8 the levels before going to full energy. It gave us a chance to check out our test and measurement schemes and allow the test unit to settle down to minimum displacements.

        It will be more important to focus on the machine for a couple more years, or we could be looking at ghosts for the duration.

        1. Well, I wouldn’t worry about that. The 7 TeV machine has worked beautifully. The quality of the data is extraordinary; the only reason we have any hints of a possible Higgs right now is that the machine and the detectors have worked as well as could have been hoped. The big issue at the LHC since 2010 hasn’t really been energy, but collision rate. And that has increased gradually since early 2010 from a few hundred collisions per second to the current hundreds of millions per second.

          The accelerator people are very confident in the 8 TeV step. I think they’ve earned our confidence with how well things went last year. That said, any step forward has its risks.

  7. Another interesting article, which I’m sure if I read carefully I could begin to understand.

    Could I ask a question about basic definitions?

    Does “SM Higgs” simply mean the same as a “complex scalar doublet” field transforming under SU(2)?

    So is a fermiophobic Higgs different in nature from an SM Higgs?

    Is an SM Higgs by definition the only Higgs, or can a SM Higgs be one of a family of Higgs?

    1. The Standard Model has one and only one Higgs field, and that field does everything: it gives mass to the W and Z particles and also provides the masses for all the matter particles. [Yes, it is a complex scalar doublet.] And the Standard Model has no particles other than the ones we already know about.

      Anything that isn’t called the “Standard Model” is, by definition, more complicated. So yes, the fermiophobic Higgs is different from a Standard Model Higgs, and a theory with a fermiophobic Higgs is more complicated than the Standard Model.

      You can read about the Standard Model Higgs here

      http://profmattstrassler.com/articles-and-posts/the-higgs-particle/the-standard-model-higgs/

      and about a few of the many examples of more complicated models here (the implications are out of date but the models described are not.)

      http://profmattstrassler.com/articles-and-posts/the-higgs-particle/implications-of-higgs-searches-as-of-92011/

  8. Nice talks… I searched the low-mass Higgs search talks from CMS and ATLAS for the word `blind’ and didn’t find a single occurrence. Is any portion of these searches done blind? How are possible subconscious biases handled, such as a drift of the two experiments toward agreement, or toward getting exciting PR, commentary, and hence future support?

    The significance of the ATLAS H -> gamma gamma search seems to be 1.5 sigma (slide 12/35 of Brelier’s talk) when the look-elsewhere effect is accounted for… although the conclusion slide omits the look-elsewhere effect. The significance of the CMS H -> gamma gamma search seems to be 1.6 sigma. The CMS summary slide (33/42 of DeRoeck) gives a global significance of 0.8 sigma for a Higgs excess around 125 GeV.

    Doesn’t seem to be extraordinary evidence! Is it an extraordinary claim or just an ordinary claim, though?

    1. Nobody from ATLAS and CMS is making extraordinary claims. You can see that in the statements in the concluding slides. Generally most people here BOTH take the possibility that the Higgs is at 125 GeV, AND the possibility that it is not, quite seriously.

      However, regarding the numbers you noted: some of the look-elsewhere effect estimates are VERY conservative, so the evidence is not as bad as those numbers would indicate. But we won’t become confident as a community without a lot more data.

      1. But would a claim of a Higgs at 125 GeV be ordinary or extraordinary?

        Including the look-elsewhere effect is merely accurate… it accounts for the probability that a fluctuation is possible *anywhere* in the search region. Conservative? I disagree. Merely accurate, given that a search region was used. If a priori everyone had agreed to look only at 125 GeV and nowhere else, then there would be no need for the look-elsewhere correction.

        Guess what! It is amazing! I saw license plate 2F7758B on the road today! Isn’t that incredible! That is probably 5 sigma! But whoops… the probability that I saw *some* license plate today was unity. Maybe I should have `looked elsewhere’ instead of at the plate.

        Seriously, it is not at all unlikely that a fluctuation would occur *somewhere* in the search region.
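        [A toy version of the trials-factor arithmetic being debated here: converting a “local” significance into a “global” one. The local significance and the number N of effectively independent mass windows below are made-up illustrative inputs; in a real analysis N is estimated from the mass resolution and the width of the search range.]

        ```python
        from scipy.stats import norm

        z_local = 2.8  # local significance in sigma (illustrative)
        N = 20         # assumed number of independent places a bump could appear

        p_local = norm.sf(z_local)            # one-sided local p-value
        p_global = 1.0 - (1.0 - p_local)**N   # chance of a fluctuation *somewhere*
        z_global = norm.isf(p_global)         # convert back to sigma

        print(f"local:  p = {p_local:.2e} ({z_local:.1f} sigma)")
        print(f"global: p = {p_global:.2e} ({z_global:.1f} sigma)")
        # With N = 20, a 2.8 sigma local excess deflates to about 1.6 sigma
        # globally; the answer depends entirely on the choice of N.
        ```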

        That ATLAS and CMS see nearby fluctuations around 125 GeV does seem curious… but… were these blind analyses? Was there concern for subconscious bias? Seems not, at first blush.

          1. You don’t need to explain the look-elsewhere effect to me.

          Note I did not say that one should not include the look-elsewhere effect. It is widely agreed among my colleagues that one should. However, when one digs a little deeper, one discovers all sorts of ambiguities as to how to implement it. For instance, the chance that you saw 2F7758B today is very low; the chance that you saw a license plate with two sevens in it was quite a bit higher; the chance that you saw a license plate with two of the same digits in it is higher still; well, where do you stop? The chance that the LHC experiments, in one of their thousands of measurements, would see a 3.6 sigma fluctuation is unity. But is that relevant to the Higgs search, or not?

          My point was that (I think) one of the numbers you quoted was a conservative evaluation of the look-elsewhere effect… probably too conservative.

          In any case, as long as we’re still debating this, discovery hasn’t yet occurred.

      2. Perhaps the 0.8 sigma is conservative to some… but I do think that had there been a bump anywhere in the 100-600 GeV mass range, there would have been excitement. If one quotes the LEE for a more restricted range like 110-145 GeV, then one is saying: had there been a bump at 150 GeV, or 250 GeV, or 490 GeV, everyone would have said `that is not interesting’.

        I can’t agree that a bump at 490 GeV would have been dismissed by the community.

        Still wondering: how do CMS and ATLAS take into account subconscious bias? Is there any blinding? Particle physics has a long history of false bumps generating excitement, from the split A2’s of Bogdan Maglich to somebody’s `High Y’ anomaly to the `Oops-Leon’ and the Crystal Ball Higgs… did anyone mention that UA1 discovered the top quark and SUSY in the 1980’s?

        And is a Higgs discovery ordinary or extraordinary? I guess the evidence required changes a little.

        1. First: since a Standard Model Higgs is completely ruled out, at well over 99% confidence, over a large range (from just above 150 GeV to around 500 GeV or so) by previous data, does it really make sense to include this range in the look-elsewhere effect? I don’t think so; that’s an artificial reduction.

          Second: suppose we ignore everything except the photon and ZZ* analyses. I think we would agree that most of the other searches are pretty dicey. Well, the photon searches (which drive most of the excess right now) are only sensitive up to the 150 GeV range or so (actually rather less). Should we really include regions where no one was looking for the Higgs?

          I think you’re being too conservative.

          Regarding false bumps: EVERYONE with any experience mentions the false top quark and SUSY and the split A2s and High Y’s and Oops-Leons. I’ve mentioned them on this site. And the 17 keV neutrino and the fifth force and the ultra-millisecond pulsar and of course the current W + two jets anomaly. Why these are not sufficient to generate caution is something you will have to ask less cautious people, such as certain bloggers. Within the community — for instance, within this workshop — I generally see experts being very patient and conservative — and VERY worried about whether they could be suffering from unconscious bias.

          No one is overly worried yet, however, because so much more data is coming. And there are indeed some discussions about how to do blinding, where possible, with the next data set.

      3. The conservatism and knowledge of past flubs sounds great and appropriate… thanks Matt for mentioning it. Although you did post a plot on the neutrino speed from bad actors of the 1980’s fiascos involving UA1. For myself the ICARUS result adds exactly nothing to careful, thoughtful discussion.

        Actually, the conservative aspect of the discussions doesn’t get covered sufficiently. Careful evaluation of the LEE is really the only trace of skepticism that rises to the surface of the discussion.

        As to the width of the search region appropriate for the LEE… I think it is clearly wrong to look only at the region covered by one technique at a time. On the other hand, a priori information can and should weigh in… our version of the Monty Hall problem (the Monty Higgs?).

        But if 0.8 sigma is too pessimistic, quoting the local probability is surely wildly optimistic.

  9. Matt — if the recent hints from the Tevatron are correct, then they’ve seen the Higgs decaying to b-bbar, which would kill the fermiophobic idea. Alas, it’s only 2 sigma and not likely to improve much. I’d be curious to know if the LHC has any chance of seeing H –> b-bbar or tau pairs during the next year (I doubt it, but am not sure). This assumes it is at 125 GeV, of course.

