Of Particular Significance

“Supersymmetry Dealt a Blow”?

POSTED BY Matt Strassler

ON 11/13/2012

One of the challenges of being a science journalist is conveying not only the content of a new scientific result but also the feel of what it means.  The prominent article in the BBC about the new measurement by the LHCb experiment at the Large Hadron Collider [LHC]  (reported yesterday at the HCP conference in Kyoto — I briefly described this result yesterday) could have been worse.  But it has a couple of real problems characterizing the implications of the new measurement, so I’d like to comment on it.

The measurement is of how often B_s mesons (hadrons containing a bottom quark and a strange anti-quark, or vice versa, along with many quark/anti-quark pairs and gluons) decay to a muon and an anti-muon.  This process (which I described last year — only about one in 300,000,000 B_s mesons decays this way) has three nice features:

  • its rate is rather precisely predicted within the Standard Model;
  • its rate is quite sensitive to new particles and forces, so many speculative theories would change it substantially;
  • the muon/anti-muon final state is relatively easy to detect cleanly.

Yesterday the LHCb experiment reported the first evidence for this process, at a rate that is consistent (but see below) with the prediction of the Standard Model.

The worst thing about the BBC article is the headline, “Supersymmetry theory dealt a blow” (though that’s presumably the editor’s fault, as much as or more than the author’s) and the ensuing prose, “The finding deals a significant blow to the theory of physics known as supersymmetry.”  What’s wrong with it?  It’s certainly true that the measurement means that many variants of supersymmetry (of which there are a vast number) are now inconsistent with what we know about nature.  But what does it mean to say a theory has suffered a blow? and why supersymmetry?

First of all, whatever this new measurement means, there’s rather little scientific reason to single out supersymmetry.  The rough consistency of the measurement with the prediction of the Standard Model is a “blow” (see below) against a wide variety of speculative ideas that introduce new particles and forces.  It would be better simply to say that it is a blow struck for the Standard Model — the model to beat — and not against any speculative idea in particular.  Supersymmetry is by no means the only idea that is now more constrained than before.  The only reason to single it out is sociological — there are an especially large number of zealots who love supersymmetry and an equal number of zealots who hate it.

Now about the word “blow”.  New measurements usually don’t deal blows to ideas, or to a general theory like supersymmetry.  That’s just not what they do.  They might deal blows to individual physicists who might have a very particular idea of exactly which variant of the general idea might be present in nature; certain individuals are surely more disappointed than they were before yesterday.   But typically, great ideas are relatively flexible.  (There are exceptions — the discovery of a Higgs particle was a huge blow to the idea behind “technicolor” — but in my career I’ve seen very few.)  It is better to think of each new measurement as part of a process of cornering a great idea, not striking and injuring it — the way a person looking for treasure might gradually rule out possibilities for where it might be located.

Then there’s the LHCb scientist who is quoted as saying that “Supersymmetry may not be dead but these latest results have certainly put it into hospital”; well…  Aside from the fact that this isn’t accurate scientifically (as John Ellis points out at the end of the article), it’s just not a meaningful or helpful way to think about what’s going on at the LHC.

Remember what happened with the search for the Higgs particle.  In July of last year, a significant step forward took place: across a large fraction of the mass range for the Standard Model Higgs particle, it was shown that no such particle existed.  I remember hearing a bunch of people say that this was evidence against the Standard Model.  But it wasn’t: it was evidence against the Standard Model with a Higgs particle whose mass was in a certain range.  And indeed, when the rest of the range was explored, a Higgs particle (or something very much like it) turned up.  Failure to find one variant of a theory is not evidence against other variants.

If you’re looking for your lost keys, failing to find them in the kitchen, living room and bedroom is not evidence against their being somewhere else in the house.

Similarly, the new result from LHCb is not evidence against supersymmetry.  It is evidence against many variants of supersymmetry.  We learn from it about what types of supersymmetry cannot be true in nature — we know which rooms don’t have your keys.  But this is not evidence against supersymmetry in general — we still don’t know if your keys are elsewhere in the house… and we won’t know until the search is complete.  Nature is what it is — your keys are wherever they are — and the fraction of your search that you’ve completed is not logically related to how likely your search is to be successful.  It may be related to how optimistic you are, but that’s a statement about human psychology, not about scientific knowledge.  The BBC article has confused a blow to the hopes and optimism of supersymmetry zealots with a blow to supersymmetry itself.

It’s also important to understand that despite the fact that some physicists and certainly the science media spend an inordinate amount of time talking about supersymmetry, the particle physics community actually spends a lot of time on other ideas, and also on testing very carefully the hypothesis that the Standard Model is correct.  For good reason, a number of the most interesting results presented so far at the HCP conference, not just the one we’ve been talking about, involve precise tests of the Standard Model.

Now, in a bit more detail, here are a few of the scientific issues surrounding the article.

First, it’s important to notice that the measurement quoted yesterday is still very rough.  Yes, it agrees with the prediction of the Standard Model, but it is hardly a precise measurement yet: speaking broadly, the fraction of B_s mesons that decay to a muon/anti-muon pair is now known to lie somewhere between 1 in 100,000,000 and 1 in 1,000,000,000.  The Standard Model predicts something between 1 in 240,000,000 and 1 in 320,000,000.  So the LHCb measurement and the Standard Model prediction are currently consistent, but a more precise measurement in the future might change that.  Because of this, we should be careful not to draw an overly strong conclusion.  Many variants of supersymmetry and of other speculative ideas will cause a deviation from the Standard Model prediction that is too small for this rough measurement to reveal; if that’s what nature is all about, we’ll only become aware of it in a few years’ time.
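To put the comparison in concrete terms, here is a minimal sketch in Python, using only the rounded “1 in N decays” numbers quoted in the paragraph above (not the experiments’ official figures), that converts those rates into branching fractions and checks that the measured and predicted ranges overlap:

```python
# Convert the rounded "1 in N decays" rates quoted above into branching
# fractions and check whether the measured and predicted ranges overlap.
# These are the numbers from the paragraph above, not official figures.

measured  = (1 / 1_000_000_000, 1 / 100_000_000)  # 1e-9 .. 1e-8
predicted = (1 / 320_000_000, 1 / 240_000_000)    # ~3.1e-9 .. ~4.2e-9

overlap = max(measured[0], predicted[0]) <= min(measured[1], predicted[1])
print(f"measured : {measured[0]:.1e} .. {measured[1]:.1e}")
print(f"predicted: {predicted[0]:.1e} .. {predicted[1]:.1e}")
print("consistent (ranges overlap):", overlap)  # -> True
```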

One serious scientific problem with the article is that it implies

  • that supersymmetry solves the problem of what dark matter is, and
  • that if supersymmetry isn’t found, then physicists have no idea what dark matter might be.

Both of these are just wrong.  Many variants of supersymmetry have at least one proposal as to what dark matter is, but even if supersymmetry is part of nature, none of those proposals may be correct. And even if supersymmetry is not a part of nature, there are plenty of other proposals as to what dark matter might be.  So these issues should not be linked together in the way they are in the BBC article; one should not mistake propaganda (sometimes promulgated by supersymmetry zealots) for reality.

Another point worth remembering is that the biggest “blows against” (cornerings of) supersymmetry so far at the LHC don’t come from the LHCb measurement: they come from

  • the discovery of a Higgs-like particle whose mass of 125 GeV/c² is largely inconsistent with many, many variants of supersymmetry
  • the non-observation so far of any of the superpartner particles at the LHC, effects of which, in many variants of supersymmetry, would have been observed by now

However, though the cornering of supersymmetry is well underway, I still would recommend against thinking about the search for supersymmetry at the LHC as nearly over.  The BBC article has as its main title, “Popular physics theory running out of hiding places”.  Well, I’m afraid it still has plenty of hiding places.  We’re not yet nearing the end; we’re more in the mid-game.  [Note added: there were some new results presented today at the HCP conference which push this game a bit further forward; I’ll try to cover this later in the week.]

One more scientific/linguistic problem: left out of this discussion is the very real possibility that supersymmetry might be part of nature but might not be accessible at the LHC.  The LHC experiments are not testing supersymmetry in general; they are testing the idea that supersymmetry resolves the scientific puzzle known as the hierarchy problem.   The LHC can only hope to rule out this more limited application of supersymmetry.  For instance, to rule out the possibility that supersymmetry is important to quantum gravity, the LHC’s protons would need to be millions of billions of times more energetic than they actually are.  The same statements apply to other general ideas, such as extra dimensions or quark compositeness or hidden valleys.  Is this disappointing? Sure.  But that’s reality, folks; we’re only human, and our tools are limited.  Our knowledge, even after the LHC, will be limited too, and I expect that our children’s children’s children will still be grappling with some of these questions.
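As a rough check of that “millions of billions” figure, here is a back-of-envelope sketch; the two energies are standard approximate values (the Planck energy, the scale relevant to quantum gravity, and the 2012 LHC beam energy), not numbers taken from the article:

```python
# Ratio of the Planck energy (the quantum-gravity scale) to the 2012 LHC
# beam energy; both values are standard approximations.

E_planck_GeV = 1.2e19  # Planck energy, ~1.2 x 10^19 GeV
E_proton_GeV = 4.0e3   # LHC beam energy in 2012, ~4 TeV per proton

print(f"ratio ~ {E_planck_GeV / E_proton_GeV:.0e}")  # ~3e+15: millions (1e6) of billions (1e9)
```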

In any case, supersymmetry isn’t in the hospital; many of its variants — more of them than last week — are just plain dead, while others are still very much alive and healthy.  The same is true of many other speculative theories.  There’s still a long way to go before we’ll really have confidence that the Standard Model correctly predicts all of the phenomena at the LHC.


65 Responses

  1. I would like to pose the following question: if SUSY weren’t found at low energy, would supersymmetry still be relevant for cosmology? Are those separate applications, or are they interdependent?

  2. On a hopeful note, is it still possible that the mu+/mu- decay channel might deviate significantly from the SM prediction, say by a factor of two either way? Or is that much departure from expectation improbable in the light of other data already collected?

  3. You mention that the experimental limits for the fraction of B_s mesons that decay to mu+/mu- pairs are between 1 in 100 million and 1 in 1 billion, with the Standard Model predicting 1 mu+/mu- pair in roughly 240 million decays. Will we have to wait till March for these limits to tighten up, assuming they will continue crunching data till then?

  4. Sometimes there is no need to overcomplicate things. Nature does not do whatever it wants; it is not that rebellious. It does what the field allows it to do. If we keep pushing, we will keep driving it further away.

    1. Yep, nature does what she wants, in a way as simple or as complicated as she wants, and independently of our preferences.
      Like it or not 😉

    1. No, it obviously was not; and there are three types of neutrinos, and all of them have weak nuclear interactions, which the photon does not — making the idea that one of them is the superpartner of the photon completely ludicrous.

      1. The superpartner of the neutrino, a fermion, is supposed to be a boson – how could it have a weak interaction, then? Photons undergo quantum fluctuations in the same way as neutrinos, during which their flavor changes – so photons may demonstrate flavor oscillation as well, from this perspective.

        1. The Higgs is a boson, and it has weak interactions. The W particle is a boson, and it has weak interactions. The pion is a boson, and it has weak interactions. What are we talking about?

          Your statement about photons is wrong. You cannot write equations that will do what your words say… it’s simply impossible.

  5. Hi Matt. Great article. I was just wondering if you could expand on this comment: “But typically, great ideas are relatively flexible.” What exactly do you mean? Are you saying that, for example, SUSY is a great idea because it is flexible? Or that, because it is flexible, it is more likely to be a great idea? I don’t think you’re suggesting that we prefer flexible theories over inflexible ones…

    1. A great idea is usually something that is easy to state, a grand and simple concept. But then the question is how it manifests itself in nature. Typically, there are many options.

      For example: the idea that the information as to how to make a cell might be stored in a chemical code such as DNA is a great idea. But you can imagine a billion ways to carry it out.

      The idea that the universe might be so uniform because it inflated very rapidly in its early history is a great idea. But there are a thousand different ways that inflation might have occurred.

      The idea that the Higgs particle might have a low mass, and that the Higgs field might have a small value, because quantum fluctuations are partially canceled due to superpartners, is a great idea. But there are a million specific ways of carrying out that idea.

      So: great ideas are usually so basic and fundamental that they can work in lots of different ways. Unfortunately, to rule them out, you have to rule out all of those different possibilities — which is no easy task. It’s easier to prove them right, if they are — you just have to find evidence.

      1. I guess the big exception to this principle is General Relativity? Where the combination of the equivalence principle, Minkowski space, and consistency (at the limit) with Newtonian gravitation leaves you just a single possibility for the theory.

  6. Hey Matt! Saw this when Garrett posted on his FB wall. I was one of your undergrad students in Physics 57 at Stanford – great to run across your blog. Looking forward to digging into this post on a plane ride tomorrow. I’ve already threatened to ask Garrett a couple of follow-up questions, and I’m sure I’ll have some for you as well. Hope you’re well!

  7. Garrett, if you know your keys exist and you don’t have them on you, you can search and exclude the entire eastern seashore, and it won’t affect the prior one way or the other.

    1. But you don’t know your keys exist with absolute certainty, it just has a high probability. As you continue to search without finding them, that probability slowly drops, until after searching almost the entire space, the probability that your keys exist is quite low.

      1. The keys can exist only in one place! It may well be that you search the whole house before you look at the right spot and see them. Prof. Strassler is exactly right, period.

  8. Matt, you put this in bold:

    “the fraction of your search that you’ve completed is not logically related to how likely your search is to be successful.”

    If there is a space in which you believe your keys might exist, then the more of that space you search without finding them, the more you should reduce your belief that they’re there. No?

      1. But once you’ve searched the entire space, and not found them, you’d have to reduce your belief that they’re there. Yes? What about after you’ve searched 99.9% of the space?

        1. This isn’t the right way to think about the problem. We searched 99% of the space for the Standard Model Higgs and we found the Higgs in the last 1%. That’s just an accident of how hard it is to find a 125 GeV Higgs. It isn’t a statement about how unlikely a 125 GeV Higgs was to exist.

          The issue is whether the 0.1% that remains after an extensive search is an unlikely location — and the problem is that we humans aren’t very good at determining priors about nature. So it becomes a political debate and not a scientific debate.

          The issue with (classic) technicolor is not about likelihood; it is whether what remains of this idea is 0.1% or 0.0001% or 0%. Since you can’t calculate very well in classic technicolor, it’s hard to conclude this debate.

      2. This is a basic question about how probability works in general though. If you have a large probability (degree of belief) for something being in some range, then as you search that range and don’t find it, you have to adjust the probability downwards that it exists. If the search for the Higgs had continued without it being found, we would have had to lower the probability of there being a Higgs, even if only slightly. Yes? If you have some fancy prior probability distribution over the range, that’s fine — the probability still drops as you search and find nothing. This fact holds whether we’re talking about keys, Higgs, or superparticles.

        1. But that is a fact about a *person*, Garrett. That’s a fact about that person’s belief. Or about a large segment of a community.

          The last time I checked, Nature is not interested in that person’s belief. Nor is a theory. I did not say that supersymmetry zealots might not take the new result as a blow. I happen not to be a supersymmetry zealot, so it’s no blow to me.

          But supersymmetry, the idea? the theory? the possibility that it might be a part of nature? All that’s happened is certain variants of the idea are dead-dead-dead — it’s a fatal blow to them — and other variants are not at all dead — it’s no blow at all to them.

          This is a matter of not being able to distinguish human preference, emotion, hope and disappointment from rigorous scientifically-based knowledge about nature. People are such terrible thinkers sometimes! This is precisely what leads to huge mistakes, such as devoting far too much human-power to one idea. Look at how often in history community preference has been wrong!! (cf. the resistance to Einstein’s notion of the quantum, which lasted almost 20 years — for good, but wrong, reasons.)

      3. That’s right, probabilities are personal. They are the knowledge we have about the world. Which is why it’s so important that we adjust them as rationally as we can. As we gather more information about nature, we should change these probabilities that we have — that’s the basic tenet of science. Nature is what it is, but we don’t have perfect information, so all we can do is adjust our probabilities. Now, for the case at hand, if the probability that you have for superparticles existing in some parameter range is nonzero over the whole range, and some of that range is experimentally excluded, then your total probability for superparticles existing at all should go down. Even if it doesn’t go down in direct proportion to the amount of range excluded, it should still go down a little, even if (as you’ve described) the probability density goes up in the unexcluded range. Yes? If you don’t adjust your probabilities this way, either personally or as a community, I don’t think it’s science.

        1. You’re right, of course. It seems that several people here have no clue about Bayesian statistics. It’s a matter of 3 or 4 lines of algebra to show that if before the measurement there is a finite a priori probability, and after the measurement that portion is excluded, the updated (posterior) probability goes down. But of course, that is only true if, in the universe of outcomes, “it isn’t there” is present.
          If you have 100 chairs in your house, then you can consider two universes:
          – one with one hundred events of the style “my keys are under chair number i”.
          – one with one hundred and one events: the same as before, plus “my keys aren’t under any chair”.

          And depending on the universe you consider, if you’ve been looking under 95 chairs and didn’t find anything, you can conclude:

          – a 100% chance that they must be under one of the 5 remaining chairs (if you’re using the first universe);

          – depending on your a prioris, maybe an 80% chance that, after all, they aren’t under any chair (if you’re working with the second universe).

          So we see again that whether SUSY is “cornered” or whether “SUSY is in hospital” depends on the universe you started with: whether or not, from the start, you considered the possibility that it isn’t there at all 🙂
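The chair arithmetic in the comment above, as a short sketch; the prior p0 that the keys aren’t under any chair is a free parameter, and p0 = 1/6 is just one choice that reproduces the “maybe 80%” figure:

```python
# Bayesian version of the chair example: 100 chairs, 95 searched, nothing found.
# p0 is the prior probability that the keys aren't under any chair (the
# "101st event"); the remaining 1 - p0 is spread uniformly over the 100 chairs.

def posterior_no_chair(p0, chairs=100, searched=95):
    per_chair = (1 - p0) / chairs                 # prior for each individual chair
    unsearched = (chairs - searched) * per_chair  # prior mass still in play
    return p0 / (p0 + unsearched)                 # condition on "not found so far"

print(posterior_no_chair(p0=0.0))  # first universe: 0.0 -- keys MUST be under a remaining chair
print(posterior_no_chair(p0=1/6))  # second universe: 0.8, the "maybe 80%" above
```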

          1. To illustrate with numbers:

            Consider 3 events in the universe:
            A1: SUSY in parameter-space part 1 (the part that has been explored over, say, the past 25 years: LEP, Tevatron, HERA, the early LHC runs…)
            A2: SUSY outside of that space, so in part 2 (inaccessible or not yet explored)
            A0: no SUSY

            Say that the a-priori (guessed) probabilities before anything started being measured (say, 30 years ago) were:
            A1: 70%
            A2: 20%
            A0: 10%

            That is to say, a priori, people thought the probability that SUSY would be discovered at LEP, HERA, the Tevatron, or the early runs of the LHC was 70%. They’d think it would be bad luck, but there was still a 20% chance that SUSY wouldn’t be within reach there. And they’d give a 10% chance that there isn’t any SUSY out there at all. Just an example.

            Now, consider the event B which is: nothing seen in part 1.
            We have of course that P(B | A1) = 0 (assuming competent experimenters) ; P(B | A2) = 1 (excluding fraudulent experimenters) and P(B | A0) = 1 (idem).

            From that one can deduce that P(B) = 30%. That is, the a priori probability people would have guessed that nothing is seen in zone A1 is 30%.

            But now, B is a fact. So using Bayes’ theorem, we find:

            P(A0 | B) = P(B | A0) P(A0) / P(B) = 100% * 10% / 30%

            Which results in P(A0 | B) = 33%

            In other words: 30 years ago, the a priori probability that there wasn’t any SUSY was 10%, and now, with B a fact, it is 33%.

            The numbers are just an illustration.
            Garrett wasn’t saying anything else (and no, I’m not Garrett 😉 ).
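The same update, spelled out in a few lines of Python with the illustrative numbers from the comment above:

```python
# Bayes' theorem with the illustrative numbers above.
prior = {"A1": 0.70, "A2": 0.20, "A0": 0.10}    # guessed 30 years ago
likelihood = {"A1": 0.0, "A2": 1.0, "A0": 1.0}  # P(B | Ai), with B = "nothing seen in part 1"

p_B = sum(likelihood[a] * prior[a] for a in prior)          # = 0.30
posterior_A0 = likelihood["A0"] * prior["A0"] / p_B         # = 0.333...
print(f"P(B) = {p_B:.0%}, P(A0 | B) = {posterior_A0:.0%}")  # P(B) = 30%, P(A0 | B) = 33%
```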

      4. The point is that this probability is irrelevant to the work of validating or invalidating the hypothesis. The only probability that could be of relevance was the probability of the Standard Model being right about the Higgs, but this probability would also just be guesses on our part. To conclude on the Higgs particle, we need to _know_, and to know we must search _all_ of the haystack. You cannot validate a hypothesis on the basis of a human’s guesses about the likelihood of that validity. This would be analogous to the failure of political pundits (on both sides) in the recent US election, where opinionated bias led to predictions outside of what the actual statistical information said.

        Setting a probability on any independent aspect of a theory about nature would be biased one way or another. What you argue, Garrett, is that the “position” of the Higgs isn’t independent, but rather depends on the exclusion of 99% of the range. Matt says that it was independent of this exclusion.

        A theory like the Standard Model is a qualified guess based on previous observations, but it is not a prediction that you can attribute a probability to, as the solution space isn’t finite. You don’t even know how many dimensions it has.

  9. Matt, to reiterate what George says, if you define technicolor in a very strict sense, it did not take the Higgs discovery to kill it; the S-parameter constraint would have been enough. The only sense in which TC could be alive is by interpreting it in a wider context, and in this respect I agree with George: as for supersymmetry, there is a good class of options that are not invalidated by the discovery of the Higgs.

  10. Are the triggers sensitive to the MSSM gluino? E.g., Kane has again said one can expect it to be below 1 TeV, consistent with the other constraints. The LHC data so far are also in no way sufficient, given that the error bands are as large as the signal.
    I see no reason to reach any conclusion until one has addressed such questions. Whether or not the popular press wants to use bad metaphors, and whether or not many scientists want another Dirac moment, nature will do what it wants. Keys are always in the last place one looks. Come to think of it, so was the Higgs.

    1. Your question doesn’t have an answer: triggers fire on what is observed, and gluinos are not observed — their decay products are observed. Whether the trigger fires therefore depends on what the gluino decays to. Similarly, data analysis strategies also depend on what the gluino decays to, so limits on gluino masses are not uniform and depend on how the gluino decays. For some decay modes, gluinos certainly are not constrained up to 1 TeV yet.

  11. You wrote a couple of things which I found very instructive, as one who reads the science media:

    1) the science media spend an inordinate amount of time talking about supersymmetry, the particle physics community actually spends a lot of time on other ideas, and also on testing very carefully the hypothesis that the Standard Model is correct.

    So nice to know!

    2) LHC experiments are not testing supersymmetry in general; they are testing the idea that supersymmetry resolves the scientific puzzle known as the hierarchy problem. The LHC can only hope to rule out this more limited application of supersymmetry. For instance, to rule out the possibility that supersymmetry is important to quantum gravity.

    As quantum gravity is the holy grail, it is also nice to know that LHC experiments can shed some light on this!

    Thanks, Matt.

    1. Regarding 2), reread the paragraph, he says the LHC does *not* have remotely enough energy to test that.

  12. Whoops, I did not know before that people working on supersymmetric theories, or taking them seriously, are “zealots” … 😀

    But I like this article a lot; it is very nice (and much needed!) to observe how such misleading articles, with such mediocre titles (probably chosen just to start flame wars, cause strife, and increase the number of clicks at the expense of the scientific truth), are taken apart step by step, and how things are put back in order for good.

    Very very very well done 🙂

    Cheers

  13. What other ideas are there about what dark matter might be other than supersymmetric particles? Have you written about them here?

    1. Hmm — I don’t think I have written about it. But it is well known that if you have any stable neutral heavy particle, it is a candidate, if it has the right types of interactions (or at least doesn’t have the wrong types). And lots of theories predict such things — in fact you can just add them in to most theories without a problem.

  14. Any updates in Kyoto on the observed CP violation in charm decays? Does this still look like a possible violation of the Standard Model?

  15. Good to see that someone who knows is keeping a finger on the pulse of this nonsense “scientific journalism”. There are a lot out there doing more harm than good to popular science, and it appears that the reason they get hired has more to do with paparazzi-like headlines (like “the blow”) than with really conveying the fundamental ideas to the public.

    1. Well — I don’t want to be overly harsh on Mr. Ghosh. He’s trying his best, clearly. And he was fed hot material by some of the LHCb people (look at the quote he managed to snag) which he then balanced, to a degree, with a quote from John Ellis. But it would have been better to have a centrist talk about the result too.

  16. So supersymmetry is too vague to be dealt a blow by experiments? And if that’s not what you mean, can you say what experimental result would warrant a headline “Supersymmetry dealt a mortal blow.”

    1. I wouldn’t use the word “vague”, but rather “broad”. It can show up in far too many ways to be dealt a clean blow, that is correct.

      I cannot imagine any realistic experiment that, by itself, would warrant a headline “Supersymmetry dealt a mortal blow.” Or even “Supersymmetry as a solution to the hierarchy problem is dealt a mortal blow.” Supersymmetry as a solution to the hierarchy problem, like many other similar ideas (such as extra dimensions), will gradually become less and less believable, but no, it will not vanish in a single night.

      Believe me, I wish it would, so we’d just know. But we can’t get that kind of knowledge so quickly.

  17. If you’re looking for your keys, and you know they often fall out when you sit, then you might suspect they are in your house in the rooms with chairs. When you don’t find them there, the hypothesis that they are in the house at all becomes less promising, because now they might just as likely be somewhere in the great outdoors. If SUSY doesn’t solve the hierarchy problem, it may as likely be halfway to the GUT scale as “just around the corner”.

    1. 🙂 well we can all overdo it on the analogies — but yes, if SUSY doesn’t show up at the LHC, then it is far from clear if or where it might be — and it is way beyond my lifetime in any case, most likely.

  18. Matt, I was with you right up to the point where you singled out technicolor as a counter-example of a general theoretical idea that was dealt a blow by the discovery of a new boson at 125 GeV. To define “technicolor” so narrowly is like equating general low-scale SUSY with 100+ parameters to a minimal model like mSUGRA. In fact, the discovery of a new boson is fine for proponents of electroweak symmetry breaking due to new strong dynamics, because it gives us a new handle on what kinds of strong dynamics are interesting: those that break electroweak symmetry AND have a 125 GeV boson with properties similar to what’s being observed at the LHC. To say those kinds of strongly-coupled theories are not also “technicolor” is to take a rather narrow view of one general theoretical idea, just so you can say it’s been dealt a mortal blow, right at the moment you’re advocating taking a broader view of another. Guess Fox News would call that “Fair and Balanced.”

    1. George — I slightly resent the political tone, and especially the cheap shot regarding Fox News.

      I do define the term “technicolor” narrowly (so as not to confuse it with more general versions of composite Higgs scenarios that do predict a light Higgs particle). I think this is just semantics, but if you have an example of a theory that naturally has a 125 GeV Higgs-like particle and has the classic technicolor physics, whereby the pions of the strong dynamics provide the longitudinal W and Z modes, please enlighten me.

      1. I would define “technicolor” as an asymptotically-free gauge theory which undergoes spontaneous symmetry breaking producing Nambu-Goldstone bosons which become the longitudinal W and Z modes. According to this broad definition, “composite Higgs” scenarios are “technicolor” models since the longitudinal W and Z modes are Nambu-Goldstone bosons. As you said, it’s just semantics. I took the point of your article as one ought to be careful with the semantics before one uses language like “huge blow” because it’s easy to kill a specific model but it’s hard to kill a general theoretical idea. Of course, you were talking about SUSY but I believe the same is true of “technicolor”.

        1. I’m happy to redefine terms as you like. But in most composite-Higgs scenarios that people talk about today, the Nambu-Goldstone modes are perturbative modes in the effective theory, not non-perturbative ones as they are in QCD — there’s a separation of scales. That separation of scales leads to profoundly different phenomenology than does standard technicolor. If you’d like me to put the term “standard” or “classic” in front of technicolor in my critique of the theory, I’m happy to do that. But I really do view the physics of composite Higgs models as qualitatively different from that of classic technicolor — it’s not a minor change in the idea — and I believe that is the widely held view in the community right now.

      2. Not to pile on here, but I have to admit that comment about “Technicolor” gave me a bit of a jolt too, as the “fifth force” paper by Bjorken you recommended a few months ago mooted the idea that an SM-like Higgs might be a manifestation of some deeper theory that matches the predictions of a rather distinct approximate theory, the way the pion matches Yukawa’s strong force carrier but doesn’t in itself provide much insight into QCD.

        I appreciate the clarification about not ruling out composite Higgs models, though I’m interested to know whether any reasonably “Technicolor”-like models can produce a light Higgs.

  19. And to be clear, I wasn’t trying to characterize blog discussions such as yours as “hand-wringing”; I was referring mainly to the media coverage.

  20. From a non-particle physicist’s perspective, I wonder how much of this hand-wringing going on regarding the lack of SUSY and other BSM signals in the LHC data is due simply to mounting frustration at not finding *anything* that might point beyond our current understanding? It seems that every search or every tantalizing hint in recent years has evaporated after further scrutiny. If, on the other hand, the LHC or other experiments (such as the DM detectors) do find something unambiguously BSM, I would suspect that most physicists would turn their energy to studying that phenomenon, and the past frustrations will evaporate. As an interested outsider, I have found myself hopelessly caught up in the emotion and anticipation of new particle physics results (I guess you could call me a particle physics “groupie”), so I can only imagine how much more magnified it is for people whose livelihood and careers are devoted to the field.

    1. A lot of the hand-wringing isn’t hand-wringing at all — it’s gleeful “look they were wrong” silliness. It’s silly because everyone sensible knew that although the Standard Model has serious problems, all the other ideas have serious problems of their own which require work-arounds. (You can look at my own lectures on these subjects; I’ve had no confidence in our most popular speculative ideas for many years.) And it’s silly because it’s too early to know what’s going on — the LHC experiments haven’t even analyzed 5% of the data that they are going to accumulate this decade.

      First things first: we need to know what’s in the LHC data. Is it completely consistent with the Standard Model, or is it not? That’s a technical question. We can’t know what we should feel until we know the answer to this question. And we’re a long way from the answer.

    2. Speaking of those whose livelihoods and careers are not only devoted to, but dependent on, the field: it was pointed out to me that after years of searching and millions of dollars, the Higgs discovery occurred at the 11th hour, when their funding was about to end.
      Informed comments welcomed.

      1. What? The LHC is due to run for many years, and it found the Higgs at an early stage. However, it was the 11th hour in the overall quest for the (standard model version of) Higgs, because most of the possible values of the mass had already been ruled out at other colliders.

      2. My informed comment: that bit about funding is a pretty uninformed statement. The funding was most certainly NOT slated to end — LHC funding has long been planned through the end of the decade, not through 2012. And as I emphasized many times before the Higgs was discovered, there was of course no guarantee that there was a Standard Model Higgs in nature; whatever Higgs or Higgses there were in nature could have been much harder to discover, and it might have taken many years to find what was actually there. See for example http://profmattstrassler.com/2011/12/04/why-10-years-to-be-sure-theres-no-higgs-particles/
