Of Particular Significance

Why Theories Don’t Go Into Hospitals

POSTED BY Matt Strassler

ON 11/14/2012

I’m always amused at how very reasonable remarks so often generate attacks from unreasonable people.  I wrote a perfectly ordinary post about what one does and doesn’t learn from LHCb’s important new measurement at the Large Hadron Collider [LHC] (and in fact I overstated the significance of the result — more on that later), and somehow I touched off a mini-firestorm.  Well, that just indicates how essential it is to have calm people expressing sensible points of view.  When people become so politicized that they can’t distinguish propaganda from science, that’s not good.

Forget supersymmetry — because none of my remarks have anything to do with this theory in particular, and the theory doesn’t deserve the excessive attention it’s getting.  Take any theory: call it theory X.  Extra dimensions; compositeness of quarks and leptons; non-commutative spacetime; grand unification; your-theory-here.  The idea behind theory X may be very clever, but as always, there are many variants of theory X, because an idea is almost never precise enough to permit a unique realization.  Each variant makes definite predictions, but keep in mind that the detailed experimental predictions may very well differ greatly from variant to variant.

Now, here is a logical fact:  one of two options is true.

  • Option A: One variant of theory X is “correct” (its predictions agree with nature) while all other variants are “wrong” (their predictions disagree with nature).
  • Option B: All variants of theory X are wrong.

Nature is what it is; there are no other options (and this is not the place for a discussion about this basic scientific assumption, so pace, please, philosophers). [More precisely, about option A: the space of variants is continuous, so the correct statement is that an arbitrarily small region in this space is correct; you can put in the correct calculus vocabulary as you like.  I’ll stick with the imprecise language for brevity.]

For either option, as more and more data is collected, more and more variants of theory X will become “dead” — excluded because of a disagreement with data.  Therefore — obviously! — a reduction in the number of live (i.e. unexcluded) models always takes place over time.  And this has absolutely no bearing on whether, at the end, all variants of X will be dead, or one (or perhaps several very similar ones) will still be alive.
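To make the counting concrete, here is a toy sketch in Python (entirely made-up numbers: a hypothetical one-parameter theory, a thousand variants, a measurement window that halves with each run; an illustration of the bookkeeping only, not of any real analysis). Variants are points in [0, 1]; each new measurement shrinks the experimentally allowed window around nature’s true value.

import random

# Toy model: variants of "theory X" are points in [0, 1]; successive
# measurements shrink the allowed window around nature's true value.
def live_variant_counts(option_a, n_variants=1000, steps=6):
    variants = [random.random() for _ in range(n_variants)]
    # Option A: nature coincides with one variant; Option B: it does not.
    truth = random.choice(variants) if option_a else random.random()
    window = 0.5  # half-width of the experimentally allowed region
    for step in range(steps):
        live = sum(1 for v in variants if abs(v - truth) <= window)
        print(f"  step {step}: window +/- {window:.3f}, live variants: {live}")
        window /= 2  # each new measurement tightens the allowed region

print("Option A (one variant is correct):")
live_variant_counts(option_a=True)
print("Option B (all variants are wrong):")
live_variant_counts(option_a=False)

Under Option A at least one variant (the correct one) survives every step; under Option B the count heads toward zero. But at any intermediate stage, the two printouts look qualitatively alike: the shrinking count by itself does not distinguish Option A from Option B.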

And thus it makes absolutely no sense to describe, as a “blow to theory X” — in particular, to the idea behind theory X — a measurement that excludes (“kills”) even a big fraction, but not virtually all, of the variants of theory X.  It’s certainly a blow to those variants; in fact, it is a fatal blow for them.  But it does nothing to distinguish between Option A and Option B.  It only tells us that if Option A is true, the variant of X that will be alive at the end is not among the ones that have just been killed.

This isn’t rocket science, folks.  It’s logic.  [Well — as a commenter points out, it’s not “logic” in the strictest sense; but it is basic scientific reasoning.] And if we take theory X to be the Standard Model itself, I’ve just described its history.

The counter-argument?  “Well, but if we know which variants of theory X are more likely, then if we rule out the likely ones, we can say the theory X is now less likely.”

There’s no point in arguing about this claim, because the premise is typically wrong: we do not typically know which variants of theory X are more likely.  Neither general scientific reasoning nor scientific history provides a reliable guide.  It’s not necessarily the simplest ones that are more likely; it’s not necessarily the most symmetric ones; it’s not necessarily the ones that most efficiently resolve the problem that the inventors of the idea intended to solve.  Go back and read the history of the Standard Model, and tell me with a straight face that theorists 40 years ago would have guessed that nature is the way it is.  Actually, many are still alive; you can just ask Steve Weinberg, or Sheldon Glashow, or the Higgs particle inventors, what they thought back then, and whether the variant of the Standard Model that we now find ourselves using was considered likely back in, say, 1974, or 1984.  I know from my own experience that it wasn’t considered likely in 1988, when I started graduate school.

Now, a word about the politics. Supersymmetry is kicked around like a football now, and this ridiculous display is an embarrassment to science.  The theory, specifically as something we would observe at the LHC, was wildly over-promoted by certain people.  And this promotion was strongly resented by certain other people.  I’m not interested in this political debate, and I’m not alone among my theory colleagues, in the United States especially.  I’m interested in nature.  The supersymmetry zealots ought to remember that signs of supersymmetry were quite reasonably expected to show up in the 1980s or 1990s, at the LEP II collider if not before; from many points of view, supersymmetry’s been in increasing question ever since then.  The dark matter argument that bolsters it is full of holes; the grand unification argument that bolsters it is compelling but thin, and need not lead to detectable superpartners at the LHC.  The naturalness argument (cf. the hierarchy problem) that motivates it has been under some suspicion since the 1998 discovery that the universe’s expansion is accelerating.  But meanwhile, the anti-supersymmetry zealots ought to remember that the death (or “hospitalization”?!) of supersymmetry has been declared, by physicists and by the press (and nowadays by bloggers), numerous times already over the past few decades; the latest declaration by the BBC’s reporter is nowhere near the first (see here and here, for instance), and it won’t be the last.  Similarly, classic technicolor, which lacks a Higgs-like particle, was declared dead (or hospitalized) many times over my career, at least as far back as the early 1990s, but it really wasn’t until this July, when a particle was discovered resembling a Standard Model Higgs particle, that the evidence against Higgs-particle-less classic technicolor became very strong (and even now there are people who claim a few variants of classic technicolor are still alive).

In my view, we (scientists) should disregard the politics, and focus on the hard work that actually needs to be done, by experimentalists and by theorists, to make sure we close as many of the remaining loopholes as we can, for as many types of theories as we can.  Certain outsiders who aren’t actually doing anything useful will say we’re wasting our time, and some will say that the Standard Model is “obviously” true, but we have to ignore their absurd and irresponsible statements, and defend our right and our obligation to do our job properly.  And that job is to test the Standard Model at the LHC in every way possible, prodding it and checking it, excluding as many variants of every reasonable theory X as we can think of, until we have squeezed every available drop of information out of the LHC’s data.


73 Responses

  1. Are the Ad Hominem attacks above worthy examples of:
    1. Blogoria to the nth, or
    2. Getting nothing from something?

  2. Boring… That’s not about physics. That’s a kind of indulgence for all those who failed to do the right physics.

    But the fundamental idea of SUSY is still correct: there has to be a kind of relationship (they call it a symmetry) between the electromagnetic and spinorial fields of the electron; otherwise this particle cannot be stable. They tried to use a (super)group-theory approach to establish such a relationship, but it didn’t work. There is nothing to discuss here.

  3. Energy conservation and causality nevertheless prevail, of course, but it requires a changed (or “warped”) metric to do so, and a new understanding of the relation between free and bound electromagnetic energy (E = mc²), and between gravitation, space, and time.

    The Higgs boson acts as a mass scalar which determines the masses of the weak force IVBs (Intermediate Vector Bosons), and subsequently, acting through the IVBs, “gauges” the specific masses of the elementary particles (formalized in the mathematics of the electroweak unification of the “standard model”). We might put this more simply by saying the Higgs boson gauges the mass-energy of the electroweak unified-force symmetric energy state, elementary particles (quarks, leptons) and IVBs included.

    Even though the Higgs may be an attribute of the spacetime metric (acting as the weak force mass scalar), setting the energy scale for the IVBs and by extension for the rest masses of the elementary particles the IVBs produce, this is a one-time high-energy interaction regulating the production of elementary particles; the Higgs field does not continue to interact with particles (as a sort of “ether drag”) to produce their inertial resistance to acceleration. Instead, this role is played by the spacetime metric, interacting with a particle’s gravitational field, an interaction which produces a particle’s inertial resistance to acceleration, and precisely in proportion to its mass.
    This is the concept that a particle’s inertial mass (resistance to acceleration) is entirely due to the interaction of its gravitational field with the metric field of spacetime.

    We must use analogy to gain some level of understanding concerning the unfamiliar concepts of the Higgs field and the Higgs boson (the latter is the quantum unit of the field). We are familiar with the spacetime metric and the photon. The spacetime metric is the low-energy analog of the Higgs field and the photon is the analog of the Higgs boson.
    A special feature of our spacetime or electromagnetic metric is that it will produce a (limited) spectrum of elementary particles (in particle-antiparticle pairs) when it is supplied with sufficient energy. This phenomenon tells us that electromagnetic energy exists in two forms – free (light) and bound (particles). The Higgs metric is a particle metric rather than a dimensional metric, and its symmetries are between particles and forces rather than between dimensions. At the high temperatures of H1, all particles are moving so fast they are essentially the same as photons moving at velocity c. At the H1 energy level, there are no large, empty dimensions: instead, the metric is densely occupied with particles (quarks and leptons); photons cannot move freely under these conditions in any case.

    The weak force “massive IVB” mechanism works because the unified-force symmetric energy states (the energy levels at which the forces join or separate from one another) are discrete, well defined, and invariant. They can therefore be accessed by a quantized high-energy particle (the IVB) whose mass reproduces exactly the necessary unified-force symmetric energy level for a specific transformation.

    As the universe expanded and cooled from the H3 Planck Era, it stepped down to H2, the force-unity symmetry state in which the strong and electroweak forces are unified (the GUT or “Grand Unified Theory” symmetric energy state in which baryons are created and destroyed – the Leptoquark Era resulting in the creation of matter). With further expansion and cooling the universe steps down to the H1 or electroweak force-unity symmetry state (the Hyperon Era under consideration here), in which individual leptons and quarks are created, transformed, and/or destroyed. The final stable ground state of the universe is that of our daily experience, H0 or the electromagnetic ground state (Atomic/Chemical Era) in which only information states (electron shell chemical combinations including living systems) are created and destroyed. (All nuclear reactions/transformations belong to the H1 energy level or higher.)

  4. Spin measurements may be the defining experimental difference?

    /The naturalness argument (cf. the hierarchy problem) that motivates it has been under some suspicion since the 1998 discovery that the universe’s expansion is accelerating./–

    Neutrinos arriving earlier than expected, and the universe’s expansion accelerating: does this mean the speed of light is not the same everywhere in the universe?

    The universe’s expansion had been accelerating for some time. Why did this discovery shock particle physicists?

    If the SM Higgs’s abandoning of “gauge invariance” (electroweak symmetry breaking) is scalar (i.e., has even parity, with spin 0), it disappears without being absorbed by other particles. “If the angular distribution of the stuff it breaks into is totally spherically symmetric, that would be spin zero.”

    The standard model demanded that in the very early hot universe the electromagnetic and weak nuclear forces were one. It was only when the Higgs field emerged a billionth of a second or less after the big bang that “the pair split”, in a cataclysmic transition known as electroweak symmetry breaking. The W and Z bosons grew fat and retreated to subatomic confines (abandoning of “gauge invariance”); the photon, meanwhile, raced away mass-free and the electromagnetic force gained its current infinite range.

    Parity “was” a conserved quantity (layers of energy) at high temperature. Did the LHC achieve this temperature to abandon this conservation of parity? Or is the new particle split something else at low temperature?

    Photons raced away mass-free, but they bend along with the universe and also have different speeds along with the expansion. So will leptons out of confinement behave like photons, bending along with gravity, with spin not conserved?

    1. The neutrino problem has sadly been solved; it turned out to be a loose cable messing up the measurement. The accelerating expansion of the universe does not affect the speed of light through space (since it is space itself that is expanding). So as far as we know, it’s still the same everywhere.

      The discovery that the universe’s expansion is accelerating was a shock for two reasons. Firstly, it’s hard to tell: we know how fast the universe is expanding today, but seeing how fast it expanded in the past is another matter. (If someone has blown up a balloon, it is hard to tell how fast they blew it up.) So it is only recently that we discovered this. The second reason is that it was expected that nothing was driving the expansion. (Nothing is simpler than something, so we went with that.) It was rather surprising to find something there.

      I am not sure about the rest of your post.

      1. If the speed of light through space is the same everywhere, my post reduces to –
        A component of spin can be increased or decreased with “raising” and “lowering” operators, and the change is always in natural units of 1. (This is just a result of the universe having three spatial dimensions; if the answer were any different, the universe would look very different!)

  5. There is an Option C: More than one variant of theory X is “correct.” As an analogy, if you have a finite number of points in the xy-plane, more than one curve will fit them, if your curve-fitting functions have enough degrees of freedom. Likewise, it is conceivable that more than one variant of theory X could agree with the Standard Model at low energies.
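    In code, this curve-fitting analogy might look like the following toy sketch (pure Python, with made-up example polynomials; the point is only that finitely many data points cannot single out one curve):

    # Two distinct cubics that pass through the same three data points.
    def p(x):
        return 2 * x + 1                     # one "theory" that fits the data

    def q(x):
        return p(x) + x * (x - 1) * (x - 2)  # a different theory, same fit

    for x in (0, 1, 2):                      # the finite data set
        print(x, p(x), q(x))                 # identical at every data point
    print(3, p(3), q(3))                     # off the data, they disagree

    Both functions agree at x = 0, 1, 2 but differ everywhere else, just as two variants of theory X might agree with everything measured so far and still differ at energies not yet probed.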

  6. While I agree with the general sentiment, haven’t scientists in the past, when thinking about new theories, taken criteria like “beauty” and “simplicity” into account? If we cannot have a prior on which theories are true, then such heuristic criteria would not be very effective…

  7. No one ever mentions time. As if it is irrelevant? Maybe we should think about this rather than just ponder the physical interactions of infinitesimals. It may have some acute relevance – for myself I am sure of it.

  8. The post is certainly very reasonable. However, it does not do justice to those who never believed in SuSy and were ridiculed for that. It does not do justice to those who did jump off the bandwagon when SuSy was not found two and three decades ago. String theory was the successor. It is a method that may apply in certain situations, no more, no less. But what nonsense we got poured over us from the proponents: “theory of everything”,
    “multiverse”. I can understand why certain people now ask the SuSy proponents for an accounting. The trick: “We move the goal posts, you send the money” has become too obvious.

    1. This comment does an injustice, even an insult, to the whole particle physics community.
      The LHC has not even run at its full energy, only part of the data collected up to now has been analyzed, and the experimental searches are far from over.
      And you already want to push the game-over button. The remark about the money makes me vomit; it is completely off the mark. Science and education (including experimental particle physics and other large physics) is always such a large fraction of the whole budget of every country in the world …
      In the US, high energy physics is already at the edge of vanishing from the screen, so you should really feel ashamed about making such a scornful comment.

      It is exactly people like you that I was talking about in my comment above, people who scare me, and who make me suspect that even if we have the chance, from the technological point of view, to find answers to fundamental questions, we never will, because of people like you 🙁

      1. I find it better explained as: “the big bang splitting of the electroweak forces and the LHC’s abandoning of electroweak symmetry breaking are not of the same logic.”

      2. This reaction is a bit overdone. To attack the messenger rather than discuss the message is an old, proven trick. SuSy does not do what was promised, so modesty and self-reflection from proponents are appropriate. It can still work in principle, but for now the signs are: no SuSy at the LHC. Perhaps it exists only beyond the unification point, as some of the proponents believe. String theory does not reproduce the standard model, and if it does, it is with about as many assumptions as there are parameters in the theory. The multiverse is an unscientific (untestable) concept arrived at after the claim that String theory explains everything. This is unfortunate reasoning; a simpler way out is to consider that it is a tool, useful for some questions, and not for other questions. The multiverse was inspired by the Everett (many worlds) interpretation of quantum mechanics, which has no role in solvable models for the full measurement process, so it can and should be abandoned without any loss.

      3. @Theo Nieuwenhuizen

        Prof. Strassler has, in this and in about 20 articles before, where he had to put back into the right scientific context what the media or zealot bloggers etc. have messed up, beautifully explained what can and cannot be concluded, in accordance with the scientific method, from the data and knowledge we have gained so far concerning SUSY or other new physics. He is a very clear-headed and honest scientist who is interested in nothing but the scientific truth and finding out how nature works at the deepest level. Concerning this reasonable attitude I completely agree with what Prof. Strassler says, as every sane and reasonable person does, and therefore have nothing to add.

        People who come here to disagree with Prof. Strassler about what he very clearly and nicely explains about how science is done in accordance with the scientific method, are obviously zealots who have no clue about how the scientific method works and is applied by real physicists. They are either pro or anti SUSY/new physics zealots as Prof. Strassler calls them.

        It is baffling to observe how quickly the comment sections of such articles, where Prof. Strassler just puts by the public and in the media widely misunderstood things straight and defends the right and the duty of honest and serious physicists to follow the scientific method, are overtaken by tons of crazy Anti-SUSY-new-physics zealots who understand absolutely nothing about what is explained in Prof. Strassler’s articles or even worse, attack and insult him and his colleagues. You are obviously among them.

  9. Concerning the Higgs, most of its mass range had been excluded before it was finally discovered in its last small hiding place.

    I’m happy that the many commenters here who disagree with what Prof. Strassler nicely explained did not have a say about what physicists are allowed to do. Otherwise we would never have discovered the Higgs, because the search for it would have been given up way too early.

    I really hope that such impatient and short-sighted people do not unjustifiably prevent the discovery of additional interesting things by forcing particle physicists to give up too early.

    Most comments I read here are very worrisome …

  10. Dear Prof Strassler:

    I want to go back to your earlier analogy for a moment, you said, “If you’re looking for your lost keys, failing to find them in the kitchen, living room and bedroom is not evidence against their being somewhere else in the house.”

    Just to expand on that a little, let’s imagine a situation where you’ve lost your keys and you think they are either in the kitchen, the living room, the bedroom, or the bathroom, or else the keys are simply lost. Let’s say that you don’t know where they are, but you feel that each possibility is equally likely.

    So when we go to search the first room, there is a 20% chance of finding our keys. However, after a search, no keys turn up. So we go to the second room. What is the chance of finding our keys there? There are only 4 possibilities left, so 25%. Notably, the chance that we’ve lost our keys has also gone up from 20% to 25%. So fast forward: there’s only the bathroom left to search. What’s the chance we’ve lost our keys? Up to 50%.

    Conclusion: In a situation where probabilities can be reasonably assigned or estimated and finding nothing is a possibility, as we carry out a search the probability of finding nothing goes up. This is the Bayesian statistics which others have referenced and of which you are no doubt aware. So I think this is non-controversial and we can all agree on this.

    OK but you say: “[Their counter-argument is] ‘Well, but if we know which variants of theory X are more likely, then if we rule out the likely ones, we can say the theory X is now less likely.’ There’s no point in arguing about this claim, because the premise is typically wrong: we do not typically know which variants of theory X are more likely.”

    In other words, because we can’t assign probabilities, Bayesian statistics don’t work. OK, so let’s go to another example: you’re still looking for your keys; they can still be in the kitchen, the living room, the bedroom, the bathroom, or lost. However, we have no idea how likely they are to be in any of those places; we can’t even guess. We know only that the facts suggest that there is a non-zero chance of them being in any of those places. In other words, we’re only going to look in places where we think we have a chance to find something.

    OK, so we again begin our search. We search the first room and find nothing. What happens? The chance of finding something in the first room has gone from some positive number to zero. We have no reason to believe that any of the remaining outcomes has become more likely than the others as a result of this search. Therefore, the probability of all the other outcomes has gone up by a non-zero amount, including the probability of finding nothing. Fast forward: we’ve searched the whole house except the bathroom. Are you equally confident of finding your keys at this point as at the start of your search? Of course not, because you only have one chance left to find your keys. Additionally, the places you’ve already searched where you had reason to expect your keys turned up nothing. The probability that you’ve lost your keys has gone up since the start of your search; you can’t say by how much, only that it has increased.

    Conclusion: if you’ve been looking for your lost keys for 40 years and searched dozens of rooms without finding anything, and are still as optimistic as on the first day you set out that you will find your keys, you may be a deluded optimist or a superstring theorist 😉
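    A minimal Python sketch of the first example’s arithmetic, assuming the uniform five-way prior stated above (four rooms plus “lost”) and a search that never overlooks keys actually present in the room being searched:

    hypotheses = ["kitchen", "living room", "bedroom", "bathroom", "lost"]
    prob = {h: 1.0 / len(hypotheses) for h in hypotheses}  # 20% each

    for room in ["kitchen", "living room", "bedroom"]:
        prob[room] = 0.0                    # fruitless search rules the room out
        total = sum(prob.values())          # probability mass that survives
        prob = {h: p / total for h, p in prob.items()}  # renormalize (Bayes' rule)
        print(f"after searching the {room}: P(lost) = {prob['lost']:.2f}")

    This prints 0.25, then 0.33, then 0.50: the same numbers as the “fast forward” above.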

  11. Dear Matt,
    I like your articles on physics a lot, and I agree with your point that there are a lot of non-scientific, exaggerated statements around about theories that are “in” or “dead”.
    However, I think in this blog entry you entered into a philosophical debate that mixes things up and is not scientifically productive. The things that are mixed up are (1) LHC results and (2) general theories about (the physics of) nature in a very broad sense.
    The LHC is running now and we are talking about the results of its experiments. In this context, we can only meaningfully consider theories that make predictions about observations (or the lack of observed effects) that are accessible to these experiments. In this restricted sense, the experiments may very well sort out the theories that are in agreement with the experimental results, and those that disagree so strongly that the idea should be discarded.
    On the other side, it makes no sense (in my view) to talk about the LHC experiments and then ponder the broadest range of theories that make no specific prediction that anything observable in the LHC experiments should show up (or not show up).
    There should be a clear idea of what kinds of effects are observable at the LHC, and which set of theories can be tested against them. Some of the current discussions lack such a clear, scientifically developed specification of what to test. This has led to a situation where, only a few months or years ago, some people promised that the LHC would show observable effects for some theory X or Y and be able to “test” the theory; now that some of these effects have not been observed, they do not accept the “negative” test outcome but retreat from the testable range of ideas to purely speculative ones, namely that maybe some variant of the theory is correct but so extremely far away from the energy scales of the LHC that we have no chance to observe any effect in our lifetimes. If a theory makes no prediction that a certain effect should be seen in the LHC experiments, it should be left out of the discussion here.

  12. If a version of theory X is right in nature, all variants of theory X must be partially right, verifiable by some (not all) data.

    If the Y-point in the correct theory X is not a part of all other variants of theory X, then it is no longer a theory X but a new theory Y.

  13. Matt, here’s the application of Bayes’ Theorem in this case:

    P(A when B) = [ P(B when A) / P(B) ] P(A)

    P(A) is your prior probability (degree of belief) that a superparticle exists.
    P(A when B) is your probability of a superparticle existing after a negative experiment.
    P(B when A) is the probability of a negative experiment even though a superparticle exists.
    P(B) is the probability of a negative experiment.

    Summing over the possibilities,
    P(B) = P(B when A) P(A) + P(B when not A) P(not A)
    = P(B when A) P(A) + 1 ( 1 – P(A) )

    My claim is that the factor in brackets,
    [ P(B when A) / ( P(B when A) P(A) + 1 ( 1 – P(A) ) ) ]
    is always less than 1, provided P(A)<1 (you are not a fanatic) and P(B when A)<1.

    This implies that, after an experiment has excluded some range in which a superparticle might have been seen, the likelihood that a superparticle exists decreases.

    Do you now agree?
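    A quick numerical check of this claim in Python (a sketch only, with A = “a superparticle exists”, B = “the experiment comes up negative”, and P(B when not A) set to 1 as above):

    def posterior(p_a, p_b_given_a):
        p_b = p_b_given_a * p_a + 1.0 * (1.0 - p_a)  # total probability of B
        return (p_b_given_a / p_b) * p_a             # Bayes' theorem

    for p_a in (0.10, 0.50, 0.90, 0.99):
        for p_b_given_a in (0.99, 0.80, 0.20):
            print(f"P(A)={p_a:.2f}, P(B|A)={p_b_given_a:.2f} -> "
                  f"P(A|B)={posterior(p_a, p_b_given_a):.4f}")

    The posterior always comes out below the prior, though for a prior near 1 it barely moves until P(B when A) gets small, which is the point made in the replies below.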

    1. Interestingly, you can see why there’s contention here by looking at how the change in probability depends on your prior belief, P(A). For a small P(A), P(A when B) drops in direct proportion to P(B when A), while for a large P(A) it barely drops at all, until P(B when A) gets large.

      1. (Sorry, meant “until P(B when A) gets small.” That’s the probability of not seeing a superparticle when it exists — i.e. how much of the possible range is left.)

      2. Oh great, Motl and I agree on something. Now I know I’m in trouble.

        Matt, care to comment on the Bayesian analysis, and the implication that the likelihood that a superparticle exists decreases with a negative result, even if only a little?

  14. I think you’re missing an aspect of the argument here. Supersymmetry (or any other Theory X) may have a vast parameter space, but often only a subset of those parameters accomplish what people want them to. If you are someone who argues for supersymmetry on the basis of the hierarchy problem, and an experiment excludes SUSY up to 10^16 GeV (blatant fantasy, yes, but it makes the argument easier), then supersymmetry may well still exist, but it certainly does not solve the problem that you set out to solve. _For your purposes_, supersymmetry is in the hospital, even if _a priori_ there might still be SUSY broken at the string scale. Since any scientific theory only exists in terms of an (admittedly changeable) range of purposes, the theory is dead in a completely meaningful sense if it cannot achieve that which its advocates want it to achieve.

    1. Except that the B-meson-to-muons decay measurement doesn’t prove that SUSY can’t solve the hierarchy problem, so your comment isn’t relevant.

      Whether the branching ratio agrees with the Standard Model and whether the parameters of a SUSY model make the light Higgs mass natural are two different questions that are not “overwhelmingly correlated”. Even with light enough superpartners, the branching ratio may also see cancellations between various loop diagrams etc., and even if the chance of such a cancellation agreeing with the inaccurate measurement we have were 1-in-10 or 1-in-100, it’s still not a “punishment” that would allow a fair person to make far-reaching conclusions about the whole natural SUSY framework.

      The continuing absence of new physics at the LHC may be viewed as a path towards proving that the lightness of the Higgs doesn’t have a “dynamical explanation” (and one should perhaps get ready to adopt an anthropic explanation). But if it’s so, and I think we’re not there yet, the LHC is proving this thing not only about SUSY but about any candidate solution to the hierarchy problem. Even outside SUSY model building, attempts to solve the hierarchy problem represent an important subclass of phenomenology because, frankly speaking, the lightness of the Higgs is one of the few hints that may direct people to “predict” what new physics awaits us, if any. There are also model-building efforts based on other ideas than the hierarchy problem but most of them are much less motivated.

        “Except that the B-meson-to-muons decay measurement doesn’t prove that SUSY can’t solve the hierarchy problem, so your comment isn’t relevant.”

        Except that Matt’s post is about the general ability of a theory to be “in a hospital”, not that particular case, so your rebuttal isn’t relevant.

        My comment was that it is quite easy for a theory to be “in the hospital” if it is posited as a means to avoid fine-tuning and the experimental results rule out less fine-tuned regions. You can have a continuous distribution of damage to the utility of the theory, far from the simplicity of the “office/cemetery” binary.

        But yes, the particular case that inspired Matt’s post is not such a situation.

      2. Dear Graviton,

        the fact that I haven’t presented a general argument doesn’t mean that such a general argument doesn’t exist.

        I agree with Matt that theories can’t be in a hospital – in general – for a simple reason: at the end, a well-defined particular theory/model has to be either wrong, or (temporarily) right. So if a solution to the hierarchy problem doesn’t solve it as “fully naturally” as we may have expected, and if there’s some annoying modest fine-tuning left, it may be a hint that the model is wrong, or it may be just a proof of our misplaced or excessive expectations about naturalness (our expectations may deserve a hospital).

        In the first case, a cemetery awaits the model and it will get there because the model must make some predictions that are “indisputably wrong” (because the model is wrong); in the latter case, the theory stays in the office but our expectations how much things should be natural must be adjusted. You know, if there’s a number that’s adjusted to a small value whose probability seems to be just 1 in 100 or 1 in 1,000, it may still happen. It’s not impossible. Such an observation of a residual fine-tuning may be a 3-sigma-equivalent argument against a theory but it can’t get stronger, unlike measurements of well-defined and predictable quantities that may get arbitrarily strong and precise.

        So there are no hospitals for theories – in general. There can only be an undetermined state which is a probabilistic mixture of “office” and “cemetery” but there’s no “hospital” eigenvalue of the truth value in between.

        Best wishes
        Lubos

        1. Ah I see. I don’t tend to be a fan of naturalness arguments, so I was trying to be charitable in assuming that people who use them have good reasons for their expectations. But you’re right, it’s important to differentiate those expectations from the theory proposed itself.

      3. Right. The whole set of “naturalness” expectations may be flawed in some way. But even if they’re right and correctly describing Nature, they’re still just probabilistic arguments. So one may say that within a theory, some combination of values of parameters seems unlikely – the probability of those (or more extreme) values seems to be a small number “p” (like 0.01 or 0.001 for some SUSY or other models on the market). But there’s no way to reduce (make more extreme) the value of “p” further. So such an observation of some fine-tuning (or residual fine-tuning) is always just a “finite amount of an argument” against a theory (like a single 3-sigma bump or deficit), and it’s always possible that some stronger evidence – in principle, evidence of unlimited strength (because bumps may grow with luminosity etc.) – in favor of the theory accumulates, making the probabilistic argument against the theory negligible and irrelevant even if the naturalness philosophy is right (as a probabilistic tool to direct research).

  15. “In my view, we (scientists) should disregard the politics, and focus on the hard work that actually needs to be done, by experimentalists and by theorists, to make sure we close as many of the remaining loopholes as we can, ”

    “for as many types of theories as we can”

    Absolutely: look under all stones if necessary, and forget the political doctrine that prevents this and then has us spend all summer digging around just one stone because of it.

  16. Quite a balancing act to play host to those whose interest is rhetoric, or antagonism, as well as to those whose interest is nature, and its attendant mathematics.

  17. “There’s no point in arguing about this claim, because the premise is typically wrong: we do not typically know which variants of theory X are more likely.”

    You are obviously not consciously a Bayesian. Despite all the difficulties associated with working out the likelihoods of various theories, I’m sure you’ll agree that some sort of prioritization has to take place in the real world. Should we devote resources to testing theory A or theory B? Should you devote your career to studying/testing/advancing theory A or B? Subjective or not, these decisions inherently MUST involve judgements of the likelihood of a theory being true (along with many other factors such as cost, utility, etc). The alternative would be to fund every crackpot notion as well as those more widely accepted theories.

    1. I don’t think that Matt has rejected Bayesian inference in any way. He’s just saying that different realizations of a framework such as supersymmetry are often “comparably motivated”, so it makes no sense to be too sure that one of the versions is more likely.

      And even if one of them were more likely than another version, it doesn’t mean that the less likely version of supersymmetry is impossible. It doesn’t even mean that one should spend less research time with it than with some completely different type of theories.

      It depends on the context but if I were asked to divide SUSY models to 10 distinct subclasses and compare it with 10 classes of typical non-SUSY model building meant to address the observations at the LHC, I would choose any SUSY model over any particular non-SUSY BSM class. So even the 10th most likely realization of SUSY may be more motivated, sensible, or reasonable than some competitors. Your implicit assumption that the “2nd most likely SUSY model or class of SUSY models” is already less likely or less motivated than some completely different models is absolutely unjustified and, as far as I can say, it’s wrong.

      Moreover, this whole focus on SUSY appears because SUSY is the most sensible framework to incorporate many kinds of new signatures that could be seen at the LHC. But the very suggestion that a SM-like branching ratio of a B_s meson only “hurts supersymmetry” is incorrect by itself. It “hurts” (in the same way) pretty much any other class of physics beyond the Standard Model with certain mundane properties. In fact, SUSY still allows the “generically strongly coupled” new particles to be lighter than in pretty much any other framework, so it remains the framework “least constrained” by the LHC data.

  18. I cannot help but notice the parallels between Particle Physics and Genetic Genealogy, which I’ve been pursuing for the last three years. On my first foray into Genetic Genealogy, I had my Y-DNA tested. Y-DNA is carried only by males, is passed relatively unchanged for generations, and can determine the direct paternal lineage’s geographical origin in the distant past. At FamilyTreeDNA, where I tested, they analyze 12, 25, 37, 67, or 111 markers on the Y chromosome, depending on how much you want to spend. The more markers you match with another person, the closer in time you and that person share a common male ancestor. So it’s a bit like the confidence level of identifying a new particle, where a discovery is characterized as being at such and such a Sigma level.

    The testing company confidently predicted I would be in haplogroup R1b, the most common for Western European descended males. So it was a surprise to find I was actually in haplogroup E, which runs at only a few percent in northern Germany, where my paternal line originates. Amazingly, my closest match at 37 markers traced his paternal lineage to a north German village only 6 miles from my paternal lineage’s village. Moreover, we could trace our ancestors back to the same time period – 1685 and 1705 – in these adjacent villages. The combination of a rare haplogroup and proximity in time and space for our respective ancestors made it appear to be a slam dunk. Surely we shared a recent, common ancestor, despite unrelated surnames?

    Wrong! When my match upgraded to the 67 marker level he was 9 markers different from me. We were not recently related, despite initial strong signals hinting at that. The new estimate is something like 500 to 600 years ago for the common ancestor. So the moral of the story is never jump to a premature conclusion in any scientific endeavor, be it particle physics, genetic genealogy, or whatever, until sufficient data is at hand to back your belief or claim.

  19. Greatly appreciate your objective, unbiased point of view, which is guided by actual data and doesn’t close the door completely, or even mostly, on alternative ideas until there is overwhelming evidence against them.

  20. There was a time when learned people thought the world was flat and were prepared to argue this to their dogmatic death. In a way it still is, because we are not in complete possession of all its secrets, so we are not qualified to say it is round just yet. What vaguely frustrates me is the target fixation that the politics of science has. If a pilot were to remain fixated on his target, he would accidentally abandon the reality that he is flying an aeroplane, with a disastrous outcome. I wish there were a part of this political scientific club where other notions could be considered as well as the fashionable ones, which determine whether a scientist can get a research grant or not…

    1. I’m very glad that you are NOT part of this political scientific club that has a say about whether a scientist can get a research grant or not !

      You have obviously not understood a single word of what Prof. Strassler has tried patiently and repeatedly to explain here.
      Because of people like you, he gets constantly distracted from other things he wants or needs to do, since he has to put right, again and again, things many people just refuse to understand.

      Please stop it

      1. Actually I was defending Prof Strassler, and commend him highly – but regrettably you miss the point. And regarding the club: I have been a member, participating in the Bright Euram projects decades ago etc. No offence intended – just that ‘my youth’ has long since moved out of this old shell. My meaning is that ‘generally’ the politics of science is led by popular and fashionable (and not always logical) precepts & theories. In the case of Galileo, he was not part of the club and he solved the problem, leaving the club to change their minds. SS & HB may not actually be the right direction, and in pursuing this direction the correct direction is totally ignored and remains invisible, because someone who may possess such a notion also remains invisible, as will the concealed route of understanding. An easier way to describe it is blinkered vision. It does not matter if SS cannot be validated. What does matter is the fact that we may have wasted time based on an invented notion that maybe it can be? The philosophy of research is just as important as the research itself, because without it how do we know what to research in the first place? I did not realise my small comment was so emotive – I apologise for that.

      2. @ewj9 don’t worry about Dilaton, he appears to be someone who wants to control what people say on Matt Strassler’s blog, despite it not being his, nor is he a professional physicist by training, unlike Peter Woit who has a PhD in particle physics from Princeton.

        I think it’s reasonable to assume that if there was something wrong with your comment then Matt would have said so. It looks to me that Dilaton is just trying to make noise for the sake of seeing his name in print here, as usual.

      3. @ewj9 OK, I see now that you mean no harm, unlike many other stubbornly unreasonable commenters here and below the related posts.
        So I’m sorry that I exploded at you.

        @Dopey_John
        You are right that my comment targeted at ewj9 was not appropriate, but you are quite blatantly wrong about my education 😉

        Cheers

  21. “Supersymmetry’s been in increasing question ever since then.”

    How does that not map onto

    “We can say the theory X is now less likely.”

    It seems to me that by disregarding the particularities of supersymmetry and its history from the start of the post, you are exactly excluding the point that the BBC article was making, if in exaggerated form.

    Only to then turn around and agree that the latest results continue to put question marks behind low energy susy, rather than the exclamation marks that some people were, again, so sure they would have by now.

    Nothing against SUSY, it’s just that the parameter space that made it interesting has mostly been ruled out. Just scientific reasoning.

  22. Hi Matt, I would like to know which remarks caused your upset? I assume they remain in your blog? Also, I wish I could draw you into a serious consideration of my hypothesis. Commencing with the introduction and consideration of quantum relativity …….>

      1. Dear anonymous, I say “sorry” quite often in real life, and if you’re talking about my blog, you will find 295 blog entries containing the word “sorry”.

        http://motls.blogspot.com/search?q=Sorry&m=1&by-date=true

        The last time I said sorry was in the post about the multijets today – when I learned that the background prediction was 6.5 rather than 4, as I originally learned from Matt.

        It’s normal to say sorry and decent people say it when needed. It’s also right not to say sorry when there’s no reason.

  23. I can see what you are saying but not quite sure I fully understand.

    If I were to call my keys ‘nature’ and my house ‘SUSY’, then, having searched and not found my keys in the bedroom, kitchen, or dining room, maybe I begin to think it is more likely that I really left my keys behind at the office (called ‘composite’) or dropped them on the road (called ‘extra dimensions’). Perhaps I’m mistakenly thinking that the continuous parameter space of possibilities should somehow cover all known theories.

    I wonder if most physicists felt disappointment that indications of BSM were not found in the recent results on Bs decay. So maybe the recent news headlines could have said it was ‘a disappointment’ rather than ‘a blow’.

    1. This is the problem with metaphors. The logic you use assumes that the probability of finding your keys in any given area of your house is known — say a 25% chance of them being in the bedroom, a 15% chance in the kitchen, 50% in the large lounge area, etc. Thus as you eliminated each room you would lessen the chances that your keys *were* in your house.

      But we are judging things based on evidence supporting the house as a whole. We have evidence that suggests the keys are in your house (or at least doesn’t prove they are not): say a friend said you left your keys there. The evidence you get by looking applies only to that room, not the house as a whole. As you eliminate rooms you have a better idea of where the keys may be, and you get closer to finding out whether they are or aren’t in your house, but unless you uncover evidence that applies to *all* the rooms in your house (your friend calls up and says no, wait, he was thinking of your hat, not your keys), you can’t change the probability of your keys being in it.

      Once you have searched the *entire* house top to bottom thoroughly, you can be *quite* sure the keys aren’t there. (But hey, maybe you overlooked them repeatedly; it can happen.)

      The best thing would be to find some evidence that supported only one single, specific location (a note you left yourself about leaving your keys under the doormat), which would immediately rule out all the other possibilities. But the LHC is a million-room mansion with keys the size of a grain of sand; we have to look in the dark with a solar-powered torch, and it’s very unlikely indeed that something so drastic will appear.

      The metaphor has now been stretched to breaking point.

      1. @kudzu thanks. I understand now after reading your reply (and Matt’s post once again more carefully).

  24. Aah, here is the very nice, reasonable and much needed reply to some very annoying and wrong comments below the earlier post 🙂

      1. A very good description by Philip Gibbs, which I agree with. Note that, in his description, as you exclude the search space, the probability increases that your particle isn’t in the space. (It does not increase directly in proportion to the excluded space, but it does increase.)

  25. Hi Matt,
    Does a measurement that excludes “virtually all” of the variants of theory X increase the probability that all variants of theory X are wrong? And by “virtually all” can we agree that you mean something like 99%?

    1. No. Imagine that a version of theory X was right. Excluding almost all versions of theory X as wrong would not be taking us closer to discovering that the idea of X was wrong, but closer to discovering that it was right.

      It is quite possible that the story of supersymmetry will be a narrowing down until only the right theory is left, or a narrowing down to nothing. It’s even possible an outstanding new measurement will confirm SS, in an instant disproving all other variants (and many other theories as well).

      1. Matt’s logic only holds true if the variants of theory X are treated as unique theories that have an equal probability of being correct with non-variant theories. This isn’t true. Ruling out a large class of variants of theory X increases the likelihood that the common underlying concept for all these variants is false, and thus that theory X itself is false.
        I have a theory that the moon is made of cheese; variants of this theory posit green cheese, and others that it might be brie. We launch a probe that disintegrates on impact. This rules out brie, but the moon might still be made of a cheese hard enough to destroy our probe. It is more likely, however, that it is not cheese at all.

        1. That’s true; however, in this case most theories (or types of theories) *are* quite independent. The LHC is not telling us about the underlying physics directly, but giving us data that we can then use to work things out.

          Your analogy is apt: we are crashing probes into the moon to see how hard it is, only we’ve just started doing so and can’t rule out any but the softest cheeses. (And the alternative compositions are chalk or marshmallow.)

          What we would dearly love is some evidence relating to the basic workings of SUSY itself (For or against.) which would then affect *all* SUSY theories. (Like sending a probe that measures the actual elementary composition.) All we have at the moment is evidence of the ‘lowest’ sort, that tests the final results of theories. And there are always more ways to tweak a theory to make it fit that.

  26. Actually, as presented, it is neither rocket science nor logic; you use the term “logic” too informally. It is metaphysics: it is about the testing of ideas, it is about beautiful lies. It is not about questioning the epistemic premises upon which they rely and which must also be tested, an inquiry that may (eventually) make it physics. It all has the flavor of flinging enough theory at the wall to see how much of it will stick; it’s the method of desperate decorators, not scientists. Physicists need to return to a time (before 1950) when epistemology was at least one of the things on their radar; then it’ll be logic.

    1. I accept the first part of what you say as a fair critique; I could have been more linguistically and philosophically consistent in my level of formality.

      The second part of your comment I don’t understand. What makes you think that before 1950 things were so much better? And I don’t think I agree with your “desperate decorators vs scientists” analogy. When you have a huge amount of data, and you know that there are many types of new phenomena which could hide in that data, you have a very serious data mining problem — and that is our situation at the LHC. If we knew the best way to do the data mining, that’s what we would do. But we *don’t* know. So we have no choice but to look broadly and thoroughly. The science (and the art) is in being efficient in ruling out hypotheses.

      Perhaps you should, instead, think of us as like FBI officers who’ve received an anonymous tip that there may be a bomb somewhere in the city. How do you deal with that information? There’s no obviously optimal way. The best you can do is look everywhere you can think of, in as systematic a way as you can, given finite time and a finite number of people — even though you know that the bomb may not even be there.

      1. Before 1950, esp. at the advent of quantum mechanics, there was a closer relationship between leading logicians and physicists – with good reason. The epistemic questions were to the fore. The Copenhagen interpretation is the most well-known example of such an inquiry (I’m not advocating it, I simply point to it as an example). Numerous leading physicists engaged in the question. People like Rudolf Carnap and Hans Reichenbach, John von Neumann, etc… exchanged ideas – a notable conversation between Carnap and Einstein comes to mind. Einstein’s work drove many new epistemological questions. Earlier, Mach, Peirce (both Charles and Benjamin) and others put these fundamental questions to the fore.

        Why did this engagement fade? In part because it is intrinsically hard and, in part, because just doing the calculation is easier. Ask an epistemologist what they think of the Higgs mechanism, for example.

        It seems to me, that for all the wonder of modern mathematical physics, it has fallen (over the past 60 years) back into metaphysics (admitted, and even accepted, by some) and the epistemic issues are now widely neglected in physics, their relevance poorly understood. And, further, I believe this is the reason that physics has made little theoretical progress in recent decades (i.e., politics aside, it is “The Trouble With Physics”).

