As soon as the discovery of that famous new particle was announced at the Large Hadron Collider [LHC] last year, there were already very good reasons to think it was a Higgs particle of some type. I described them to you back then, as part of my “Higgs Discovery” series. But, as I cautioned, those arguments relied partly on data and partly on theoretical reasoning.
Over the past nine months, with additional data collected through December and analyzed through the present day, the evidence has become more and more convincing that this particle behaves very much like a Higgs particle, along the lines I described following the Edinburgh conference back in January. One by one the doubters have been giving up, and few remain. This is a Higgs particle. That’s my point of view (see last week’s post — you heard it here first), the point of view of most experts I talk to [in a conference I’m currently attending, not one person out of about forty theorists and experimenters has dissented], and now the official point of view of the CERN laboratory which hosts the LHC.
Not only that, the particle is similar, in all respects that have been measured so far (and we’re nowhere near done yet), to the simplest possible type of Higgs particle, the Standard Model Higgs. It is therefore natural to call this a Standard Model-like Higgs particle, shifting the “-like” over a step. That wording emphasizes that although confidence is very high that this is a Higgs particle, we do not have confidence that it is a Standard Model Higgs, even though it resembles one. This is for two reasons.
First, with the data currently available, the measurements are not precise enough to rule out deviations from a Standard Model Higgs as large as 10%, 20%, or even as much as 50% in certain of the particle’s properties.
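To see why there is so much room, consider a toy example (the central value and uncertainty below are invented for illustration, not actual ATLAS or CMS results): a “signal strength” of μ = 1 means the particle is produced and decays at exactly the Standard Model rate, and a 20% measurement uncertainty leaves a wide range of possibilities consistent with the data.

```python
# Toy illustration with invented numbers (NOT real LHC measurements):
# a signal strength mu = 1.0 means "exactly the Standard Model rate".
mu, sigma = 1.0, 0.20          # assumed measurement and 1-sigma uncertainty
z95 = 1.96                     # two-sided 95% confidence multiplier

lo, hi = mu - z95 * sigma, mu + z95 * sigma
print(f"95% CL allowed range for mu: [{lo:.2f}, {hi:.2f}]")
# A true rate anywhere in that range -- deviations of tens of percent --
# would be indistinguishable from the Standard Model at this precision.
```

With these assumed numbers the allowed range runs from roughly 0.6 to 1.4, which is why deviations of tens of percent cannot yet be excluded.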
Second, many interesting speculative theories, despite being dramatically different from the Standard Model, nevertheless predict nature will exhibit a Standard Model-like Higgs particle — one that may be distinguishable from a true Standard Model Higgs only after the LHC has gathered much more data.
This happens in so many theories because of something called the decoupling theorem, which I mentioned in this article and this article; it demonstrates that even when the Standard Model is not the complete theory of physics at the LHC, Standard Model-like Higgs particles can arise in many different ways. Thus the fact that this Higgs particle is Standard Model-like, though obviously strong evidence against many theories that would have predicted something else, is not compelling evidence in favor of the Standard Model.
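Concretely, the decoupling theorem implies a simple scaling (a schematic estimate, not the prediction of any specific model; the coefficient $c$ is model-dependent and typically of order one):

```latex
\frac{g_{h}}{g_{h}^{\mathrm{SM}}} \;\simeq\; 1 + c\,\frac{v^2}{M^2}\,,
\qquad v \approx 246~\mathrm{GeV}\,,
```

where $M$ is the mass scale of the new particles. For $M$ near 1 TeV, the couplings of the light Higgs deviate from their Standard Model values by only a few percent, well below current measurement precision.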
So when particle theorists like me wonder, “Does the Standard Model describe all phenomena at the LHC?”, we know the answer will not come easily.
84 thoughts on “From “Higgs-like Particle” to “Standard Model-like Higgs””
So there are many “SM-like” theories with “SM-like” Higgs particles, and in the end we do not know which one we are dealing with, but it is certainly a Higgs particle.
No, actually there are many theories that are NOT very SM-like but have an SM-like Higgs particle anyway. An SM-like Higgs might mean the SM is right, or that an SM-like theory is right, or that a rather non-SM-like theory is right. That’s what makes it all so tricky.
Of course no matter how the SM has to be adjusted, it certainly works beautifully below a couple of hundred GeV in energy, and for some types of processes, it works well up to energies 10 times higher. Still, there could be particles that aren’t affected by the strong nuclear and/or electromagnetic forces that we could have missed, and some of these could be very lightweight indeed.
I’d call it just Higgs. Even though there is some chance that we may be wrong about something I think we can live with it. Native americans are still called Indians after all and nobody has problem with that.
I hope you’re joking (hard to tell…) since I believe that most Native Americans, and many non-native Americans, DO have a problem with that. They are also not fond of being told that “Columbus discovered America”, which many textbooks still say.
As for the name — the long term name, I personally believe, will be the “H particle”. Most particles have Greek names (electron), English names (charm), or Greek or Roman letter names (W and Z, muon, tau) and are not named after people. Names do shift and settle after particles are discovered, and I suspect that the younger generations will use H, which recalls Higgs but does not so clearly do a disservice to Englert and Brout and to Guralnik, Hagen and Kibble. Well, I guess Hagen wouldn’t mind.
By ‘nobody has problem’ I meant ‘everybody understands’. I was not intending to discuss issues of political correctness.
If you mention Columbus, why is the continent still called America anyway? Isn’t that the same story?
Fair enough. Sometimes names do stick.
So has Marcela Carena slid into a deep dark depression over all this?? We both know that a decoupled Higgs is likely beyond the ability of the LHC to distinguish from a vanilla SM Higgs….
Tesla would be looking pretty good right now….
The last link is broken; there is an extra “http:” in it…
Hi Professor Strassler,
Exciting times, although I’m disappointed about the photon-photon signal fading away. I know things are flying about fast and furious now, but I’d like to request a future topic.
At my science museum I often demonstrate the Meissner effect with a high-temperature superconductor. I was very excited, then, when I read Frank Wilczek’s book The Lightness of Being, where he compares the Higgs field to a superconductor. However, I found his discussion unable to answer some of my questions. Could you fill in the blanks?
First, I think I understand in a general way how an “ordinary” high-temperature superconductor causes a photon to move as if it had mass. But Wilczek describes only a small effect, adding only a small mass to photons and slowing them down just a little. Why, then, is the resistance of a superconductor exactly zero, and not just slightly reduced, once it is cooled below the critical temperature?
Second, I see the connection to the weak force and the Higgs, describing how W and Z particles obtain mass. But can this analogy be extended at all to leptons and quarks? If so, how? I’ve looked for descriptions of this, but have come up empty. Since the Meissner effect is something I can actually show, it gives me a possible avenue to link the Higgs field to something tangible for people who visit science museums. Unfortunately, for these people W and Z particles are just as esoteric as the Higgs. But their own bodies . . .
Any help you can offer would be appreciated.
Steve — this is an excellent set of questions and requests. It’s not entirely obvious how to link all of these things together intelligibly, and I haven’t thought about how to do it in a way that would be convincing. I’m not sure that there *is* a process in a superconductor that can be made analogous to what happens to lepton and quark masses; I’ve never heard of one, but perhaps one of my colleagues who are more expert on superconductivity can suggest one. But your point that there is an opportunity here to engage the public in science museums is very interesting.
I will put this (along with a few other challenges) on my agenda of issues deserving of better pedagogical exposition. It is quite possible I won’t come up with anything for a few months; this doesn’t sound too easy, and it needs to be done well if it is going to work. Feel free to send me a reminder note in a few months if you do not hear from me (and/or you do not see an article about it appear on this website.)
You might be aware that there is ALSO a link between the superconductivity in Type II superconductors (where you have magnetic flux tubes) and the confinement of quarks and gluons inside of protons and neutrons. So actually there are multiple opportunities here…
I can only speak by generalization. QGP used in context of superconductor. Are we not questioning the relevance here by using the idea of viscosity.
http://physics.usc.edu/%7Ejohnson1/pt_johnson0510.pdf (“What black holes teach about strongly coupled particles”)
In a sense, particle decay chains necessitate a realization toward what has driven perspective with regard to correlated states of correspondence, with discovery. This has in essence been talked about in terms of what QGP indicates since 2005. This was a leading perspective, while settling decision QGP in 2010, not as a gas, but as a fluid.
try this link sorry…
“… many interesting speculative theories, despite being dramatically different from the Standard Model, nevertheless predict nature will exhibit a Standard Model-like Higgs particle — one that may be distinguishable from a true Standard Model Higgs only after the LHC has gathered much more data.”
A very fair statement.
To what extent is “SM-like” valid? I mean, can we be 100% sure that it is an SM Higgs without the “-like”, or will we always remain with the possibility that another theory is the reality-corresponding one, even if the SM matches observations?
You have to precisely measure the coupling between this Higgs and the W boson. If the coupling is smaller than the SM value, you know that the W is getting a fraction of its mass from another Higgs scalar…
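Schematically, in a model with two Higgs doublets (an illustration only; the vevs $v_1$, $v_2$ are generic labels, and mixing between the scalars would modify the second relation):

```latex
m_W^2 \;=\; \frac{g^2}{4}\left(v_1^2 + v_2^2\right),
\qquad
\frac{g_{hWW}}{g_{hWW}^{\mathrm{SM}}} \;=\; \frac{v_1}{\sqrt{v_1^2 + v_2^2}}
```

for an $h$ residing entirely in the first doublet: if a second scalar carries part of the W mass, the observed Higgs cannot saturate the Standard Model hWW coupling.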
My second point (see the other reply) is that this is correct, though incomplete. There are many other measurements that could go awry. Any confirmed deviation from *any* of the Standard Model’s predictions invalidates the theory. All parameters in the Standard Model are now measured, so all predictions are definitive (up to the level of accuracy that current computational techniques allow), and thus even a single thing going awry, at high statistical confidence and confirmed by two or more experiments, will invalidate the Standard Model. Conversely, to validate the model we have to check that EVERYTHING we measure is consistent with the theory. So far, that’s the case, but not everything we can and want to measure with 2011-2012 data has been measured yet.
Yep, but you know you are missing something if the LHC Higgs does not saturate the SM HWW coupling, and that was my point…
My recollection is that the effective gamma-Z-H vertex is the most sensitive to new physics. The problem is that the LHC can’t pin down BRs very well, and even the ratio BR(gamma Z)/BR(ZZ) will be systematics-limited before it gets really interesting…
We’re not disagreeing; I was just amplifying what you’d said. There are other processes that you’re forgetting about that can also be extremely sensitive to new phenomena.
Two points to make here. Let me make the general one in this reply.
No, one will never be certain that a theoretical structure like the Standard Model is correct. One can only (a) show it is false by finding data that disagrees with its predictions, or (b) show that it is consistent with all known data.
Given this, when does one stop? The answer is: each experiment has an end, and at that point a community of experts will evaluate what they know and decide what makes the most sense. Experiments end when they are no longer able to much improve their sensitivity to new phenomena, which happens soon after they are no longer able to increase the rate with which they gather data or improve the methods that they use. When the LHC reaches this stage, we will have to evaluate the information we have and decide where we are. (We’re nowhere near that point — at least eight years off, probably ten, possibly twenty if the decision is made to increase the data rate of the machine… but there will be a gradual diminishing of returns if nothing new is found in the coming decade.)
During that decade, it may easily happen that something in LHC data shows that the Standard Model is wrong. To show that the Standard Model is consistent with all the LHC data is much, much harder, but it will be done to the best of our ability. At some point during or after that process, if no new particles are discovered (which would mean that a great variety of alternatives to the Standard Model are known to be false), a wide consensus may start to emerge about the plausibility that the Standard Model is correct as far as LHC physics is concerned. (Remember, as I mentioned yesterday, we know the Standard Model is incomplete, since it doesn’t describe things like dark matter; but it might describe everything at the LHC.)
But given that (i) there are many speculative theories that predict a Standard Model-like Higgs particle, and would not yet have been seen, and (ii) the LHC has taken less than 10% of its data and at only 60% of its maximum collision energy, the experiment is far from its end, and it is far too early to draw conclusions that we can only start to draw as the experiment approaches its era of diminishing returns.
Well, we know the SM is incomplete (no gravity), so even if this particle is a perfect match for the SM Higgs, eventually the SM will need to be replaced. The qualifier will likely be dropped, and it will be “a Higgs” or “the H particle”. A bit like how we don’t normally have to distinguish between Newtonian mass and relativistic mass.
The link to the second decoupling post cited seems to be broken.
Thanks; it is fixed now.
Entia non sunt multiplicanda sine necessitate.
Would dear William of Occam have felt it necessary to introduce BSM effects at this point?
Many people misunderstand and misinterpret Occam’s razor, just as you are doing here.
When you have clear data and you have two possible interpretations of it, choose the simpler one. That’s a good use of Occam’s razor.
When you have unclear data and you have two possible guesses as to how reality is going to turn out, DO NOT CHOOSE. DO NOT USE OCCAM’S RAZOR. Collect more data and find out from nature.
It has many times been the case that the simpler guess was right. It has many times been the case that the simpler guess was wrong. It has often turned out that what people thought was the simpler guess actually wasn’t when the full situation was understood. Occam’s razor in the second context is not a reliable guide, and its use represents a conceptual error.
Furthermore, and even more important, the Standard Model is much more radical than most of the BSM alternatives. It may look simpler, but it is far more troubling scientifically. Stay tuned for an article on that.
At least one Higgs was needed to formulate a renormalizable theory where meaningful calculations could be made (with the infinities cancelling).
For the same problem low-energy SUSY was proposed since it was felt that merely cancelling infinities through renormalization is unsatisfactory because of what is called the Hierarchy problem.
The Hierarchy problem is a theoretical prejudice and SUSY and SM are two solutions to the cancelling of infinities.
I think it is fair to say that Occam’s razor would have chosen the SM as opposed to SUSY for all the data we have had — not only at the LHC but even decades before the LHC. It is really a triumph of Occam’s razor over the hierarchy problem. Now we have to accept 1% fine-tuning of the EW scale, and maybe even 0.1% in a few more years.
I’m sorry, but as a professional in this field, I can’t allow you to make statements like this on my website and mislead my readers. These statements are all incorrect. You need to go back and study from one of the masters.
(a) The addition of one Higgs is not about renormalizability; it is about unitarity, which is much more serious. You can add interactions to replace the Higgs, as in technicolor. Even if you take the renormalizable Standard Model, perturbative unitarity will break down if you raise the Higgs mass too far… renormalizability survives but calculations break down anyway.
(b) The problem with the hierarchy problem and the attempt to use supersymmetry to fix it has nothing to do with infinities. (If you do not trust me, look at the talk yesterday by Nathan Seiberg at the Aspen Winter conference, in which he specifically reminded his listeners that this is a misconception. He noted that Weinberg made this point first, back in the 70s. Similar points were made by Nima Arkani-Hamed in his talk.) It has to do with the fact that the effective potential for the Higgs field is sensitive to the ultraviolet parameters of the theory. The infinities are a technical sign of this problem, but even if you use a finite theory with no infinities at all, the sensitivity to the ultraviolet parameters will remain.
(c) Neither the Standard Model nor Supersymmetry cancels infinities, nor were they intended to do so. There are infinities in both (in naive perturbation theory) and in neither (when renormalization is properly understood). The Standard Model without a Higgs and without additional interactions is inconsistent. The Standard Model is consistent, but the Higgs effective potential has strong sensitivity to ultraviolet parameters. Supersymmetry at the TeV energy scale is suggested as a way to remove the sensitivity to ultraviolet parameters.
(d) You are misusing Occam’s razor. Occam’s razor would NOT suggest that a theory with ultraviolet sensitivity at one part in 10^32 is better than a theory with a few extra particles and parameters. By that argument you would have said there should be no observable distortions in the cosmic microwave background, there should be no second and third generation of fermions, etc. Occam’s razor is to select between two theories that predict the same physics; i.e., take Newton over Ptolemy even though both give the same results. It is not intended to select between two theories that predict different physics; you use experiment for that.
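The ultraviolet sensitivity described in (b) and (c) can be made concrete with the standard one-loop estimate (schematic; here $\Lambda$ stands for the scale at which new physics modifies the theory, and $y_t$ is the top-quark Yukawa coupling):

```latex
\delta m_H^2 \;\sim\; -\,\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2\,,
```

so the physical $(125~\mathrm{GeV})^2$ is the sum of a bare term and corrections that grow quadratically with $\Lambda$; keeping the Higgs light then requires a delicate cancellation between the two. In supersymmetry, top-squark loops contribute with the opposite sign and cancel the $\Lambda^2$ piece, which is why TeV-scale superpartners were proposed; note this is a statement about sensitivity to ultraviolet parameters, not about infinities.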
I was hoping you’d point out that William didn’t write that quote… a more accurate quote is:
Frustra fit per plura, quod potest fieri per pauciora.
Flavor changing currents, monojets, anomalous single photons, unexpected rates in rare decays, inconsistency in the unitarity triangle, large direct CP violation, large D0-D0bar mixing, and then a whole host of missing energy/leptons/jets etc… all were signals strongly advocated (oh yes, I remember well, having been there) as evidence of SUSY and/or BSM physics. Did I forget to mention direct or indirect detection of dark matter?
Not one has panned out. Maybe g-2 is a bit off, but that evidence is not nearly as convincing to experimentalists as the Lamb Shift was, or atomic parity violation, or polarized electron scattering.
Either Occam quote has some sense of `need’ or `doing with fewer’. At this point in history, from an empirical perspective, there isn’t much need for BSM physics. Whoops, did I forget neutrino mass/mixing? I did, didn’t I! OK, a great case for double beta decay, neutrino mass probes in cosmology, LBNE, KATRIN, etc. But I also remember attending many, many talks on the search for neutrino mass/mixing, I remember seeing Ray Davis derided in seminars as a kook by Very Serious theorists in the 1970’s, and also eminent experimentalists.
Sure, maybe there is strong *theoretical* need for BSM physics. Great! But the theory community seems to forget that their record over the past 30 years hasn’t been great. Perhaps the theorists have Ray Davis’ fortitude, in which case, hooray. But I knew Ray Davis, Ray Davis was a friend of mine, and today’s theorists don’t remind me much of him.
Nonetheless, the 2015 LHC run could be the one that sees very clear BSM effects. Wonderful if it does, terrific.
As you’ll see once I manage to write the article, the “need” for Beyond-Standard-Model (BSM) physics is far greater than you suppose. It’s a very subtle but crucial point. The Standard Model is arguably the most radical theory ever taken seriously as a theory of nature… much more radical than supersymmetry or extra dimensional theories. So arguably, even if you tried to apply Occam’s razor in this case, you would choose one of the non-Standard-Model theories.
More radical than Newton’s laws? Of course Newton never met Maxwell, how could Newton build in relativity…
But you are missing my main point…. theorists over the past 30 years have been surprisingly unsuccessful at designating where deviations from a SM at attainable energies might show up. Maybe because they don’t show up at all at attainable energies. Or maybe because theorists have lost connection with true experimental capabilities.
If it turns out that the SM is radical at energies that will never in practice be probed empirically, who cares? File that displeasure with the SM away with `how many angels can dance on the head of a pin’.
Maybe more effort needs to go into edgy experimental work. Hilarious to hear Lenny dis the holometer recently… as if Lenny has ever suggested an experimentally useful innovation. I’m no big fan of the holometer, but jeez, the holometer is far less kooky than string theory of the past 20 years.
If and when BSM effects (beyond neutrinos) break out in experimental work, the experimenters are likely to be dissed as badly as Ray Davis was.
A different sort of “radical” than Newton, I think… but it’s a good question.
As for your first (“main”) point: the line in the sand is that deviations SHOULD show up at the TeV scale, because if they don’t, the theory becomes very radical. That’s because a light spin-0 particle with nothing protecting its mass from quantum corrections has never turned up in nature before (whereas ALL previously discovered particles DO have physical effects protecting their masses from large corrections), and so we’re dealing with completely uncharted territory. The SM with nothing else is radical AT the TeV scale, and we’re probing that scale now. So when you say “If it turns out that the SM is radical at energies that will never in practice be probed empirically” — sorry, we’re there already. If the LHC turns up nothing additional, that’s extremely radical.
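The “protection” issue can be put in rough numbers. Here is a back-of-envelope sketch (my own illustration, using the standard one-loop top-quark estimate; the choice of y_t and the cutoff values are assumptions, and the “tuning” measure is only a crude proxy):

```python
import math

# Crude estimate of Higgs-mass fine-tuning (illustrative assumptions):
# one-loop top contribution  delta_mH^2 ~ (3 y_t^2 / 8 pi^2) * Lambda^2,
# where Lambda is the scale of whatever new physics enters.
# "Tuning" = mH^2 / delta_mH^2: how precisely the bare parameter must be
# adjusted against the correction to keep the Higgs light.

def tuning(Lambda, mH=125.0, y_t=1.0):
    """Rough fine-tuning fraction for a cutoff Lambda (all in GeV)."""
    delta_mH2 = 3.0 * y_t**2 / (8.0 * math.pi**2) * Lambda**2
    return mH**2 / delta_mH2

for L in (1_000.0, 10_000.0, 100_000.0):   # 1, 10, 100 TeV
    print(f"Lambda = {L/1000:>5.0f} TeV -> tuning ~ {tuning(L):.1%}")
```

With these assumptions, new physics at 1 TeV requires no delicate cancellation at all, at 10 TeV the cancellation is already at the fraction-of-a-percent level, and it worsens quadratically from there; a Planck-scale cutoff gives the one-part-in-10^32 figure mentioned above.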
But when you say “Maybe more effort needs to go into edgy experimental work”, there we agree. If over time the LHC rules out more and more options, and the Standard Model becomes more and more plausible, radical as it is, then we will have to widen our minds very greatly in terms of the types of experiments we ought to do. It’s already time to start that kind of thinking, just in case. I will say more about this in the promised article.
Radical, really? If I had, well, $10,000 for every time I’d heard a theorist over the last 40 years swear up and down that new effects were just around the corner (no doubt, just go look: you’ll find anomalous single photons, monojets, FCNC, unitarity-triangle mismatch, direct dark matter, missing energy/leptons/jets, rising cross sections, etc., etc.), I’d be a multimillionaire.
If nothing is found at the next LHC run you’ll just find a new reason to say it is not that radical and not that surprising. The SUSY endeavor will just find a new epicycle to add. And you’ll never again mention the wrong statements you made once upon a time.
I’ve seen it 100 times or so. Maybe the SUSY scale is, say, 10^10 GeV….
I’m sorry, but you are so sure of yourself, and so patronizing about it, that you aren’t listening. I’m saying something completely different, and you’re arguing with a point that I’m not making.
a) I can’t speak for other theorists. I personally have never made any statements similar to the ones you are referring to. You are invited to look through my papers and my talks, and find any one place that I have ever made any statement like this.
b) Nowhere in my replies to you did I tell you that new effects were just around the corner. What I said is that if new effects do not appear, we are in a very radical situation, one we have never been in before — that the Standard Model by itself, with nothing else, is extremely radical. That does not mean anything else will show up; nature may be very radically different from what theorists currently expect. And if new effects do not show up at the LHC, I do not have any idea where they will show up, or even which experiments to do.
c) Your statement “If nothing is found at the next LHC run you’ll just find a new reason to say it is not that radical and not that surprising” is defamatory, and false. I will do no such thing. I never said anything like it before the LHC started; I only started to say it after the Higgs was discovered at 125 GeV and nothing else was yet found, and I will not change my tune after the LHC. If you understood the physics I’m referring to, you’d understand why it is so difficult to move this line in the sand.
d) Yes, maybe the SUSY scale is 10^10 GeV. Have I contradicted this statement in my replies to you? Can you find any of my papers where I insisted this was not the case?
Please stop projecting other people’s technical, philosophical and conceptual limitations onto me. I don’t appreciate that kind of stereotyping.
“the line in the sand is that deviations SHOULD show up at the TeV scale, because if they don’t, the theory becomes very radical.” – your post, not my words.
OK, I’m not sure what `very radical’ means as a scientific term. Are you saying that if the SM describes all physics seen in the future at the LHC that would be 5-sigma off from some predefined prediction you have made?
If not, what does “very radical” mean? Seems to me it is an empty sociological term. “Radical Republicans” was the name for folks 150 years ago who advocated the end of slavery and equality among the races. Emma Goldman in her time was radical for advocating free dissemination of information on birth control.
Neither is very radical today, but Maxwell’s equations of the 1860s, and the general relativity developed when Emma Goldman was deported, mean just about the same thing as they meant back then.
Sociological terms like `radical’ aren’t really applicable to hard core science.
But maybe you are really not trying to do hard core science, just sell books or web site visits.
More seriously: theoretical particle physics has largely been a flop since the SM itself was developed. The prediction of the Higgs mass was great. The prediction of the top quark mass was checkered… I saw that go from 20 GeV to 175 GeV bit by bit. But predictions of non-SM effects by the theoretical community have been a thumping failure.
Meanwhile some theorists even proposed centers, well funded, to teach LHC experimentalists how to discover new physics. My goodness.
We agree on this: guidance provided by theoretical particle physics over the last few decades is at risk of being a complete failure. ***And I have been saying this in my talks for the last decade.*** (Correction: 15 years, since the discovery that the universe is accelerating.)
“Very radical” is indeed not a scientific term; this is a public outreach website, not a rigorous scientific website, and I try to write using words that my readers can understand. But here’s what I mean by very radical: when the luminiferous ether turned out not to be detectable, it was very radical, because nobody knew what it meant, and it violated basic intuition about what people thought waves and fields were. That’s where we are if the Standard Model is right; it violates basic intuition about how quantum field theory works, and nobody knows what it means.
As for my words, “the line in the sand is that deviations SHOULD show up at the TeV scale, because if they don’t, the theory becomes very radical”: you are right, the word “should” is misplaced. That is shorthand for “would be expected (based on everything we know from theory and previous experiment about quantum field theory)”. Mea culpa. I do not mean “MUST”. I mean “WOULD BE EXPECTED, UNLESS SOMETHING PROFOUND WE THOUGHT WE KNEW IS WRONG”. And right now there is a good chance (as I say in my talks, which you obviously never bother to go to) that indeed, something profound that we thought we knew (and had good reason, from theory AND experiment, to think we knew) is wrong. I don’t know what it is.
“But maybe you are really not trying to do hard core science, just sell books or web site visits.” Now you show your true colors as a human being. An extremely low personal blow, made in total ignorance. Why don’t you reveal your name? Are you afraid of what others will think of you?
“Meanwhile some theorists even proposed centers, well funded, to teach LHC experimentalists how to discover new physics. My goodness.” Again you show your true colors. In contrast to your undifferentiated scorn for all theorists, I have the highest respect for my experimental colleagues. We — experimenters and theorists at Rutgers and other places — proposed a theory-experiment Center to promote better communication, collaboration and understanding between theorists and experimentalists. That includes calculating backgrounds which (I remind you) experimentalists regularly request theorists calculate, because they improve the precision and sensitivity of the measurements. Without a Center, it was impossible to get the funding to support the development of further U.S. expertise in Standard Model backgrounds. So congratulations, you got your wish: theorists (and their experimental colleagues) didn’t get the funding to train more young theorists to help (not “teach”) experimenters to search for new physics. And so the US program is weaker as a result. Well done, Dark Halo; well done. You have indeed darkened this community.
Forgot to say… not much that’s radical in this recent talk…
Don’t believe everything you read.
`true colors as a human being?’ …overreact much?
`undifferentiated scorn?’…I complimented the Higgs mass estimate for sure. But the proof of the pudding is in the eating, and there has not been one accurate prediction of BSM physics since… well… before the SM had the Z0 and W’s in it. And nothing, I mean nothing I say remotely approaches the scorn heaped on Ray Davis by the leaders of the theory community. I saw it firsthand.
`calculating background’… between the cancellation of the SSC and the start of the LHC, the US theory community *chose* not to work on computing LHC backgrounds. Europe was a bit different. So the US setback was a free choice of the US theory community.
`reveal your name’… do you send letters to the reviewers of your papers requesting them to reveal their names?
Perhaps you are ignorant of the long string of setbacks and cancellations the US experimental particle physics community has suffered, starting with the SSC, but including RSVP, BTeV, DUSEL, SLAC becoming BES, etc. The theory community largely didn’t help (the SSC is a notable exception), and in a few cases was actively hostile. We’ll see how LBNE goes, but boy, a lot of scorn has been already heaped on that by the theory community.
See if you can find Luis Alvarez’s essay, circa 1971, on the theory/experiment divide. He was right. And the advancement of science has suffered as a result.
Also… I said “*maybe*” your goal is to sell books/website visits. Maybe not… I don’t know. But the self-aggrandizement of prominent theorists in the book world (Greene, Randall, Susskind, Not Even Wrong, etc.) is a dismal development. Lederman too, but at least he is funny.
1) Stop the personal insults and disgusting insinuations…
2) Or reveal your name… (you’re not a referee of a paper, you’re on a public website delivering abuse to its host and hiding from the consequences by abusing your anonymity — this is called “cowardice”)
3) Or you’re banned from this website for violating basic norms of internet exchange, and all past and future comments that you have made will be deleted.
That goes for all users of this site. You’re one of the worst I’ve had on this site so far, but you aren’t the first and won’t be the last to be told to take a hike.
Meanwhile, do you really think you’re doing a valuable service with these remarks? Let me remind you of the term “bigot”: a person who collects people of a category together without regard for how they differ from one another. I personally am innocent of most of your charges. I stood up in public at Snowmass 2001 and called for more resources to be given to those who work on Standard Model backgrounds. I tried to found a theory-experiment center that would allow me the resources needed to hire, support and train such people at my institution. I tried to attract such people as faculty and postdocs. At every institution where I have been employed, since I was a student, I have worked to improve the communication and collaboration between theorists and experimentalists, and I invite you to talk to the LHC experimentalists at Washington and Rutgers, to learn their views on whether I was a net scientific benefit to them. I did my best to change the things you complain about; I failed, due partly to people like you, who held stereotypes, didn’t listen, lumped me into categories I didn’t belong in, and dismissed everything I had to say without giving it any serious consideration. I do not deserve to be abused in this way by anyone, and certainly not by an anonymous author.
Matt… great. My apologies for unjustly lumping you in with the herd of the US theoretical community of the past 30 years… there are certainly other exceptions I’m aware of, quite a few in fact. Somehow my path hasn’t crossed yours. I don’t remember you speaking in the Snowmass town hall meeting in 2001; I certainly stood up and said exactly what I have posted here, however.
Whatever you want to call me, the US experimental particle physics community is a shambles; Fermilab hangs by a thread and SLAC is gone. It is likely that all underground experiments will move overseas, commencing next year. Ban me for saying that and it is a badge of honor.
Professor Matt Strassler,
I heard (about 5 years ago) UCSD physicist Kim Griest say that the Higgs field has a value of one trillion tons per cc, all over the universe. Can you please comment on that?
thanks and regards
What that number means is that if you wanted to turn the Higgs field OFF, so that its value was zero, that’s roughly (within a few orders of magnitude) how much energy you would need. Actually I didn’t check the number, but I have no reason to disbelieve Griest. I can check it later if you want… it would take five minutes.
So is it fair to conclude that there always remains the possibility that some unknown theory is the correct one, meaning the one corresponding to reality as it is, no matter how well our (maybe correct) theory matches the data?
Hello Matt, as I understand it, the confidence level for declaring properties of the SM-like Higgs depends on a certain number of events being captured by experiments such as CMS and ATLAS.
What I’m curious about is what the actual numbers are. I read somewhere that one in a trillion proton collisions produces a Higgs. The Higgs then decays and properties of its decay products are measured. If we take the 2 photons decay channel as an example: will CMS and ATLAS capture all of the photons, or just a small fraction? How many actual 2 photon decay events have been captured by CMS and ATLAS? A similar question can be asked for the other decay channels.
I think the numbers are roughly the following:
About 2.5 thousand trillion collisions at CMS (or ATLAS)
About 1 in 5 billion produces a Higgs — so about 500,000 Higgs particles produced per experiment.
Of these about 1/1000 decay to two photons: about 500 or so. Actually probably a bit larger. Most of these are detected, and I think that’s consistent with the size of the peaks you see in the ATLAS and CMS data.
About 1/10,000 decay to two lepton/anti-lepton pairs — about 50 or so. Some of these are lost because one of the four particles isn’t measured well or has too low energy. So figure about 30 events.
There are other search channels but these are the cleanest ones and dominate the conversation. I can get the other numbers for you if you insist.
p.s. the confidence levels are not merely about how many signal events there are but about how many background events from other processes are also present. There are far larger backgrounds for two photons than for two lepton/anti-lepton pairs; in the first case the background is larger than the signal, while in the second the signal is larger than the background. That is why the two measurements have rather similar statistical significance… and with more statistics the decays to two lepton/anti-lepton pairs will become more and more powerful and useful.
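For readers who like to check such numbers, the arithmetic above fits in a few lines of Python. The fractions are the rough figures quoted in this thread, not precise Standard Model branching ratios, and the background counts are purely hypothetical, chosen only to illustrate the point in the p.s.:

```python
import math

# Rough figures quoted above -- illustrative, not precise SM numbers.
total_collisions = 2.5e15            # per experiment, through 2012
higgs_per_collision = 1 / 5e9        # about 1 in 5 billion

higgs_produced = total_collisions * higgs_per_collision
print(higgs_produced)                # 500000.0

diphoton_signal = higgs_produced / 1_000     # H -> two photons: ~500
fourlepton_signal = higgs_produced / 10_000  # H -> two lepton pairs: ~50

# Crude significance estimate: signal / sqrt(signal + background).
# The background numbers below are hypothetical, chosen only to show
# why a large, background-heavy channel and a small, clean channel
# can end up with comparable statistical weight.
def rough_significance(signal, background):
    return signal / math.sqrt(signal + background)

print(round(rough_significance(diphoton_signal, 5000), 1))  # -> 6.7
print(round(rough_significance(30, 5), 1))                  # -> 5.1
```

The two crude significances come out in the same ballpark even though the raw event counts differ by more than a factor of ten, which is the point of the p.s. above.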
Hi Matt, thanks for the answer. I will not insist on the other numbers :-).
A last question: does the 2.5 thousand trillion cover all of the LHC’s proton-proton science-run history to date (through the end of 2012)?
The experiments quote their amount of data in “inverse fb” or “per fb”, where “b” is “barn” and “f” is “femto”, or 10^(-15). The total cross-section for a proton-proton collision is about 110 mb, or roughly 0.1 barn. Right now ATLAS and CMS each have about 25 per fb (just under 5 per fb, at 7 TeV of energy per collision, in 2011, and just over 20 at 8 TeV in 2012). 25 per fb * 0.1 barn = 2.5 * (barn/fb) = 2.5 * 10^(15).
The amount of data will go up by about a factor of 10 (and the energy by another 60-80%, which is just as important if not more so for many purposes) before the LHC would need a major overhaul to allow further advances.
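The unit bookkeeping in that conversion is simple enough to script. This sketch just repeats the multiplication above, keeping the cross-section at its quoted 110 mb rather than rounding it to 0.1 barn:

```python
# Integrated luminosity (in inverse femtobarns) times the total
# proton-proton cross-section gives a total collision count.

FB_PER_BARN = 1e15          # 1 barn = 10^15 femtobarns
MB_PER_BARN = 1e3           # 1 barn = 1000 millibarns

sigma_pp = 110.0 / MB_PER_BARN      # ~110 mb, i.e. 0.11 barn
luminosity = 25.0 * FB_PER_BARN     # 25 per fb, expressed per barn

total_collisions = luminosity * sigma_pp
print(f"{total_collisions:.2e}")    # 2.75e+15: "about 2.5 thousand trillion"
```

With the unrounded cross-section the answer is 2.75 * 10^15 rather than 2.5 * 10^15, which is why the figure in the thread is quoted only as “about 2.5 thousand trillion.”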
In conclusion, from what you said I understand the situation as follows:
1- We assume that the building blocks of reality are fields and vibrations/ripples… as per now.
2- We construct experiments to see if results match our assumptions.
3- We build a mathematical structure based on 1 and 2.
4- We are never certain that 3 describes reality as it is.
5- We can never reach a description that is confirmed as reality, rather than an assumed model of reality.
Now allow me to ask: what is the demarcation line between imagination and science, if data-confirmed assumptions will always remain assumptions?
Am I right to join those great scientists who declare that science is our imagination of reality?
Science is a tool for making accurate predictions, and along the way a picture (but not a unique one, as we have discussed many times) of the world is developed. Imagination is something in your head. There is no line between these two things, because they are not in the same category; it is like asking about the demarcation line between England and Scotch tape. Imagination does not begin where science ends; science does not begin where imagination ends. It’s a bad question.
I would say that the phrase “science is our imagination of reality” is glib and wrong. Science is a tool by which we predict how things that we view as part of reality will behave. It is also a tool by which we develop a non-unique vision of why and how those things behave in this way… remembering that this vision is inevitably non-unique and incomplete. Science can affect our imagination of reality, but is not equivalent to it, nor does it give us sufficient information to determine it. So I would say: science helps to guide and shape our imagination of reality, by telling us what isn’t true, and suggesting to us what might be true.
If you ask more of science than it can possibly provide, that’s your mistake, not a problem with science.
What you are describing is not a problem with science or scientific theories. It is the fundamental nature of human knowledge. If some proposal is stated in a way that is never testable, it forever remains speculation. If something is stated in a way that is verifiable, a single piece of evidence to the contrary can disprove the statement. But to complicate matters, our explanations and our experimental tests of them are often hierarchical and inter-related. Measurement is never exact, never complete, and can never control all possible complicating variables: as long as things are causal and time continues to tick in one direction, we will never know everything.
We accumulate information, and check proposed explanations against evidence. The broadest consistent explanation of known experimental results, one that could be proved false but hasn’t been, gets provisionally “accepted”, subject to known limitations and discrepancies. If there is a discrepancy, we try to figure out why. Explanations may not ultimately even match our current ideas of logic and therefore of mathematics, but it is up to us to find that out while still explaining everything that has been explained so far, to the same or better precision and accuracy. As long as things are somehow causally related, that’s just the way the universe (and information about it) works. And the only way we can ever accumulate knowledge of it. If you don’t like it, the Universe says “Tough shit.”
It is not fair, Matt, to insist that I must always be wrong! …
The Darwinian (tree of life) IS imagination…
The 10^500-universes landscape is imagination…
The strings are imagination…
The multi/meta/extra/hyper universe is imagination… etc., etc.
So science is not always about predictions; science is about our assumed imaginations, which is not bad at all, so why would you desist?
Accepting my point No. 5 = accepting that science is imagination, as there is no such thing as science WITHOUT a worldview.
Even in this particular post you mentioned imagination when talking about speculative theories; speculation is imagination, and imagination in science can be very creative, so why should we be ashamed of it?
Kindly be aware that in no way do I equate imagination with hallucination.
I am happy to see that, unlike previously, the “Higgs-like particle” name has now been upgraded to “Higgs particle.” What a relief… 🙂
aa. sh.: “The 10^500-universes landscape is imagination…
The strings are imagination…
The multi/meta/extra/hyper universe is imagination… etc., etc.
So science is not always about predictions; science is about our assumed imaginations, which is not bad at all, so why would you desist?”
There are two types of physics: nature physics (N-physics) and human physics (H-physics). H-physics is the human enterprise, with invention, imagination, and testing. Although imagination is in fact the foundation of H-physics, most physicists shy away from it, emphasizing the testing part instead. The key point is that some H-physics can be viewed as N-physics when it is verified with many tests.
Matt: “No, one will never be certain that a theoretical structure like the Standard Model is correct.
One can only (a) show it is false by finding data that disagrees with its predictions, or (b) show that it is consistent with all known data.”
By knowing the two physics, the validity of a theoretical model can be discussed in more ways than the above two choices [(a) and (b)]. There are some known pieces of N-physics:
1. The expansion of the universe is accelerating.
2. The visible mass of the universe is not enough to describe the structure of the universe.
3. The proton’s half-life is longer than the life of this universe.
4. Neutrinos have nonzero rest masses.
5. many, many more.
As the SM fails to address the above N-physics, its correctness is limited to a small scope regardless of any additional data. While this verdict is already given, today’s issue is whether one part of the SM is correct or not. Is the Higgs mechanism correct? There are other alternatives that could replace the Higgs mechanism. Then more data can do the job of Occam’s razor. But if another model is able to encompass the above N-physics, a verdict can already be given. Still, I agree to allow more data to be the jury.
Nima Arkani-Hamed has promised that if the Higgs found at the LHC turns out to be spin 2 (or a techni-dilaton), he’ll quit physics. So is there now no hope?
(He’s also said that particle “fields” don’t exist; they are a figment of particle physicists’ imagination. Would like your response some day…)
I’m not sure exactly what Arkani-Hamed said (though I can guess the context.) He has a vision as to where particle physics might be going. But it hasn’t gotten there yet. With Arkani-Hamed (as with certain other speakers) you have to be careful to distinguish when what he says represents hopes and speculative ideas, versus when he says things that are widely accepted. Right now, fields are still the core notion underlying particle physics, gravitational physics, and cosmology. That started being true in the 19th century and has been essential since the 1970s. There might come a time when that gets updated, but it hasn’t happened yet. Arkani-Hamed’s current work suggests one possible way that might eventually happen, but again, it’s premature to say that it will happen. Lots of brilliant ideas don’t pan out…
If Arkani-Hamed’s current work leads to a reformulation of the world where fields are no longer useful concepts, then you’ll read about it on this blog. I don’t think we’ll see this happen over the next year or two.
Always remember that there is no unique way to look at the world. Any formulation, using fields or anything else, can be rewritten in terms of other concepts. (For instance, Newton’s laws can be rewritten using the action principle or the Hamilton-Jacobi equations, with very different appearances and concepts.) On this blog I give you *a* way. It’s the most common way and in my mind the most intuitive way to look at what we know. But it is surely not *the only* way.
Thanks, Matt. Arkani-Hamed’s light-hearted comment about quitting physics was in a talk at KITP on Naturalness, online at http://online.kitp.ucsb.edu/online/higgs_m12/arkanihamed/ He’s basically saying if nature is so illogical and arbitrary as to have that particle be spin 2, it’s not worth trying to figure anything out about why. And if nature is so ridiculously perverse that it’s a techni-dilaton, he’d consider suicide. 🙂
Your analysis of the way he mixes hopes and conjectures with established theory in his talks is the reason for my question [Triple :-)]
As to the existence of fields vs. “particles” [whatever they are]: my guess (and it’s only that) is that neither is correct. The “fields” you refer to are probability amplitudes in an infinite-dimensional complex-valued Hilbert space, which then have to be projected onto 3+1 Minkowski spacetime to get the physical location of a probability distribution. The question people have been asking, “What is it that’s vibrating in these fields?”, is actually only secondary: the first question to ask is “in what bizarre phase space is the field oscillation actually taking place?”. We seem to be a long way from answering those questions. But little billiard-ball-like lumps of something don’t behave the way we see things behaving quantum mechanically, either. Throw in the apparent impossibility of point-like (or any highly localized) particles existing in any relativistic field theory, and add the fundamental mismatch between quantum field theory and general relativity:
– non-linear time as a dependent variable rather than a linear independent one
– quantum fluctuations [or just the localized interaction of one quantum field with another] affecting the structure of spacetime itself in a 4D manifold with a metric non-linearly related to the energy/stress distribution within it,
– non-renormalizable fields in curved 4D spacetime [note that the “spin 2 graviton” quantization can only be derived in 2D!]
– the requirement for general covariance
So, where’s “reality” in all that? We just simply don’t yet know.
Given the ego involvement of some of the brilliant people doing this incredible work, we need to be wary of mistaking the model (and explanations of it) for the actual thing.
On this website (and in my classes for students) I do not aim to give a picture of nature that is “correct”. That would be impossible. I am sure our picture of nature is not “correct”, in the sense that although quantum field theory gives very successful predictions, it is likely someday to be conceptually revised. The goal of this website is to explain the picture of nature used currently in the mainstream of scientific thought about the elementary laws that govern the universe. This picture does indeed have at least one inconsistency (it does not include a quantum theory of gravity in a coherent and consistent way). It may have more. So I think we agree in general. No model — no explanatory framework, even a very successful one — should be mistaken for nature, truth, reality, etc.
Matt: “… in the sense that although quantum field theory gives very successful predictions, it is likely someday to be conceptually revised.”
Welcome to the club of truth. Steven Weinberg did.
Hi Matt, Nima wasn’t referring to his work on Grassmanians when he made that remark. Instead he was making a point about fields vs particles that I believe mimics Weinberg’s philosophy in QFT volume 1 (e.g. use fields as a convenient fiction to build up an action that respects Lorentz invariance and the cluster decomposition principle). Slightly more subtle point of course.
Thanks. But I think Nima Arkani-Hamed *is* referring to his work on Grassmanians (or better said, is influenced by it) in taking this point of view. I do not find it useful to think about fields as a convenient fiction when explaining (a) the Higgs field (b) conformal field theories (c) instanton-based phenomena (d) duality transformations. It is only useful when perturbation theory is applicable, which in general it is not, or when one tries to describe the world purely in terms of observables— things actually measured in experiments — which is technically possible but conceptually impossible.
Still, no viewpoint is unique; there are always alternatives. And all viewpoints involve convenient fictions. What I do not yet see in Weinberg (or in Arkani-Hamed’s work) is a replacement that allows a clear view of the full range of phenomena that arise in Quantum Field Theory.
Did Nima Arkani-Hamed really say that “fields” are a figment of particle physicists’ IMAGINATION? So the humble me agrees with Arkani-Hamed concerning imagination!!
I’m certainly not saying imagination has no role to play in science; far from it. I’m saying that there isn’t a point at which imagination ends and science begins; they are two different types of things. Imagination is used in speculation; imagination is used in non-scientific contexts; and imagination is used in science.
If you want to understand what you (and others) are saying, it is essential to carry on a conversation carefully with as rigorous use of words as possible. The word “imagination” has several different meanings and you are tossing it around rather carelessly, in my opinion. Sometimes you seem to mean “creativity” and sometimes you seem to mean “speculation”. Until you are clearer about what you are trying to say, your statements are neither true nor false, and certainly difficult to respond to.
I addressed what Arkani-Hamed is saying in a separate reply. (I know him very well.)
Gong: Allow me to add that H-physics will always come nearer to reality/N-physics but never reach it… remember Gödel?
aa. sh.: Gödel is the most fundamental, as most models are formal systems. But this N/H-physics is about epistemology in physics, and the Gödel argument is not very important here. For example, the Cabibbo and Weinberg angles and the electron fine-structure constant can be safely viewed as N-physics, but those two angles are free parameters in the Standard Model. If another theory can provide a theoretical calculation for them, then that other theory has a scope much bigger than the SM’s. This will be a very important material fact in comparison to any gadget data, which is always limited by the capacity of the gadget.
We can always come up with an equation to produce any given number, and that is called numerology. But if the three numbers above (the Cabibbo and Weinberg angles and the electron fine-structure constant) are derived from a single “physics” concept of the other theory, this again is a supremely powerful material fact.
My point is exactly what Einstein said: imagination is more important than knowledge.
Knowledge is temporal, limited, while imagination is unlimited.
My interpretation of that remark is: “for the purposes of making advances in science, imagination is more important than knowledge”. Notice that science is not identified with knowledge, nor does it end where imagination begins. The point is that scientific creativity requires imagination as well as knowledge; being the most learned in a subject will not necessarily lead to breakthroughs, but being able, based on less knowledge, to imagine how things might work sometimes does lead to breakthroughs. (Indeed that is the story of my own career.) So I do not think, as you seem to, that there is a boundary line between science and imagination.
A few days after the 4 July CERN announcements, there was a press conference at Edinburgh with Peter Higgs, Victoria Martin, Alan Walker and the current head of physics at Edinburgh (whose name I have forgotten.) It’s probably still on line at Edinburgh’s web site. Peter was asked about the name of the boson, and he felt that it would be simply “the h” (or maybe “the H”), but that it would not bear anyone’s name. I did my thesis under him and I know he’s very modest; he said that no particle bears anyone’s name. He’s right about past practice (we don’t call the electron the “Thomson” for example, nor the neutron the “Chadwick”, nor the neutrino the “Pauli”), but on the other hand, the name “Higgs” is pretty much linked to the boson. This is not to take away from the cast of thousands (Englert, Brout, Guralnik, Hagen, Kibble, Anderson, Goldstone, Gilbert, Nambu, Jona-Lasinio, …) also linked to the mechanism. 🙂
I have posted on this discussion today (errorstatistics.com) in relation to some problems of philosophy of science and statistical inference in science. I will continue this in a later blog post on some criticisms that have recently been raised in the popular press, claiming that experimental particle physicists erroneously interpret the 5-sigma results as giving Bayesian posterior probabilities. Comments very welcome.
Not sure of your background, but one of the attractions of SUSY over the past 30 years is that it was the only paradigm where deviations in SM observables could be calculated, as opposed to having to fine tune away flavor violations….
Everything else blew things up and the hope was that if X was behind EWSB then when it was observed we would elucidate the mechanism that kept all the precision EW variables in line…
So SUSY in some sense was the modern equivalent of the drunk looking for his keys under the lamp post, i.e. it was the only place where he knew how to look….
Dear Matt Strassler,
I have been avidly reading your posts, and find the notions presented utterly fascinating. I will openly admit that I know little about quantum physics, or rather the complexities of it, as I am a student of philosophy. Please would you be so kind as to explain the state of SUSY (and physics beyond the standard model) to date, in layman’s terms (because there are a lot of us who crave understanding but who don’t have the academic framework to embody such wonderful knowledge). I have been reading about quantum physics & string theory in my spare time for the last 3 years, alongside my study of Philosophy…
From what I understand (please excuse the basicness of my points ^‿^ henceforth…) –> the Standard Model is incomplete because it doesn’t account for such things as dark matter & gravity –> however, so far the Standard Model accurately describes the quantitative data produced (so far…) by the LHC. However, SUSY could still be discovered, because SUSY, as a theoretical conception, can in fact encompass the Standard Model and extend it.
Please would you be so kind as to elaborate upon and correct what I have said…I would be truly grateful…
How/why can “decoupling limit/theorem” demonstrate that even when the Standard Model is not the complete theory of physics at the LHC, Standard Model-like Higgs particles can arise in many different ways? What does “decoupling limit/theorem” say and where does it come from?
This is technical and not obvious, which is why Haber and Nir’s paper is famous. I certainly can’t explain it in an entirely non-technical way. You actually have to take the equations and show that this is how they behave.
There is a way to understand it intuitively, but that also is not trivial to explain without math.
But since this is such an important point for understanding what the LHC is doing and why particle physicists do not believe that the LHC’s data shows that the Standard Model is necessarily correct, I will think about whether this can be explained without much math. I can’t see how to do it today, but I will try over the next few months to come up with something.