Physics is Broken!!!

Last Thursday, an experiment reported that the magnetic properties of the muon, the electron’s middleweight cousin, are a tiny bit different from what particle physics equations say they should be. All around the world, the headlines screamed: PHYSICS IS BROKEN!!! And indeed, it’s been pretty shocking to physicists everywhere. For instance, my equations are working erratically; many of the calculations I tried this weekend came out upside-down or backwards. Even worse, my stove froze my coffee instead of heating it, I just barely prevented my car from floating out of my garage into the trees, and my desk clock broke and spilled time all over the floor. What a mess!

Broken, eh? When we say a coffee machine or a computer is broken, it means it doesn’t work. It’s unavailable until it’s fixed. When a glass is broken, it’s shattered into pieces. We need a new one. I know it’s cute to say that so-and-so’s video “broke the internet.” But aren’t we going a little too far now? Nothing’s broken about physics; it works just as well today as it did a month ago.

More reasonable headlines have suggested that “the laws of physics have been broken”. That’s better; I know what it means to break a law. (Though the metaphor is imperfect, since if I were to break a state law, I’d be punished, whereas if an object were to break a fundamental law of physics, that law would have to be revised!) But as is true in the legal system, not all physics laws, and not all violations of law, are equally significant.

What’s a physics law, anyway? Crudely, physics is a strategy for making predictions about the behavior of physical objects, based on a set of equations and a conceptual framework for using those equations. Sometimes we refer to the equations as laws; sometimes parts of the conceptual framework are referred to that way.

But that story has layers. Physics has an underlying conceptual foundation, which includes the pillar of quantum physics and its view of reality, and the pillar of Einstein’s relativity and its view of space and time. (There are other pillars too, such as those of statistical mechanics, but let me not complicate the story now.) That foundation supports many research areas of physics. Within particle physics itself, these two pillars are combined into a more detailed framework, with concepts and equations that go by the name of “quantum effective field theory” (“QEFT”). But QEFT is still very general; this framework can describe an enormous number of possible universes, most with completely different particles and forces from the ones we have in our own universe. We can start making predictions for real-world experiments only when we put the electron, the muon, the photon, and all the other familiar particles and forces into our equations, building up a specific example of a QEFT known as “The Standard Model of particle physics.”

All along the way there are equations and rules that you might call “laws.” They too come in layers. The Standard Model itself, as a specific QEFT, has few high-level laws: there are no principles telling us why quarks exist, why there is one type of photon rather than two, or why the weak nuclear force is so weak. The few laws it does have are mostly low-level, true of our universe but not essential to it.

I’m bringing attention to these layers because an experiment might cause a problem for one layer but not another. I think you could only fairly suggest that “physics is broken” if data were putting a foundational pillar of the entire field into question. And to say “the laws of physics have been violated”, emphasis on the word “the”, is a bit melodramatic if the only thing that’s been violated is a low-level, dispensable law.

Has physics, as a whole, ever broken? You could argue that Newton’s 17th century foundation, which underpinned the next two centuries of physics, broke at the turn of the 20th century. Just after 1900, Newton-style equations had to be replaced by equations of a substantially different type; the ways physicists used the equations changed, and the concepts, the language, and even the goals of physics changed. For instance, in Newtonian physics, you can predict the outcome of any experiment, at least in principle; in post-Newtonian quantum physics, you often can only predict the probability for one or another outcome, even in principle. And in Newtonian physics we all agree what time it is; in Einsteinian physics, different observers experience time differently and there is no universal clock that we all agree on. These were immense changes in the foundation of the field.

Conversely, you could also argue that physics didn’t break; it was just remodeled and expanded. No one who’d been studying steam engines or wind erosion or electrical circuit diagrams had to throw out their books and start again from scratch. In fact this “broken” Newtonian physics is still taught in physics classes, and many physicists and engineers never use anything else. If you’re studying the physics of weather, or building a bridge, Newtonian physics is just fine. The fact that Newton-style equations are an incomplete description of the world — that there are phenomena they can’t describe properly — doesn’t invalidate them when they’re applied within their wheelhouse.

No matter which argument you prefer, it’s hard to see how to justify the phrase “physics is broken” without a profound revolution that overthrows foundational concepts. It’s rare for a serious threat to foundations to arise suddenly, because few experiments can single-handedly put fundamental principles at risk. [The infamous case of the “faster-than-light neutrinos” provides an exception. Had that experiment been correct, it would have invalidated Einstein’s relativity principles. But few of us were surprised when a glaring error turned up.]

In the Standard Model, the electron, muon and tau particles (known as the “charged leptons”) are all identical except for their masses. (More fundamentally, they have different interactions with the Higgs field, from which their rest masses arise.) This almost-identity is sometimes stated as a “principle of lepton universality.” Oh, wow, a principle — a law! But here’s the thing. Some principles are enormously important; the principles of Einsteinian relativity determine how cause and effect work in our universe, and you can’t drop them without running into big paradoxes. Other principles are weak, and could easily be discarded without making a mess of any other part of physics. The principle of lepton universality is one of these. In fact, if you extend the Standard Model by adding new particles to its equations, it can be difficult to avoid ruining this fragile principle. [In a sense, the Higgs field has already violated the principle, but we don’t hold that against it.]
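To make “identical except for their masses” concrete, one can compare the strengths of the three leptons’ interactions with the Higgs field. In the Standard Model the tree-level coupling is y = √2·m/v, with v ≈ 246 GeV; the numerical values below are standard textbook figures, not taken from this post, so treat this as an illustrative sketch:

```python
from math import sqrt

# Tree-level Higgs-lepton (Yukawa) coupling in the Standard Model:
# y = sqrt(2) * m / v, where v ~ 246 GeV is the Higgs field's
# vacuum expectation value. Masses are approximate PDG values.
V_HIGGS = 246.0  # GeV

masses = {  # rest masses in GeV
    "electron": 0.000511,
    "muon": 0.10566,
    "tau": 1.77686,
}

for lepton, m in masses.items():
    y = sqrt(2) * m / V_HIGGS
    print(f"{lepton:>8}: m = {m:.6f} GeV, y = {y:.2e}")
```

The couplings span roughly four orders of magnitude, but apart from this single number the three leptons enter the Standard Model’s equations identically; that one-parameter difference is all that “lepton universality” allows.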

All the fuss is about a new experimental result which confirms an older one and slightly disagrees with the latest theoretical predictions, which are made using the Standard Model’s equations. What could be the cause of the discrepancy? One possibility is that it arises from a previously unknown difference between muons and electrons — from a violation of the principle of lepton universality. For those who live and breathe particle physics, breaking lepton universality would be a big deal; there’d be lots of adventure in trying to figure out which of the many possible extensions of the Standard Model could actually explain what broke this law. That’s why the scientists involved sound so excited.

But the failure of lepton universality wouldn’t come as a huge surprise. From certain points of view, the surprise is that the principle has survived this long! Since this low-level law is easily violated, its demise may not lead us to a profound new understanding of the world. It’s way too early for headlines that argue that what’s at stake is the existence of “forms of matter and energy vital to the nature and evolution of the cosmos.” No one can say how much is at stake; it might be a lot, or just a little.

In particular, there’s absolutely no evidence that physics is broken, or even that particle physics is broken. The pillars of physics and QEFT are not (yet) threatened. Even to say that “the Standard Model might be broken” seems a bit melodramatic to me. Does adding a new wing to a house require “breaking” the house? Typically you can still live in the place while it’s being extended. The Standard Model’s many successes suggest that it might survive largely intact as a recognizable part of a larger, more complete set of equations.

In any case, right now it’s still too early to say anything so loudly. The apparent discrepancy may not survive the heavy scrutiny it is coming under. There’s plenty of controversy about the theoretical prediction for muon magnetism; the required calculation is extraordinarily complex, elaborate and difficult.
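For a sense of scale: the April 2021 announcement quoted a combined (Brookhaven + Fermilab) measurement of the muon’s anomalous magnetic moment of a_μ = 116 592 061(41) × 10⁻¹¹, against a Standard Model Theory Initiative prediction of 116 591 810(43) × 10⁻¹¹. Those numbers are my addition, not from this post, but the back-of-the-envelope significance works out as follows:

```python
from math import hypot

# Muon anomalous magnetic moment a_mu = (g-2)/2, in units of 1e-11.
# Values as quoted around the April 2021 announcement (an assumption here).
a_exp, err_exp = 116_592_061, 41   # BNL + Fermilab combined measurement
a_sm,  err_sm  = 116_591_810, 43   # SM Theory Initiative prediction

diff = a_exp - a_sm                # central discrepancy: 251
err  = hypot(err_exp, err_sm)      # uncertainties combined in quadrature
print(f"discrepancy: {diff} +/- {err:.0f} (x1e-11), i.e. {diff/err:.1f} sigma")
```

That is about 4.2σ: tantalizing, but below the conventional 5σ discovery threshold. And the theory-side controversy mentioned above matters here, since competing calculations of the hadronic contribution would shrink this tension considerably.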

So, from my perspective, the headlines of the past week are way over the top. The idea that a single measurement of the muon’s magnetism could “shake physics to its core”, as claimed in another headline I happened upon, is amusing at best. Physics and its older subdisciplines have over time become very difficult to break, or even shake. That’s the way it should be, when science is working properly. And that’s why we can safely base the modern global economy on scientific knowledge; it’s unlikely that a single surprise could instantly invalidate large chunks of its foundation.

Some readers may view the extreme, click-baiting headlines as harmless. Maybe I’m overly concerned about them. But don’t they implicitly suggest that one day we will suddenly find physics “upended”, and in need of a complete top-to-bottom overhaul? To imply physics can “break” so easily makes a mockery of science’s strengths, and obscures the process by which scientific knowledge is obtained. And how can it be good to claim “physics is broken” and “the laws of physics have been broken” over and over and over again, in stories that almost never merit that level of hype and eventually turn out to have been much ado about nada? The constant manufacturing of scientific crisis cannot possibly be lost on readers, who I suspect are becoming increasingly jaded. At some point readers may become as skeptical of science journalism, and the science it describes, as they are of advertising; it’s all lies, so caveat emptor. That’s not where we want our society to be. As we are seeing in spades during the current pandemic, there can be serious consequences when “TRUST IN SCIENCE IS BROKEN!!!”

A final footnote: Ironically, the Standard Model itself poses one of the biggest threats to the framework of QEFT. The discovery of the Higgs boson and nothing else (so far) at the Large Hadron Collider poses a conceptual challenge — the “naturalness” problem. There’s no sharp paradox, which is why I can’t promise you that the framework of QEFT will someday break if it isn’t resolved. But the breakdown of lepton universality might someday help solve the naturalness problem, by requiring a more “natural” extension of the Standard Model, and thus might actually save QEFT instead of “breaking” it.


Day 2 of the SEARCH workshop will get a shorter description than it deserves, because I’ve had to spend time finishing my own talk for this morning. But there were a lot of nice talks, so let me at least tell you what they were about.

Both ATLAS and CMS presented their latest results on searches for supersymmetry. (I should remind you that “searches for supersymmetry” are by no means actually limited to supersymmetry — they can be used to discover or exclude many other new particles and forces that have nothing to do with supersymmetry at all.) Speakers Pascal Pralavorio and Sanjay Padhi gave very useful overviews of the dozens of searches that have been done so far as part of this effort, including a few rather new results that are very powerful. (We should see even more appear at next week’s Supersymmetry conference.) My short summary: almost everything easy has been done thoroughly; many challenging searches have also been carried out; if superpartner particles are present, they’re either

  • so heavy that they aren’t produced very often (e.g. gluinos)
  • rather lightweight, but still not so often produced (e.g. top squarks, charginos, neutralinos, sleptons)
  • produced often, but decaying in some way that is very hard to detect (e.g. gluinos decaying only to quarks, anti-quarks and gluons)

Then we had a few talks by theorists. Patrick Meade talked about how unknown particles that are affected by weak nuclear and electromagnetic forces, but not by strong nuclear forces, could give signs that are hiding underneath processes that occur in the Standard Model. (Examples of such particles are the neutralinos and charginos or sleptons of supersymmetry.) To find them requires increased precision in our calculations and in our measurements of processes where pairs of W and/or Z and/or Higgs particles are produced. As a definite example, Meade noted that the rate for producing pairs of W particles disagrees somewhat with current predictions based on the Standard Model, and emphasized that this small disagreement could be due to new particles (such as top squarks, or sleptons, or charginos and neutralinos), although at this point there’s no way to know.

Matt Reece gave an analogous talk about spin-zero quark-like particles that do feel strong nuclear forces, the classic example of which are top squarks. Again, the presence of these particles can be hidden underneath the large signals from production of top quark/anti-quark pairs, or other common processes. ATLAS and CMS have been working hard to look for signals of these types of particles, and have made a lot of progress, but there are still quite a few possible signals that haven’t been searched for yet. Among other things, Reece discussed some methods invented by theorists that might be useful in contributing to this effort. As with the previous talk, the key to a complete search will be improvements in calculations and measurements of top quark production, and of other processes that involve known particles.

After lunch there was a more general discussion about looking for supersymmetry, including conversation about what variants of supersymmetry haven’t yet been excluded by existing ATLAS and CMS searches. (I had a few things to say about that in my talk, but more on that tomorrow.)

Jesse Thaler gave a talk reviewing the enormous progress that has been made in understanding how to distinguish ordinary jets, arising from quarks and gluons, from jet-like objects made from a single high-energy W, Z, Higgs or top quark that decays to quarks and anti-quarks. (The jargon is that the trick is to use “jet substructure” — the fact that inside a jet-like W are two sub-jets, each from a quark or anti-quark.) At SEARCH 2012, the experimenters showed very promising though preliminary results using a number of new jet substructure methods that had been invented by (mostly) theorists. By now, the experimenters have shown definitively that these methods work — and will continue to work as the rate of collisions at the LHC grows — and have made a number of novel measurements using them. Learning how to use jet substructure is one of the great success stories of the LHC era, and it will continue to be a major story in coming years.

Two talks by ATLAS (Leandro Nisanti) and CMS (Matt Hearndon) followed, each with a long list of careful measurements of what the Standard Model is doing, mostly based so far only on the 2011 data set (and not yet including last year’s data). These measurements are crucially important for multiple reasons:

  • They provide important information which can serve as input to other measurements and searches.
  • They may reveal subtle problems with the Standard Model, due to indirect or small effects from unknown particles or forces.
  • Confirming that measurements of certain processes agree with theoretical predictions gives us confidence that those predictions can be used in other contexts, in particular in searches for unknown particles and forces.

Most, but not all, theoretical predictions for these careful measurements have worked well. Those that aren’t working so well are of course being watched and investigated carefully — but there aren’t any discrepancies large enough to get excited about yet (other than the top quark forward-backward asymmetry puzzle, which wasn’t discussed much today). In general, the Standard Model works beautifully — so far.

The day concluded with a panel discussion focused on these Standard Model measurements. Key questions discussed included: how do we use LHC data to understand the structure of the proton more precisely, and how in turn does that affect our searches for unknown phenomena? In particular, a major concern is the risk of circularity; that a phenomenon from an unknown type of particle could produce a subtle effect that we would fail to recognize for what it is, instead misinterpreting it as a small misunderstanding of proton structure, or as a small problem with a theoretical calculation. Such are the challenges of making increasingly precise measurements, and searching for increasingly rare phenomena, in the complicated environment of the LHC.