Tag Archives: StandardModel

A Big Think Made of Straw: Bad Arguments Against Future Colliders

Here’s a tip.  If you read an argument either for or against a successor to the Large Hadron Collider (LHC) in which the words “string theory” or “string theorists” form a central part of the argument, then you can conclude that the author (a) doesn’t understand the science of particle physics, and (b) has an absurd caricature in mind concerning the community of high energy physicists.  String theory and string theorists have nothing to do with whether such a collider should or should not be built.

Such an article has appeared on Big Think. It’s written by a certain Thomas Hartsfield.  My impression, from his writing and from what I can find online, is that most of what he knows about particle physics comes from reading people like Ethan Siegel and Sabine Hossenfelder. I think Dr. Hartsfield would have done better to leave the argument to them. 

An Army Made of Straw

Dr. Hartsfield’s article sets up one straw person after another. 

  • The “$100 billion” cost is just the first.  (No one is going to propose, much less build, a machine that costs $100 billion in today’s dollars.)  
  • It refers to “string theorists” as though they form the core of high-energy theoretical physics; you’d think that everyone who does theoretical particle physics is a slavish, mindless believer in the string theory god and its demigod assistant, supersymmetry.  (Many theoretical particle physicists don’t work on either one, and only a minority have ever worked on string theory. Among those who do some supersymmetry research, it’s often just one of a wide variety of topics that they study. Supersymmetry zealots do exist, but they aren’t as central to the field as some would have you believe.)
  • It makes loud but tired claims, such as “A giant particle collider cannot truly test supersymmetry, which can evolve to fit nearly anything.”  (Is this supposed to be shocking? It’s obvious to any expert. The same is true of dark matter, the origin of neutrino masses, and a whole host of other topics. It’s not unusual for an idea to come with a parameter that can be made extremely small. Such an idea can be discovered, or made obsolete by other discoveries, but excluding it may take centuries. In fact this is pretty typical; so deal with it!)
  • “$100 billion could fund (quite literally) 100,000 smaller physics experiments.”  (Aside from the fact that this plays sleight of hand by mixing future dollars with present dollars, the argument is crude. When the Superconducting Super Collider was cancelled, did the money that was saved flow into thousands of physics experiments, or into other scientific research?  No.  Congress sent it all over the place.)  
  • And then it concludes with my favorite, a true laugher: “The only good argument for the [machine] might be employment for smart people. And for string theorists.”  (Honestly, employment for string theorists!?!  What bu… rubbish. It might have been a good idea to do some research into how funding actually works in the field, before saying something so patently silly.)

Meanwhile, the article never once mentions the particle physics experimentalists and accelerator physicists.  Remember them?  The ones who actually build and run these machines, and actually discover things?  The ones without whom the whole enterprise is all just math?

Although they mostly don’t appear in the article, there are strong arguments both for and against building such a machine; see below.  Keep in mind, though, that any decision is still years off, and we may have quite a different perspective by the time we get to that point, depending on whether discoveries are made at the LHC or at other experimental facilities.  No one actually needs to be making this decision at the moment, so I’m not sure why Dr. Hartsfield feels it’s so crucial to take an indefensible position now.


Fourth Step in the Triplet Model is up.

Advanced particle physics today:

Today we move deeper into the reader-requested explanation of the “triplet model” (a classic and simple variation on the Standard Model of particle physics, in which the W boson mass can be raised slightly relative to the Standard Model prediction without affecting other current experiments). The math required is still pre-university level, though slowly creeping up as complex numbers start to appear.

The first, second and third webpages in this series provided a self-contained introduction that concluded with a full cartoon of the triplet model, showing how a small modification of the Higgs mechanism of the Standard Model can shift a “W” particle’s mass upward.

Next, we begin a new phase in which the cartoon is gradually replaced with the real thing. In the new fourth webpage, I start laying the groundwork for understanding how the Standard Model works — in particular how the Higgs boson gives mass to the W and Z bosons, and what SU(2) x U(1) is all about — following which it won’t be hard to explain the triplet model.
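
For those who’d like a preview in equations rather than cartoons, here is the standard textbook version of what the fourth webpage builds toward. Nothing below is specific to my webpages; it’s a sketch of well-known results, written in one common convention for the triplet’s vacuum value v_T.

```latex
% Tree-level Standard Model: the Higgs field's vacuum value v (about 246 GeV)
% sets the W and Z masses through the SU(2) and U(1) couplings g and g'.
\begin{align}
  m_W = \tfrac{1}{2}\, g\, v, \qquad
  m_Z = \tfrac{1}{2}\sqrt{g^2 + g'^2}\;\, v, \qquad
  \frac{m_W}{m_Z} = \cos\theta_W .
\end{align}
% Giving a small vacuum value v_T to the electrically neutral member of a
% real triplet shifts the W mass but leaves the Z mass untouched:
\begin{align}
  m_W^2 = \tfrac{1}{4}\, g^2 \left( v^2 + 4\, v_T^2 \right), \qquad
  m_Z^2 = \tfrac{1}{4}\, (g^2 + g'^2)\, v^2 ,
\end{align}
% so a small v_T raises m_W slightly relative to the Standard Model
% prediction, just as in the cartoon.
```

(Beware: the definition of v_T differs by factors of 2 from one reference to another.)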

Please send your comments and suggestions!

The Simplest Way to Shift the W Boson Mass?

Some technical details on particle physics today…

Papers are pouring out of particle theorists’ offices regarding the latest significant challenge to the Standard Model, namely the W boson mass coming in about 0.1% higher than expected in a measurement carried out by the Tevatron experiment CDF. (See here and here for earlier posts on the topic.) Let’s assume today that the measurement is correct, though possibly a little overstated. Is there any reasonable extension to the Standard Model that could lead to such a shift without coming into conflict with previous experiments? Or does explaining the measurement require convoluted ideas, in which various effects have to cancel against one another to remain consistent with existing experiments?
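
For orientation, here is the back-of-the-envelope size of the effect. The numbers below are the widely quoted values for the CDF measurement and the Standard Model prediction; consult the original papers for the official figures.

```python
# Back-of-envelope size of the CDF W-mass anomaly. The numbers are the
# widely quoted values (CDF II measurement vs. the Standard Model
# prediction); consult the original papers for the official figures.
mW_cdf, err_cdf = 80433.5, 9.4   # MeV, CDF II measurement
mW_sm,  err_sm  = 80357.0, 6.0   # MeV, Standard Model prediction

shift = mW_cdf - mW_sm                                    # ~77 MeV
fractional = shift / mW_sm                                # ~0.001, i.e. ~0.1%
significance = shift / (err_cdf**2 + err_sm**2) ** 0.5    # ~7 sigma

print(f"shift = {shift:.1f} MeV ({100 * fractional:.2f}%), "
      f"~{significance:.1f} sigma if both numbers are right")
```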


A Few Remarks on the W Boson Mass Measurement

Based on some questions I received about yesterday’s post, I thought I’d add some additional comments this morning.

A natural and persistent question has been: “How likely do you think it is that this W boson mass result is wrong?” Obviously I can’t put a number on it, but I’d say the chance that it’s wrong is substantial. Why? This measurement, which took many years of work, is probably among the most difficult ever performed in particle physics. Only first-rate physicists with complete dedication to the task could attempt it, carry it out, convince their many colleagues on the CDF experiment that they’d done it right, and get it through external peer review into Science magazine. But even first-rate physicists can get a measurement like this one wrong. The tiniest of subtle mistakes can undo it.

And that mistake, if there is one, might not even be their own, in a sense. Any measurement like this has to rely on other measurements, on simulation software, and on calculations involving other processes, and even though they’ve all been checked, perhaps they need to be rechecked.

Another concern about the new measurement is that it seems inconsistent not only with the Standard Model but also with previous, less precise measurements by other experiments, which were closer to the Standard Model’s prediction. (It is even inconsistent with CDF’s own previous measurement.) That’s true, and you can see some evidence of it in the plot in yesterday’s post. But

  • it could be that one or more of the previous measurements has an error;
  • there is a known risk of unconscious experimental bias that tends to push results toward the Standard Model (i.e., if the result doesn’t match your expectation, you check everything again, tweak it, and then stop when it better matches your expectation; blinding the analysis, as was done here, helps mitigate this risk, but it doesn’t entirely eliminate it);
  • CDF has revised their old measurement slightly upward to account for things they learned while performing this new one, so their internal inconsistency is less than it appears, and
  • even if the truth lies between this new measurement and the old ones, that would still leave a big discrepancy with the Standard Model, and the implication for science would be much the same.

I’ve heard some cynicism: “Is this just an old experiment trying to make a name for itself and get headlines?” Don’t be absurd. No one seeking publicity would go through the hell of working on one project for several years, running down every loose end multiple times, checking it twice and cross-checking it three times, spending every waking hour asking oneself “what did I forget to check?”, all while knowing that in the end one’s reputation will be at stake when the final result hits the international press. There would be far easier ways to grab headlines if that were the goal.

Someone wisely asked about the Z boson mass; can one study it as well? This is a great question, because it goes to the heart of how the Standard Model is checked for consistency. The answer is “no.” Really, when we say that “the W mass is too large,” what we mean (roughly) is that “the ratio of the W mass to the Z mass is too large.” One way to view it (not exactly right) is that certain extremely precise measurements have to be taken as inputs to the Standard Model, and once that is done, the Standard Model can be used to make predictions for other precise measurements. Because of the precision with which the Z boson mass can be measured (to 2 MeV, i.e. two parts in 100,000), it is effectively taken as an input to the Standard Model, and so we can’t then compare it against a prediction. (The Z boson mass measurement is much easier, because a Z boson can decay (for example) to an electron and a positron, both of which can be observed directly. Meanwhile a W boson can only decay (for example) to an electron and a neutrino, and a neutrino’s presence can only be inferred indirectly, making the determination of its energy and momentum much less precise.)
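
To put the “ratio” statement in equation form (this is textbook material, not anything specific to CDF):

```latex
% At tree level in the Standard Model, the W/Z mass ratio is fixed by the
% weak mixing angle; quantum corrections modify it in a calculable way.
\begin{equation}
  \frac{m_W}{m_Z} = \cos\theta_W
  \qquad \Longleftrightarrow \qquad
  \rho \equiv \frac{m_W^2}{m_Z^2 \cos^2\theta_W} = 1 \;\; \text{(tree level)} .
\end{equation}
% The Z mass, known to about 2 MeV out of roughly 91,188 MeV (two parts
% in 100,000), serves as an input; the W mass is then a prediction to be
% tested against experiment.
```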

In fact, one of the ways that the CDF experimenters who carried out this measurement checked their methods was to remeasure the Z boson mass too, and it agreed with other, even more precise measurements. They’d never have convinced themselves, or any of us, that they could get the W boson mass right if their Z boson mass measurement were off. So we can even interpret the CDF result as a measurement of the ratio of the W boson mass to the Z boson mass.

One last thing for today: once you have measured the Z boson mass and a few other things precisely, it is the consistency of the top quark mass, the Higgs boson mass and the W boson mass that provides one of the key tests of the Standard Model. Because of this, my headline from yesterday (“The W Boson isn’t Behaving”) is somewhat misleading. The cause of the discrepancy may not involve the W boson at all. The issue might turn out to be a new effect on the Z boson, for instance, or perhaps even the top quark. Working that out is the purview of theoretical physicists, who have to understand the complex interplay between the various precise measurements of masses and interactions of the Standard Model’s particles, and the many direct (and so far futile) searches for unknown types of particles that could potentially shift those masses and interactions. This isn’t easy, and there are lots of possibilities to consider, so there’s a lot of work yet to be done.
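
For completeness, here is the schematic version of that consistency test in the standard on-shell language; this is a sketch of textbook formulas, not anything specific to the new measurement.

```latex
% The W mass is predicted from the precisely measured inputs alpha, G_F
% and m_Z, plus quantum corrections collected in Delta r:
\begin{equation}
  m_W^2 \left( 1 - \frac{m_W^2}{m_Z^2} \right)
  = \frac{\pi \alpha}{\sqrt{2}\, G_F} \, \frac{1}{1 - \Delta r} \, ,
\end{equation}
% where, schematically, Delta r receives one contribution that grows with
% the square of the top quark mass and another that grows with the
% logarithm of the Higgs boson mass. All three masses enter one relation,
% which is why their mutual consistency is such a sharp test.
```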

Physics is Broken!!!

Last Thursday, an experiment reported that the magnetic properties of the muon, the electron’s middleweight cousin, are a tiny bit different from what particle physics equations say they should be. All around the world, the headlines screamed: PHYSICS IS BROKEN!!! And indeed, it’s been pretty shocking to physicists everywhere. For instance, my equations are working erratically; many of the calculations I tried this weekend came out upside-down or backwards. Even worse, my stove froze my coffee instead of heating it, I just barely prevented my car from floating out of my garage into the trees, and my desk clock broke and spilled time all over the floor. What a mess!

Broken, eh? When we say a coffee machine or a computer is broken, it means it doesn’t work. It’s unavailable until it’s fixed. When a glass is broken, it’s shattered into pieces. We need a new one. I know it’s cute to say that so-and-so’s video “broke the internet.” But aren’t we going a little too far now? Nothing’s broken about physics; it works just as well today as it did a month ago.

More reasonable headlines have suggested that “the laws of physics have been broken”. That’s better; I know what it means to break a law. (Though the metaphor is imperfect, since if I were to break a state law, I’d be punished, whereas if an object were to break a fundamental law of physics, that law would have to be revised!) But as is true in the legal system, not all physics laws, and not all violations of law, are equally significant.
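
In case you’re wondering what “a tiny bit different” means quantitatively, here is the rough size of the discrepancy, using the widely reported numbers from last week’s announcement; see the official papers for the exact values.

```python
# Rough size of the muon g-2 discrepancy, using widely reported 2021
# values (experimental average vs. the Standard Model theory-initiative
# prediction); see the official papers for the exact numbers.
a_mu_exp, err_exp = 116592061e-11, 41e-11   # experimental average
a_mu_sm,  err_sm  = 116591810e-11, 43e-11   # Standard Model prediction

diff = a_mu_exp - a_mu_sm
sigma = diff / (err_exp**2 + err_sm**2) ** 0.5

print(f"difference ~ {diff / a_mu_sm * 1e6:.1f} parts per million, "
      f"~{sigma:.1f} sigma")
```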


SEARCH Day 2

Day 2 of the SEARCH workshop will get a shorter description than it deserves, because I’ve had to spend time finishing my own talk for this morning. But there were a lot of nice talks, so let me at least tell you what they were about.

Both ATLAS and CMS presented their latest results on searches for supersymmetry. (I should remind you that “searches for supersymmetry” are by no means actually limited to supersymmetry — they can be used to discover or exclude many other new particles and forces that have nothing to do with supersymmetry at all.) Speakers Pascal Pralavorio and Sanjay Padhi gave very useful overviews of the dozens of searches that have been done so far as part of this effort, including a few rather new results that are very powerful. (We should see even more appear at next week’s Supersymmetry conference.) My short summary: almost everything easy has been done thoroughly; many challenging searches have also been carried out; if superpartner particles are present, they’re either

  • so heavy that they aren’t produced very often (e.g. gluinos)
  • rather lightweight, but still not so often produced (e.g. top squarks, charginos, neutralinos, sleptons)
  • produced often, but decaying in some way that is very hard to detect (e.g. gluinos decaying only to quarks, anti-quarks and gluons)

Then we had a few talks by theorists. Patrick Meade talked about how unknown particles that are affected by weak nuclear and electromagnetic forces, but not by strong nuclear forces, could give signs that hide underneath processes that occur in the Standard Model. (Examples of such particles are the neutralinos and charginos or sleptons of supersymmetry.) To find them requires increased precision in our calculations and in our measurements of processes where pairs of W and/or Z and/or Higgs particles are produced. As a definite example, Meade noted that the rate for producing pairs of W particles disagrees somewhat with current predictions based on the Standard Model, and emphasized that this small disagreement could be due to new particles (such as top squarks, or sleptons, or charginos and neutralinos), although at this point there’s no way to know.

Matt Reece gave an analogous talk about spin-zero quark-like particles that do feel strong nuclear forces, the classic example of which is the top squark. Again, the presence of these particles can be hidden underneath the large signals from production of top quark/anti-quark pairs, or other common processes. ATLAS and CMS have been working hard to look for signals of these types of particles, and have made a lot of progress, but there are still quite a few possible signals that haven’t been searched for yet. Among other things, Reece discussed some methods invented by theorists that might be useful in contributing to this effort. As with the previous talk, the key to a complete search will be improvements in calculations and measurements of top quark production, and of other processes that involve known particles.

After lunch there was a more general discussion about looking for supersymmetry, including conversation about what variants of supersymmetry haven’t yet been excluded by existing ATLAS and CMS searches.  (I had a few things to say about that in my talk, but more on that tomorrow.)

Jesse Thaler gave a talk reviewing the enormous progress that has been made in understanding how to distinguish ordinary jets arising from quarks and gluons from jet-like objects made by a single high-energy W, Z, Higgs or top quark that decays to quarks and anti-quarks. (The jargon is that the trick is to use “jet substructure” — the fact that inside a jet-like W are two sub-jets, each from a quark or anti-quark.) At SEARCH 2012, the experimenters showed very promising though preliminary results using a number of new jet substructure methods that had been invented by (mostly) theorists. By now, the experimenters have shown definitively that these methods work — and will continue to work as the rate of collisions at the LHC grows — and have made a number of novel measurements using them. Learning how to use jet substructure is one of the great success stories of the LHC era, and it will continue to be a major story in coming years.
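
Here is a toy illustration of the core idea (purely my own sketch with made-up numbers, not any experiment’s actual algorithm): if a fat jet really contains the two quarks from a W decay, the combined invariant mass of its two sub-jets should sit near the W mass of about 80 GeV, whereas an ordinary quark or gluon jet typically has a much smaller, broadly distributed mass.

```python
import math

# Toy illustration of jet substructure (my own sketch, not an
# experimental algorithm): the two sub-jets inside a fat jet should
# reconstruct the W mass (~80 GeV) if the jet came from a W decay.

def invariant_mass(p1, p2):
    """Invariant mass of the sum of two four-vectors (E, px, py, pz)."""
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Hypothetical sub-jet four-vectors in GeV, found by splitting one fat jet:
subjet_a = (210.0, 200.0,  40.0, 50.0)
subjet_b = (115.0, 100.0, -33.0, 45.0)

m = invariant_mass(subjet_a, subjet_b)
w_tagged = abs(m - 80.4) < 15.0   # crude W-mass window
print(f"combined sub-jet mass = {m:.1f} GeV, W-tagged: {w_tagged}")
```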

Two talks by ATLAS (Leandro Nisanti) and CMS (Matt Hearndon) followed, each with a long list of careful measurements of what the Standard Model is doing, mostly based so far only on the 2011 data set (and not yet including last year’s data). These measurements are crucially important for multiple reasons:

  • They provide important information which can serve as input to other measurements and searches.
  • They may reveal subtle problems with the Standard Model, due to indirect or small effects from unknown particles or forces.
  • Confirming that measurements of certain processes agree with theoretical predictions gives us confidence that those predictions can be used in other contexts, in particular in searches for unknown particles and forces.

Most, but not all, theoretical predictions for these careful measurements have worked well. Those that aren’t working so well are of course being watched and investigated carefully — but there aren’t any discrepancies large enough to get excited about yet (other than the top quark forward-backward asymmetry puzzle, which wasn’t discussed much today). In general, the Standard Model works beautifully — so far.

The day concluded with a panel discussion focused on these Standard Model measurements. Key questions discussed included: how do we use LHC data to understand the structure of the proton more precisely, and how in turn does that affect our searches for unknown phenomena? In particular, a major concern is the risk of circularity: that a phenomenon from an unknown type of particle could produce a subtle effect that we would fail to recognize for what it is, instead misinterpreting it as a small misunderstanding of proton structure, or as a small problem with a theoretical calculation. Such are the challenges of making increasingly precise measurements, and searching for increasingly rare phenomena, in the complicated environment of the LHC.